Wikipedia:Reference desk/Archives/Mathematics/2010 November 25

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 25

QR

If I know the QR decomposition of X, is there a good way to find the Q and R of X' from the results? Shyamal (talk) 04:07, 25 November 2010 (UTC)[reply]

What's X'? Is it the transpose of X, or a derivative of X, or what? --Tardis (talk) 19:25, 26 November 2010 (UTC)[reply]
Yes, the transpose of X. Shyamal (talk) 06:19, 29 November 2010 (UTC)[reply]

Triangle Numbers

I am attempting to prove that, given a natural number n, one can write it as the sum of three, not necessarily distinct, non-negative integers in C(n+2, 2) = (n+1)(n+2)/2 ways. Induction seems like the best (/only?) way forward. We assume the result for n=k. Then we need to show that there are k+2 more ways for n=k+1, but I can't quite see the approach I need to take.

My original thought was to write all the solutions for n=k as ordered triples (a,b,c) and then add one to each element of the triple, one at a time, which would give me at least C(k+3, 2) ways of writing k+1. This is clearly wrong if we consider the case n=1, for which we have (1,0,0), (0,1,0) and (0,0,1). Adding one to the second element of (1,0,0) and adding one to the first element of (0,1,0) gives us the same combination twice for n=2, which I don't want. Can someone give me a better way of thinking about it? Thanks asyndeton talk 16:18, 25 November 2010 (UTC)[reply]

No induction is needed. The triples (a,b,c) such that a + b + c = n can be uniquely represented as a sequence of n marks (say, "|"), divided into three groups using two marks of a different kind (say, "+"). In other words, you have n + 2 positions, and you need to count the number of ways to choose a 2-element subset of them. This number is C(n+2, 2).—Emil J. 16:37, 25 November 2010 (UTC)[reply]
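A quick brute-force check of this count, as a minimal Python sketch (the tested range of n is arbitrary):

  # Count ordered triples (a, b, c) of non-negative integers with a + b + c = n
  # and compare with C(n+2, 2).
  from math import comb

  for n in range(12):
      triples = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
      assert len(triples) == comb(n + 2, 2)
  print("count matches C(n+2, 2) for n = 0..11")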
...but you can use induction if you want to. As you say, you need to show that (1) each solution for n = k can be used to construct a unique solution for n = k+1 and (2) there are a further k+2 solutions for n = k+1 that cannot be constructed from a solution for n = k. Consider what happens when you add 1 to the first element in a triple for n = k. Which solutions for n = k+1 cannot be reached in this way ? How many of them are there ? (Also, don't forget to check that the formula is correct for n = 1). Gandalf61 (talk) 16:54, 25 November 2010 (UTC)[reply]
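And a brute-force check of the induction step being sketched, under the reading that adding 1 to the first element maps each solution for n = k to a distinct solution for n = k+1, and that the solutions missed this way are exactly the k+2 triples with first element 0:

  def solutions(n):
      # all ordered triples (a, b, c) of non-negative integers with a + b + c = n
      return {(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)}

  for k in range(10):
      image = {(a + 1, b, c) for (a, b, c) in solutions(k)}
      missed = solutions(k + 1) - image
      assert missed == {(0, b, k + 1 - b) for b in range(k + 2)}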

Thank you both. Both helpful answers and it's nice to see a way of doing it that doesn't require induction. Cheers. asyndeton talk 17:30, 25 November 2010 (UTC)[reply]

If you want a kind of visualization, consider an equilateral triangle of height n and all points in it whose distances from the three edges are integers (these three distances always sum to n). This gives the bijection you want. --pma 23:32, 25 November 2010 (UTC)[reply]
Whew, I was afraid for a moment you'd ask us to prove that any natural number can be written as the sum of three triangular numbers. That one is a hard theorem. – b_jonas 19:36, 26 November 2010 (UTC)[reply]

Simultaneous equations Puzzle

I have a maths puzzle here that goes: f(x) = x^4 + ax^3 + bx^2 + cx + d, and the graph includes the points (1,1), (2,3), and (3,5). What is f(0)+f(4)? Since I have 4 unknowns but only 3 points I cannot solve for a, b, c and d. I thought to write out f(0)+f(4) by inputting 0 and 4 for x in f(x), and then do the same for the known points. I then had 64a+16b+4c+2d+256, a+b+c+d = 0, 8a+4b+2c+d=-13, and 27a+9b+3c+d=-76. I thought that I could multiply each equation by a different constant (m, n, p) to get m+8n+27p = 64, etc., then solve the system, multiply each equation by its constant, and add 256. However, this gives me 384, which does not seem to be the correct answer. How do I solve this? Thanks. 24.92.78.167 (talk) 21:15, 25 November 2010 (UTC)[reply]

Let g(x) = f(x) - (2x-1). Then g(1)=g(2)=g(3)=0, so g(x)=(x-1)(x-2)(x-3)h(x) and since g is degree 4 and the coefficient of x^4 is 1, h(x) must be of the form x+k. Then g(0)+g(4)=-6k+6(4+k)=24 and f(0)+f(4)=g(0)+g(4)+(-1)+7=24+6=30.--RDBury (talk) 21:42, 25 November 2010 (UTC)[reply]
Direct, though long approach:
subtract a+b+c+d=0 from 8a+4b+2c+d=−13 and subtract 8a+4b+2c+d=−13 from 27a+9b+3c+d=−76 — you get 7a+3b+c=−13 and 19a+5b+c=−63;
subtracting those two results gives 12a+2b=−50.
Now the answer is: f(0)+f(4) = d + 64a+16b+4c+d+256 = 2(32a+8b+2c+d+128) = 2((24a+4b+112) + (8a+4b+2c+d+16)) = 2(2(12a+2b+56) + (−13+16)) = 2(2(−50+56) + (−13+16)) = 2(2×6 + 3) = 30. --CiaPan (talk) 17:03, 30 November 2010 (UTC)[reply]
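A quick numeric check of the answer, as a Python sketch: by RDBury's argument every monic quartic through (1,1), (2,3) and (3,5) has the form (x-1)(x-2)(x-3)(x+k) + 2x - 1 for some constant k (k is a free parameter, not given by the puzzle), so f(0)+f(4) should come out to 30 for any k:

  def f(x, k):
      return (x - 1) * (x - 2) * (x - 3) * (x + k) + 2 * x - 1

  for k in (-5, 0, 1, 7, 100):
      assert (f(1, k), f(2, k), f(3, k)) == (1, 3, 5)
      assert f(0, k) + f(4, k) == 30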

Proof of e

Hello. Why is e = lim_{n→∞} (1 + 1/n)^n? I don't want to use ln due to circular reasoning. Thanks in advance. --Mayfare (talk) 21:22, 25 November 2010 (UTC)[reply]

In some treatments, the right-hand side of the equation you give is actually the definition of e. There's no general answer to how you prove the equality; it depends on how you define e in the first place. --Trovatore (talk) 21:24, 25 November 2010 (UTC)[reply]
See Characterizations_of_the_exponential_function#Equivalence_of_the_characterizations. Bo Jacoby (talk) 23:43, 25 November 2010 (UTC).[reply]

My math professor provides a nice proof of this limit at the very end of this pdf handout: http://staff.aub.edu.lb/~kmakdisi/201/growth.pdf 89.211.116.63 (talk) 07:30, 26 November 2010 (UTC)[reply]

This item explains why 2 is too small to be the base of the natural exponential function and 4 is too big. Once you understand that, you're well on your way to appreciating the limit you mention. Michael Hardy (talk) 16:18, 26 November 2010 (UTC)[reply]
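A quick numerical illustration of the limit (not a proof), sketched in Python:

  import math

  for n in (1, 10, 100, 10_000, 1_000_000):
      print(n, (1 + 1 / n) ** n)
  print("e =", math.e)   # 2.718281828459045...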

Testing if a symmetric rational matrix is positive semidefinite

Hi.

I have symmetric matrices over Q that I want to test for positive semidefiniteness. If they have negative eigenvalues, these are likely to be of very small magnitude compared to the spectral radius. The rational coefficients may have large numerator or denominator. The following methods do not work, because they are too slow:

  • Compute the characteristic polynomial (even with a multimodular approach), count positive eigenvalues as sign alternations in coefficients.
  • Test for positive definiteness using Sylvester's rule (leading minors > 0). Determinants of such matrices are slow to compute even using a multimodular approach.
  • Compute a Gaussian decomposition mod various p, try to reconstruct the rational coefficients using Chinese remainder and rational reconstruction.

The only method that seems workable is Gaussian decomposition over Q.

Any other ideas? (I have to look at Siegfried Rump's papers.)

David.Monniaux (talk) 21:57, 25 November 2010 (UTC)[reply]
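A minimal sketch in Python of the exact "Gaussian decomposition over Q" test mentioned above (LDL^T-style elimination with Fractions); the function name and structure are illustrative, and nothing here addresses the speed concerns:

  from fractions import Fraction

  def is_psd(a):
      # a: symmetric square matrix given as a list of lists of Fractions.
      # A symmetric matrix is PSD iff every pivot is >= 0 and any zero pivot
      # has an all-zero row/column; eliminate pivots and check as we go.
      n = len(a)
      a = [row[:] for row in a]              # work on a copy
      for k in range(n):
          p = a[k][k]
          if p < 0:
              return False
          if p == 0:
              if any(a[k][j] != 0 for j in range(k + 1, n)):
                  return False               # zero pivot with a nonzero row: indefinite
              continue
          for i in range(k + 1, n):          # Schur complement update
              f = a[i][k] / p
              for j in range(k + 1, n):
                  a[i][j] -= f * a[k][j]
      return True

  m = [[Fraction(2), Fraction(-1)], [Fraction(-1), Fraction(2)]]
  print(is_psd(m))   # True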

I assume the matrix is too large to just compute the eigenvalues? O(n^3) seems decent when the minimum possible is O(n^2). 67.158.43.41 (talk) 23:40, 25 November 2010 (UTC)[reply]
The rational coefficients have large numerator and denominator. Computing the eigenvalues in a sound algebraic way is then too costly (at least with the methods implemented in Sage). I'm however giving a shot at trying to estimate the eigenvalues of least norm; it could be a useful heuristic. David.Monniaux (talk) 14:57, 26 November 2010 (UTC)[reply]
To be clear, I was talking about numerically computing eigenvalues, using something like this algorithm (or one of many others). I didn't mean you should symbolically find eigenvalues through, say, algebraic manipulation of the characteristic polynomial. I'm no expert on numerical linear algebra, but I wonder how much better you're going to get than an O(n^3) algorithm. For instance, Coppersmith–Winograd is the best matrix multiplication algorithm known and is O(n^2.376), and I imagine eigenvalue computation is harder than matrix multiplication. 67.158.43.41 (talk) 18:08, 26 November 2010 (UTC)[reply]
You seem to assume unit cost for arithmetic operations, which is correct in the case of fixed-precision floating point, but which is incorrect in the case of rational numbers with possibly large coefficients. What kills performance is the growth in the size of the numerators and denominators. David.Monniaux (talk) 10:16, 27 November 2010 (UTC)[reply]
Point taken. Since the numerators and denominators are "large" I suppose the magnitudes are delicate enough that the matrix can't be approximated practically with floating point entries for use in numerical computations? (I figure if there's even a hint of a chance, it's worth suggesting.) 67.158.43.41 (talk) 17:54, 27 November 2010 (UTC)[reply]
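For what it's worth, a minimal sketch of the floating-point screening idea discussed in this sub-thread (numpy is assumed; the tolerance handling is only illustrative, and anything within rounding error of zero still needs an exact check):

  import numpy as np

  def probably_psd(rows, rel_tol=1e-9):
      # rows: the symmetric matrix as nested lists of Fractions (or other rationals)
      a = np.array([[float(x) for x in row] for row in rows])
      eigs = np.linalg.eigvalsh(a)                  # real eigenvalues, ascending
      cutoff = rel_tol * max(abs(eigs[0]), abs(eigs[-1]), 1.0)
      if eigs[0] > cutoff:
          return True       # almost certainly PSD
      if eigs[0] < -cutoff:
          return False      # almost certainly not PSD
      return None           # too close to call; fall back to exact arithmetic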
Where did the matrix come from? If it is the square of anything, that would do to establish positive semidefiniteness. If the matrix entries are determined by some sort of formula, you might be able to use Bochner's theorem. Robinh (talk) 08:38, 26 November 2010 (UTC)[reply]
It is not a square. The thing is, I have to test for psd inside an algorithm, because I have to take a certain action iff the matrix is not psd. David.Monniaux (talk) 14:57, 26 November 2010 (UTC)[reply]
If it's a symmetric positive-definite matrix with real entries, then it is a square. Michael Hardy (talk) 15:53, 26 November 2010 (UTC)[reply]
Maybe I was not clear enough. I have a symmetric rational matrix, which may or may not be psd. I need to determine whether it is so. I don't know in advance whether it is a square or not. (And, yes, indeed, any psd matrix is the square of another matrix; this is obvious by diagonalization.) David.Monniaux (talk) 10:14, 27 November 2010 (UTC)[reply]
You are probably best off using floating-point arithmetic. You may need precision greater than the machine precision, but I'm sure there are implementations of large-precision floating point arithmetic. My reasoning is that (speaking as an outsider here) at least half of numerical analysis appears to be clever tricks for computing eigenvalues and eigenvectors, and my impression is that they're very good at it. If the algorithm is numerically stable and if you have enough precision, then this is guaranteed to give you the correct result.
You could use a Cholesky decomposition; the article says it is O(n^3) floating-point operations, but I don't know how exact computation with rationals affects this. If the Cholesky decomposition fails to exist then you've shown that the matrix is not positive definite.
You may be able to use the Gershgorin circle theorem as a preliminary screening step: if none of the Gershgorin discs meet the region of the complex plane having non-positive real part, then the complex eigenvalues of the matrix all have positive real part, which in your case shows that the matrix is positive definite. You may be able to do even better than that by using Householder reflections to make the matrix tridiagonal first (in fact, tridiagonalization applies to any technique, including the ones you mention above as being too slow; it may speed them up enough to be useful). If the Gershgorin circle theorem is inconclusive, then you can apply a more expensive technique; the total improvement may be enough to make your computation practical. Ozob (talk) 15:39, 28 November 2010 (UTC)[reply]
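A rough sketch combining the two suggestions (a Gershgorin-disc screen, then a Cholesky attempt), in floating point with numpy; both are screening steps under the assumption that a float approximation of the matrix is meaningful, not an exact answer:

  import numpy as np

  def gershgorin_positive_definite(a):
      # True if every Gershgorin disc lies strictly in the right half-plane.
      radii = np.abs(a).sum(axis=1) - np.abs(np.diag(a))
      return bool(np.all(np.diag(a) - radii > 0))

  def cholesky_positive_definite(a):
      # True if a floating-point Cholesky factorization succeeds.
      try:
          np.linalg.cholesky(a)
          return True
      except np.linalg.LinAlgError:
          return False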
This is probably old news to you, but for the chance that it isn't: Consider the simpler problem of finding the determinant of a size-n diagonal matrix with rational coefficients of size k. If you do it the naive way of multiplying the entries successively, it will cost O(n M(nk)) (where M(n) is the cost of multiplying n-digit integers). If you find the product recursively by multiplying matched pairs, most of your multiplications will be with small numbers, so I think you'll have only O(M(nk) log n). So for your problem, if you find a divide-and-conquer algorithm which likewise only multiplies numbers of comparable sizes, I think you won't be taxed too much for doing exact arithmetic. -- Meni Rosenfeld (talk) 14:21, 30 November 2010 (UTC)[reply]
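A small sketch of the pairwise ("product tree") idea with Python Fractions; the example entries are arbitrary, and the point is only that each multiplication pairs numbers of comparable size:

  from fractions import Fraction
  from functools import reduce

  def product_tree(values):
      values = list(values)
      while len(values) > 1:
          # multiply adjacent pairs; carry an odd leftover element to the next round
          paired = [values[i] * values[i + 1] for i in range(0, len(values) - 1, 2)]
          if len(values) % 2:
              paired.append(values[-1])
          values = paired
      return values[0] if values else Fraction(1)

  entries = [Fraction(10**6 + i, 10**6 - i) for i in range(1, 101)]
  assert product_tree(entries) == reduce(lambda x, y: x * y, entries)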