Wikipedia:Reference desk/Mathematics
Revision as of 05:42, 18 September 2010
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
September 12
Laplace's Equation
So, I have a function φ satisfying ∇²φ = 0 in V and φ = 0 on S, where V is a region in R^3 and S is the bounding surface, and I have to prove that φ = 0 everywhere in V. I'm not sure that what I've done is airtight, so I just want to check it here. Basically, I've used a combination of Green's First and Second Identities to deduce that a certain integral involving an arbitrary function ψ vanishes, and since there are no restrictions on ψ, it is a completely arbitrary function and so the only way this is always 0 is if φ = 0. Is this a solid argument or does it need tightening up, or is it just plain wrong? Thanks asyndeton talk 13:34, 12 September 2010 (UTC)
- I'm not sure I follow your argument. The integral is over S, but we want to know what φ does on V. We already know φ is zero on S. Rckrone (talk) 15:04, 12 September 2010 (UTC)
- You're right, that was a stupid thing to say. The only other point of significance I can see in my, almost indecipherable, working is another integral identity. This seems like an even weaker argument but can you again apply the arbitrariness of ψ and deduce that φ must be 0? asyndeton talk 15:11, 12 September 2010 (UTC)
- Your argument based on the `arbitrary' nature of ψ seems shaky to me. The function φ is a solution of Laplace's equation, so it is by definition a harmonic function, which is a fairly restricted class of functions with some nice properties. You can probably use Harmonic_function#Maximum_principle to get your result, but our article doesn't seem to mention how this fact is derived. SemanticMantis (talk) 16:09, 12 September 2010 (UTC)
- It seems clear why it works: ∇φ is divergence free in V so if it's non-zero somewhere we can consider the curve σ that follows the direction of ∇φ, along which φ is strictly increasing. σ can't be a closed curve so it has to meet S at each end and then φ along σ must have a strict maximum at one of those end points. If φ=0 on S, then this would be a contradiction and so ∇φ is uniformly zero on V and φ is constant and so zero. That doesn't really seem rigorous though. I'm sure there must be some way to use the vector calculus theorems to make this argument succinctly but I'm very bad at vector calculus. Rckrone (talk) 16:46, 12 September 2010 (UTC)
- Thanks for your answer. I'll have to think on this for a while because it's slightly more sophisticated than I can cope with at first sight! asyndeton talk 09:00, 13 September 2010 (UTC)
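For completeness, here is the succinct vector-calculus argument Rckrone was looking for (a sketch added for clarity; it assumes φ is smooth up to the boundary). Green's first identity with both functions taken equal to φ gives

$$\int_V |\nabla\varphi|^2 \, dV = \oint_S \varphi \, \frac{\partial\varphi}{\partial n} \, dS - \int_V \varphi \, \nabla^2\varphi \, dV = 0,$$

since φ = 0 on S and ∇²φ = 0 in V. Hence ∇φ ≡ 0, so φ is constant on each connected component of V, and the boundary condition forces that constant to be 0.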
Polynomial division
Hi. I've got a puzzle that asks me to divide a certain polynomial by x^2 − 3 and find the remainder. What is a concise way of doing that? Obviously polynomial long division and even synthetic division won't help much here. PS: I might have made a minor error in the coefficients or terms, since I'm recalling the puzzle from memory. If I seem to be mistaken you can make slight alterations, but I know for sure that the dividend is a trinomial with a first term with a huge exponent (x^100), a linear second term, and a constant third term, and that the divisor is the difference between a quadratic first term and a constant. Thanks. 24.92.78.167 (talk) 17:17, 12 September 2010 (UTC)
- We have P(x) = q(x)·(x^2 − 3) + ax + b, writing P for your trinomial, and we want to find the rational numbers a and b, preferably without finding q. Try substituting appropriate values for x. Algebraist 17:26, 12 September 2010 (UTC)
- You can do it in your head by working mod x^2 − 3. Basically just pretend x^2 is equal to 3, so x^100 is equal to 3^50. That reduces the expression to linear and you're done.--RDBury (talk) 18:42, 12 September 2010 (UTC)
- If any further hint is needed, notice that what RDBury is saying is really the same thing as what Algebraist is saying. But if you have further questions, keep talking. Michael Hardy (talk) 22:54, 12 September 2010 (UTC)
I had some time and started the long division looking for a pattern, and I found that the exponent decreases by two every time and the coefficient picks up another factor of 3 (its exponent increases by one). However I'm intrigued by RDBury's comment that 'you can do it in your head by working mod x^2-3'. What is meant by this? Why is it possible to pretend x^2 is equal to 3? Thanks. 24.92.78.167 (talk) 01:05, 13 September 2010 (UTC)
- So the basic idea is that to find the remainder of a polynomial when divided by x^2 − 3, we want to throw out anything that's a multiple of x^2 − 3 and see what's left. One way to do this is to say that now we are operating in an environment where x^2 − 3 is congruent to zero, and any multiple of it is zero as well. We can write this x^2 − 3 ≡ 0. Now any two polynomials are congruent if their difference is a multiple of x^2 − 3. So for example x^2 ≡ 3 since their difference is x^2 − 3, and x^4 ≡ 9 since their difference is x^4 − 9 = (x^2 + 3)(x^2 − 3), which is a multiple of x^2 − 3. In particular we have x^2 ≡ 3, which is to say that in this environment x^2 and 3 are interchangeable. So we can freely turn all the x^2s into 3s to get our polynomial into a congruent one of the form we want, namely one with only a linear and constant term. This congruent polynomial differs from the original by a multiple of x^2 − 3 so it's the remainder we want. This idea is pretty central to dealing with quotient rings of polynomial rings but that might not mean much if you haven't seen any abstract algebra. Modular arithmetic is also relevant, although note that most of that article deals with setting some number congruent to zero while here we are setting a polynomial congruent to zero. Still the idea is the same. Rckrone (talk) 01:46, 13 September 2010 (UTC)
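A quick way to check this kind of reduction is with a computer algebra system. The sketch below uses SymPy; since the puzzle's exact coefficients weren't recalled, the dividend x^100 + 2x + 5 is just a made-up example of the right shape.

```python
from sympy import symbols, rem, expand

x = symbols('x')

# Hypothetical dividend of the shape described above (huge power + linear term + constant).
p = x**100 + 2*x + 5
d = x**2 - 3

# Remainder on division by x^2 - 3, computed directly...
print(rem(p, d, x))

# ...and via the "pretend x^2 = 3" trick: x^100 = (x^2)^50 -> 3^50,
# so the remainder should be 2*x + (5 + 3**50).
print(expand(2*x + (5 + 3**50)))
```

Both lines print the same linear polynomial, which is the remainder.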
Summation Convention
Following on from my question on Laplace's equation, I have to use Cartesian coordinates to show that, for a function φ which depends only on the radial coordinate r,
∂φ/∂x_i = φ′(r) x_i / r
and
∇²φ = φ″(r) + (2/r) φ′(r).
Normally, I can work quite well with the summation convention and the first falls quite nicely into place but I seem to have a fundamental problem when it comes to finding the second one. If I show my working can someone point out the mistake? Caveat lector, my chain rule and use of total/partial derivatives may be a bit sloppy.
Where's it going wrong? Thanks asyndeton talk 20:02, 12 September 2010 (UTC)
- Here's how it should go:
- I couldn't figure out what the mistake was for the longest time until I realized the formula in question had to be dimension dependent. Specifically, in Rn the contraction δ_ii = ∂x_i/∂x_i equals n, and so the derivation needed to reflect that somehow. Rckrone (talk) 04:34, 13 September 2010 (UTC)
- Urgh, now you say it, it's so obvious that I should have had a δ_ij in there. My punishment for trying to be clever and skipping steps out. Many thanks. asyndeton talk 08:58, 13 September 2010 (UTC)
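Since the intermediate formulas above were lost in transcription, here is the standard summation-convention computation of the second identity (a sketch added for clarity; primes denote d/dr and repeated indices are summed):

$$\frac{\partial r}{\partial x_i} = \frac{x_i}{r}, \qquad \frac{\partial \varphi}{\partial x_i} = \varphi'(r)\,\frac{x_i}{r},$$

$$\nabla^2\varphi = \frac{\partial}{\partial x_i}\left(\varphi'\,\frac{x_i}{r}\right) = \varphi''\,\frac{x_i x_i}{r^2} + \varphi'\left(\frac{\delta_{ii}}{r} - \frac{x_i x_i}{r^3}\right) = \varphi'' + \frac{n-1}{r}\,\varphi'.$$

In three dimensions δ_ii = 3, which is where the 2/r comes from; writing δ_ii = 1 instead of 3 is exactly the kind of slip Rckrone describes.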
Probability Question
I am usually good at solving probability problems, but I stumbled upon the following question online and I have no idea how to approach it.
"You pick two cards (one after the other with no replacement) from a deck of 52 playing cards. What is the probability that the second card has a higher rank than the first one".
Please note that I am curious how to approach such a problem - I am not interested in a numerical answer.
Thanks Help Desk - where we get a chance to brush up on our math, after graduating from college :)
78.101.151.192 (talk) 20:44, 12 September 2010 (UTC)
- Looks like 24/51. For the lowest ranked card, the chance of something higher is 48/51. For the highest ranked card, chance of higher is 0/51. For a mid-ranked card it would be 24/51, and as it's linear, that would be the overall probability. David Biddulph (talk) 21:12, 12 September 2010 (UTC)
- First, compute the probability of drawing two cards of equal rank. Then compute the conditional probability of the second card having a higher rank under the condition that the ranks are not equal. Then combine these using Bayes' theorem. – b_jonas 21:50, 12 September 2010 (UTC)
- I do not understand how Bayes' theorem comes in. There is a 48/51 chance of drawing two cards with unequal rank, and if you draw two cards with unequal rank then there's a 1/2 chance the first one will be lower, since it's completely symmetric among the two. Then you multiply. Rckrone (talk) 22:54, 12 September 2010 (UTC)
- Rckrone: You're right, it's not Bayes' theorem, but the Law of total probability. Sorry. – b_jonas 11:26, 13 September 2010 (UTC)
- Generally, to approach such a problem, begin with a similar problem which can be solved by counting rather than by computing. Solve this one: "You pick two cards (one after the other with no replacement) from a deck of 8 playing cards. What is the probability that the second card has a higher rank than the first one". Draw an 8 by 8 table like a chessboard where the rows represent the first card (i) and the columns represent the second card (j). Each of the 64 fields represent the drawing of two cards. The 8 fields i=j are discarded because the same card cannot be picked twice. The 28 fields in the upper right triangle represent that the second card has a higher rank than the first one, i<j. The 28 fields in the lower left triangle represent that the second card has a lower rank than the first one, i>j. (Assuming that different cards have different rank. I am not sure what you mean by the rank of a card. If different cards may have equal ranks, then mark the corresponding fields on the chessboard). The probability you are seeking is the number of favorable cases divided by the number of possible cases. Now you can approach your original problem. Bo Jacoby (talk) 07:44, 13 September 2010 (UTC).
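For anyone who wants to sanity-check the answers above, here is a small brute-force enumeration (a sketch; it assumes a standard 52-card deck with 13 ranks in 4 suits and compares ranks only):

```python
from fractions import Fraction
from itertools import permutations

# Deck as (rank, suit) pairs; ranks 2..14, suits 0..3.
deck = [(rank, suit) for rank in range(2, 15) for suit in range(4)]

# All ordered draws of two distinct cards, without replacement.
pairs = list(permutations(deck, 2))
higher = sum(1 for (r1, _), (r2, _) in pairs if r2 > r1)

print(Fraction(higher, len(pairs)))  # 8/17, i.e. 24/51, matching David Biddulph's answer
```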
September 13
exponents
Hi, I am supposed to solve this and show it in exponential form
(3x)^2 (9x^-1)
So, so far I have done is:
9x^2 (9x^-1)
9x^2 (1/9x)
x (1/1)
x
But I have a feeling I didn't do this right, since I'm supposed to answer it in an "exponential" form. Can anyone help please? --174.5.131.59 (talk) 04:12, 13 September 2010 (UTC)
- Is the expression (3x)^2 (9x)^-1 or (3x)^2 (9x^-1)? If it's the second one, then you should check carefully what you did. Rckrone (talk) 04:40, 13 September 2010 (UTC)
Thanks for showing me the superscripts! It is the second one, (3x)^2 (9x^-1). I did check carefully over and over but the solution seems to be escaping me. Can you give me some tips please? --174.5.131.59 (talk) 05:18, 14 September 2010 (UTC)
- In 9x-1, the exponent is only applying to the x, not to the 9. In your second line, you applied it to the 9.--203.97.79.114 (talk) 05:22, 14 September 2010 (UTC)
- Yes, ab^c means a(b^c), not (ab)^c. E=mc^2 means E=m(c^2)=c^2·m. It is common practice to put the constant, which here is c^2, before the variable, which here is m, but Einstein did not do that in this case, and everybody follows Einstein and writes E=mc^2. Note the silly exception: cm^2 means a square centimeter, not a centi-squaremeter, so cm^2 = (cm)^2. Bo Jacoby (talk) 06:03, 14 September 2010 (UTC).
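To spell out the binding rule with the expression from the question (a clarifying aside, not part of the original replies):

$$(3x)^2\,(9x^{-1}) = 9x^2 \cdot \frac{9}{x}, \qquad\text{whereas}\qquad (3x)^2\,(9x)^{-1} = 9x^2 \cdot \frac{1}{9x},$$

and the two simplify to very different answers.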
Inverse function
How do I invert
to get —Preceding unsigned comment added by 118.208.129.11 (talk) 08:58, 13 September 2010 (UTC)
- First step is to swap the x's and y's. Then make y the subject of the formula by multiplying out by the denominator and grouping like terms. Can't give any more hints than that without actually doing it for you. Ask if you're still stuck. Zunaid 10:21, 13 September 2010 (UTC)
Be careful with notation. y−1 usually means the reciprocal of y. But apparently that's not what you mean here. Michael Hardy (talk) 03:53, 14 September 2010 (UTC)
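Since the original formulas were lost in transcription, here is Zunaid's recipe carried out on a made-up function of the same general type (purely illustrative; it is not the OP's function):

$$y=\frac{x+1}{x-2} \;\longrightarrow\; x=\frac{y+1}{y-2} \;\Longrightarrow\; x(y-2)=y+1 \;\Longrightarrow\; y(x-1)=2x+1 \;\Longrightarrow\; y=\frac{2x+1}{x-1},$$

i.e. swap x and y, clear the denominator, collect the y terms, and solve.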
Math
I'm not so good at maths. Is math something you learn, or something you're always good at? Mediaiiwk (talk) 16:09, 13 September 2010 (UTC)
- It is something you learn, and then you enjoy it, and so you learn even more, and so on. Be patient and have fun doing math, then you too will be good at it. Bo Jacoby (talk) 16:31, 13 September 2010 (UTC).
- Agree with Bo: the biggest obstacle to learning math, for most people, is that they are afraid they won't be good at it. You just have to remember that no one is good at it to start, but if you keep working at it you will get it, and the moment you get it is a really satisfying feeling. --Ludwigs2 19:20, 13 September 2010 (UTC)
- It is similar to the situation with English literature. There are many writers who try very hard and go to all sorts of courses about good writing, but only a small percentage write anything you'd want to read. And even if you're content to read novels rather than try to write them, there are some that can be quite heavy going and yet are acknowledged as great literature. And then there are people who don't read. And then there are people who can't read and, even with good help, have trouble getting what they want in a supermarket. There are quite a few good books in popular maths that are accessible to most people with some interest, like novels are to those who like reading. And, just as writing reasonable English so you can say what you think is helped by a bit of training, most people could do better with their maths and get their way past interest rates, builders' estimates, sizing a room for painting and suchlike tasks with reasonable ease. Dmcq (talk) 10:11, 14 September 2010 (UTC)
September 14
Vector as an integrand
I'd like to know: is it possible for the integrand to be a vector value? For example ∬n dσ, where n is a unit normal vector. Re444 (talk) 12:44, 14 September 2010 (UTC)
- Absolutely it's possible to integrate vectors. I'd only be guessing at what your notation means, though. Why is it a double integral? What are you integrating over? n is normal to what? And what's dσ?
- But if I had to guess, I'd guess that you're integrating over a surface, and dσ is an element of area. Sure, you can integrate that. I probably wouldn't write it as a double integral, though, because that suggests that you're doing two integrations, whereas here you're doing only one (albeit in two dimensions). --Trovatore (talk) 09:54, 14 September 2010 (UTC)
- Yes, in my example the integration is over a surface in 3D. Till now I've run into some vector integral equations whose integrands reduce to scalars, like the divergence and Stokes theorems. In the case of vector integrands, where do they usually appear and how should we deal with them? By decomposing them into their components? And one more question: I've read in a fluid mechanics text that, according to the Gauss theorem, my example integral equals zero for any closed surface. But the integrand in the Gauss theorem is a scalar, not a vector. Re444 (talk) 12:44, 14 September 2010 (UTC)
- Certainly you can decompose to components: if n = (n1, n2, n3), then ∬n dσ = (∬n1 dσ, ∬n2 dσ, ∬n3 dσ) by linearity. I don't immediately see how Gauss' theorem applies to the case of integrating a closed surface's normal vector, but it does seem that it should be zero, by a projection argument (per dimension). --Tardis (talk) 14:46, 14 September 2010 (UTC)
- Something like:
- But can it be done without decomposing the vectors? -- 1.46.187.246 (talk) 16:31, 14 September 2010 (UTC)
- I think the obvious generalization of the definition of the Riemann integral should work in this context, without ever having to give your vectors a coordinate system. There might be some picky details to work out as to what is meant by "refinement", but I expect it should all work.
- If you want to do the (mostly) more general Lebesgue integrals, I expect that to get even pickier, but again, still work. --Trovatore (talk) 18:10, 14 September 2010 (UTC)
- Beautiful! thanks everybody. Re444 (talk) 10:19, 15 September 2010 (UTC)
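For the record, here is one way to make the "projection argument" precise (a sketch added for clarity; S is a closed surface bounding a volume V and n its outward unit normal). For any constant vector c,

$$\mathbf{c}\cdot\oint_S \mathbf{n}\,d\sigma \;=\; \oint_S \mathbf{c}\cdot\mathbf{n}\,d\sigma \;=\; \int_V \nabla\cdot\mathbf{c}\,dV \;=\; 0,$$

by the divergence (Gauss) theorem. Since this holds for every constant c, the vector ∮_S n dσ is itself zero, which is presumably the fact the fluid mechanics text was using.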
Vector Problem
Hi. If I have two known vectors, n1 and n2, and I know scalars d1 and d2, can I find a given the following equations: dot( a, n1 ) = d1 and dot( a, n2 ) = d2.
I feel that this should be very simple - perhaps by taking the dot product of each equation?
Incidentally, it is to find the perpendicular distance from the origin to the line of intersection of two planes, with normals n1 and n2 and perpendicular distances to the origin d1 and d2. Just stuck on this point - any help appreciated!
Thanks. 128.16.7.149 (talk) 16:33, 14 September 2010 (UTC)
- I assume that a is a vector that you are attempting to find. Then, a⋅n1 = a1n11+a2n12+a3n13... There isn't a lot you can do to reduce it. However many elements there are in the variables is how many unknowns you will have.
- I think you will find more luck looking at eigenvectors. It appears that you want d1 to be an eigenvalue of n1 that will produce the eigenvector a. But, I could be reading this question completely wrong. -- kainaw™ 17:46, 14 September 2010 (UTC)
- What's "an eigenvalue of n1" mean, since is a vector (not a matrix)? --Tardis (talk) 17:55, 14 September 2010 (UTC)
- That is my confusion of the question. Related to your answer below, how can arbitrary length vectors produce the results the questioner is looking for? Either the questioner is misusing the term "vector" or a is not a vector (which makes the "dot" implication of "dot product" confusing) or some other issue that I am not reading correctly. He states that he is looking for a perpendicular vector (distance), which made me think of eigenvectors. -- kainaw™ 18:09, 14 September 2010 (UTC)
- You can always find the dot product of two vectors if they are of the same dimension. Without Emil's addition below, the OP has two equations, and a number of unknowns equal to the dimension of the vectors. Thus, if the vectors are in 2 dimensions, there will probably be a unique solution. In higher dimensions you will have more variables and thus many solutions. -- Meni Rosenfeld (talk) 18:31, 14 September 2010 (UTC)
- By the row-wise interpretation of matrix multiplication, your two equations are equivalent to the single matrix equation Na = d, where the rows of N are n1 and n2 and d = (d1, d2). Whether that system has a unique solution or not depends first on the dimension of your vectors: if they're 3-vectors, you can get only a parameterized family of solutions because the matrix is 2×3 (thus not square, thus not invertible). If they're 2-vectors, then for almost all choices of n1 and n2 there is a unique solution a. (If they're 1-vectors (scalars), then for almost all data there is no solution, but that's probably not the case you had in mind.) --Tardis (talk) 17:55, 14 September 2010 (UTC)
- If I understand it correctly, the OP is talking about 3-dimensional space, and his a is also orthogonal to n1 × n2, which gives the third linear equation needed to make it regular: (n1 × n2) · a = 0. The system can then be solved by any method mentioned in the system of linear equations article.—Emil J. 18:26, 14 September 2010 (UTC)
- Another way to state the extra condition is that a is a linear combination of n_1 and n_2. If I write a = u_1 n_1 + u_2 n_2, where u_1 and u_2 are scalars, then (using the fact that n_1 and n_2 are normals, which I take to mean they have norm 1) the equation above simplifies to the 2-dimensional system u_1 + (n_1 · n_2) u_2 = d_1, (n_1 · n_2) u_1 + u_2 = d_2.—Emil J. 18:52, 14 September 2010 (UTC)
Thanks all for your helpful contributions. There are still clearly things I need to learn. The deceptively simple trick here was decomposing the vectors into their components, then simply solving through inverting the matrix (thanks Emil). I think I was distracted by trying to solve the problem using vector operations (dot, cross etc) only. Thanks again. 128.16.7.149 (talk) 09:35, 15 September 2010 (UTC)
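A minimal numerical sketch of the 3×3 approach, with made-up input values (the normals and distances below are illustrative, not from the original problem):

```python
import numpy as np

# Hypothetical unit normals of the two planes and their distances to the origin.
n1 = np.array([1.0, 0.0, 0.0])
n2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
d1, d2 = 2.0, 3.0

# Rows: the two plane equations plus orthogonality to the line direction n1 x n2.
A = np.vstack([n1, n2, np.cross(n1, n2)])
b = np.array([d1, d2, 0.0])

a = np.linalg.solve(A, b)      # foot of the perpendicular from the origin to the line
print(a, np.linalg.norm(a))    # norm(a) is the perpendicular distance
```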
Orthogonality of Fourier basis for L2[R]
The Integral transform article says that the basis functions of an integral transform have to be orthogonal, and then says that the basis functions of the Fourier transform are the complex exponentials e^{iux}. But these basis functions don't seem to be orthogonal (the inner product of two distinct basis functions, over R, doesn't seem to be 0; I know that you have to take the conjugate of one of the basis functions for inner products in a Hilbert space).
So is the kernel of the Fourier transform actually an orthonormal basis for L2[R]? Is there a different definition for orthogonality when operating over the entire real line? (The basis functions don't seem to be square integrable, for instance).
-- Creidieki 17:06, 14 September 2010 (UTC)
- (Since no one actually familiar with the subject gave an answer, I'll try.) I'm afraid the article is oversimplifying the picture. The Fourier transform on R does not directly define an operator from L2 to L2, but from L1 to L∞. The "basis functions" are also only L∞ (moreover, notice that L2 has countable dimension as a Hilbert space, whereas there are continuum many of these "basis functions"). In order to get an operator on L2, you take the Fourier transform on L1 ∩ L2, and (uniquely) extend it by continuity to all of L2. This works because L1 ∩ L2 is a dense subset of L2, the transform takes L1 ∩ L2 functions to L2, and it is a bounded linear mapping. This kind of construction by extension from a dense subspace works in general for any locally compact abelian group in place of R, see Pontryagin duality.—Emil J. 10:49, 15 September 2010 (UTC)
- The Plancherel theorem is relevant to EmilJ's answer. You said: "But these basis functions don't seem to be orthogonal (the inner product of two distinct basis functions, over R, doesn't seem to be 0)." If the "u" you quote is allowed to range over all integers, then the fact that the basis functions form a (countable) orthonormal set is an easy computation. I think that the mistake you are making is to say that the inner product is defined by integration over the entire real line – it is not. The inner product is defined by integration over the interval [0, 2π]. (Of course, when I say "the inner product is defined by integration over" I refer to the "integration" in the following rule for the inner product of two functions:
- ⟨f, g⟩ = ∫₀^{2π} f(x) \overline{g(x)} dx,
- where f and g are usually taken to be complex-valued, measurable, 2π-periodic functions. (Equivalently, they are complex, measurable functions defined on the unit circle in the complex plane.).) Hope this helps! PST 23:59, 19 September 2010 (UTC)
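To make the "easy computation" explicit (a sketch added for clarity, using the inner product above): for integers n ≠ m,

$$\langle e^{inx}, e^{imx}\rangle = \int_0^{2\pi} e^{inx}\,\overline{e^{imx}}\,dx = \int_0^{2\pi} e^{i(n-m)x}\,dx = \left[\frac{e^{i(n-m)x}}{i(n-m)}\right]_0^{2\pi} = 0,$$

while for n = m the integral is 2π. So the family is orthogonal, and orthonormal once each function is divided by the appropriate normalizing constant.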
Octahedral Symmetry
OK, so the group G of both orientation preserving and orientation reversing symmetries of a regular octahedron has order 48. If we then consider the set D that consists of lines adjoining pairs of opposite vertices (so there will be three such lines) and want to find the stabiliser of one of the lines, do we just bash out the orbit stabiliser theorem and say that the size of the stabiliser of a line is the size of G (48) divided by the size of the orbit (3) giving us 16? And is a question asking you to 'identify the stabiliser' likely to mean just find the order of the stabiliser or something more complex? Thanks asyndeton talk 19:15, 14 September 2010 (UTC)
- They probably want you to actually say which group of order 16 it is. (There aren't that many to choose from.) Rckrone (talk) 03:58, 15 September 2010 (UTC)
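For what it's worth, here is one concrete way to see which group of order 16 it is (a sketch added for clarity, not from the original replies). Place the octahedron's vertices at ±e1, ±e2, ±e3, so that G is the group of 3×3 signed permutation matrices, of order

$$|G| = 2^3 \cdot 3! = 48.$$

The stabiliser of the line through ±e3 consists of the signed permutations that fix the set {±e3}: they act as an arbitrary signed permutation on {e1, e2} (8 possibilities, the symmetries of a square) and as ±1 on e3 (2 possibilities), giving 8 · 2 = 16 elements. So the stabiliser is isomorphic to D4 × C2, the symmetry group of a square prism.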
September 15
3rd Axiom of Probability
There is a line in the solution to a homework problem that I don't get.
It reads
Noting that A = (A ∩ B) ∪ (A ∩ Bᶜ) and B = (A ∩ B) ∪ (Aᶜ ∩ B), we can use the third axiom of probability to write
P(A) = P(A ∩ B) + P(A ∩ Bᶜ)
and
P(B) = P(A ∩ B) + P(Aᶜ ∩ B).
I don't see how the 3rd axiom is being used to get from one bit to the next, and I can't see where the second bit comes from —Preceding unsigned comment added by 130.102.158.15 (talk) 03:21, 15 September 2010 (UTC)
- You're missing a few equality signs. It should be
- P(A) = P((A ∩ B) ∪ (A ∩ Bᶜ)) = P(A ∩ B) + P(A ∩ Bᶜ)
- and
- P(B) = P((A ∩ B) ∪ (Aᶜ ∩ B)) = P(A ∩ B) + P(Aᶜ ∩ B).
- Where you have clearly used the fact that the probability of a union (of disjoint events) is the sum of probabilities. -- Meni Rosenfeld (talk) 06:49, 15 September 2010 (UTC)
- It looks like you're missing a couple of equals signs - P(A) = P((A ∩ B) ∪ (A ∩ Bᶜ)) = P(A ∩ B) + P(A ∩ Bᶜ), where the first equality is just using the substitution for A. The second equality is then where the 3rd axiom comes in - since the substitution we're using for A breaks it into two mutually exclusive events (the event where both A and B happen and the event where A happens but B doesn't), the probability of their union (i.e. either "A and B both happen" or "A happens but B doesn't") is the sum of their individual probabilities. The expression for P(B) is similar. Confusing Manifestation(Say hi!) 06:50, 15 September 2010 (UTC)
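A concrete check of the identity with a fair die, taking A = "the roll is even" and B = "the roll is at least 4" (an illustrative example, not the one from the homework problem):

$$P(A) = \tfrac{3}{6} = \underbrace{\tfrac{2}{6}}_{P(A \cap B),\ \{4,6\}} + \underbrace{\tfrac{1}{6}}_{P(A \cap B^{c}),\ \{2\}}.$$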
Polynomial Achieving Infimum
While studying analysis, the following question came up. Let p be a polynomial function of two real variables. Suppose that p(x, y) > 0 for all real x and y. Does every such function attain its infimum? Prove or disprove. My intuition tells me that it is true (because you can't have something like a decaying exponential which will always decrease and get arbitrarily close to zero). A polynomial must have some zeros (maybe not real). But I don't have any idea how to even begin. Help? 174.29.63.159 (talk) 04:25, 15 September 2010 (UTC)
- Here's how I would do it, but there may be better ways to approach it. First it should be clear how to show that a positive polynomial in one variable achieves its infimum over R. Consider all the lines through the origin in R2. On each of these lines L, p can be expressed as a polynomial in one variable, so it attains its infimum bL. Let f: S1 → R map each point x on the circle to bL where L is the line that passes through x. Then show that f achieves its infimum. Rckrone (talk) 05:24, 15 September 2010 (UTC)
- Outside a sufficiently large closed disk, {(x,y) : x² + y² ≤ R²}, you have p(x,y) > p(0,0). The disk is a compact set and a continuous function on a compact set attains its infimum. Bo Jacoby (talk) 08:33, 15 September 2010 (UTC).
- I don't think the first claim is true. For example consider p = x² + 1. I'm also having serious doubts about the strategy I suggested. I thought it would be relatively easy to show that the function f I constructed is continuous, but the more I think about it, the less obvious it is. Rckrone (talk) 12:56, 15 September 2010 (UTC)
- p(x, y) = (xy − 1)² + y² does not attain its infimum. (It took me quite a while to find; in retrospect, it looks like something I would have seen sometime). -- Meni Rosenfeld (talk) 14:15, 16 September 2010 (UTC)
- Ah, the Rabinowitsch trick! How could I miss that (I spent some time trying to fix the arguments above). BTW, also works.—Emil J. 14:36, 16 September 2010 (UTC)
- Wait....can someone explain how the above example(s) work? How does it not attain an infimum? What's happening in the above function? I chucked it into Wolfram Alpha to see what was happening but a 3D plot is none too enlightening... Zunaid 18:31, 16 September 2010 (UTC)
- Oh okay... I worked it out. Choose y as close to zero as you like (so that the y² term is minimized), then choose x = 1/y to make the first term zero. You can therefore make the entire expression as close to zero as you like, however you can't actually attain zero, because once you set y = 0 the first term automatically jumps back up to 1. Hence the infimum is zero but can never actually be attained. Clever! Zunaid 18:36, 16 September 2010 (UTC)
- Exactly. Also, Alpha can give a better visualization if you help it a bit (note that I interchanged x and y). -- Meni Rosenfeld (talk) 19:35, 16 September 2010 (UTC)
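A tiny numerical illustration of the counterexample, along the path Zunaid describes (a sketch added for clarity):

```python
def p(x, y):
    # Meni Rosenfeld's example: positive everywhere, infimum 0, never attained.
    return (x * y - 1) ** 2 + y ** 2

for y in [1.0, 0.1, 0.01, 0.001]:
    print(y, p(1.0 / y, y))   # along x = 1/y the value is exactly y**2, tending to 0

print(p(0.0, 0.0))            # but with y = 0 the first term is back up to 1
```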
Box-Muller Transformation
On a completely different note, I am trying to show that the Box-Muller transform takes two independent uniform random variables (from [0,1] each) to two independent standard normal random variables. So when I was trying to invert the transformation, I came across the expression X2/X1 = tan(2*pi*U2). I have a problem at this point taking the inverse tangent of both sides. My reasoning is this. Since U2 can take on any value in (0,1], 2*pi*U2 can take on any value in (0,2*pi], but tangent is not invertible in this domain (as a function). So how/why can we do this? I took the forward Jacobian and I know that it is never zero on (0,1]x(0,1]. So the entire transformation is invertible. But when I was trying to invert it, I couldn't convince myself that this is valid. How does this work? Thanks! 174.29.63.159 (talk) 04:36, 15 September 2010 (UTC)
- Your expression is correct but not sufficient to define the transformation, which is (quoting the Box-Muller transform article, which uses Z0, Z1 rather than X1, X2)
- Z0 = sqrt(−2 ln U1) · cos(2π U2) and Z1 = sqrt(−2 ln U1) · sin(2π U2).
- Bo Jacoby (talk) 08:17, 15 September 2010 (UTC).
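For reference, a minimal implementation of the forward transform quoted above (a sketch; it just draws a few samples and is not part of the original discussion):

```python
import math
import random

def box_muller(u1, u2):
    """Map two independent Uniform(0,1] variables to two independent N(0,1) variables."""
    r = math.sqrt(-2.0 * math.log(u1))      # radius; u1 must be > 0
    theta = 2.0 * math.pi * u2               # angle in (0, 2*pi]
    return r * math.cos(theta), r * math.sin(theta)

# random.random() returns values in [0, 1); using 1 - random() puts u1 in (0, 1] so log(u1) is finite.
samples = [box_muller(1.0 - random.random(), random.random()) for _ in range(5)]
print(samples)
```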
Riemann Zeta Function
On yet another completely different note, I am doing some reading on Riemann and his zeta function and the prime number theorem. I am just curious where (what journals, good papers, books, or websites) I can find some up to date info on this. I don't know much about complex analysis so something that is easy to follow would be appreciated (like the most recent list of the computed zeros or something). I am just wondering what the most cutting edge research is in this field. Maybe a popular book for the general (slightly math oriented) audience? Thanks! 174.29.63.159 (talk) 04:41, 15 September 2010 (UTC)
- How about Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics by John Derbyshire (ISBN 978-0452285255). --Qwfp (talk) 08:41, 15 September 2010 (UTC)
- The last paragraph of the introductory section of the Riemann hypothesis article actually gives a great list of sources. -- Creidieki 10:57, 15 September 2010 (UTC)
Synthetic division
Please, why does synthetic division work? The article uses too many terms I have not gotten to in class yet. Thank you. —Preceding unsigned comment added by 24.92.78.167 (talk) 23:02, 15 September 2010 (UTC)
- Dividing a polynomial by x−a amounts to multiplying that polynomial by 1/(x−a). Consider expanding 1/(x−a) in a geometric series as follows:
- 1/(x−a) = (1/x) · 1/(1 − a/x) = 1/x + a/x² + a²/x³ + a³/x⁴ + ⋯
- If you multiply your polynomial by this and consider the coefficients of the largest power of x, the next largest power etc. etc. you'll see the synthetic division method emerge. Count Iblis (talk) 00:26, 16 September 2010 (UTC)
- I am not surprised that you are having difficulty understanding our synthetic division article - it is a very poor explanation of the process. Synthetic division is really quite straightforward. Here is an example. Suppose we want to divide by :
- To match the term we need to start by multiplying by
- This gives us . Subtract this from and we are left with .
- So
- To match the term we now need to multiply by
- This gives us . Subtract this from and we are left with
- So
- To match the term we now need to multiply by
- This gives us . Subtract this from and we are left with
- So, putting all this together, we have
- If you know long division then this should look familiar - it is like long division but without any carries. You can think of it as long division in an "infinite" base. Gandalf61 (talk) 15:22, 16 September 2010 (UTC)
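Since the formulas in the worked example above were lost in transcription, here is the bare algorithm for the classic case of dividing by x − a (a Python sketch; coefficients are listed from the highest power down):

```python
def synthetic_division(coeffs, a):
    """Divide the polynomial with the given coefficients (highest power first) by (x - a).

    Returns (quotient_coefficients, remainder). This is just Horner's scheme:
    bring down the leading coefficient, then repeatedly multiply by a and add
    the next coefficient.
    """
    running = [coeffs[0]]
    for c in coeffs[1:]:
        running.append(running[-1] * a + c)
    return running[:-1], running[-1]

# Example: (x^3 - 6x^2 + 11x - 6) / (x - 2) = x^2 - 4x + 3 with remainder 0.
print(synthetic_division([1, -6, 11, -6], 2))
```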
September 16
Semisimple modules completely determined by character trace map?
Hello,
I have been learning quite a bit of representation theory, but I have stumbled upon some problems, since most authors state the results for group algebras over the complex numbers, while I am interested in semisimple algebras and modules in general.
Therefore:
let A be any algebra over any field F, and let M_1,M_2 be semisimple modules over A. Is it true that M_1 and M_2 are isomorphic if and only if the two trace maps from A to F are equal?
I know this is true if F is the field of complex numbers, but I am interested in the most general case.
- How about this? Let A=F=the field of order two, M1=A⊕A (trace always 0) and M2=A⊕A⊕A⊕A (trace also always 0), but M1 and M2 are not isomorphic. The reason for starting with modules over the complex numbers is that the theory is much simpler in that case and you at least get Burnside's theorem. There is an entire field of modular representation theory which is more general.--RDBury (talk) 16:29, 16 September 2010 (UTC)
Rado numbers
Consider the integer linear equation c1·x1 + c2·x2 + ⋯ + cn·xn = 0, where the ci are nonzero integers. Suppose it is given that there is a natural number N such that, if {1,2,...,N} is partitioned into two sets, one of these always contains a solution of the equation. The minimal such N is called the Rado number of the equation. I am looking for general bounds on such N, in the cases where it exists. Where can I possibly find such results? Thanks-Shahab (talk) 11:32, 16 September 2010 (UTC)
Surface Normal
Given that C is the closed curve that is the boundary of the triangle T with vertices (1,0,0), (0,1,0) and (0,0,1), I have to directly evaluate a certain integral, without using Stokes' Theorem, for a given vector field, by showing that the normal can be written in the form (1,1,1)dy dz, but I don't know how to find the normal. I think I'm just being stupid and should know it but I just can't think of it. Can someone help please? Thanks. asyndeton talk 16:29, 16 September 2010 (UTC)
- The equation of the plane through the three given points is x+y+z=1, so the normal is ∇(x+y+z).--RDBury (talk) 16:33, 16 September 2010 (UTC)
- So simple. Cheers. asyndeton talk 16:36, 16 September 2010 (UTC)
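Spelled out, the computation RDBury points to is just the one-line check

$$\nabla(x+y+z) = \left(\tfrac{\partial}{\partial x}, \tfrac{\partial}{\partial y}, \tfrac{\partial}{\partial z}\right)(x+y+z) = (1,1,1),$$

together with the observation that all three vertices satisfy x + y + z = 1, so (1,1,1) is the (unnormalised) normal to the plane of the triangle.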
September 17
Probability distribution with kurtosis parameter
Is there a probability distribution which is similar to the normal distribution but with the parameters being mean, standard deviation and kurtosis? Yaris678 (talk) 13:24, 17 September 2010 (UTC)
- Pearson type VII distribution might fit the bill. --Salix (talk): 13:46, 17 September 2010 (UTC)
- Spot on. Thanks! Yaris678 (talk) 18:46, 17 September 2010 (UTC)
Countability
If there exists a bijection between a set X and the natural numbers, making the set X countable, can the same bijection be used to show that any subset of X is countable? Thanks. asyndeton talk 17:35, 17 September 2010 (UTC)
- I assume by countable you mean countably infinite. And my naive answer is no: if you change the domain or range of a function (e.g. by considering a subset of either), then it is no longer the same function. Zunaid 17:43, 17 September 2010 (UTC)
- I think the obvious approach would be to use the bijection to define functions satisfying the conditions of the Schroeder-Bernstein theorem. AndrewWTaylor (talk) 17:54, 17 September 2010 (UTC)
- The restriction of the original function will define a bijection of the subset and a subset S of natural numbers. Compose it with an increasing enumeration of S to get a bijection with all natural numbers (or initial segment, if the subset is finite).—Emil J. 18:03, 17 September 2010 (UTC)
Let X = {x1, x2, x3, ....} and suppose A is a subset of X. Then let
n1 = the smallest index for which x_{n1} ∈ A,
and let
n2 = the smallest index greater than n1 for which x_{n2} ∈ A, and so on.
Keep going as long as A has not been exhausted. Then
A = {x_{n1}, x_{n2}, x_{n3}, ...}.
Therefore A is either finite or countably infinite. You don't need the Cantor–Bernstein theorem or the like. Michael Hardy (talk) 21:05, 17 September 2010 (UTC)
- It's not even that complicated. The usual definition of a countable set X is that there exists a bijection between X and some subset of the natural numbers. So yes, Asyndeton, the same bijection can be used directly to show that any subset A of X is countable—just restrict the domain of the bijection to A and the codomain to the image of A. —Bkell (talk) 02:58, 18 September 2010 (UTC)
A proof that in euclidean space pi is a constant.
I understand of course that in Euclidean space (geometry) pi, the ratio between the circumference of a circle and its diameter, is a constant. Further, I have been told that in some noneuclidean geometries the value of pi varies with the size of the circle. I am looking for the proof, which I'm sure exists, that in Euclidean geometry (space) pi is a constant. Your help will be most appreciated. 71.112.10.31 (talk) 18:02, 17 September 2010 (UTC)
- If I was going to construct my own proof, I would start with the fact that Euclidean spaces are vector spaces. Yaris678 (talk) 18:52, 17 September 2010 (UTC)
First try to show that for any two circles in the Euclidean plane, there's a mapping from the space to itself that multiplies all distances by the same constant that takes one of them to the other. (Actually, saying "same" and "constant" is redundant.) Michael Hardy (talk) 21:08, 17 September 2010 (UTC)
- No, saying "same" and "constant" is not redundant; different distances could (in principle) be multiplied by different constants. They aren't, of course, which is the point. --COVIZAPIBETEFOKY (talk) 22:07, 17 September 2010 (UTC)
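A sketch of how Michael Hardy's suggestion finishes the argument (assuming such a distance-scaling map has been constructed): if f multiplies every distance by the constant k > 0 and takes the first circle to the second, then it takes a circle of circumference C and diameter d to one of circumference kC and diameter kd, so

$$\frac{kC}{kd} = \frac{C}{d},$$

i.e. the circumference-to-diameter ratio is the same for the two circles. Since this works for any pair of circles in the Euclidean plane, that ratio is a single constant, which is what we call π.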
Request for statistics article review
A new editor has written an article Multivariate kernel density estimation and requested feedback. The editor seems to have basic Wikipedia style under control; it needs the review of someone familiar with the field. (I'll also crosspost a couple specific editors.)--SPhilbrickT 21:12, 17 September 2010 (UTC)
- Looks like a very nice article. The main improvement I can think of is that the exposition could use a few more wikilinks. 67.119.14.196 (talk) 05:42, 18 September 2010 (UTC)
September 18
Finding maximum clique in a largish graph
I have a particular graph with 67,230 vertices and 1,116,605,284 edges, and I'd like to find a maximum clique in it. I realize this is an NP-complete problem. However, maybe there is some algorithm that will run fast enough for me to get an answer for this graph. Does anyone know about the run times of various algorithms for specific sizes of graphs?
On the other hand, for my purposes a "pretty big" clique would probably be good enough. I'm thinking about taking a greedy approach and doing that several times to try to find a "pretty big" clique; since my graph contains 49.4% of the possible edges, it might look similar to the random graph G(n, 1/2), so a greedy approach might get me within a factor of 2 (based on what I read in the fourth paragraph of Clique problem#Special classes of graphs). Does anyone have any suggestions for this or another approach I should take here? Otherwise I'm probably just going to try something and see how well I can do. —Bkell (talk) 01:25, 18 September 2010 (UTC)
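In case it is useful, here is the kind of random-restart greedy heuristic described above (a sketch; `adjacency` is assumed to be a dict mapping each vertex to the set of its neighbours, however the real graph gets loaded, and the demo just builds a small G(n, 1/2)-style graph):

```python
import random

def greedy_clique(adjacency, order):
    """Scan vertices in the given order, keeping each one adjacent to everything chosen so far."""
    clique = []
    for v in order:
        if all(v in adjacency[u] for u in clique):
            clique.append(v)
    return clique

def best_greedy_clique(adjacency, restarts=100):
    """Random-restart greedy: shuffle the vertex order each time, keep the largest clique found."""
    vertices = list(adjacency)
    best = []
    for _ in range(restarts):
        random.shuffle(vertices)
        candidate = greedy_clique(adjacency, vertices)
        if len(candidate) > len(best):
            best = candidate
    return best

# Small stand-in for the real 67,230-vertex graph: edge probability about 1/2.
n = 200
adjacency = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < 0.5:
            adjacency[u].add(v)
            adjacency[v].add(u)

print(len(best_greedy_clique(adjacency)))
```

For the full graph the main practical constraint is memory: about 1.1 billion edges stored as adjacency sets is expensive, whereas a packed bit matrix needs only one bit per vertex pair, roughly 280 MB for 67,230 vertices, and makes the "is v adjacent to u" test cheap.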
Placing equally spaced vertices in a curve (but not based on arc length)
I'm trying to find the name of this problem so I can read the existing literature on the subject, but I'm not finding anything.
I have a real-valued, continuous function f(x). For a given value d, I want to find a value x1 so that the distance from the point (x0, f(x0)) (the previously placed vertex) to the point (x1, f(x1)) equals d.
Does this sound familiar to you guys? How can I read more on these kinds of problems? Could you point me in the right direction on which methods I could use to solve it? Approximations are fine.
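One standard way to attack it numerically (a sketch with hypothetical parameter choices; it is not an answer from the desk): treat it as root finding in x1. The chord length g(x1) = dist((x0, f(x0)), (x1, f(x1))) − d is continuous, negative at x1 = x0, and eventually positive as x1 grows (the horizontal distance alone eventually exceeds d), so bisection applies once the root is bracketed.

```python
import math

def next_vertex(f, x0, d, step=1e-3, tol=1e-12):
    """Find x1 > x0 such that the straight-line (chord) distance from (x0, f(x0)) to (x1, f(x1)) is d."""
    def g(x1):
        return math.hypot(x1 - x0, f(x1) - f(x0)) - d

    # Expand to the right until the chord is long enough, bracketing a root of g.
    hi = x0 + step
    while g(hi) < 0:
        hi = x0 + 2.0 * (hi - x0)
    lo = x0

    # Plain bisection on [lo, hi]; scipy.optimize.brentq would also work here.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: place points spaced 0.5 apart (by chord length) along y = sin(x).
x, pts = 0.0, [(0.0, math.sin(0.0))]
for _ in range(5):
    x = next_vertex(math.sin, x, 0.5)
    pts.append((x, math.sin(x)))
print(pts)
```

Note that if the curve wiggles a lot within distance d there can be several valid x1; the bisection above returns one of them, and finding the smallest takes a little more care.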