Wikipedia:Reference desk/Mathematics: Difference between revisions

Just a thought, if the [[electromagnetic spectrum]] was represented by population/people, how many people would see the Light? If the spectrum was so large as to not include Earth's current population, how many years would it take for one person to see Visible Light? --[[User:Specialagent777|i am the kwisatz haderach]] ([[User talk:Specialagent777|talk]]) 17:41, 15 June 2011 (UTC)

:I think it would just be (highest wavelength of visible - lowest of visible) / (highest of spectrum - lowest of spectrum), then multiply by Earth's population. [[User:KyuubiSeal|KyuubiSeal]] ([[User talk:KyuubiSeal|talk]]) 18:19, 15 June 2011 (UTC)

[http://www.youtube.com/watch?v=kfS5Qn0wn2o The Universe], in an episode on nebulas, says ''visible light is represented by only 1 inch on an electromagnetic scale of over 2000 miles''. Going by this show, I take 63,360 inches per mile × 2000 miles to get 126.72 million inches. With the world's population at 6.92 billion, dividing 6.92 billion by 126.72 million gives about 55 people seeing the Light. --[[User:Specialagent777|i am the kwisatz haderach]] ([[User talk:Specialagent777|talk]]) 20:48, 15 June 2011 (UTC)
Revision as of 20:49, 15 June 2011
This is the Mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
June 9
Total ordering
Can ZF (without choice) prove every set can be totally ordered? Not well ordered, just total. Money is tight (talk) 04:28, 9 June 2011 (UTC)
- No. If every set can be totally ordered, then you can always find a choice function for any set of unordered pairs, which is not provable in ZF. (Suppose you have a set of unordered pairs, then linearly order the corresponding set of ordered pairs, and from a pair {a,b}, choose a if (a,b) appears before (b,a) in the linear order, otherwise b.) --Trovatore (talk) 04:33, 9 June 2011 (UTC)
- Actually I guess it's a bit simpler than that — just linearly order the union of all the pairs, and then from each pair, take the lesser one in the linear order. Of course that just pushes the question back to why you can't prove in ZF that there's always a choice function on sets of pairs. I don't know the answer to that for sure. I want to say that a choice function on pairs of equivalence classes for the Vitali relation will somehow give you something that can't exist in a model of ZF+AD, say a non-measurable set of reals or one without the property of Baire, but the details escape me. --Trovatore (talk) 04:51, 9 June 2011 (UTC)
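Trovatore's simpler reduction can be written out explicitly (my own spelling-out of the argument above, not part of the original thread):

```latex
% Linear orderability implies choice on sets of pairs (over ZF).
Let $\mathcal{P}$ be a set of unordered pairs and let $\preceq$ be a
linear order on $U = \bigcup \mathcal{P}$. Define
\[
  f(\{a,b\}) =
  \begin{cases}
    a & \text{if } a \preceq b,\\
    b & \text{otherwise.}
  \end{cases}
\]
Since $\preceq$ is total, $f$ is well defined and $f(\{a,b\}) \in \{a,b\}$,
so $f$ is a choice function on $\mathcal{P}$.
```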
- Ah, I think I can do it. Suppose there's a choice function on all pairs of equivalence classes of the Vitali equivalence relation. Graph that on the plane, meaning that we put (x,y) into the graph just in case, when you apply the choice function to the pair ([x],[y]), where [x] means the equivalence class of x, the choice function gives you back the equivalence class of x, not the equivalence class of y.
- OK, what can we say about this graph? It has to be invariant under rational shifts: If (x,y) is in the graph, then so is (x+r,y+s) for any rational r and s; that's just by the definition of the Vitali relation. Moreover, it's complemented when you reflect it across the main diagonal, in the sense that if (x,y) is in the graph, then (y,x) is not in the graph (except for the case that x is a rational shift of y, which we should easily be able to argue is a meager set of exceptions).
- So now suppose this set in the plane has the property of Baire. That implies that either (i) it's meager, which it can't be because then the plane would be the union of three meager sets, namely the set, its reflection across the main diagonal, and the set such that x is a rational shift of y. Or (ii), there's some little neighborhood on which the set is comeager.
- But in case (ii), cut that little neighborhood down to something nice, say a circular disk or a square. Then when you reflect across the main diagonal, you have to get a neighborhood on which the set is meager, by the complementing property we discussed.
- But surely there's some rational shift that takes the neighborhood mostly onto the reflected neighborhood. Now you have a contradiction.
- Anyone see any mistakes? --Trovatore (talk) 11:01, 9 June 2011 (UTC)
- Thanks. Money is tight (talk) 00:15, 10 June 2011 (UTC)
How many observations are needed to reach a certain accuracy?
Say you wanted to see what cloud cover was for each month of the year and put up a machine that took 1 reading an hour for 5 years.
So you might get something like Jan 0/10 38%, 1/10 5% ... 9/10 4%, 10/10 25%; Feb 0/10 34% etc. or whatever.
You'd be 50% sure of being how close to the real value? What about 95%? What formula do I use? Is it 1/√(trials)? Then again, I've seen months when a weather system or high pressure stays stuck over the area for 10, 14 days in a row when days are supposed to be close to a random bag here. Get one of those in your month and your accuracy would be pretty much thrown off, right? Maybe you really need 30 years like the climatologists use?
- The observations are not independent; that is, if you have clouds on day one, this changes the probability of clouds on day two, compared to no data for day one or no clouds on day one. So before you can say anything about the accuracy of your observations you would need a model for that dependency. Taemyr (talk) 19:36, 9 June 2011 (UTC)
- I don't know what the dependency is, but from about a decade of noticing I'd say streaks are usually 1, 2, sometimes 3 days, and within the same day it gets more correlated. 5 days in a row on the weather forecast and I start noticing. Last month there were 10 days in a row when clouds rolled in maybe twice, and only for a few hours (no rain that long makes allergies really bad); it was pretty much overcast 10/11 days in a row in June of '09, and I saw almost no sun for a half month in 2005. Dependency is like this year round, except in early summer much of the rain comes from quick thunderstorms, so an otherwise sunny day will be dark+pouring and back again in 3 hours. Sagittarian Milky Way (talk) 22:40, 9 June 2011 (UTC)
- I don't completely understand your question. What do you mean by "what cloud cover was for each month of the year"? What does "Jan 0/10" mean? --Tango (talk) 19:36, 9 June 2011 (UTC)
- They list cloud coverage for all 12 months in "percent of the time that the sky is 0/10ths covered", "percent of the time it's 1/10ths", "percent of the time that it's 2/10ths", 3/10ths, etc. up to 10/10ths, presumably because the instrument that assesses area is not that accurate. Sagittarian Milky Way (talk) 22:40, 9 June 2011 (UTC)
- Ok, so where does the error come in? If you are measuring it, then surely you are getting it right... What is it you are actually trying to work out? --Tango (talk) 23:09, 9 June 2011 (UTC)
- Ignoring the details, our articles on standard error (statistics) and confidence intervals probably address your question. Looie496 (talk) 22:52, 9 June 2011 (UTC)
- ...if you can make sense of them, that is. The quality of writing in those articles is pretty pathetic. Looie496 (talk) 22:55, 9 June 2011 (UTC)
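To make the standard-error idea concrete, here is a rough sketch (my own illustration, not from the thread): it assumes independent readings, which the discussion above notes is optimistic, and the "effective sample size" figure is an arbitrary assumption chosen to show how correlation widens the interval.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of p_hat
    return p_hat - z * se, p_hat + z * se

# One month observed hourly for 5 years: about 5 * 30 * 24 readings.
n = 5 * 30 * 24
print(proportion_ci(0.38, n))      # width is under +/- 2 percentage points

# If weather streaks mean only ~1 independent reading every 2 days,
# the effective sample size is far smaller and the interval balloons:
n_eff = 5 * 15
print(proportion_ci(0.38, n_eff))  # roughly +/- 11 percentage points
```

This is why long correlated spells hurt accuracy far more than the raw count of hourly readings suggests.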
Triangular calculation equation
Probably due to lack of sleep, I just can't wrap my brain around what I am sure is a simple problem. I have an array indexed 1, 2, 3, 4... Each index in the array has an integer such that the value in the index is less than the index before it. An example: {5,4,3,2,1}. That is optimal. Each index is exactly one less than the one before it. However, this program sometimes has one (and only one) occurrence where one index will be two less than the one before it: {6,5,4,2,1}. In this case, I want to remove the far right value and distribute it among the other values to get the optimal distribution: {6,5,4,2+1} = {6,5,4,3}. Sometimes, it won't work easily: {6,5,3,2} = {6,5,3+2} = {6,5,4} with a remainder of 1. I am doing this with a loop, removing one column on the right at a time. Isn't there a way, given the number of columns, value of the left column, and the column in which the value is 2 less than the previous column, that I can perform a single calculation to see how many columns I need to remove and distribute? -- kainaw™ 12:41, 9 June 2011 (UTC)
- I think it depends on the factorization of the sum, call it p say. If p is a prime then there are only going to be one or two possible arrays. For example with p=23 there are only {23} and {12,11}. If you have a factorization of p then there is a calculation, but without it it's probably not possible.--RDBury (talk) 15:55, 9 June 2011 (UTC)
- 2n has only a single representation as a sum of consecutive integers: {2n}. Gandalf61 (talk) 16:11, 9 June 2011 (UTC)
- Thanks, but the problem I was working on doesn't require a perfect distribution. It looks for an optimal distribution in which there is a remainder left over that, compared to the size of the columns, is negligible. As I showed with {6,5,3,2}, the optimal solution is {6,5,4} with one left over. So, given the number of columns (C=4), the size of the left-most column (S=6), and the column in which the error occurs (E=3 if we count them 1, 2, 3, 4), how many columns will be left in the optimal solution (O=3) - and the remainder can be included (R=1). This is an easy problem to solve by iterating over "remove the far-right column and place them on E, E+1, E+2, E+3... until all columns have been filled. If E!=O, do it again." I am trying to get O=f(C,S,E) without the iteration because the size I'm working with is around C=1,000,000, S=1,000,000, and E=250,000. After calculating O and R, I have to go to the next step of grouping the R's together for another task. -- kainaw™ 17:18, 9 June 2011 (UTC)
- Now that I'm awake, I can explain why this is hard for me to think about... Assume you look at it and you realize that you need 10 to fill the top row (and also assume the far right column has a value of 1). So, it is easy to calculate that 1+2+3+4 = 10. Therefore, removing the right 4 columns will give you the necessary amount to fill the top row - but that won't work. You just removed 4 columns. You only need 6 to fill the top row now. The correct answer was that removing 1+2+3 = 6. But, you only removed 3 columns this time. You need 7 to fill the top row. The original answer was the correct one. Remove 4 columns and have a remainder of 4. It is the removal of columns that makes it harder than simply calculating 1+2+3+...+n = X for some value X. -- kainaw™ 19:08, 9 June 2011 (UTC)
- Take your case {6,5,3,2}. Why can you not go {6,5,3,2}->{6+1,5,3+1}->{7+2,5+2}->{9+7}->{16} with no remainder? — Preceding unsigned comment added by Taemyr (talk • contribs) 19:29, 9 June 2011 (UTC)
- The goal is not to trick the problem into a trivial solution. The goal is to find out how many columns are required to fill the incomplete top row. -- kainaw™ 19:36, 9 June 2011 (UTC)
- OK. So you can only add points below the gap? (Or rather, what you are trying to optimize is maximizing the number of columns; the remainder is irrelevant.) Take the case where you need 10 points. This means that your set goes {...12,10,9,8,7,6,5,4,3,2,1}? You remove the one, so you are left with needing 8 (the value of the one + one column less to fill). Then the 2, at which point you need 5. After the 3 you need 1, and when you remove the 4 you are done and have a remainder of 4.
- If this is correct then:
- (n+1)+(n+2)+...+(n+m) = n*m+(1+m)m/2, which will be the value you redistribute if you remove m columns where the least has value (n+1).
- To this we must add the number of columns removed, so (n+1)*m+(1+m)m/2.
- We want to find the lowest m such that (n+1)*m+(1+m)m/2 >= target.
- 0.5m^2+(1.5+n)m-target >= 0.
- Switching n such that we look at the least value being n, rather than n+1, we get
- 0.5m^2+(.5+n)m-target >= 0
- So ceil(-(.5+n)+sqrt((.5+n)^2+2*target)) columns need to be removed.
- Barring misunderstandings or outright mistakes on my part. Taemyr (talk) 20:19, 9 June 2011 (UTC)
- Thanks. I was having extreme difficulty thinking yesterday. I just couldn't make the jump to reducing the columns required to fill as the columns were being used. -- kainaw™ 12:38, 10 June 2011 (UTC)
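Taemyr's quadratic can be sanity-checked against the brute-force removal loop (a quick sketch of my own; here `n` is the value of the right-most column, so the removed columns have values n, n+1, ..., and `target` is the number of points needed to fill the short row; the code takes the positive root m = -(n+0.5) + sqrt((n+0.5)^2 + 2*target) of the quadratic, rounded up):

```python
import math

def columns_needed_loop(n, target):
    """Remove columns n, n+1, n+2, ... from the right; each removed
    column contributes its value and also shortens the row to fill
    by one.  Return how many columns must be removed."""
    m, have = 0, 0
    while have < target:
        have += (n + m) + 1   # column value plus the one-shorter row
        m += 1
    return m

def columns_needed_closed(n, target):
    """Smallest m with 0.5*m**2 + (n + 0.5)*m >= target."""
    b = n + 0.5
    return math.ceil(-b + math.sqrt(b * b + 2 * target))

# The worked example from the thread: need 10 points, right column is 1.
print(columns_needed_loop(1, 10), columns_needed_closed(1, 10))  # 4 4
```

The two functions agree over a range of small inputs, which is a cheap way to gain confidence in the closed form before using it at C = 1,000,000 scale.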
Tetrahedrons and triangular graphs
I've used triangular graphs before, and the requirements for one (as I know the term; there seems to be another) to be valid could be said to be {x,y,z} where x+y+z=100%. I've been trying to picture, without success, a hollow tetrahedron (OK so far!). Then there's some point within the tetrahedron with lines coming from it perpendicular to each of its sides - four of them. At the point they touch the sides, each touching point is read like a triangular graph. How many unique values does it create? I can't work out how many of the 12 values are duplicates. If some are, how does that change the requirement for it to be valid? Thanks, Grandiose (me, talk, contribs) 19:23, 9 June 2011 (UTC)
- I don't understand this question. Could you explain what you mean by "triangular graph"? —Bkell (talk) 18:12, 10 June 2011 (UTC)
- Something like this. So each point in the triangle has three components that sum to 100%. Grandiose (me, talk, contribs) 09:22, 11 June 2011 (UTC)
- Your reference is an awful graph. The shadows on the font make things very hard to read. Anyway, your triangular graph could just as easily be drawn as a standard x,y graph (at right angles) and you can come along afterwards and mark in the derived z coordinate lines. Similarly a hollow tetrahedron could be drawn as a standard x,y,z 3-D graph. Everything else is derived from that - any u, v or w coordinates you may choose to invent. My "tetrahedron" will not have the "right" solid angle (0.55 steradians), but will have 0.5π (1.57) steradians, and again you can come along afterwards and mark angled planes of constant u, v, or w. Does this help at all? -- SGBailey (talk) 20:34, 11 June 2011 (UTC)
- Other than explaining why no such concept occurs in mathematics, not really (it's a bit over me :)). It's the "Similarly a hollow tetrahedron could be drawn as a standard x,y,z 3-D graph" I'm struggling with, mostly as to the consequences, as with the original tetrahedron. If our original triangle was actually two values, and a third derived one, and you think the tetrahedron can be shown to be a 3D one, does that mean that the tetrahedron is actually based on three values, and therefore, of the 12, it is possible to identify a point with only three? That would be a very useful answer to my question. Grandiose (me, talk, contribs) 20:51, 11 June 2011 (UTC)
- A hollow tetrahedron is a surface. There are simple functions that will map this surface onto a flat plane - just unfold it. Once on the flat plane, x,y is sufficient to define a point. Thus two coordinates are still sufficient to define all coordinates on the tetrahedron. However there will now be a mix-up of unfolded x and y against whatever parameters you have on the tetrahedron's edges - you have 6 edges, giving four triangular graphs: abc on the base, ade on one side, bef on another side and cfd on the third side. I've no idea what such a shape could meaningfully plot and I don't know where you get "12" from. -- SGBailey (talk) 23:15, 11 June 2011 (UTC)
- The triangular graph is an example of barycentric coordinates. This can be extended to three dimensions. Gandalf61 (talk) 21:27, 11 June 2011 (UTC)
Thanks both, but I don't think I've quite managed to explain clearly enough; although Gandalf's link is the right thing, I'm still struggling on the impact. Each of our data points would be a point inside the tetrahedron. If there were lines from that point to each of the faces (perpendicularly), we would have four intersection points; we could read off the two values and a third derived one from each. So superficially that makes twelve. However, we could define every point in usual three-dimensional coordinates, so it's clear that there are only 3 underlying numbers behind the twelve readings, just as there are actually two behind each point on a triangular graph. Putting that aside for the moment, how many of the 12 are necessarily the same? How do they relate to each other? If we can say that a+b+c=100% for a triangular graph, what similar things can we say about the tetrahedron version? Thanks Grandiose (me, talk, contribs) 15:17, 12 June 2011 (UTC)
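One known fact bears directly on the a+b+c=100% question (I am adding it on my own authority, not from the thread): in a regular tetrahedron, the four perpendicular distances from any interior point to the faces sum to a constant, the tetrahedron's height. This is the 3-D analogue of Viviani's theorem for the triangular graph, and it means the four readings carry only three degrees of freedom. A quick numerical check:

```python
import itertools
import numpy as np

# Vertices of a regular tetrahedron (edge length 2*sqrt(2)).
V = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)

def face_distances(p):
    """Perpendicular distances from point p to the four faces."""
    out = []
    for face in itertools.combinations(range(4), 3):
        a, b, c = V[list(face)]
        n = np.cross(b - a, c - a)
        n /= np.linalg.norm(n)          # unit normal to the face
        out.append(abs(n @ (p - a)))
    return out

# Centroid and an arbitrary interior point give the same sum of distances.
for p in [V.mean(axis=0), np.array([0.2, 0.1, 0.05])]:
    print(sum(face_distances(p)))       # both print ~2.3094 (= 4/sqrt(3))
```

So the tetrahedral analogue of a+b+c = 100% is that the four face-distances, each divided by the height, sum to 1.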
Uncountable Godel Theorem
As I understand it, Godel's theorems only apply to logical systems in which it is possible to enumerate all the theorems. Suppose we had a system of uncountably many mutually independent axioms for arithmetic. Then, it would be impossible to enumerate all theorems, or even all axioms, simply because there would be too many of them. Would Godel's theorem, or some modern variant, still apply to this fattened system? (I think that such a system could be constructed as follows: Let 0 stand for PA, and N stand for "for M<N, all the axioms of M plus 'M is consistent.'" Then by induction up the countable ordinals, there must be an uncountable such system, in which all axioms are independent and in some sense self-evident. Is this correct?) Black Carrot (talk) 23:20, 9 June 2011 (UTC)
- To clarify, you're asking about Godel's Incompleteness Theorem. Godel has another famous theorem confusingly known as the Completeness Theorem.
- To answer your question, yes, the incompleteness theorem requires in an essential way that the theorems can be effectively enumerated. You don't need to go uncountable to see this, though; just let T be the theory consisting of all sentences which are true in ℕ (the standard model of arithmetic). Then it's trivially complete and consistent.
- Your method of building an uncountable system doesn't work, though; just by counting, there are only countably many sentences expressible in the language of PA, so it can't go uncountable. The reason your recursive construction breaks is that at some countable ordinal, there's no way to code "M is consistent", since M isn't a computable set. I suspect this happens at ε0, but I'm not sure.--121.74.111.168 (talk) 04:53, 10 June 2011 (UTC)
- You can go way past ε0. It's pretty clear what to do up to ω1CK, the first non-recursive ordinal. My understanding is that if you iterate to ω1CK, you get a (non-computable) theory that proves all true statements.
- I imagine someone has tried to push it further, but I don't know exactly in what way. Some new idea would be required. --Trovatore (talk) 20:23, 10 June 2011 (UTC)
- Interesting. Do you know where I could look to find out more about that? Is that a new result, or has it been known for a long time? Black Carrot (talk) 23:08, 10 June 2011 (UTC)
- It seems like I saw a paper, or at least a citation of a paper, that mentioned it, sometime in the last year, but I can't remember the details and I'm not sure even what search terms to use. If I come across it I'll try to let you know. I think it might be a reasonably standard result in the theory of proof-theoretic ordinals, which is something I don't know too much about. --Trovatore (talk) 10:20, 11 June 2011 (UTC)
- Try Beklemishev, L. D. (2003). "Proof-theoretic analysis by iterated reflection". Arch. Math. Logic. 42: 515–552. doi:10.1007/s00153-002-0158-7. --Trovatore (talk) 01:21, 12 June 2011 (UTC)
- If you feel like trying to prove it yourself (almost always more enlightening than reading someone else's proofs), you might first learn about ordinal notations, if you don't already know about them, and think a bit about how you might do your iteration relative to an ordinal notation. Then think about whether two notations for the same ordinal give you the same theory if you iterate along them. I'm not sure whether they do or not. If they don't, maybe they give the same theory; try to prove that. --Trovatore (talk) 10:24, 11 June 2011 (UTC)
June 10
Functional powers
Beginning with the usual definition of f−1 for a function f, and of fn as the n-fold composition f ∘ f ∘ ... ∘ f, I was led to the idea of functional roots (i.e. if g2 = f, then g = f1/2) and then to rational powers of functions (i.e. fa/b = (f1/b)a). From here, I had several questions:
- How could you extend the definition to real or complex powers of functions? Is it even possible, given the duplicity of things like f1/2 (i.e. if f(x) = g(x) = x and h(x) = 1/x, then g(g(x)) = h(h(x)) = f(x), so g and h are both "f1/2" for the right domain)? If it is possible, you could consider taking functions to the power of other functions, which is an interesting concept (to me, anyway).
- How do you go about graphing or finding formulaic approximations for functions like rin = sin1/2 (see here)? How was it determined that as n goes to infinity, sin1/n goes to the sawtooth function?
- Do these notions have any particular application?
Thanks in advance for taking the time with these loaded and naïve questions. —Anonymous DissidentTalk 12:44, 10 June 2011 (UTC)
- Your questions will take you into the area of functional equations. One complication you will encounter is that the functional square root of a function is usually far from unique - for example, the identity function f(x) = x
- has functional square roots
- g(x) = a − x for any real (or complex) number a. Gandalf61 (talk) 14:13, 10 June 2011 (UTC)
- The set of all "square roots" of the identity function on an arbitrary set S is in natural one-to-one correspondence with the set of all partitions of S into sets of size 1 or 2. --COVIZAPIBETEFOKY (talk) 15:00, 10 June 2011 (UTC)
You can define a generator of a function as follows. If:
then we can consider g(x) to be a generator of f(x). Count Iblis (talk) 15:44, 10 June 2011 (UTC)
- I get a one-to-one correspondence between the complement of a projective algebraic variety in the complex projective plane and the Möbius transformations that are functional square roots of g(z) = z. Let [a : b : c] be in CP2, with a2 − bc ≠ 0, and define ƒ(z) = (az − b)/(cz − a);
- we see that ƒ has the property that (ƒ ∘ ƒ)(z) = z. — Fly by Night (talk) 11:32, 11 June 2011 (UTC)
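This family of Möbius involutions can be checked numerically. The sketch below uses the representative form f(z) = (az − b)/(cz − a), which is my assumption about the intended parametrisation (chosen because its matrix has determinant −(a² − bc), so the non-degeneracy condition matches):

```python
import random

def mobius(a, b, c):
    """The Mobius map z -> (a*z - b)/(c*z - a); its own inverse when a^2 - bc != 0."""
    return lambda z: (a * z - b) / (c * z - a)

random.seed(1)
for _ in range(1000):
    a, b, c = (random.uniform(-5, 5) for _ in range(3))
    if abs(a * a - b * c) < 1e-6:
        continue                        # degenerate parameters: not a Mobius map
    f = mobius(a, b, c)
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    if abs(c * z - a) < 1e-3 or abs(c * f(z) - a) < 1e-3:
        continue                        # too close to a pole for float arithmetic
    assert abs(f(f(z)) - z) < 1e-6      # f composed with f is the identity
print("all checks passed")
```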
If you're just interested in constructing the functional square root, then the method of Kneser (referenced in our article) seems to be worth studying. This paper constructs a functional square root of the exponential function, and the method seems like one could work it out without a great deal of specialized knowledge. If you're interested in looking at the whole semigroup (if there is one—which seems to me a little unlikely) of functional roots , then some basic papers on this subject appear to be Erdos and Jabotinsky (1960) [1] and Szekeres (1958) "Regular iteration of real and complex functions", Volume 100, Numbers 3-4, 203-258, DOI: 10.1007/BF02559539. It seems to be a theorem that there is no way to define non-integer iterates of the exponential function so that the semigroup property holds (maybe along with analyticity in the iteration parameter, it's not clear to me what the rules are). The following paper also seems to be worth looking at: Levy, (1928) "Fonctions à croissance régulière et itération d'ordre fractionnaire", Annali di Matematica Pura ed Applicata Volume 5, Number 1, 269-298, DOI: 10.1007/BF02415428. I haven't found any (reliable) papers that specifically address the sine function. There's this, which I find a little dubious. Sławomir Biały (talk) 13:55, 12 June 2011 (UTC)
June 11
Proof of Inequality
How to prove the following inequality?
If p is a real number, then there exists a constant C dependent only on p such that for all real numbers x and y, we have the inequality: |x+y|^p <= C(|x|^p + |y|^p).
This isn't homework. I was reading a proof in a paper and I think this inequality is used and I'm wondering how to prove it. Can anyone enlighten me? — Preceding unsigned comment added by 49.2.4.186 (talk) 08:47, 11 June 2011 (UTC)
- As stated, it isn't true. For , , there's no that works for all .--203.97.79.114 (talk) 12:32, 11 June 2011 (UTC)
- Sorry, ignore that. I just had a really dumb moment.--203.97.79.114 (talk) 12:43, 11 June 2011 (UTC)
- Okay, trying again. First, if p <= 1, then C = 1 will suffice: the inequality clearly holds for x = 0, and the derivative of the right (with respect to x) is always at least as much as the derivative of the left.
- If p > 1, then C = 2^(p-1) will suffice. Let's assume by symmetry that x >= y >= 0. If x = y, then the inequality becomes (2x)^p <= 2^(p-1)·2x^p, and since (2x)^p = 2^p·x^p = 2^(p-1)·2x^p, this holds. Now we again take the derivatives with respect to x. The left gets us p(x+y)^(p-1), while the right gives 2^(p-1)·p·x^(p-1), which is at least p(x+y)^(p-1) because x+y <= 2x.--203.97.79.114 (talk) 12:55, 11 June 2011 (UTC)
- Excellent. Thanks! It didn't occur to me to differentiate. But do you think this can be proven without differentiation? This is just out of curiosity: I understand your proof and am content with it, but would like to know whether there is a nice trick that solves it. — Preceding unsigned comment added by 49.2.4.186 (talk) 02:04, 12 June 2011 (UTC)
- I think perhaps an easier approach is to divide both sides by |y|^p, which converts the expressions to functions of a single variable equal to x/y. Looie496 (talk) 16:44, 11 June 2011 (UTC)
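For what it's worth, the claim can also be probed numerically. This sketch assumes the inequality in question is |x+y|^p <= C·(|x|^p + |y|^p) with C = max(1, 2^(p−1)), which is my reading of the thread rather than something stated explicitly in the surviving text:

```python
import random

def holds(p, trials=2000):
    """Spot-check |x+y|**p <= C * (|x|**p + |y|**p) with C = max(1, 2**(p-1))."""
    C = max(1.0, 2.0 ** (p - 1))
    for _ in range(trials):
        x = random.uniform(-10, 10)
        y = random.uniform(-10, 10)
        if abs(x + y) ** p > C * (abs(x) ** p + abs(y) ** p) + 1e-9:
            return False
    return True

random.seed(0)
print(all(holds(p) for p in (0.3, 0.5, 1.0, 1.7, 2.0, 3.0)))  # True
```

A check like this cannot prove the inequality, but it would quickly expose a wrong constant.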
Alternative Proof
Does anyone know of a way to evaluate the following:
1/1^2 + 1/2^2 + 1/3^2 + ... = Σ 1/n^2
that does not involve Fourier series? — Fly by Night (talk) 21:48, 11 June 2011 (UTC)
- By evaluating the double integral ∫01∫01 dx dy/(1 − xy). Sławomir Biały (talk) 22:33, 11 June 2011 (UTC)
- Sławomir, how does that double integral relate to the sum? I don't see how the dilog function relates to the sum in question in a simple, non-convoluted way. Could you show the steps for getting from the sum I gave to that double integral please? — Fly by Night (talk) 00:56, 12 June 2011 (UTC)
- Expand the integrand as a geometric series and integrate it term-by-term. That gives your series. The integral itself can be evaluated by rotating the coordinate system through an angle of π/4, which transforms the integrand into something of the form 1/(a^2 + v^2). The resulting integral can be dealt with by elementary means. Sławomir Biały (talk) 02:00, 12 June 2011 (UTC)
- Here is a link. Sławomir Biały (talk) 02:28, 12 June 2011 (UTC)
- There is another elementary proof at our article Basel problem (along with Euler's proof, mentioned below, and a proof using Fourier series). Sławomir Biały (talk) 02:47, 12 June 2011 (UTC)
- Excellent. How very cunning. Thanks for that. — Fly by Night (talk) 14:27, 12 June 2011 (UTC)
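Both routes can be sanity-checked numerically. This sketch assumes the series in question is Σ 1/n² = π²/6 and uses the integral representation Σ 1/n² = ∫₀¹∫₀¹ dx dy/(1 − xy) arising from the geometric-series expansion mentioned above:

```python
import math

# Partial sum of the series; the tail beyond N lies between 1/(N+1) and 1/N.
N = 200_000
s = sum(1.0 / n**2 for n in range(1, N + 1))
print(math.pi**2 / 6 - s)     # small and positive, about 5e-6

# Midpoint-rule estimate of the double integral of 1/(1 - x*y) over [0,1]^2.
# The midpoints avoid the integrable singularity at (1,1).
M = 400
h = 1.0 / M
mid = [(i + 0.5) * h for i in range(M)]
integral = h * h * sum(1.0 / (1.0 - x * y) for x in mid for y in mid)
print(integral)               # approaches pi^2/6 ~ 1.6449 as M grows
```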
- There's a description of Euler's proof here. AndrewWTaylor (talk) 23:25, 11 June 2011 (UTC)
- Thanks. I'll take a look in the morning. The non-LaTeX font makes it hard to read at this time of night. — Fly by Night (talk) 00:56, 12 June 2011 (UTC)
- See question 8 here: http://www.boardofstudies.nsw.edu.au/hsc_exams/hsc2010exams/pdf_doc/2010-hsc-exam-mathematics-extension-2.pdf . (I don't really know anything about Fourier series, but the proof which the student is led through doesn't explicitly mention them; judge for yourself whether they are "involved" implicitly.) —Anonymous DissidentTalk 23:27, 11 June 2011 (UTC)
- Thanks a lot for taking the time to find that link. It looks to me that Question 8 is basically asking people to compute the integrals involved in calculating the Fourier series. — Fly by Night (talk) 00:56, 12 June 2011 (UTC)
What the hell? Have you heard of Parseval's identity you guys? — Preceding unsigned comment added by 49.2.4.186 (talk) 02:08, 12 June 2011 (UTC)
- The OP wants a proof that does not involve Fourier series. Sławomir Biały (talk) 02:23, 12 June 2011 (UTC)
- Quite! — Fly by Night (talk) 14:27, 12 June 2011 (UTC)
Sure but I'm just wondering whether FBN (= "Fly by Night" are you an owl?) here knows Parseval's identity. Because he "claims" the integrals being computed in Question 8 are "basically those involved in calculating the Fourier series". I looked at that and it sure as hell doesn't look like that to me dudes. Maybe FBN needs to brush up on his theory of integration. BTW, see my PDF http://empslocal.ex.ac.uk/people/staff/rjchapma/etc/zeta2.pdf for some cool methods on how to sum this series. — Preceding unsigned comment added by 49.2.4.186 (talk) 00:58, 13 June 2011 (UTC)
- Ask yourself this: If I didn't know that Fourier series could be used to evaluate the sum, and ergo know of Parseval's identity, then why would I ask for a proof that doesn't involve Fourier series? As for Question 8, I looked at that at 3 o'clock in the morning. The question involves terms of the form A_n and B_n, while integrating x^2 multiplied by trigonometric terms. Sound familiar? Thanks for the offer, but you can keep your "cool methods". According to the Exeter website, Robin Chapman lives and works in Exeter, but your IP is from Sydney, Australia[2]. Are you sure it's your PDF? — Fly by Night (talk) 14:34, 13 June 2011 (UTC)
- I think it's a valid point that the Question 8 argument is elementary in a way that an argument using properties of Fourier series is not. I'm not sure how one motivates this approach, though. Sławomir Biały (talk) 14:45, 13 June 2011 (UTC)
- I agree. I just didn't read it properly, saw all the An's, Bn's, and trigonometric and thought it was Fourier series. — Fly by Night (talk) 17:00, 13 June 2011 (UTC)
I'll tell you Parseval's identity. It's more conceptual to think of L^2, the space of measurable square-integrable functions on [-pi,pi], as a complex inner product space (Hilbert space). The idea is that the space has an orthonormal basis (other orthonormal bases can be found, e.g. via wavelets) such as {e^{-inx}} for n an integer. And if f is a function in L^2 you can take its inner product against each of these functions and get "components" of f. The point is, as you'd expect in any Hilbert space, the components are the only relevant data when it comes to taking inner products. So you can find the inner product (an integral) of two functions just by knowing their Fourier coefficients. Pretty cool math dude? So basically I advise you to find the Fourier coefficients of the function "x" on [-pi,pi] and take the inner product of this function with itself and apply Parseval. The PROOF of Parseval is probably well beyond the scope of your education; you'll see it when you get to university. It involves proving the completeness of the trigonometric system (in the L^2 case) which can be done nicely with convolution but I'll save the details for your lecturers at first year undergrad. uni. — Preceding unsigned comment added by 49.2.4.186 (talk) 09:31, 14 June 2011 (UTC)
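As a numerical aside (not part of the original exchange): carrying out the suggested computation for f(x) = x, whose Fourier sine coefficients are b_n = 2(−1)^(n+1)/n, Parseval equates (1/π)∫x^2 dx with Σ b_n^2, which is exactly 4·Σ 1/n^2:

```python
import math

# Parseval for f(x) = x on [-pi, pi]: (1/pi) * integral of x^2  ==  sum of b_n^2,
# where b_n = 2*(-1)**(n+1)/n are the Fourier sine coefficients of f.
lhs = (1.0 / math.pi) * (2.0 * math.pi**3 / 3.0)    # (1/pi) * [x^3/3] from -pi to pi
N = 100_000
rhs = sum((2.0 / n) ** 2 for n in range(1, N + 1))  # the sign (-1)**(n+1) squares away
rhs += 4.0 / N                                       # integral-test estimate of the dropped tail
print(lhs / 4.0, math.pi**2 / 6)                     # Parseval forces sum 1/n^2 = pi^2/6
```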
- Thanks, but you seem to miss the point of my question. I was interested in alternative proofs that did not involve Fourier series and, therefore, did not involve Parseval's identity. I'm fully aware of Parseval's identity; I first learnt it as a second year undergraduate in 2002. It's the only way I know of summing the sum I gave; that's why I titled the section Alternative Proof and went on to ask for a proof without Fourier series. I know you're new around here, so I recommend that you read WP:CIVIL before you make any more posts. Thanks for taking the time, but there's really no need to write lengthy posts about topics the OP has explicitly said they are not interested in. Thanks again. — Fly by Night (talk) 12:17, 14 June 2011 (UTC)
- P.S. Don't forget to sign your posts with four tildes, like this: ~~~~. — Fly by Night (talk) 12:18, 14 June 2011 (UTC)
- Your civility is unwarranted, Fly by Night. --72.179.51.84 (talk) 15:31, 14 June 2011 (UTC)
Here's a nice computation (sketch). Recall that e^z = lim_n (1 + z/n)^n for complex z. Define polynomials p_n(x) := [(1 + ix/n)^n − (1 − ix/n)^n]/(2i) and note that p_n(x) → sin x. Factorize p_n in linear factors. Keep track of the first coefficients of p_n in terms of their roots, and equate the limits (this way you can also compute Σ 1/n^4 in elementary way etc.) Also, by the theorem of dominated convergence for products, the factorizations converge to an infinite product, so you also find the Euler product for sin. --pma 19:27, 13 June 2011 (UTC)
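A quick numerical sanity check of the infinite product this sketch ends with, sin x = x ∏ (1 − x^2/(k^2 π^2)); the evaluation point and truncation are arbitrary:

```python
import math

# Truncated Euler product for sin(z) at an arbitrary test point
z = 1.3
K = 200_000
prod = z
for k in range(1, K + 1):
    prod *= 1.0 - (z * z) / (k * k * math.pi * math.pi)
print(prod, math.sin(z))
```

The truncation error of the product is of order z²/(Kπ²), so agreement to several digits is expected here.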
- Nice, thanks pma. — Fly by Night (talk) 12:24, 14 June 2011 (UTC)
June 12
Normal Deviation
The menstrual cycle of a female is, on average, 28 days long and has a standard deviation of 2 days. A) In what % of women will the cycle differ by more than 3 days from the mean? B) Mark out symmetrically around the mean the range in which 80% of women will lie. — Preceding unsigned comment added by 49.200.54.220 (talk) 16:40, 12 June 2011 (UTC)
- This is obviously a homework question. We don't do people's homework for them here. Could you show us what you have done so far and then we will help you to understand what you need to do next. As it says at the beginning of this page: "If your question is homework, show that you have attempted an answer first, and we will try to help you past the stuck point. If you don't show an effort, you probably won't get help. The reference desk will not do your homework for you." — Fly by Night (talk) 17:49, 12 June 2011 (UTC)
- I'll just point out that from a fully technical position, the question is underspecified. The answer depends *highly* on which probability distribution you use. Average and standard deviation are really only enough to fully specify a distribution if you're assuming a normal distribution (and some other, simple distributions). While it's possible that the length of the menstrual cycle is normally distributed (Central Limit Theorem and all that), as the problem is stated we can't be sure if the assumption is completely valid. -- 174.31.219.218 (talk) 19:14, 12 June 2011 (UTC)
- Well, the fact that the OP entitled this section Normal Deviation is a bit of a clue, perhaps. Looie496 (talk) 23:18, 12 June 2011 (UTC)
- The distribution can't be normal because that would imply the possibility of a negative time. The question also seems to assume (wrongly, as far as I know) that the cycle time is constant for any particular woman. AndrewWTaylor (talk) 19:36, 13 June 2011 (UTC)
- A negative value would be 14 standard deviations away from the mean. I think we can safely ignore that problem. Few things are actually normally distributed, but a great many things are well approximated by the normal distribution within about 3 sd of the mean, which is what makes it such a useful distribution. --Tango (talk) 23:23, 13 June 2011 (UTC)
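For completeness, under the normal model the thread settles on (mean 28, sd 2, as given in the question), both requested quantities are short computations with Python's statistics.NormalDist; a sketch:

```python
from statistics import NormalDist

cycle = NormalDist(mu=28, sigma=2)

# (A) fraction of women whose cycle differs from the mean by more than 3 days
p_outside = 1.0 - (cycle.cdf(31) - cycle.cdf(25))

# (B) symmetric range around the mean containing 80% of women:
# cut 10% off each tail
lo, hi = cycle.inv_cdf(0.10), cycle.inv_cdf(0.90)
print(p_outside, (lo, hi))   # about 13.4%, and roughly 25.4 to 30.6 days
```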
Newton's Second Law
"The second law states that the net force on a particle is equal to the time rate of change of its linear momentum p in an inertial reference frame:
F = dp/dt = d(mv)/dt
where, since the law is valid only for constant-mass systems, the mass can be taken outside the differentiation operator by the constant factor rule in differentiation. Thus,
F = m(dv/dt) = ma
- "
This is an extract from your article on Newton's laws of motion, specifically the section on the second law. I don't understand why it says that the law is valid only for constant mass systems, not least of all because in the article on Isaac Newton, in the section entitled 'Laws of Motion', the second law is written as F = ma, the form that I am familiar with.
I'm partly asking what the subtlety is that demands we treat the mass as constant in the first case, assuming it's not a mistake, and partly suggesting that someone who knows what they're doing clarifies the issue. I have studied mathematics and physics at university and if I am left confused by this then the layman will have no idea what's going on. Thanks. meromorphic [talk to me] 16:44, 12 June 2011 (UTC)
- Can I just say, as the layman of your words, I can follow the first version with mass constant, but would have more trouble with the rule if the rule is equally true in changing-mass systems. (I do understand the product rule, but still.) I couldn't say which is more familiar to people with physics or mathematics degrees, but I'm certainly more aware of f=ma than one adapted for use on changing-mass systems. I don't know which Newton used, but if the second is more "true", then it may still be preferable to keep the first and say "if mass is constant" rather than "since mass has to be constant", at least introductory-wise. Grandiose (me, talk, contribs) 16:52, 12 June 2011 (UTC)
- The problem with F = d(mv)/dt is that if the mass of an object is changing it must be picking up or expelling mass (or energy) from outside. The mass that's entering or leaving has momentum of its own, and the formula doesn't tell us how to accurately deal with that. For example if I have a moving object and a piece breaks off, but the original object and the piece both keep moving off at the same velocity, then I end up with a non-zero d(mv)/dt, but can we really say there's a force acting on the object? Rckrone (talk) 18:24, 12 June 2011 (UTC)
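Rckrone's breaking-object example can be made concrete with a toy calculation (numbers invented for illustration): tracking "the object" as the part that keeps the original label, d(mv)/dt is nonzero across the split even though nothing pushed on it:

```python
# A 2 kg object moves at 3 m/s; a 1 kg piece breaks off and both
# parts keep moving at 3 m/s.
m_before, v_before = 2.0, 3.0
m_after, v_after = 1.0, 3.0    # the tracked "object" after losing the piece

p_before = m_before * v_before  # 6 kg*m/s
p_after = m_after * v_after     # 3 kg*m/s

dt = 1.0
dp_dt = (p_after - p_before) / dt   # nonzero "force" if we insist F = d(mv)/dt
a = (v_after - v_before) / dt       # yet the acceleration is zero
print(dp_dt, a)
```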
- I think the term for non-constant mass comes in when you take relativity into account. You can apply all the force you want but the velocity will always be less than c. So when you get close to c what the force does mostly is increase mass, which is where the dm/dt comes in. But this is the math helpdesk, we don't normally deal with soft science like physics.--RDBury (talk) 22:53, 12 June 2011 (UTC)
- While you can do relativity using relativistic mass (and I often do when trying to give people an intuitive feel for how it works), it isn't the normal way of formulating it mathematically. You can see a derivation of the more common relativistic equivalent of F=ma here: Special_relativity#Force. --Tango (talk) 23:30, 13 June 2011 (UTC)
June 13
Ogden's lemma examples
Our article on Ogden's lemma (and many other pages I've found on the web) uses the example {a^i b^j c^k d^l : i = 0 or j = k = l}, but this language can be proven non-context-free using the pumping lemma and closure of CFLs under intersection by regular languages (take ab*c*d*). Are there examples of languages which can be proven to be non-CFLs with Ogden's lemma, but not with the pumping lemma and closure of CFLs under regular intersection and gsm mappings? --146.141.1.96 (talk) 10:51, 13 June 2011 (UTC)
nilpotent cone of a Lie algebra
The article nilpotent cone claims that "The nilpotent cone...is invariant under the adjoint action of g on itself." Take the example of g = sl2 as in the article, so the nilcone N is spanned by E and F. Then [E, F] = H, which is not in the nilcone, so N is not ad-invariant. Have I misunderstood something, or is the article wrong? Tinfoilcat (talk) 11:15, 13 June 2011 (UTC)
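The counterexample can be checked directly with the standard 2×2 matrices for E and F (a sketch; the tiny matrix helpers are only for self-containment):

```python
# Standard basis of sl2: E and F are nilpotent, and [E, F] should be H.
E = [[0, 1], [0, 0]]
F = [[0, 0], [1, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

H = sub(matmul(E, F), matmul(F, E))   # the bracket [E, F]
H2 = matmul(H, H)
print(H)    # diag(1, -1)
print(H2)   # H^2 is the identity, so H is not nilpotent
```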
- If g is a Lie algebra and x is an element of g, then the adjoint action adx : g → g is defined by adx(y) = [x,y] for all y in g. To prove that the nilpotent cone, say N, is invariant under the adjoint action you just need to show that for all x and y in N, the Lie bracket [x,y] is again in N. — Fly by Night (talk) 14:47, 13 June 2011 (UTC)
- FbN, I know what the adjoint map is and what it means to be ad-invariant (it's not quite what you say - ad-invariant means a Lie ideal, not a Lie subalgebra). The example I gave in my post seems to show that the nilcone is not ad-invariant since [E, F] = H is not in the nilcone, whereas E and F are. Tinfoilcat (talk) 14:54, 13 June 2011 (UTC)
- I don't recall mentioning subalgebras or ideals. I'm sorry I wasn't able to help. — Fly by Night (talk) 15:50, 13 June 2011 (UTC)
- FbN, I was very bitey above. Sorry. When you say "for all x and y in N, the Lie bracket [x,y] is again in N " that's equivalent to it being a subalgebra. Being an ideal (= being ad-invariant) is that for n in N, x in the Lie algebra, [x, n] is in N - a little stronger. Tinfoilcat (talk) 15:59, 13 June 2011 (UTC)
- No problem. Proving it's a subalgebra means that it's automatically an ideal too, by definition (because of the way the nilpotent cone is defined in the first place). Or at least I think so… — Fly by Night (talk) 16:06, 13 June 2011 (UTC)
The statement is wrong. The nilpotent cone is invariant under the action of Int(g) on g (inner automorphisms). Sławomir Biały (talk) 15:53, 13 June 2011 (UTC)
- Thanks. I'll fix the page (if you haven't got there first). Tinfoilcat (talk) 15:59, 13 June 2011 (UTC)
Normalizing Associated Legendre Polynomials to get Spherical Harmonics
Hi all. I've been working on this problem for like weeks, and I can't seem to figure it out. I'm trying to normalize Associated Legendre polynomials to turn them into Spherical harmonics. The integral comes out to:
∫ from −1 to 1 of [P_l^m(x)]^2 dx = 1/N^2, where N is the normalization constant. N can be found in Spherical harmonics#Orthogonality and normalization. I know that it involves integrating by parts repeatedly, and that the boundary terms vanish in each case, but I'm not sure why they vanish. Can anyone point me to a (very detailed) discussion of how to actually do the integral, or maybe a better way than by parts? Thanks!--Dudemanfellabra (talk) 23:32, 13 June 2011 (UTC)
- Hi, I had a quick look in Boas - 'Mathematical Methods in the Physical Sciences'. She has a problem (prob 10.10 in second edition) about the normalisation, first some results:
- a) P_l^m(x) = (1/(2^l l!)) (1 − x^2)^(m/2) d^(l+m)/dx^(l+m) (x^2 − 1)^l
- This comes from substituting in Rodrigues' formula (see Legendre_polynomials) for the Legendre polynomials. Next Boas says derive
- b) P_l^m(x) = (−1)^m ((l+m)!/(l−m)!) ((1 − x^2)^(−m/2)/(2^l l!)) d^(l−m)/dx^(l−m) (x^2 − 1)^l
- Multiply these two results together (this forms your normalisation integrand) then integrate by parts repeatedly until both l+m and l−m derivatives are just l derivatives. Then use
- ∫ from −1 to 1 of P_l(x)^2 dx = 2/(2l+1)
- - I assume because you end up having an integrand that looks like two Legendre polynomials multiplied together.
- Now, the derivation of result (b) is a task in itself. Apparently one can show that
d^(l−m)/dx^(l−m) (x^2 − 1)^l = (−1)^m ((l−m)!/(l+m)!) (1 − x^2)^m d^(l+m)/dx^(l+m) (x^2 − 1)^l
by starting with (x^2 − 1)^l and finding derivatives using Leibniz' rule. Good luck, I'd find a copy of Boas and work from that if I was you. Christopherlumb (talk) 19:55, 14 June 2011 (UTC)
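As a spot check of the normalisation Boas's route is aiming at, namely ∫[P_l^m]^2 dx = (2/(2l+1))·(l+m)!/(l−m)!, take l = 2, m = 1, where P_2^1(x) = −3x·sqrt(1 − x^2) in the Condon–Shortley convention (the sign squares away); Simpson's rule and the grid size are arbitrary choices:

```python
import math

# Integrand [P_2^1(x)]^2 = 9 x^2 (1 - x^2) on [-1, 1]
def f(x):
    return 9.0 * x * x * (1.0 - x * x)

# Composite Simpson's rule
n = 1000                 # even number of subintervals
h = 2.0 / n
xs = [-1.0 + i * h for i in range(n + 1)]
integral = (h / 3.0) * sum(
    (1 if i in (0, n) else 4 if i % 2 else 2) * f(x) for i, x in enumerate(xs)
)

l, m = 2, 1
target = 2.0 / (2 * l + 1) * math.factorial(l + m) / math.factorial(l - m)
print(integral, target)   # both come out to 12/5
```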
- I found a copy of Boas, but unfortunately the method for part b) is not given. After applying Leibniz' rule, I get
- I have no idea where to go from here...--Dudemanfellabra (talk) 23:13, 14 June 2011 (UTC)
June 14
Software
WHAT IS THE COMPUTER SOFTWER? — Preceding unsigned comment added by 117.197.165.191 (talk) 05:38, 14 June 2011 (UTC)
- Try asking at The Computing Helpdesk. Perhaps a little more detail about your question would help. ―JuPitEer (talk) 06:50, 14 June 2011 (UTC)
- Or read our article on computer software. Gandalf61 (talk) 09:20, 14 June 2011 (UTC)
- I think the OP might be wondering how Wikipedia works. I don't know if Wikipedia has software per se. But, as JuPitEer said: the computer reference desk would be a better place to ask. Maybe try the Wikipedia article too. — Fly by Night (talk) 14:28, 14 June 2011 (UTC)
Evaluating the matrix exponential of a tridiagonal matrix
I am working on a classification problem based on Bayesian inference, where I observe the discrete state of an object at non-equidistant points in time (the evidence). My objective is to keep updated probabilities that the object belongs to different classes Ck, k=1,...m using Bayes' theorem. The evolution of the state can be modelled as a continuous-time Markov process. The states can be ordered such that the state only jumps to a neighboring state. The rates at which the jumps occur are a characteristic of the object class, and I have collected data from objects with known classes and based on this I have good estimates of the class specific rate matrices Rk. The rate matrices are tridiagonal since the state jumps are restricted to nearest neighbours.
The number of states n is in the range 5-20 and the number of classes m is less than 10. For the Bayesian inference update I need to evaluate the state-transition probability conditioned on the class hypothesis Ck,
P(x_t2 | x_t1, Ck)
where
- x_t2 is the posterior state at time t2
- x_t1 is the prior state at time t1
- Δt = t2 − t1 is the time difference
In my problem I need to evaluate the matrix exponential
exp(Rk Δt)
regularly for each of the m object classes but with different update intervals. One way to do that is to diagonalize the rate matrices as
Rk = P D P^−1
where D is a diagonal matrix containing the eigenvalues (eigenrates) λi, as then
exp(Rk Δt) = P exp(D Δt) P^−1
which makes recalculating the matrix exponential for different update times easier, as calculating P and its inverse only needs to be done once per object class, so what I basically need to do is to make n scalar exponential calculations and then make the matrix product, which scales as O(n^3). From the eigenrates, I also know the characteristic times in the system and I may even precalculate the matrix exponentials for some logarithmically distributed update times and simply pick the closest match in update time.
However, I am concerned about the numerical stability of the diagonalization. The largest eigenvalue should always be zero (corresponding to the steady state solution), but this is susceptible to roundoff error. (Until now I have post-corrected the eigenvalues, setting the highest one to zero.) For the remaining eigenvalues the ratio between min and max can be assumed to be smaller than 1000. I read in the matrix exponential article that doing this right is far from trivial.
Can I use the fact that my rate matrices are tridiagonal to make a more robust or elegant computation of the matrix exponentials?
--Slaunger (talk) 09:06, 14 June 2011 (UTC)
- Try the formula e^M ≈ (1 + 2^−n M)^(2^n) to reduce the matrix exponential to n squarings of the matrix 1 + 2^−n M, and not using the fact that M is tridiagonal. Bo Jacoby (talk) 15:56, 14 June 2011 (UTC).
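Bo's formula can be tried on a toy tridiagonal rate matrix (numbers invented; rows sum to zero, as for a continuous-time Markov generator), comparing the result against a plain truncated Taylor series. A rough sketch, not tuned for production use:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add_scaled(A, B, s):
    n = len(A)
    return [[A[i][j] + s * B[i][j] for j in range(n)] for i in range(n)]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def expm_squaring(M, n_sq=20):
    # e^M ~ (I + 2^-n M)^(2^n): scale down, then square n_sq times
    A = add_scaled(identity(len(M)), M, 2.0 ** -n_sq)
    for _ in range(n_sq):
        A = matmul(A, A)
    return A

def expm_taylor(M, terms=30):
    # Plain truncated series: sum of M^k / k!
    n = len(M)
    result, term = identity(n), identity(n)
    for k in range(1, terms):
        term = [[v / k for v in row] for row in matmul(term, M)]
        result = add_scaled(result, term, 1.0)
    return result

# Toy tridiagonal rate matrix (R * dt); each row sums to zero
R = [[-1.0, 1.0, 0.0], [0.5, -1.0, 0.5], [0.0, 1.0, -1.0]]
P1 = expm_squaring(R)
P2 = expm_taylor(R)
row_sums = [sum(row) for row in P1]
print(row_sums)   # each ~1.0: exp(R dt) is a stochastic transition matrix
```

The two methods agree to several digits here; for the stiff or ill-conditioned cases discussed below, the trade-offs are exactly the subject of the Moler/Van Loan survey.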
- Thank you for your response. Ok, I think I understand, but if I diagonalize once, I only have to do three matrix multiplications for each different Δt, whereas using the approach you suggest I need to make n multiplications, where n depends on how fast the expression converges, assuming I understand correctly? I realize it will converge very fast though... Obviously n should be selected such that 2^−n ||M|| is small, and this is mathematically correct, but will it be more numerically stable than the diagonalization approach? What I am concerned about is the numerical stability and reliability of the diagonalization approach. It is mentioned in Matrix exponential#Computing the matrix exponential that
- Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis.
- In mathematical libraries, also the Padé approximant is often used as a robust way to do the matrix exponential for general matrices (except if it is ill-conditioned), but it seems like overkill for my case where the rate matrices are constant and I only multiply it with a scalar prior to taking the exponential.
- In the Octave documentation for expm I noticed a reference to Moler and Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, SIAM Review, 1978, which seems very interesting in this context. Unfortunately I do not have access to that paper, but the abstract seems very interesting.
- And I was thinking that maybe there was a nifty, reliable and elegant method for calculating the matrix exponential if the matrix was tridiagonal and multiplied by some scalar - at least I seem to recall that solving the eigenvalue problem for a tridiagonal matrix is much easier than for a general diagonalizable matrix. --Slaunger (talk) 19:12, 14 June 2011 (UTC)
- Even when a scalar exponential e^M is approximated by (1 + 2^−n M)^(2^n), n should be chosen with care. If n is too small, then the approximation has insufficient precision, and if n is too big, then round-off errors make 1 + 2^−n M equal to 1. Floating-point numbers have fixed precision, and not everything can be done. Bo Jacoby (talk) 08:49, 15 June 2011 (UTC).
- Yes, that is why the question is not so simple. It is more about the most efficient and reliable algorithm than about the maths (although you need some linear algebra maths to arrive at the best algorithm). Maybe the Computing or Science help desk is a better place for my question? I was in doubt if numerical methods/algorithms are in scope of this help desk, when I asked the question, but decided to try here as the question was pretty mathematically oriented... In my problem I need to do these calculations in real time for many objects on a computational platform with rather limited resources, so I need to understand how to find the best processing compromise. --Slaunger (talk) 10:11, 15 June 2011 (UTC)
- As far as I'm concerned, asking at the math desk is fine. I think at the computing desk they're more used to answering questions about programming languages and systems administration than numerical algorithms. --Trovatore (talk) 10:30, 15 June 2011 (UTC)
- In order to find the most efficient and reliable algorithm you need to test some algorithms. Have you tried my method before you think that something else is better? Bo Jacoby (talk) 15:51, 15 June 2011 (UTC).
- A valid point. I have tried experimenting with the Padé approximant implementation in scipy, which seems to converge very fast for the examples I have tried. I have implemented the diagonalization approach in Python, which also seems to work and which is efficient as the number of computations is fixed once the rate matrices have been diagonalized. I have not tried your approach yet, but my feeling is that it is not as stable as the Padé approximant. However, I need to go to a production language like C to properly profile the different options, and this takes a considerable effort. I think I will try to get hold of the reference I mentioned earlier, where I have noticed there is also a 25-years-later updated review article, and study what others have done instead of spending a lot of time reinventing the wheel. Thanks for your time! --Slaunger (talk) 19:40, 15 June 2011 (UTC)
Dimension and Cardinality
Let M be a smooth, real, n−dimensional manifold. I'd like to know the dimension and cardinality of C∞(M,R), i.e. the space of smooth functions from M to R. I believe that the dimension is at least ℵ1, and maybe even ℵ2. I have no idea what the cardinality might be; although that isn't of paramount importance. Can anyone confirm, or deny, that the dimension is at least ℵ1, and maybe even ℵ2? If someone could supply a reference then that'd be great. — Fly by Night (talk) 15:45, 14 June 2011 (UTC)
- I may be being stupid, but isn't M = a disjoint union of as many open balls as you like a perfectly good smooth n-dimensional manifold which has as many such functions as you like? Or is there an implicit connectedness assumption somewhere in that? 128.232.241.211 (talk) 16:23, 14 June 2011 (UTC)
- You're not being stupid at all, in fact you raise an interesting point. I guess you're right. I should have mentioned that it's connected. Although I'm not sure that it matters. You'd get a Cartesian product of the form C∞(M1,R) × ⋯ × C∞(Mk,R).
- I'm not sure that that would change the dimension. That's because k⋅ℵ1 = ℵ1. Or at least I think so… — Fly by Night (talk) 16:42, 14 June 2011 (UTC)
- I think (part of) 128's point was that k need not be finite. AndrewWTaylor (talk) 17:04, 14 June 2011 (UTC)
- I was just thinking of a nice hypersurface like we use in differential geometry all the time. — Fly by Night (talk) 17:37, 14 June 2011 (UTC)
- Okay, so that's at least σ-compact, which still gets you separable (unless I was wrong about compact manifold => separable). --Trovatore (talk) 18:22, 14 June 2011 (UTC)
- For an abstract topological space, compact does not imply separable; for example the so-called Lexicographic order topology on the unit square. It's compact and connected, but it's not separable. As for a manifold, I'm not sure. It's usual to assume connectedness and paracompactness (that makes it a metrizable space). — Fly by Night (talk) 18:47, 14 June 2011 (UTC)
- It's actually pretty trivial; I just didn't see it right away. Given a compact manifold, each point has an open neighborhood that's homeomorphic to Rn, so that neighborhood has a countable dense subset. By compactness, finitely many such neighborhoods cover the whole manifold. --Trovatore (talk) 19:04, 14 June 2011 (UTC)
- Good work. It's easy when you know how… — Fly by Night (talk) 19:53, 14 June 2011 (UTC)
- OK, first of all, I kind of suspect that you're using ℵ1 and ℵ2 to mean 2^ℵ0 and 2^(2^ℵ0) respectively, is that true? That's a bad habit, unfortunately one encouraged by a lot of popularizations. The GCH is not some harmless little notational thing but rather a strong combinatorial assertion. You can write instead ℶ1 and ℶ2, although these are somewhat less universally recognized and suffer from the problem that ℵ and ℶ look awfully similar.
- Are we assuming M is compact? Then the cardinality of C∞(M,R) is 2^ℵ0. The lower bound should be pretty trivial; for the upper bound, note that M is separable (I think that follows from "compact manifold"), and pick a countable dense subset; then any continuous function from M to R is determined by its values on that subset. If you drop "compact" then you can find non-separable (topological) manifolds like the long line and things get a little more complicated, but if I recall correctly there's no way to make the long line into a smooth manifold. But again that's more complicated.
- For the dimension, what sort of dimension are you interested in? Topological dimension? Dimension as a real vector space? --Trovatore (talk) 16:40, 14 June 2011 (UTC)
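The upper bound in the cardinality argument above comes down to a line of cardinal arithmetic; as a sketch (with D a countable dense subset of compact M):

```latex
% A continuous f : M -> R is determined by its restriction to the
% countable dense set D, so
\[
  \lvert C^{\infty}(M,\mathbf{R}) \rvert
  \;\le\; \lvert \mathbf{R}^{D} \rvert
  \;=\; \bigl(2^{\aleph_0}\bigr)^{\aleph_0}
  \;=\; 2^{\aleph_0 \cdot \aleph_0}
  \;=\; 2^{\aleph_0}.
\]
% The constant functions alone already give 2^{\aleph_0} distinct elements,
% so the cardinality is exactly 2^{\aleph_0}.
```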
- Ermm, errr. I wasn't assuming M to be compact; I didn't think that it would have mattered. But it obviously does. I'm thinking of it as a ring, with point-wise addition and multiplication as the operations. So (I guess) the vector space dimension would be more useful for me. You obviously know your stuff, so go easy. Try not to fry my mind. — Fly by Night (talk) 16:59, 14 June 2011 (UTC)
Assuming M to be connected and paracompact, C∞(M,R), equipped with the topology of uniform convergence of all derivatives on compact subsets, is a separable Frechet space. That's probably more useful to know than the linear dimension (which is 2^ℵ0). Sławomir Biały (talk) 17:55, 14 June 2011 (UTC)
- Is there an example of a connected infinitely differentiable manifold that's not separable? Did I remember correctly that there's no infinitely differentiable structure on the long line? --Trovatore (talk) 18:24, 14 June 2011 (UTC)
- I think the Lexicographic order topology on the unit square gives a topology on a manifold (with boundary) that is compact and connected, but which is not separable. I'm not sure if that's what you were asking. — Fly by Night (talk) 18:53, 14 June 2011 (UTC)
- There is a smooth structure on the long line (according to our article long line). The lexicographic order topology is not a manifold with boundary. No neighborhood of a point (x,1) for 0<x<1 is homeomorphic to an interval. Sławomir Biały (talk) 19:21, 14 June 2011 (UTC)
- Also, every connected paracompact manifold is separable. (Paracompactness and local compactness implies that each component is sigma compact, hence Lindeloef, hence separable.) Sławomir Biały (talk) 19:46, 14 June 2011 (UTC)
- Okay, that makes sense. Would it just be a (topological) manifold then? — Fly by Night (talk) 19:55, 14 June 2011 (UTC)
- It's not any sort of manifold. Sławomir Biały (talk) 19:57, 14 June 2011 (UTC)
- The article on topological manifold says that each point in the space must have a neighbourhood homeomorphic to an open subset of Rn. But to define a homeomorphism you need to know the topology in both the source and the target. What must the topology be on Rn to be able to talk about a homeomorphism? The unit square with the lexicographic order topology is compact, connected and Hausdorff. — Fly by Night (talk) 21:20, 14 June 2011 (UTC)
- It's Rn with the usual Euclidean topology. Sławomir Biały (talk) 23:57, 14 June 2011 (UTC)
Back to the Question
My question has blossomed into a very interesting topological discussion. But could we bring it back to the question at hand? What are the main conclusions? It seems that the space of smooth functions from a manifold M to R is a Fréchet space. Does that tell us anything about the dimension? Have we decided that the dimension is 2^ℵ0 (which may or may not be the same as ℵ1)? — Fly by Night (talk) 21:23, 14 June 2011 (UTC)
- Any infinite-dimensional separable Fréchet space has dimension as a linear space equal to the continuum. However it might be more natural to think of the dimension as countable, since there is a dense subspace whose linear dimension is countable. This is in much the same way that the space ℓ^2 has continuum dimension as a linear space, but its "Hilbert dimension" is countable. Sławomir Biały (talk) 23:57, 14 June 2011 (UTC)
June 15
What's the probability of getting twenty heads in a row if you flip a coin one million times?
Here's my reasoning: the twenty in a row could happen in the first twenty tosses, or between the second toss and the twenty-first toss, and so on. So there are 10^6 - 19 ways to get twenty heads in a row. So the probability would be (10^6 - 19)/2^20 ~ 95%. Is this right? Incidentally, this isn't a homework question, just something that came up in a conversation. 65.92.5.252 (talk) 01:12, 15 June 2011 (UTC)
- No, there are many more than 10^6 - 19 ways to get twenty heads in a row. To see this, consider how many ways there are to have the first twenty tosses come up heads: the remaining 10^6 - 20 tosses can come up in 2^(10^6 - 20) different ways! (Of course, some of these possibilities also include other runs of twenty heads in a row, besides the first twenty, so you'll need to correct for that overcounting; see Inclusion–exclusion principle.) Your denominator isn't right either: there are 2^(10^6) possible outcomes for the coin tosses, which is vastly larger than 2^20. —Bkell (talk) 01:25, 15 June 2011 (UTC)
- Alright...so how would you do it? 65.92.5.252 (talk) 01:55, 15 June 2011 (UTC)
- This is actually a hard problem. The solution to it is described on this page from Wolfram Mathworld -- it involves some pretty advanced concepts. Looie496 (talk) 02:25, 15 June 2011 (UTC)
- Well, as you say there are 10^6 - 19 possible places for a run of 20. If we treat each of these possibilities as independent (they aren't, but let's pretend), you get 1 - (1 - 2^-20)^(10^6 - 19) ≈ 61.5%. Of course, as mentioned, they aren't independent: having tosses 1 to 20 all come up heads increases the odds that tosses 2 to 21 all come up heads. So outcomes with multiple runs of 20 should be over-represented compared to what you would expect if they were all independent. Since the expected total number of runs of 20 is the same, independent or not, this means that the actual answer to your question should be less than 61.5%.--Antendren (talk) 02:31, 15 June 2011 (UTC)
- Thanks. 65.92.5.252 (talk) 15:59, 15 June 2011 (UTC)
The average number of 'twenty heads in a row', if you flip a coin twenty times, is 2^-20.
The average number of twenty heads in a row, if you flip a coin a million times, is 999981 · 2^-20 = 0.954.
The probability that the number is equal to k is P_k = e^-0.954 · 0.954^k / k! = 0.385 · 0.954^k / k!
 k    0      1      2      3       4       5
 P_k  0.385  0.367  0.175  0.0557  0.0133  0.00254
The probability of not getting twenty heads in a row if you flip a coin one million times, is P0 = 0.385.
The probability of getting twenty heads once or more in a row if you flip a coin one million times, is 1−P0 = 0.615. Bo Jacoby (talk) 12:56, 15 June 2011 (UTC).
- Cool, thanks. 65.92.5.252 (talk) 15:59, 15 June 2011 (UTC)
- Um, wrong. The events are not independent (for example, 1-20 all being heads is not independent of 10-30 all being heads), so the calculation is not valid. There simply is no easy answer to this question; the calculation is genuinely hard. Looie496 (talk) 18:40, 15 June 2011 (UTC)
- See our article Perfectionism (psychology). The proper way to criticize a calculation is to improve it. Bo Jacoby (talk) 19:26, 15 June 2011 (UTC).
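The disputed figure can in fact be checked exactly with a short dynamic-programming sketch (the function name prob_no_run and the formulation below are editorial, not from the thread): let q[k] be the probability of seeing no run of 20 heads in k fair flips; the first completed run ends at flip k exactly when flip k-21 is tails, the next 20 flips are heads, and the first k-21 flips are run-free.

```python
# Exact probability of at least one run of r consecutive heads in n fair
# coin flips, via the first-completed-run recurrence:
#   q[k] = q[k-1] - q[k-r-1] / 2**(r+1)   for k > r,
# with q[k] = 1 for k < r and q[r] = 1 - 2**-r.
def prob_no_run(n, r=20):
    """P(no run of r consecutive heads in n fair coin flips)."""
    q = [1.0] * r + [1.0 - 0.5 ** r]          # seeds q[0..r]
    for k in range(r + 1, n + 1):
        q.append(q[k - 1] - q[k - r - 1] * 0.5 ** (r + 1))
    return q[n]

p = 1.0 - prob_no_run(10 ** 6)
print(round(p, 3))   # roughly 0.38
```

The exact answer comes out near 0.38, consistent with Antendren's argument that the true probability must be below the 61.5% independence estimate: overlapping windows of heads cluster, so the Poisson model with λ = 0.954 overstates the chance of at least one run.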
90!
Greetings, learned amigops (: I took a recreational maths test once that had a question: what are the last two nonzero digits of 90!, in order from the left? (that is, if 90! were equal to 121, they would be 21) I did not get to this question, but looking back, how would I have solved it? WA gives a solution (12) but even simple calculators were not allowed, let alone alpha. I suppose I could have written 90!'s prime factorization out, crossed out 2 and 5 in pairs, and then multiplied all the remaining numbers mod 10, but is there a faster and less tedious way to do this? (remembering that all I had at my disposal were literally a pen and paper). Cheers. 72.128.95.0 (talk) 03:36, 15 June 2011 (UTC)
- De Polignac's formula may or may not be helpful-Shahab (talk) 05:53, 15 June 2011 (UTC)
- Since we are only interested in the two least-significant digits that are non-zero, we can reduce the calculations to products of two-digit numbers. Implement the following algorithm:
 t = 1
 FOR i = 2 TO 90
   t = t * i
   <s>IF (t MOD 10) = 0 THEN t = t / 10</s> DO WHILE (t MOD 10) = 0 : t = t / 10 : LOOP
   t = t MOD 100
 NEXT
- which though tedious is doable by hand and gives the answer 52 (not 12). Is this right? -- SGBailey (talk) 11:37, 15 June 2011 (UTC)
- That algorithm fails at i=25, 50, 75. You need to repeat t=t/10 while (t mod 10) = 0 still holds. Algebraist 12:25, 15 June 2011 (UTC)
- Algorithm repaired a la Algebraist and this gives 12. -- SGBailey (talk) 13:01, 15 June 2011 (UTC)
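SGBailey's repaired loop translates directly to Python. One editorial caveat: truncating to two digits at every step is risky, because a later multiple of 25 strips two zeros at once and can then corrupt the tens digit; keeping several guard digits (mod 10^25 below, my choice, since 90! has only 21 trailing zeros) makes the truncation provably safe.

```python
# Last two non-zero digits of 90!, following the repaired hand algorithm,
# but keeping guard digits so truncation cannot corrupt the answer.
t = 1
for i in range(2, 91):
    t *= i
    while t % 10 == 0:   # strip trailing zeros, repeatedly
        t //= 10
    t %= 10 ** 25        # guard digits: at most 21 zeros are ever stripped
print(t % 100)           # -> 12
```

Since any truncation error is a multiple of 10^25 and at most 21 further zeros are stripped, the final error is a multiple of 10^4, so the last two digits are exact.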
In [1]: factorial(90, exact=1)
Out[1]: 1485715964481761497309522733620825737885569961284688766942216863704985393094065876545992131370884059645617234469978112000000000000000000000L
- So, it is 12. --Slaunger (talk) 12:01, 15 June 2011 (UTC)
- I gather you wanted a way to do it by hand without a program. In that case, I would have noticed that there are a lot more powers of 2 than 5 in 90! and hence even after removing all the trailing zeros, the number will still be divisible by 4. Hence, we only need to work out the result mod 25. I would then write out every number with those divisible by 5 having their factors of 5 removed and then write this list out again taken mod 25. Then I would cancel out those which are multiplicative inverses (so 2 and 13 would cancel each other out, 3 and 17, etc (This might take a while to work out but can be done) and finally from what is left it should be easy enough to calculate the product by hand mod 25. Then multiply by 4 and you're done. --AMorris (talk)●(contribs) 14:24, 15 June 2011 (UTC)
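AMorris's mod-25 route can be verified mechanically (an editorial sketch; the variable names are mine). The only change is the final recombination step: instead of "multiply by 4", the sketch below uses the Chinese remainder theorem directly, with 76 as the lift because 76 ≡ 0 mod 4 and 76 ≡ 1 mod 25.

```python
import math

# Let n = 90! with its 21 trailing zeros removed.  n is still divisible
# by 4, so n mod 100 is determined by n mod 25 via the CRT:
# x ≡ 0 (mod 4), x ≡ m (mod 25)  =>  x = 76*m mod 100.
n = math.factorial(90)
while n % 10 == 0:
    n //= 10
assert n % 4 == 0          # plenty of 2s survive after removing the 5s
m = n % 25
print(m, (76 * m) % 100)   # -> 12 12
```

So working mod 25 by hand and lifting at the end reproduces the answer 12.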
Riemann integral of a circle
Is there a way to finish this sum of rectangles? For a semicircle of radius r (flat side along the y-axis) cut into n vertical slices of width r/n, the area of the kth slice is:
 A_k = (2r/n) √(r^2 − (kr/n)^2)
The sum of all of them is:
 Σ_{k=1}^{n} (2r/n) √(r^2 − (kr/n)^2)
And extending to infinity:
 lim_{n→∞} Σ_{k=1}^{n} (2r/n) √(r^2 − (kr/n)^2)
Somehow this has to equal πr^2/2, which means lim_{n→∞} (2/n) Σ_{k=1}^{n} √(1 − (k/n)^2) = π/2. Is it possible to show that this is true? This isn't for homework, just curiosity. I've been able to do this with a parabola, cone, sphere, just about everything but a circle, so it seems weird to me that it's so difficult. KyuubiSeal (talk) 03:53, 15 June 2011 (UTC)
- I haven't taken a look at the math, but there's an easier way to show that the area of a circle is πr^2: http://en.wikipedia.org/wiki/Area_of_a_disk#Onion_proof. 65.92.5.252 (talk) 03:58, 15 June 2011 (UTC)
- Yeah, plus there's the 'chop into sectors and rearrange into almost-parallelogram' method. Also, the integral of the circumference is area? That looks like it works for spheres too. Weird... — Preceding unsigned comment added by KyuubiSeal (talk • contribs) 04:09, 15 June 2011 (UTC)
[wrong answer removed] Ignore me, I confused your sum for an integral ... in my defence it was about midnight when I posted that. 72.128.95.0 (talk) 15:11, 15 June 2011 (UTC)
- Well, if you divided a circle like an onion, each segment would have a width dr. The area between r and r + dr is approximately (circumference at that point) * (width) = 2πrdr. Then, you can just integrate. (To the experts: please don't smite me for ignoring rigor). 65.92.5.252 (talk) 04:16, 15 June 2011 (UTC)
Maybe this has something to do with the Basel problem? That does create a pi from a sum, but I don't know how I would apply it. Or if it's even relevant. — Preceding unsigned comment added by KyuubiSeal (talk • contribs) 17:12, 15 June 2011 (UTC)
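The limit in question, (2/n) Σ_{k=1}^{n} √(1 − (k/n)^2) → π/2, can at least be checked numerically (an editorial sketch; the function name is mine):

```python
import math

# Right-endpoint Riemann sum for a unit-radius semicircle: n slices of
# width 1/n, each with chord height 2*sqrt(1 - x**2).  The sums should
# approach the semicircle's area, pi/2 ≈ 1.5708.
def semicircle_riemann_sum(n):
    return (2.0 / n) * sum(math.sqrt(1.0 - (k / n) ** 2) for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, semicircle_riemann_sum(n))
```

This is of course not a proof; turning the sum into a closed form is hard precisely because there is no elementary antiderivative-free formula for partial sums of √(1 − (k/n)^2), which is why the onion and sector arguments above are so much easier.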
Highest number
Suppose there are N balls in a bin, each distinctly labeled with a number from 1 to N. I draw D balls from the bin, and note the highest numbered ball I get, which I call S. Is there a way to calculate the probability distribution of S? I wouldn't mind a general answer, but in the problem I'm dealing with, D<<N. 65.92.5.252 (talk) 17:07, 15 June 2011 (UTC)
- There are C(S−1, D−1) different ways that a highest number S can arise. So the distribution is
- P(S) = C(S−1, D−1) / C(N, D). --Sławomir Biały (talk) 18:49, 15 June 2011 (UTC)
- Sorry, do you mind explaining how you got C(S−1, D−1)? 65.92.5.252 (talk) 20:12, 15 June 2011 (UTC)
If Visible Light Represented by People/Person, what's the Ratio
Just a thought, if the ELECTROMAGNETIC SPECTRUM was represented by population/people, how many people would see the Light? If the spectrum was so large as to not include Earth's current population, how many years would it take for one person to see Visible Light? --i am the kwisatz haderach (talk) 17:41, 15 June 2011 (UTC)
- I think it would just be (highest wavelength of visible - lowest of visible) / (highest of spectrum - lowest of spectrum), then multiply by Earth's population. KyuubiSeal (talk) 18:19, 15 June 2011 (UTC)
The Universe, in an episode on nebulas, says 'Visible light is represented by only 1 inch on an electromagnetic scale of over 2000 miles'. According to this show, I would take 63,360 inches/1 mile x 2000 to get my 126.72 million inches. With 1/126,720,000, w/world's pop at 6.92 bil, I take 6.92 billion and divide by 126.72 million to get about 55 people seeing the Light. --i am the kwisatz haderach (talk) 20:48, 15 June 2011 (UTC)
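The arithmetic above checks out once the inches-per-mile figure is 63,360 (a quick editorial verification; the 2011 world-population figure is the one used in the post):

```python
# 1 mile = 63,360 inches, so a 2000-mile scale is 126,720,000 inches,
# of which visible light occupies 1 inch.
inches_total = 63_360 * 2000
population = 6.92e9                       # world population, ~2011
people_seeing_light = population / inches_total
print(inches_total, round(people_seeing_light, 1))   # -> 126720000 54.6
```

So on this analogy roughly 55 people out of the world's population would "see the Light".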