Wikipedia:Reference desk/Mathematics
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
March 5
Length of a curve
Can I get some tips on integrating equations such as , which come up when finding the length of a curve? 72.200.101.17 (talk) 02:20, 5 March 2009 (UTC)
- These are usually the differentials of things like sin⁻¹x, iirc. If you've got a table of standard integrals, it's likely to be on that. -mattbuck (Talk) 03:12, 5 March 2009 (UTC)
- This is a classic case of where the solution is found by thinking and working outside the box. See Trigonometric substitution. (And be sure to note that the in that article's examples is not quite the same as the in your problem.) -- Tcncv (talk) 04:31, 5 March 2009 (UTC)
The trigonometric substitution x = √a · tan θ will transform that one into the integral of secant cubed. Michael Hardy (talk) 00:46, 6 March 2009 (UTC)
- Why use such a substitution when you have hyperbolic trigonometry? Put (a monotonic function on the whole real line) and use together with , followed by double-"angle" formula for . — Pt (T) 11:10, 12 March 2009 (UTC)
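The integrand in the question was lost in transcription; assuming it has the form √(a + x²), which is the form Michael Hardy's substitution suggests, the two substitutions work out as follows (a sketch, not necessarily the original poster's integral):

\[ x=\sqrt{a}\,\tan\theta,\quad dx=\sqrt{a}\,\sec^2\theta\,d\theta \;\Longrightarrow\; \int\sqrt{a+x^2}\,dx = a\int\sec^3\theta\,d\theta, \]
\[ x=\sqrt{a}\,\sinh t,\quad dx=\sqrt{a}\,\cosh t\,dt \;\Longrightarrow\; \int\sqrt{a+x^2}\,dx = a\int\cosh^2 t\,dt = \frac{a}{2}\operatorname{arsinh}\frac{x}{\sqrt{a}} + \frac{x\sqrt{a+x^2}}{2} + C. \]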
Continued - Random Generator
This is a continuation of the questions I asked above about MATLAB, C++, and a good random number generator. How about a generator that will give me a decimal between zero and one? I know I can always use any generator and then divide the output by the maximum output possible but I really don't want to be dividing by really large numbers 20,000 times.-Looking for Wisdom and Insight! (talk) 05:54, 5 March 2009 (UTC)
- Use J (programming language). Get 20000 random numbers between 0 and 1 by typing
- ? 20000 $ 0
- Bo Jacoby (talk) 06:23, 5 March 2009 (UTC).
- If you're worried about computation overhead, keep in mind that the computation time spent on calculating the next random integer likely far dwarfs a single float division computation. Also, if you are dividing by a constant power of two, the computation time is negligible (depending on how smart your compiler is): a constant just needs to be subtracted from the exponent.
- Although I didn't comment on your discussion above, I've used the GNU Scientific Library's random number generators before, and found them easy to use and flexible. (But I did not have specific requirements for a random number generator, any one would do for me.) The documentation can be found here. Click "Random Number Generator Algorithms" to get the list of high quality RNGs implemented in the GSL. Eric. 131.215.158.184 (talk) 08:49, 5 March 2009 (UTC)
- (ec):Did you check out the Gnu Scientific Library? I assume that what you want is sampling from a uniform distribution in the interval [0..1]. The function gsl_rng_uniform does almost that, it returns numbers in the interval [0..1), i.e. it may return 0, but never 1. Internally, it performs the division that you're reluctant to do. Btw, why are you worried about 20,000 divisions in C++? If you're interested in other languages, the command in R (programming language) is runif(20000), where runif is short for random uniform. --NorwegianBlue talk 08:57, 5 March 2009 (UTC)
- 20000 doesn't matter nowadays on a computer, but if you're really worried by divides then simply precompute the inverse and multiply by that. Dmcq (talk) 13:09, 5 March 2009 (UTC)
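For reference, here is a minimal C++ sketch of what the replies above describe; the seed, names and the choice of std::mt19937 are illustrative only, and the library performs the scaling into [0,1) internally, so no explicit division by the generator's maximum is needed:

    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 gen(12345u);                              // Mersenne Twister engine
        std::uniform_real_distribution<double> dist(0.0, 1.0); // samples in [0, 1)
        std::vector<double> samples(20000);
        for (double &x : samples)
            x = dist(gen);                                     // no manual division needed
        std::printf("first sample: %f\n", samples.front());
    }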
Uniform distribution and independence
Could anyone possibly explain briefly to me how one would go about showing that if X and Y are independent, identically distributed random variables, each uniformly distributed on [0,1], and , and aren't independent?
I think it's something to do with the fact that if U is large within the given range, the ratio of X:Y is approximately 1, but I'm not sure... help! Thanks :) —Preceding unsigned comment added by 131.111.8.98 (talk) 15:15, 5 March 2009 (UTC)
- Yes, follow your idea. Consider the event U>3/2 and the event V>2, for instance, and their intersection --pma (talk) 16:09, 5 March 2009 (UTC)
- (slight topic hijack, hope it's ok) What is a good book to read about this stuff? One that would let me write a sentence like the one you (pma) just wrote. Something that explains what random variables and events are in a practical enough way to show how to do such calculations, but also mathematically rigorous enough (let's say, starting from basic real analysis) to explain what the words really mean. Thanks. 207.241.239.70 (talk) 04:45, 7 March 2009 (UTC)
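The definitions of the two derived variables did not survive transcription; assuming the usual version of this exercise, U = X + Y and V = X/Y, pma's hint can be carried out as follows (a sketch under that assumption, not necessarily the variables in the original question):

\[ P(U>\tfrac{3}{2})>0 \quad\text{and}\quad P(V>2)>0, \quad\text{but}\quad P(U>\tfrac{3}{2},\,V>2)=0, \]
because V > 2 means X > 2Y, and together with X + Y > 3/2 and X ≤ 1 this forces Y > 1/2 and hence X > 2Y > 1, which is impossible. Since the joint probability is not the product of the marginals, U and V are not independent.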
Differential Equations for Chemical Engineering
I'm currently reading a little into reactor design for a project I am doing (in particular, this collection of lectures). It's going well, except for some problems with differential equations (which I only started investigating 10 minutes ago, so apologies if I'm asking obvious questions). For example, the design equation of a batch reactor is:
I am reliably informed that this is an ordinary differential equation and can be solved by separating the variables. Hence:
So far, I'm happy with that. It then says "integrating gives":
This is where I get stuck. First of all, integrating with respect to what? As I don't understand this, I don't understand why dt becomes t and dy becomes 1 (i.e. so integrates to give ). I have a feeling I've made a complete hash of this, can anyone set me straight? Thanks. --80.229.152.246 (talk) 21:30, 5 March 2009 (UTC)
- That should be
- Separation of variables is a short-cut, but somewhat of an abuse of notation when expressed like this, as it trades on the (intentional) resemblance of Leibniz's notation to a fraction. A more rigorous approach is:
- then you integrate both sides with respect to t to get
- Gandalf61 (talk) 21:58, 5 March 2009 (UTC)
- Thanks very much. It makes a lot more sense now. --80.229.152.246 (talk) 23:34, 5 March 2009 (UTC)
- If you are familiar with group theory (which has many applications in chemistry, by the way), you will know that manipulations with the quotient operator are also an "abuse of notation" analogous to manipulating the differentials. --PST 04:17, 6 March 2009 (UTC)
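The reactor equation itself was lost in transcription, but Gandalf61's "more rigorous approach" can be written out for a generic separable equation (a sketch of the general pattern, not the specific design equation):

\[ \frac{dy}{dt}=f(t)\,g(y) \;\Longrightarrow\; \frac{1}{g(y)}\frac{dy}{dt}=f(t) \;\Longrightarrow\; \int\frac{1}{g(y)}\frac{dy}{dt}\,dt=\int\frac{dy}{g(y)}=\int f(t)\,dt, \]
using the substitution rule on the left-hand side. The "dt becomes t" step in the lecture notes is just the special case f(t) = 1, where the right-hand side integrates to t + C.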
Calculation of Pi
Suppose I randomly pick a point inside a 1x1 square, determine whether its distance from a certain vertex is less than 1, and repeat the process n times. I then estimate pi as 4 × (number of hits)/(number of trials). How large does n have to be if I want pi to be accurate to k digits?
Out of curiosity, I wrote a Java program to calculate pi in this way. The first 10 million repetitions gave me 3.142, but the next 250 billion only managed to determine 1 extra digit. --Bowlhover (talk) 22:13, 5 March 2009 (UTC)
- With n trials, you get a random variable with mean π and standard deviation sqrt((4π − π²)/n) ≈ 1.6/sqrt(n). To get k reliable digits, you want the s.d. to be well under 10^−k, say under 10^−k/3, so you want over 25·10^(2k) trials. So your 250 billion trials should be good for 4 or 5 digits, as you discovered. Algebraist 22:23, 5 March 2009 (UTC)
- This reminds me of the story of the famous Buffon's needle. Various people enjoyed the experimental measurement of π by counting intersections of needles thrown on the parquet. Results:
- Wolf (1850), 5000 needles, π ≈ 3.15..
- Smith (1855), 3204 needles, π ≈ 3.15..
- De Morgan (1860), 600 needles, π ≈ 3.13..
- Fox (1864), 1030 needles, π ≈ 3.15..
- Lazzerini (1901), 3408 needles, π ≈ 3.141592..
- Reina (1925), 2520 needles, π ≈ 3.17..
- --pma (talk) 00:10, 6 March 2009 (UTC)
What did Lazzerini do differently to the others? That can't just be good luck... --Tango (talk) 00:15, 6 March 2009 (UTC)
I clicked the link... surprising how informative that can be! --Tango (talk) 00:17, 6 March 2009 (UTC)
- That's some interesting cheating. If Lazzerini had done his experiment the proper way, it would probably have taken him millions of years to get 3.141592. Even my program, which has finished 360 billion trials, is still reporting around 3.14159004, with no indication that the first "0" will become "2" anytime soon.
- Algebraist: how did you get the standard deviation expression sqrt((4π − π²)/n)? I know very little about statistics, so please explain. Thanks! --Bowlhover (talk) 06:07, 6 March 2009 (UTC)
- A single trial gives rise to a Bernoulli random variable with parameter p = π/4. It thus has mean π/4 and variance π/4 − π²/16. Multiplying by 4 gives a r.v. with mean π and variance 4π − π². Averaging out n of these (independently) gives mean π and variance (4π − π²)/n. Standard deviation is the square root of variance. Algebraist 09:03, 6 March 2009 (UTC)
- It will take me a while to learn those concepts, but thanks! --Bowlhover (talk) 08:15, 7 March 2009 (UTC)
- Are you using something faster than Math.random() to generate random numbers, or do you have a really fast computer? After some vague amount of time (15 minutes?) I've only got 2 billion trials... though I already have 3.1415 stably.
- I am vaguely contemplating the difficulty of writing a program that cherry-picks results: given a fixed amount of time (measured in trials), it would calculate every few million trials how close the current approximation to pi is, and calculate whether continuing running the program or restarting the program gives a lower expected value for your final error. Or maybe, a-la Lazzerini, it chooses a not-too-ambitious continued fraction approximant for pi and attempts to hit it exactly, aborting when that happens. Eric. 131.215.158.184 (talk) 07:01, 6 March 2009 (UTC)
- I'm using java.util.Random, the same class that Math.random uses, so I don't think that has anything to do with it. My computer's processing speed is probably not relevant either because it is only 2 GHz, probably slower than your computer's. Maybe you're outputting the value of pi every iteration? I used "if (num_trials%10000000==0)" so that printing to screen doesn't limit the program's speed.
- You might be interested to know that I modified the program to only output a calculated value of pi if it is the most accurate one made so far. Here are the results (multiply all the fractions by 4):
Data dump:
    4.0                 1/1
    3.5                 7/8
    3.272727272727273   9/11
    3.142857142857143   11/14
    3.140534262485482   1352/1722
    3.1410330818340104  1353/1723
    3.1415313225058004  1354/1724
    3.1416202844774275  2540/3234
    3.1415743789284645  2624/3341
    3.1416072990876143  6284/8001
    3.1416015625        6434/8192
    3.141587606581002   6540/8327
    3.141589737441554   6551/8341
    3.1415929203539825  6745/8588
    3.1415924255132652  519694/661695
    3.1415928668580926  519698/661700
    3.1415928268290365  520214/662357
    3.1415924928442895  520254/662408
    3.1415925871823793  521535/664039
    3.1415926140510604  521901/664505
    3.1415926673317953  521923/664533
    3.14159265069769    2757939/3511517
    3.141592653304311   2758832/3512654
    3.1415926537518883  2789534/3551745
    3.1415926534962155  8230357/10479216
    3.1415926535954957  8415806/10715337
    3.1415926535873497  8493712/10814530
    3.1415926535881242  31499778/40106763
    3.141592653590579   41739544/53144438
    3.1415926535901835  260989009/332301527
    3.141592653589953   261093002/332433935
    3.1415926535896586  409395251/521258223
    3.1415926535898775  484979918/617495610
    3.141592653589793   39509540665/50305109569
- It's neat that the last, 3.141592653589793, is accurate to the last printed digit. --Bowlhover (talk) 08:16, 7 March 2009 (UTC)
- My computer's new, but it's a low-end laptop. Hmmmm. No, I print out every 2^24 iterations. I don't do a square root, either, which is an easy mistake. Maybe I'm just impatient with not running the program long enough. Good idea with printing only the best approximation so far... it sort of "cheats" because as your current estimate drifts from being too low to too high it must pass very near pi in between. Eric. 131.215.158.184 (talk) 21:27, 7 March 2009 (UTC)
- I suspect that Lazzarini's point was to show how a delicate matter is a statistical experiment... --pma (talk) 09:05, 6 March 2009 (UTC)
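For anyone who wants to repeat the experiment, here is a C++ analogue of Bowlhover's Java program (a sketch; the seed, sample size and use of std::mt19937_64 are my own choices), including Algebraist's standard-error estimate sqrt((4π − π²)/n):

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937_64 gen(42);
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        const std::uint64_t n = 100000000;            // number of random points
        std::uint64_t hits = 0;
        for (std::uint64_t i = 0; i < n; ++i) {
            double x = dist(gen), y = dist(gen);
            if (x * x + y * y < 1.0) ++hits;          // inside the quarter circle
        }
        double estimate = 4.0 * hits / n;
        double se = std::sqrt((4.0 * estimate - estimate * estimate) / n);
        std::printf("pi ~ %.8f (standard error ~ %.1e)\n", estimate, se);
    }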
March 6
About Lagrange's Theorem
Lagrange's four-square theorem states that every positive integer can be written as the sum of four squares of integers.
If S is the set of all non-negative integers except 7, is it still the case that every positive integer is the sum of the squares of four elements of S?
I feel sure that this sort of question has been considered before. Does anyone know where to find it in the literature? Partitioin (talk) 00:43, 6 March 2009 (UTC)
- I fooled around with a computer program for a few minutes and didn't find any counterexamples in a few thousand trials, but that means nothing. I wikilinked the theorem for you, hope that's ok. 207.241.239.70 (talk) 05:37, 7 March 2009 (UTC)
- I've confirmed solutions (sans 7s) for integers up through 300,000,000. The average number of solutions per value appears to have a slightly less than linear relationship with numbers. In the 1,000,000 range the average number of solutions per integer is on the order of 25,000 or a ratio of about 0.025. However, I've found some numbers having significantly fewer solutions, such as 10,485,760 with only 2 solutions – that's quite a statistical bump. Of course, none of this has anything to do with a proof. -- Tcncv (talk) 05:47, 8 March 2009 (UTC)
- I am not surprised that the number of solutions does not behave smoothly - there are an infinite number of integers which only have one representation (up to order) as the sum of four squares - see OEIS A006431. If you omit 5 from S instead of 7 then empirical evidence suggests that every integer has a representation as a sum of four squares of integers in S apart from 79. And, interestingly, 79 has 3 distinct representations as the sum of four squares:
- 79 = 7² + 5² + 2² + 1²
- 79 = 5² + 5² + 5² + 2²
- 79 = 6² + 5² + 3² + 3²
- ... but they all happen to include 5². However, I don't see an approach to a proof. Gandalf61 (talk) 13:17, 8 March 2009 (UTC)
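A sketch of the kind of brute-force check described above, counting representations of m as a sum of four squares with the integer 7 excluded from S (this is my own illustration of the search, not the code the posters actually ran):

    #include <cmath>
    #include <cstdio>

    // Count representations m = a^2 + b^2 + c^2 + d^2 with a >= b >= c >= d >= 0
    // in which no term uses the forbidden base (7 here).
    long count_reps(long m, long forbidden) {
        long count = 0;
        long lim = static_cast<long>(std::sqrt(static_cast<double>(m)));
        while ((lim + 1) * (lim + 1) <= m) ++lim;             // guard against rounding
        for (long a = 0; a <= lim; ++a) {
            if (a == forbidden) continue;
            for (long b = 0; b <= a; ++b) {
                if (b == forbidden) continue;
                for (long c = 0; c <= b; ++c) {
                    if (c == forbidden) continue;
                    long rest = m - a*a - b*b - c*c;
                    if (rest < 0) break;                      // larger c only overshoots further
                    long d = static_cast<long>(std::sqrt(static_cast<double>(rest)));
                    while (d * d > rest) --d;
                    while ((d + 1) * (d + 1) <= rest) ++d;
                    if (d <= c && d * d == rest && d != forbidden) ++count;
                }
            }
        }
        return count;
    }

    int main() {
        for (long m = 1; m <= 100; ++m)                       // the thread found no failures here
            if (count_reps(m, 7) == 0)
                std::printf("no 7-free representation for %ld\n", m);
    }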
Physics question
Please see Wikipedia:Reference_desk/Science#Force_meter_question, (somebody sensible), for some reason the usually sentient editor "Gandalf61", appears to have lost their marbles, or I'm going blind...
i.e. a third opinion. 213.249.232.187 (talk) 14:34, 6 March 2009 (UTC)
- Nope, he's right. "Tango" proposes a good analogy near the end of the thread. yandman 16:26, 6 March 2009 (UTC)
Finding from Finding from
How do you find from ? How do you find from ? The Successor of Physics 15:19, 6 March 2009 (UTC) The Successor of Physics 13:15, 8 March 2009 (UTC)
- It's a little difficult to understand what you mean, but if you mean that knowing as a function of , you want to express as a function of , then Antiderivative is your article. —JAO • T • C 17:29, 6 March 2009 (UTC)
- Perhaps I should restate my question as "How do you find from ?". Thanks anyway. The Successor of Physics 13:15, 8 March 2009 (UTC)
- is an operator. Applying it to is how you find . Your question sounds like "How do you find from ?" and has the same answer: knowing what operator to apply ( or , for instance) is not useful if you don't know what to apply it to. —JAO • T • C 17:37, 8 March 2009 (UTC)
- I want to know it so in for example because you can express derivatives more easily, and if , and sometimes I want to know y and know t but not y, so that is why I asked the question. The Successor of Physics 10:04, 9 March 2009 (UTC)
- Your expression is complete nonsense. Algebraist 10:58, 9 March 2009 (UTC)
- If you do know that for some known function which you can integrate, then you can use separation of variables to find . But as Algebraist says, don't try to divide both sides by , because the left-hand side isn't a multiplication to begin with. —JAO • T • C 14:21, 9 March 2009 (UTC)
- I'm reading a book on vector calculus, and suddenly remembered that ! does this help? —Preceding unsigned comment added by Superwj5 (talk • contribs) 14:53, 9 March 2009 (UTC)
- Not really. An actual example of a case where you "know " would help more. (Separation of variables was not helpful?) —JAO • T • C 18:01, 9 March 2009 (UTC)
- Look at it this way: if you know that , does that mean that ? Of course not. Just because it looks like a fraction instead of a funny symbol doesn't mean that isn't an operator, and to deal with an equation with a differential operator you can't deal with it like it was a normal fraction or took a single value. Confusing Manifestation(Say hi!) 21:23, 9 March 2009 (UTC)
Determinants as Tensor Fields or Matrix Functions
Are determinants tensor fields or matrix functions? If yes, which, and please state the equation and state its inverse. (Please don't be too harsh on me. I'm only new in this field.) The Successor of Physics 15:27, 6 March 2009 (UTC)
- The most common use of the word "determinant" is as in "the determinant of a matrix", which is of course just a scalar, not a function. But there's also the determinant function, , which maps certain matrices to their respective determinants, so that's a matrix function ("matrix function" usually means a function from matrices to matrices, but scalars can be identified with 1×1 matrices so that makes little difference). I don't know enough about tensor fields to answer your first question, and I have no idea what equation you refer to. —JAO • T • C 17:38, 6 March 2009 (UTC)
- The determinant is a (rank 0, i.e. somewhat trivial) tensor on the rows (or also the columns) of the matrix, if that's what you're asking. 207.241.239.70 (talk) 05:57, 7 March 2009 (UTC)
Cesaro mean
I am looking into the Cesaro mean right now. My question is, if I take any sequence that is bounded and I do the Cesaro mean over and over, do I eventually get a sequence that converges after a finite number of turns? Thanks for any help. StatisticsMan (talk) 18:10, 6 March 2009 (UTC)
- I don't know, but looking at Cesàro summation's iterated section, "The existence of a (C, α) summation implies every higher order summation, and also that a_n = o(n^α) if α > −1.", does seem to suggest bounded is a very strong hypothesis. (I think (C,n) summation means do Cesaro mean n times). JackSchmidt (talk) 20:46, 6 March 2009 (UTC)
- Ah, unfortunately, it looks like things similar to your question are well studied (so beyond my limited skills). Peyerimhoff's Lectures on Summability doi:10.1007/BFb0060951 MR0463744 appears to have some reasonable discussion of this sort of thing, and in particular describes who and when people studied iterated Cesàro summation (Hölder, Schur) and how to more easily discuss it. I didn't find any specific counterexamples, and couldn't tell if any of the theorems applied to your case, but you might have an easier time. JackSchmidt (talk) 21:00, 6 March 2009 (UTC)
- Oooh, you want a Tauberian theorem for iterated Cesàro means, and there is one in that book, Theorem III.2, and I think it says that a bounded sequence is Cesàro summable if and only if it is (C,n) summable for some n ≥ 1. There is an example in the Cesàro means chapter of a bounded sequence that is not Cesàro summable, so i think that means the answer to your question is no. JackSchmidt (talk) 21:07, 6 March 2009 (UTC)
- Of course the answer is no. The counterexample has the form of a sequence alternating two values on consecutive intervals of larger and larger length. Instead of doing a quantitative computation for you, I'll just try to convince you by this example. You are in your sofa watching TV, say zapping from channel 1 to 9. This way you generate a sequence in {1,2...,9}, each second k, switching from channel to channel . The trivial switch is also allowed, so at a certain moment, if you wish, you may start pressing channel 1 button compulsively for 100 times or more, one after the other (as actually happens to addicted TV watchers). The sequence of the Cesaro means, , somehow follows the sequence (in fact, with a regularization-delay effect). Anyway, after you have pressed channel 1 consecutively for 100 times or more, the means also are quite close to 1, say less than 2, and also the sequence of the means of the means starts being close to 1. Now, you switch to channel 9, pressing it each second, for 10,000 times or more, until the mean, and the mean of the mean, are both close to 9 (say >8). At that point you go back to channel 1, pressing it till the first 4 iterated Cesaro means are all less than 2 again, even 100,000,000 times if needed. As you see, this way you can make a two-value sequence with no converging iterated Cesaro mean. With a bit of patience you can make the construction quantitative. I think for and for suffices.--pma (talk) 00:05, 7 March 2009 (UTC)
- Thanks all. I can not read that book online. As far as pma's example, I have thought of this. I just alternated 1 and -1. I could say I want the average to go to at least 1/2 and then at least -1/2 each time to make sure it's not converging to something. So, 1, -1, -1, 1, 1, 1, 1, 1, ... . But, if I do the averages, I get 1, 0, -1/3, 0, 1/5, 1/3, 3/7, 1/2, and if I do averages again this time on the 2nd sequence, I get 1, 1/2, 2/9, 1/6, 13/75, 1/5, 57/245, 149/560, ...
- The point is, I am barely getting above 1/4 now (I realize I did very few terms) and all values are positive. So, I am closer to convergence in only one step. No matter how weird and wild you make a sequence, if I iterate this process, I will at least get closer and closer to convergence. In your example with the TV, you also get closer to convergence after only one iteration. No matter what sequence you construct, even with factorial number of terms, each iteration you get closer and closer to convergence. So, your example may be correct, but I am not convinced yet.
- I originally asked this question because of a homework problem that we would need this in, involving Banach Limits. But, we decided to use the shift operator instead of this operator in order to invoke the Hahn-Banach theorem (actually a slight generalization of it). Then, the problem became easy. But, I thank you all for your answers. I am interested in this just in general and may look into it more sometime when I have more time. StatisticsMan (talk) 14:16, 7 March 2009 (UTC)
- No problem. You would probably like the book (or other books with Tauberian theorems or Divergent series in the title). It works out PMajer's example, as well as the theorem that says that if one Cesàro mean is not enough, no finite number of iterations will be enough. It should be available in any reasonably sized university library (over 350 libraries according to OCLC 47540). JackSchmidt (talk) 15:42, 7 March 2009 (UTC)
- OK, let's see if I convince you, sorry for the TV example. It's very elementary. Start defining and ; then define to be 1, choosing so large that the first 2 Cesaro means at differ from 1 less than 1/2. Then define to be -1 choosing so large that the corresponding first 3 iterated Cesaro means at differ from -1 less than 1/3. And so on: by induction on k you define to be constantly equal to for all , choosing so large that the first k iterated Cesaro means differ from less than 1/k. As you see, you do not get any closer to convergence; all the iterated Cesaro means have liminf=-1 and limsup=1.--pma (talk) 19:28, 7 March 2009 (UTC)
- Alright, now what you are saying seems to make sense. Within the one sequence there are chunks that guarantee the kth Cesaro mean will not converge, for each positive integer k. Easy: that is, now that I have seen it, it is easy. As far as that book, I looked online and my library does not have it. But, that's okay. I have so much to do that I am required to do, I don't have much time to read some other book, even if it would be interesting. Thanks for the suggestion! StatisticsMan (talk) 20:45, 7 March 2009 (UTC)
- You are welcome. In fact, the best solution is the interpolation result quoted by JackSchmidt, that is, as I understand, "a bounded sequence with convergent second Cesaro mean has the first Cesaro mean convergent too". --pma (talk) 23:26, 7 March 2009 (UTC)
- Yea, that is a very bizarre result. It seems as if each iteration should get you closer to convergence. But, if one does not work, then 1 billion will not either. StatisticsMan (talk) 01:11, 8 March 2009 (UTC)
- We can restate it this way, taking the quotient on the space c of converging sequences: the Cesaro mean defines a bounded linear operator M on (it's a Banach space, in which the norm of [x] is for all ). Then we have , that is 0 is a semisimple eigenvalue of M (of infinite multiplicity). --pma (talk) 10:01, 8 March 2009 (UTC)
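For the record, pma's construction can be summarised compactly (my paraphrase of the argument above, not a quotation): build a sequence with values in {−1, +1} out of consecutive blocks B_1, B_2, B_3, ..., where block B_k is constantly (−1)^k and its length is chosen, inductively, so large that at the end of B_k the first k iterated Cesàro means all lie within 1/k of (−1)^k. A long enough constant tail drags any fixed finite collection of iterated averages arbitrarily close to that constant, so the choice is always possible. The resulting bounded sequence has lim inf = −1 and lim sup = +1 for every fixed iterated Cesàro mean, so no finite number of iterations produces convergence.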
March 7
derivative of a series
Suppose we are given the series: . Here where .
I wish to compute . How do I find a closed form expression for the above series so that I can compute the desired derivative? Thanks--122.160.195.98 (talk) 06:28, 7 March 2009 (UTC)
- Because , I suppose you just need to find the derivatives of the terms. Also, if you decompose the which I wouldn't specify, I guess the series would turn into a sum of Fourier series. The Successor of Physics 07:20, 7 March 2009 (UTC)
- How to compute u^2 is the main problem ---OP —Preceding unsigned comment added by 122.160.195.98 (talk) 09:44, 7 March 2009 (UTC)
- I suspect this is supposed to be an application of Parseval's theorem or the related Parseval's identity but I'm not up to working it out. 76.195.10.34 (talk) 12:53, 7 March 2009 (UTC)
- You can't just blindly differentiate an infinite series term-by-term. That requires uniform convergence. --Tango (talk) 14:00, 7 March 2009 (UTC)
- To OP: From my comment you should see that if differentiating once is ok, then just differentiate it again! The Successor of Physics 15:05, 7 March 2009 (UTC)
- To Tango: I don't understand, Tango, why uniform convergence is required. The Successor of Physics 15:05, 7 March 2009 (UTC)
- It just is. If the series doesn't converge uniformly then its derivative may well be different from the term-by-term derivative. See Uniform convergence#to Differentiability. You can't just assume that limits behave as you want them to behave, you have to actually prove it. --Tango (talk) 15:11, 7 March 2009 (UTC)
- Uniform convergence of the series isn't enough, actually. You need uniform convergence of the term-by-term derivative, as our article states. Algebraist 15:18, 7 March 2009 (UTC)
- Strictly speaking, I never said it was! ;) I couldn't remember the exact theorem (I tend to avoid analysis where possible), so was careful to speak vaguely. --Tango (talk) 15:25, 7 March 2009 (UTC)
- I am the OP (from a different IP). This problem is from a paper I am reading and the author has assumed all necessary convergence conditions. The main problem is how to compute u^2. Everything else will come after that--118.94.73.74 (talk) 15:42, 7 March 2009 (UTC)
- To compute u^2 you need to multiply two infinite series: see Cauchy product for how to do that. Your situation is a bit messy but a direct extrapolation from the description in the article. Eric. 131.215.158.184 (talk) 04:20, 8 March 2009 (UTC)
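Since the series itself was lost in transcription, here is the general Cauchy-product formula that the link refers to, written for a generic series (the coefficients u_k are placeholders, not the ones in the paper the OP is reading):

\[ u=\sum_{k=0}^{\infty}u_k \quad\Longrightarrow\quad u^2=\sum_{n=0}^{\infty}c_n, \qquad c_n=\sum_{k=0}^{n}u_k\,u_{n-k}, \]
valid, for instance, when the series converges absolutely; as noted above, differentiating the resulting series term by term then requires uniform convergence of the differentiated series.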
confusion about the proof of the splitting lemma in the wikipedia article
I have been reading the proof of the splitting lemma in the Wikipedia article of that name, and was wondering if anyone could help me to understand the very first part.
At the very start of the proof, to show that 3. (direct sum) implies 1. (left split), they take t as the natural projection of (A×C) onto A, i.e. mapping (x,y) in B to x in A. Why does this satisfy the condition that tq is the identity on A? Similarly, to show that 3. implies 2., they take u as the natural injection of C into the direct sum of A and C (A×C), i.e. mapping y in C to (1,y). How does this satisfy the condition that ru is the identity on C?
It would appear to me that they mean something else by the "natural" projection and injection, but I can't see what this would be. Thanks for your help, and I'm sorry if this is badly worded. —Preceding unsigned comment added by Jc235 (talk • contribs) 16:44, 7 March 2009 (UTC)
- 3. States not only that B is the direct sum of A and C, but also that the maps in the SES are the obvious ones: that q is the natural injection from A to A×C, i.e. q(a)=(a,0) and that r is the natural projection from A×C to C, i.e. r(a,c)=c. Then, setting t to be the natural projection of A×C onto A, we have that tq(a)=t(a,0)=a for all a in A. Similarly, if u is the natural injection of C into A×C, then ru(c)=r(0,c)=c for all c in C. (note: for the case of a general abelian category, talking about elements like this makes no sense, but this suffices for concrete examples) Algebraist 17:07, 7 March 2009 (UTC)
Confusion about differentiability & continuity
Hi there - I'm looking at the function exp(−1/x²) - the standard example of an infinitely differentiable non-analytic function - and I'm wondering exactly how you prove that all of its derivatives are zero at x=0. In general, is it invalid to differentiate the function as you normally would (assuming nice behaviour) to get, in this example, , and then simply say it may (or may not) be differentiable at the 'nasty points' such as x=0? Or are there functions which have a derivative which is defined for well-behaved points, but also has a well-defined value for non-differentiable points? Apologies if that made no sense. How would you progress then to show that all are equal to 0? Do you need induction?
On continuity, I'm using the definition effectively provided by Heine as in [1] - i.e. that a function is continuous at c if : but what happens if both the function and the limit of the function are undefined at a single point and continuity holds everywhere else? Because it's not like the function has a -different- limit to its value at c explicitly, they're just both undefined - does that still make it discontinuous? Incidentally, I'm thinking of a function like , where (if I'm not being stupid) both the function and its limit are undefined at 0, but wondering more about the general case - would that make it discontinuous?
Many thanks, Spamalert101 (talk) 21:23, 7 March 2009 (UTC)Spamalert101
- Firstly, is not defined at 0, any more than is. In the case of , however, there is a unique continuous function f on the real line extending the given function, and it's a standard abuse of notation to call this function (which takes the value for x not zero and the value 0 at x=0) by the same name. Once you've done that, showing that f has derivative 0 at 0 is just a matter of applying the definition of differentiation: you have to show that tends to zero as x tends to zero. At all other points, the chain rule (which only requires differentiability at the points involved; no nicer behaviour is needed) tells you that the derivative is . Thus the derivative of f is the function g where g(x)= for x not zero and g(0)=0. Then you can show that g is also differentiable at 0 with derivative 0. Showing that f is infinitely differentiable at 0 requires some sort of induction. Inducting on the statement 'the nth derivative of f is 0 at zero and is some rational function times elsewhere' ought to work. Algebraist 21:32, 7 March 2009 (UTC)
- On your second question, the function f from R\{0} to R given by f(x)= is a continuous function. It can't be extended to a function continuous at 0, though, so by an abuse of terminology it might be thought of as being a function on R discontinuous at 0. Algebraist 21:35, 7 March 2009 (UTC)
- Or, without an abuse of terminology, you can just define f(0)=0, or whatever, and then it simply is a function continuous everywhere except 0. --Tango (talk) 21:53, 7 March 2009 (UTC)
- On your second question, the function f from R\{0} to R given by f(x)= is a continuous function. It can't be extended to a function continuous at 0, though, so by an abuse of terminology it might be thought of as being a function on R discontinuous at 0. Algebraist 21:35, 7 March 2009 (UTC)
- Let me add this. In order to prove that a function extends to a function on it is necessary and sufficient that all derivatives of f have limit as , which is easily seen to be the case of exp(-1/x2). One applies repeatedly the following well-known lemma of extension of differentiability, which is a simple consequence of the mean value theorem: "Let f be a continuous function on , differentiable in ; assume that there exists the limit . Then f is differentiable at x=0 too, and " (going back to f(x):=exp(-1/x2), the thing works because the k-th derivative of f has the form mentioned above by Algebraist, that is f(x) times a rational function). --pma (talk) 23:54, 7 March 2009 (UTC)
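Spelling out the first step Algebraist describes, for f(x) = e^{−1/x²} when x ≠ 0 and f(0) = 0 (a worked version of the computation referred to above):

\[ f'(0)=\lim_{x\to 0}\frac{f(x)-f(0)}{x}=\lim_{x\to 0}\frac{e^{-1/x^2}}{x}=\lim_{|u|\to\infty}u\,e^{-u^2}=0 \qquad (u=1/x), \]
while for x ≠ 0 the chain rule gives f'(x) = (2/x³) e^{−1/x²}. The inductive statement is that for every n there is a rational function R_n with f^{(n)}(x) = R_n(x) e^{−1/x²} for x ≠ 0 and f^{(n)}(0) = 0; the same limit argument (e^{−1/x²} decays faster than any power of x) then yields f^{(n+1)}(0) = 0.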
Ah that's wonderful, thanks so much for all the help. As a matter of interest, I see the Wikipedia article on [2] 'an infinitely differentiable non-analytic function' gives that the analytic extension to has an essential singularity at the origin for - is the same true with z^2 in lieu of z? Also, another related article [3] hints at 'via a sequence of piecewise definitions', constructing from a function g(x) with g(x)= and g is infinitely differentiable - I've managed to construct such a piecewise function using and stretches, translations etc of that, but how would you go about it with ? Is there a nice way to create such a function? Spamalert101 (talk) 17:21, 10 March 2009 (UTC) Spamalert
- Yes. — Emil J. 17:29, 10 March 2009 (UTC)
- Yes, the essential singularity persists with z^2. You can see this by considering the value of the function for z small and either real or pure imaginary: for real z, the function goes to 0, while for pure imaginary z, it goes to infinity. The piece-defined functions you describe are normally called bump functions, and that article gives one way of constructing them. How did you manage to construct such a function with ? It seems impossible to me. Algebraist 01:36, 11 March 2009 (UTC)
Oh, did I make a mistake then? I sincerely doubt my own knowledge over yours! When would it become non-differentiable? I had the function so it tends to infinity on either side of ±3/2 but to 1 at ±1 and 0 at ±2 by defining the functions piecewise as above so they had a derivative which tended to 0 as x tended to ±1 or 2, for example on - would it become non-differentiable at some point then? Spamalert101 (talk) 06:39, 11 March 2009 (UTC)Spamalert101
March 8
Pascal Matrix
Is there a way to invert the Pascal matrix U∞? Black Carrot (talk) 02:16, 8 March 2009 (UTC)
- The exponential definition should give a simple formula for the inverse. Upper triangular matrices behave reasonably well, even in infinite dimensions (even over non-discrete linear orders). If you work a few finite examples, the pattern for the full inverse should become clear too. JackSchmidt (talk) 03:10, 8 March 2009 (UTC)
- And for the symmetrical one - since S_n = L_n·U_n, this means that S_n⁻¹ = U_n⁻¹·L_n⁻¹. עוד מישהו Od Mishehu 10:20, 8 March 2009 (UTC)
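Working a few small cases, as suggested above, the pattern for the inverse of the upper-triangular Pascal matrix appears to be (stated here as a sketch, with indices starting at 0):

\[ (U)_{ij}=\binom{j}{i} \quad\Longrightarrow\quad (U^{-1})_{ij}=(-1)^{\,j-i}\binom{j}{i}, \qquad\text{e.g.}\quad \begin{pmatrix}1&1&1\\0&1&2\\0&0&1\end{pmatrix}^{-1}=\begin{pmatrix}1&-1&1\\0&1&-2\\0&0&1\end{pmatrix}. \]
This is consistent with the exponential description mentioned by JackSchmidt: if U = e^L for a strictly upper-triangular L, then U⁻¹ = e^{−L}.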
Formula for n×n Singular Matrices
Is there a formula to construct arbitrary n×n singular matrices, and if yes, what? The Successor of Physics 03:56, 8 March 2009 (UTC)
- I think you can create one by creating an arbitrary (n-1)×n matrix and then add any two rows to give you the final row. This should yield a matrix whose determinant is zero. More generally, you can calculate for arbitrary values. -- Tcncv (talk) 05:59, 8 March 2009 (UTC)
Add a row of zeros. Add rows of zeros. Add all zeros. —Preceding unsigned comment added by 69.72.68.7 (talk) 03:11, 10 March 2009 (UTC)
- True, but if we take that approach, we might as well make an entire matrix of zeros. However, I suspect the OP was looking for matrices where the singular nature was not obvious – possibly as a test question or homework problem. -- Tcncv (talk) 03:35, 10 March 2009 (UTC)
- It seems to me that the OP was looking for a way of generating all singular matrices. In that case, the method above (take an arbitrary (n − 1)×n matrix, add an extra row which is linear combination of the others) almost works, except that you have to allow the extra row to be put in an arbitrary position in the matrix, not just the last row. — Emil J. 11:09, 10 March 2009 (UTC)
- You're right. Defining could generate additional cases not reproducible in my original formula if . Question: If we applied the technique to columns instead of rows, would we generate the same set? My gut tells me yes. -- Tcncv (talk) 03:13, 11 March 2009 (UTC)
- Yes, singularity can be defined either in terms of rows or in terms of columns, it makes no difference. — Emil J. 11:23, 12 March 2009 (UTC)
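A sketch of Tcncv's recipe in code (illustrative only, and it assumes n ≥ 2): fill the first n − 1 rows arbitrarily and make the last row a sum of two earlier rows, so the rows are linearly dependent and the determinant is zero. As Emil J. notes above, to reach every singular matrix the dependent row (or column) would have to be allowed in an arbitrary position.

    #include <cstddef>
    #include <random>
    #include <vector>

    std::vector<std::vector<double>> random_singular(std::size_t n, unsigned seed = 1) {
        std::mt19937 gen(seed);
        std::uniform_real_distribution<double> dist(-1.0, 1.0);
        std::vector<std::vector<double>> a(n, std::vector<double>(n));
        for (std::size_t i = 0; i + 1 < n; ++i)      // first n-1 rows: arbitrary entries
            for (std::size_t j = 0; j < n; ++j)
                a[i][j] = dist(gen);
        for (std::size_t j = 0; j < n; ++j)          // last row: row 0 + row 1, so det = 0
            a[n - 1][j] = a[0][j] + a[1][j];
        return a;
    }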
Hard to compute mathematical constant
Is there a prominent example of a mathematical constant which is so hard to compute, that (relatively) few decimal places are known? I'm aware that you easily can construct numbers to be hard to compute, but I'm asking for a mathematical constant which is reasonably important in its field and is hard to compute "by accident" and not by design. --Pjacobi (talk) 13:49, 8 March 2009 (UTC)
- Off the top of my head, I'd suggest looking at Feigenbaum constants. Apparently, only a thousand digits have been worked out of δ. Pallida Mors 14:32, 8 March 2009 (UTC)
- Chaitin constant is hard for sure, even if not by accident... --pma (talk) 16:29, 8 March 2009 (UTC)
- A hard to compute function is the zeta function. If you "compute" that, expect to get a Fields Medal.
- What's so hard about computing the (presumably Riemann) zeta function? People do it all the time. Algebraist 12:07, 9 March 2009 (UTC)
- The inverse to the Riemann zeta function is difficult to compute (which is, presumably, what the person without a signature meant), but I don't think that's really relevant to this discussion. --Tango (talk) 13:39, 9 March 2009 (UTC)
- If you can compute n digits of a monotonic function in O(f(n)), shouldn't you be able to compute n digits of the inverse in O(n(f(n))? (Assuming you have finite bounds on the value of the inverse.) Eric. 131.215.158.184 (talk) 18:15, 9 March 2009 (UTC)
- The inverse to the Riemann zeta function is difficult to compute (which is, presumably, what the person without a signature meant), but I don't think that's really relevant to this discussion. --Tango (talk) 13:39, 9 March 2009 (UTC)
- What's so hard about computing the (presumably Riemann) zeta function? People do it all the time. Algebraist 12:07, 9 March 2009 (UTC)
- A hard to compute function is the zeta function. If you "compute" that, expect to get a fields medal.
- Of course, you can define constants that are currently impossible to compute at all, such as C=0 if Goldbach's conjecture is true, otherwise C=the least even number that is not the sum of two primes; or the lowest counterexample to the Collatz conjecture; or the position of the first sequence of a googol consecutive zeros in the decimal expansion of pi. AndrewWTaylor (talk) 12:36, 9 March 2009 (UTC)
- Chaitin's constant beats them by being uncomputable period, not just currently uncomputable. It's less contrived, too. Algebraist 12:45, 9 March 2009 (UTC)
- Percentage-wise I'd imagine it's fairly hard to compute the digits of Graham's number. You'd probably run out of atoms to write the digits down with before you got any significant percentage of the overall number. Readro (talk) 13:18, 9 March 2009 (UTC)
- Brun's constant is only known to a dozen or so decimal places. Here, "known" actually means more like "made an educated guess based on a massive computation and unproved conjectures", IIUIC there is not even an explicit upper bound on the constant which would be rigorously proved. There's also a discussion in [4]. — Emil J. 13:59, 9 March 2009 (UTC)
- Good example. Eric. 131.215.158.184 (talk) 18:15, 9 March 2009 (UTC)
- How about the area of the Mandelbrot set? Fredrik Johansson 15:15, 9 March 2009 (UTC)
Compute the largest number of consecutive primes via a polynomial of an order less than the number of primes. —Preceding unsigned comment added by 69.72.68.7 (talk) 03:18, 10 March 2009 (UTC)
conditions for a map to be a homomorphism
if a map between two groups preserves group structure on some non-trivial normal subgroup of the domain, is this enough to say that it is a homomorphism? If the answer is no, are there any further conditions under which this is the case?
- No, it's not sufficient. Consider the map that maps (0,0) to 0 and (1,0) to 2. That's a homomorphism on a normal subgroup of the domain. Then map (0,1) to 1 and (1,1) to 3. You now have f(2(0,1))=f(0,0)=0, but 2f(0,1)=2*1=2, so not a homomorphism everywhere. I don't know if there is a similar result that works, with just a few extra conditions, I doubt it, though. --Tango (talk) 15:23, 8 March 2009 (UTC)
what about if the map is surjective and the normal subgroup maps once over injectively onto the image?
- The counter-example I gave is bijective. --Tango (talk) 16:37, 8 March 2009 (UTC)
- Note that if any automorphism of a group, G, fixes a subgroup H of G, H is said to be characteristic in G and we write H char G. If every inner automorphism of G fixes H, H is normal in G. In particular, every characteristic subgroup of a group is normal. Examples of characteristic subgroups are: the center of a group, the whole group, and the trivial group. A question you might like to investigate is whether any automorphism of a normal subgroup of a group G, extends to an automorphism over the whole group. If not, what additional conditions can you impose? --PST 11:44, 9 March 2009 (UTC)
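Tango's counterexample above, written out explicitly (my formalisation of the map described, taking the domain to be Z/2 ⊕ Z/2 and the codomain Z/4):

\[ f(0,0)=0,\qquad f(1,0)=2,\qquad f(0,1)=1,\qquad f(1,1)=3. \]
The restriction of f to the normal subgroup N = {(0,0), (1,0)} is an isomorphism onto the subgroup {0, 2} of Z/4, yet f itself is not a homomorphism, since f((0,1) + (0,1)) = f(0,0) = 0 while f(0,1) + f(0,1) = 2. Note that f is also a bijection, which is why surjectivity does not rescue the claim.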
Solving Exponentials... with a twist
Is it possible to solve exponential equations (mathematically) when the variable also appears in the base? For example: x^(2x) = 3?
Thanks, 160.39.193.25 (talk) 18:37, 8 March 2009 (UTC)
- I think the answer will end up being in terms of the Lambert W function. --Tango (talk) 18:55, 8 March 2009 (UTC)
- In Lambert_W_function#Examples, Example 2 shows that the solution to is . So in the case of , . Incidentally, does anyone know if there's an incantation to get Maxima to use that? By default,
solve(x^(2*x) = 3, x);
gives and doesn't solve further. --Delirium (talk) 02:48, 9 March 2009 (UTC)
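The worked solution referred to above, with the formulas restored (this follows the pattern of Example 2 in the Lambert W article, assuming the equation is the OP's x^(2x) = 3):

\[ x^{2x}=3 \;\Longrightarrow\; 2x\ln x=\ln 3 \;\Longrightarrow\; (\ln x)\,e^{\ln x}=\frac{\ln 3}{2} \;\Longrightarrow\; \ln x=W\!\left(\frac{\ln 3}{2}\right) \;\Longrightarrow\; x=e^{W\left(\frac{\ln 3}{2}\right)}=\frac{(\ln 3)/2}{W\!\left((\ln 3)/2\right)}\approx 1.4577. \]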
March 9
sum of n terms of a series
Hello all. If are in AP and for each i, then what will be the sum of the first n terms of the series: ? Essentially I am looking for a method to find expressions for similar problems also.--Shahab (talk) 06:33, 9 March 2009 (UTC)
- In your particular case, if you first rationalize, you get the same denominators, and you are left with alternate sums of square roots of the with nice cancellations. But in general, a simple closed formula like this one is not available for partial sums of a series; look at Euler summation formula. --pma (talk) 08:28, 9 March 2009 (UTC)
- Thanks. Your link is what I had been searching for for a long time.--Shahab (talk) 08:55, 9 March 2009 (UTC)
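The terms of the series were lost in transcription, but pma's "rationalize and telescope" remark fits the standard version of this exercise, in which the k-th term is 1/(√a_k + √a_{k+1}) and the a_k are in AP with common difference d. Under that assumption (a sketch, not necessarily the OP's exact series):

\[ \frac{1}{\sqrt{a_k}+\sqrt{a_{k+1}}}=\frac{\sqrt{a_{k+1}}-\sqrt{a_k}}{a_{k+1}-a_k}=\frac{\sqrt{a_{k+1}}-\sqrt{a_k}}{d}, \]
so the sum telescopes:
\[ \sum_{k=1}^{n}\frac{1}{\sqrt{a_k}+\sqrt{a_{k+1}}}=\frac{\sqrt{a_{n+1}}-\sqrt{a_1}}{d}=\frac{n}{\sqrt{a_1}+\sqrt{a_{n+1}}}, \]
using a_{n+1} − a_1 = nd in the last step.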
Fractional derivatives of exp(-1/x^2)
Just expanding on the question asked above, are all the fractional derivatives of also equal to zero? Zunaid 09:25, 9 March 2009 (UTC)
- Of course you mean: equal to zero at x=0. Yes, they are. They define the fractional integrals of f, or , for , as the convolution
- ,
- where (check your link). Then in general, is defined via the identity , with any positive integer and . For the function (and vanishing at 0), all vanish at x=0. Indeed, in this case is a smooth function, and , so we can switch derivation and convolution in the above definition of :
- ,
- showing that all fractional derivatives vanish at 0, as all the do. PS: I took the liberty of changing the header. --pma (talk) 16:35, 9 March 2009 (UTC)
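The displayed formulas in the reply above were lost; for completeness, here are the standard Riemann–Liouville definitions that the argument appears to use (my reconstruction, not the original wording):

\[ (I^{\alpha}f)(x)=\frac{1}{\Gamma(\alpha)}\int_0^x (x-t)^{\alpha-1}f(t)\,dt, \quad \alpha>0, \qquad\qquad (D^{\alpha}f)(x)=\frac{d^{n}}{dx^{n}}\bigl(I^{\,n-\alpha}f\bigr)(x), \quad n=\lceil\alpha\rceil. \]
For f(x) = e^{−1/x²} (with f(0) = 0) every ordinary derivative f^{(n)} vanishes at 0, so moving the n derivatives onto f inside the convolution, as described above, gives D^{α}f(0) = 0 for every α > 0.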
Communicating meaning with distant space aliens - no pictures allowed
[This discussion began at the Language Reference Desk. -- Wavelength (talk) 14:56, 9 March 2009 (UTC)]
Imagine that the two-way communication of signals between us and some space-aliens orbiting a distant star has been established. They are blind and immobile and cannot use pictures or diagrams of any kind. There is no pre-established code or alphabet. While I can imagine that eventually the meaning of mathematical or logical symbols might be established (for example, transmitting many messages such as "..+..=...." would give meaning to + and =), would it be possible to eventually build up enough meaning from a zero base so that in time they would understand what was meant by the message "Last Thursday my Uncle Bill went to the supermarket"? Helen Keller springs to mind. 89.240.206.60 (talk) 02:01, 8 March 2009 (UTC)
- I don't see how it's possible to go from 2+2=4 to any non-math concept. Remember, it was impossible to decipher hieroglyphics without help from the Rosetta Stone, even though they were written by human beings, and this would be n times worse (n >> 1). Clarityfiend (talk) 05:22, 8 March 2009 (UTC)
- Earth has blind, immobile animals called barnacles, and some humans have done research on how to talk with animals (http://www.howtotalkwithanimals.com/), but I have never heard of anyone attempting to communicate with a terrestrial barnacle. Instead of contemplating communication with alien barnacle-like creatures, why not ponder how we humans can communicate better with each other? -- Wavelength (talk) 06:45, 8 March 2009 (UTC)
- LINCOS was a whole elaborate language (developed at length in a book) based more or less around that premise (though I think there were some abstract mathematical images included)... AnonMoos (talk) 07:00, 8 March 2009 (UTC)
- H. Beam Piper's much-reprinted story Omnilingual has terrans cracking the Martian language by finding a periodic table. Unfortunately, the idea in the story simply doesn't work: the English names for common elements only make sense in the context of the history of science, not modern science (eg oxygen = 'acid-maker' and hydrogen = 'water-maker'; these are Graeco-Latin rather than English, but German for example translates the roots and still perpetuates the errors), so why assume that the Martian names would be meaningful? --ColinFine (talk) 18:51, 8 March 2009 (UTC)
- For what it's worth, I coincidentally ran into the following article today ---> Pioneer plaque ... in which NASA scientists are, in fact, trying to communicate with distant space aliens ... albeit with the use of pictures. (Joseph A. Spadaro (talk) 22:20, 8 March 2009 (UTC))
The essential bottleneck to get through may be that of naming geometric shapes, such as a triangle. A triangle could then be used to build up other shapes. The triangle could be named after being identified by its mathematical properties. If however they have no sense of the spatial, then you are stuffed. 89.243.72.122 (talk) 23:56, 8 March 2009 (UTC)
- How can an organism distinguish between a random collection of perceptible stimuli and a purposeful collection of perceptible stimuli produced by intelligent design? How can it distinguish between a message and a non‑message?
- -- Wavelength (talk) 02:09, 9 March 2009 (UTC)
- Humans or even sheepdogs or bees seem to have no problems with doing that. And if we humans received a signal from a distant star in the form of the Fibonacci series or any other simple mathematical series, then that would indicate that the sender was an intelligent being. 89.242.94.128 (talk) 11:37, 9 March 2009 (UTC)
- The series should not be too simple, as then we could not be sure it was not generated by some nonsentient physical process. The Fibonacci sequence in particular is a very bad example, as it is known to appear in nature without any involvement of intelligence, see Fibonacci number#Fibonacci numbers in nature. — Emil J. 13:42, 9 March 2009 (UTC)
- Humans or even sheepdogs or bees seem to have no problems with doing that. And if we humans recieved a signal from a distant star in the form of the Fibonacci series or any other simple mathematical series, then that would indicate that the sender was an intelligent being. 89.242.94.128 (talk) 11:37, 9 March 2009 (UTC)
This fellow's research into a generalization of information theory that assumes no prior common language might be of interest, for a formal take on a specific variation of the question, which he calls "Universal Semantic Communication". The general strategy is to frame it as goal-oriented communication, which allows us to conclude that we've successfully communicated something when we can achieve some goal as a result of the communication faster than we would've been able to do without it. --Delirium (talk) 02:55, 9 March 2009 (UTC)
- It might be worth posting this question on the mathematics desk. I am sure that they would have ways of encoding mathematics that they would think recognisable (and going from simple operations to advanced formula). They might even have some insights in how to jump out of Mathematics. -- Q Chris (talk) 13:49, 9 March 2009 (UTC)
[The copied text ends here. -- Wavelength (talk) 22:46, 9 March 2009 (UTC)]
Computers, like these space aliens discussed, are "blind and immobile and cannot use pictures or diagrams of any kind." Yet you communicate with them and benefit from that. Bo Jacoby (talk) 10:38, 10 March 2009 (UTC).
Commuting family of matrices
Hi, I am working on a representation theory problem but I need some linear algebra to do it. I have talked to the professor and he has given me much of the problem but I just still do not understand simply because I do not think we ever covered what he is saying in linear algebra. Here's a theorem I do have
If is a commuting family, then there is a vector that is an eigenvector of every .
My professor is talking about something that is more powerful. He says, I think, if a commuting family leaves any subspace invariant, then that commuting family has a common eigenvector in that invariant subspace. Is this true?
Thanks for any help StatisticsMan (talk) 15:41, 9 March 2009 (UTC)
- They are the same thing. If they have a common invariant subspace, then they act as operators on that subspace. Just think of the elements of F as functions rather than lists of numbers and it should be clear; you just restrict the functions to that invariant subspace. JackSchmidt (talk) 15:59, 9 March 2009 (UTC)
- Okay, thanks. StatisticsMan (talk) 19:05, 9 March 2009 (UTC)
What do I have to be to get my profile on Wikipedia?
My name is Vo Duc Dien.
My question is what a person has to be to get his / her profile featured on your website?
Vo Duc Dien —Preceding unsigned comment added by 71.80.236.201 (talk) 17:44, 9 March 2009 (UTC)
- If by "profile" you mean an article about you, then the answer is at Wikipedia:Notability (people). There are a lot of specialized subguidelines of that, like Wikipedia:Notability (academics) which would apply to mathematicians (I assume that would be the relation to mathematics here). If you have more questions about Wikipedia policy, Wikipedia:Village pump (policy) is a more suitable place. —JAO • T • C 17:50, 9 March 2009 (UTC)
Question on Perfect Squares
For which n does the following hold:
?
where it is possible to choose either + or - in any of the cases.
There are a lot of solutions for numbers 1 to 100, but how do I generalise this to the n case? I appreciate the contribution made already, albeit I typed the question into the incorrect section on Wikipedia.
--84.70.242.151 (talk) 17:48, 9 March 2009 (UTC)
- I hope you don't mind I took the liberty of formatting the equation so it's clearer what the question is. I don't have much of an answer though. —JAO • T • C 17:55, 9 March 2009 (UTC)
- Let's say that it is necessary that either n=0 mod 4 or n=-1 mod 4, otherwise the sum is odd. As you mentioned, here PrimeHunter shows that from n=7 to 100 it is also sufficient, while there are no solutions for 0<n<7. In general, observe that an algebraic sum of 8 consecutive squares with signs exactly (+ - - + - + + -) always vanishes. Therefore you can build solutions for any n=8m, n=7+8m, n=11+8m, n=12+8m, (with ) just taking the PrimeHunter's solutions resp for n=0, n=7, n=11, n=12, and attaching to them a null algebraic sum of the subsequent 8m consecutive squares, with signs chosen with the periodicity (+ - - + - + + -). This does all n>6 with n=0 mod 4 or n=-1 mod 4. A more challenging question would be, find an asymptotics for the number of solutions (it is exponential, at least , for you can replace any group of 8 signs with the opposite, if you wish) --pma (talk) 18:56, 9 March 2009 (UTC)
Ok, that does make sense. Thanks for the input. The 8 consecutive squares result is a new one to me...can use that again sometime no doubt! I wouldn't have thought to use congruence here. I would have thought a series approach would have been the way to go about it. Thanks though --84.70.242.151 (talk) 23:47, 9 March 2009 (UTC)
- Notice that the sequence (+ - - + - + + -) is produced starting by + and iterating the operation of adding an inverted copy. Once more produces (+ - - + - + + - - + + - + - - +), that gives you a null algebraic sum of 16 consecutive cubes, &c, just in case. --pma (talk) 01:30, 10 March 2009 (UTC)
- It is the constant term in
- Maybe the asymptotics can be obtained by the saddle-point method or something like that. An equivalent formulation is that it is $2^n\,P\!\left(X_1+\cdots+X_n=0\right)$,
- where $X_1,\dots,X_n$ are independent random variables with $X_k$ having mass 1/2 at $k^2$ and mass 1/2 at $-k^2$. It is plausible that some local central limit theorem exists which gives the answer, but maybe not. McKay (talk) 04:49, 10 March 2009 (UTC)
- Oh, very nice... so maybe we can put $x=e^{i\theta}$ in your product and write the equality between trigonometric polynomials: $\prod_{j=1}^{n}\left(e^{ij^2\theta}+e^{-ij^2\theta}\right)=\prod_{j=1}^{n}2\cos(j^2\theta)=\sum_{k}c_k\,e^{ik\theta}$,
- where $c_k$ is the number of representations of k as an algebraic sum of the form $\pm1^2\pm2^2\pm\cdots\pm n^2$; in particular we can write, for the number of solutions of the OP's problem,
- $c_0=\frac{1}{2\pi}\int_{-\pi}^{\pi}\prod_{j=1}^{n}2\cos(j^2\theta)\,d\theta$,
- and start the asymptotics...--pma (talk) 14:47, 10 March 2009 (UTC)
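As a sanity check of the constant-term formulation (code and names are mine, not McKay's or pma's), the brute-force count of sign choices agrees with the constant term of $\prod_{k\le n}(x^{k^2}+x^{-k^2})$ for the small cases mentioned above:

```python
from itertools import product

def count_bruteforce(n):
    """Number of sign vectors with +-1^2 +-2^2 ... +-n^2 = 0."""
    return sum(1 for signs in product([1, -1], repeat=n)
               if sum(s * k * k for s, k in zip(signs, range(1, n + 1))) == 0)

def count_by_constant_term(n):
    """Constant term of prod_{k<=n} (x^{k^2} + x^{-k^2}), built as {exponent: coefficient}."""
    coeffs = {0: 1}
    for k in range(1, n + 1):
        new = {}
        for e, c in coeffs.items():
            new[e + k * k] = new.get(e + k * k, 0) + c
            new[e - k * k] = new.get(e - k * k, 0) + c
        coeffs = new
    return coeffs.get(0, 0)

for n in (7, 8, 11, 12):
    assert count_bruteforce(n) == count_by_constant_term(n)
```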
- Following McKay's hint, and estimating the integral by something like the saddle-point method, I obtained:
- ,
- which is not bad. For instance, for we get while --pma (talk) 08:41, 12 March 2009 (UTC)
An alternative to human mathematical concepts
Could a system of advanced mathematics develop without abstract concepts like vectors, imaginary numbers, or matrices, or are they somehow necessary? Could an alien species understand just as much science as humans do without, or with other, abstract ideas? As an example, humans use differential equations and complex numbers to solve simple harmonic motion. Is it possible to express simple harmonic motion in other ways that aren't used simply because they were discovered late in history? --99.237.96.33 (talk) 20:31, 9 March 2009 (UTC)
- You just have to look at Newton's Principia to see for instance that a geometric way of expressing things could be used and an alien could possibly do that without a single line of equations. My own feeling is that people think about things in very different ways but still come to the same conclusions normally, so I've no difficulty thinking alien math might be quite unrecognizable and need quite a bit of reinterpretation - math is as much or more about how to go about it as the actual results. Dmcq (talk) 21:16, 9 March 2009 (UTC)
- The basic concepts will probably need to be the same, but the way they are thought of could be completely different. For example, they will probably have complex numbers (they occur quite naturally when solving technological problems), but even with Earth maths we have two completely different ways of thinking about them (i=sqrt(-1) and i="90deg anticlockwise rotation") - an ETI could do things very differently. The answers, however, ought to be the same - maths is pretty absolute (we don't know if they would assume the continuum hypothesis or not, but we can be pretty sure they will have something close to Euclid's postulates, and if you have the same basic axioms, you'll draw the same basic conclusions, regardless of how you formulate them). --Tango (talk) 23:09, 9 March 2009 (UTC)
- I can imagine an alien race that really has no advanced math or theoretical science, but simply learns things by trial and error. Take bridge design, for example. They could just keep trying different designs and choose the one that seems to hold up the best, without ever calculating the forces involved. I suspect that this is how the early bridges were built here on Earth, but an alien civilization might have continued this practice far longer than us. StuRat (talk) 00:33, 10 March 2009 (UTC)
- Would that be like the discovery and continuation of roasting pigs by burning down huts? -hydnjo (talk) 01:32, 10 March 2009 (UTC)
- The standard assumption is that any civilisation that has developed the technology necessary to receive a radio transmission from the stars will have a similar technological basis to us, including the mathematics behind it. I think it's a good assumption because if it doesn't hold we probably can't meaningfully communicate with them anyway - you need some common ground to start the process. --Tango (talk) 00:41, 10 March 2009 (UTC)
Topological Vector Spaces
Hi, I am working on a several-part problem and I am stuck on this part. Specifically, if you have the book, it's Problem 10.30 part (c) from Royden. We have a base B at theta for a translation invariant topology, so the basis elements for the whole space are the sets of the form x + U with U in B. In this specific part, we are trying to show:
If multiplication by scalars is continuous (at $(0,\theta)$) from $\mathbb{R} \times X$ to X, then: if $U \in B$ and $x \in X$, there is an $\alpha$ such that $x \in \alpha U$.
I, with friends, have been working on this part for a while and we do not know what to do. I am assuming it's pretty simple but we just can't see it. Hopefully I have provided enough information for you to be able to understand. Can any one help me out with a hint? Thanks StatisticsMan (talk) 23:09, 9 March 2009 (UTC)
- (I assume that you mean that θ is the origin of X). Are you sure about the hypotheses? Consider an infinite dimensional real vector space X, and the topology generated by the finite codimensional affine subspaces of X (that is, open sets are unions of these). This is a translation invariant topology; actually it makes (X,+) a topological group; moreover, the multiplication by scalars is continuous at (0,θ) in the pair. But it does not make X a TVS; indeed the property you stated does not hold (just take U a hyperplane and x a vector not in U). In fact, now I see that an even simpler counterexample is the discrete topology on any X of positive dimension. The property follows immediately if you ask instead that, for any x in X, multiplication of x by scalars is continuous at 0 from R to X. --pma (talk) 00:29, 10 March 2009 (UTC)
- Sorry, I am assuming it is a topological vector space I think... the problem is to prove a proposition which is almost an if and only if, and the problem is confusing as to what you can assume at each point. At any point we can assume it is a vector space, but at this point I think we can assume it is a topological vector space. Is this your problem with what I was saying? Certainly, if this is needed, I am pretty sure we can just assume it. Thanks StatisticsMan (talk) 00:53, 10 March 2009 (UTC)
- Well, if X is a TVS then the multiplication by scalars is continuous at any $(t,x) \in \mathbb{R} \times X$. But you just need separate continuity in the scalar variable, as I said: assume that the map $t \mapsto tx$ is continuous at t=0. Then for any U in B there is a nbd [-a,a] of 0 in $\mathbb{R}$ that is sent into U by this map. In particular consider ax....--pma (talk) 01:06, 10 March 2009 (UTC)
- I get that if $f(t) = tx$ is continuous at $t = 0$, then for any open set O containing theta, there must exist an open set U in R containing 0 such that f[U] is a subset of O. U contains an open interval around 0, which contains one centered at 0, say [-a, a] as you said. Since a is in [-a, a], which is a subset of U, and f[U] is a subset of O, we know ax is in O. Now, of course, x is in (1/a)O, which is what we wanted to show. But I guess I do not understand whether I can assume that function is continuous? Are you saying that in a topological vector space this is not given?
- In a TVS, it is continuous by definition; but of course in the "conversely" part of the main theorem you can't assume it, for "X is TVS" is the thesis. So, what you are saying proves that, if X is a TVS, then 4 holds for any U nbd of θ. Conversely, if you have a vector space with a family B satisfying properties 1 through 5, and you give it the topology generated by the sets of B and their translates, properties 4 and 5 tell you that, for any x in X, the map taking a real t into tx is continuous at t=0 (Do you see why? If x=θ there is nothing to prove; otherwise, the α given by 4 is not 0, and (1/α)x is in U; by 5 the whole interval [-1/|α|, 1/|α|] times x is contained in the U of B given by 4). This is not all you need to prove, it is just the separate continuity of the multiplication in the first variable at 0. See below as to the full continuity of the multiplication by scalars. --pma (talk) 09:56, 10 March 2009 (UTC)
- Let me just write out the entire Proposition I am trying to prove and the two parts I have already proven (which I was trying to avoid :)... ):
Let X be a topological vector space. Then we can find a base B at theta that satisfies the following:
1. If U, V in B, then there is a W in B with $W \subset U \cap V$.
2. If U in B and x in U, there is a V in B such that $x + V \subset U$.
3. If U in B, there is a V in B such that $V + V \subset U$.
4. If U in B and x in X, there is an $\alpha$ such that $x \in \alpha U$.
5. If U in B and $0 < |\alpha| \le 1$, then $\alpha U \subset U$ and $\alpha U \in B$.
Conversely, given a collection B of subsets containing theta and satisfying the above conditions, there is a topology for X making X a topological vector space and having B as a base at theta. The topology will be Hausdorff if and only if
6. $\bigcap_{U \in B} U = \{\theta\}$.
- And, the first three parts of the problem are (remember, I am on the third part):
Prove Proposition 14:
(a) A collection B of subsets containing theta is a base at theta for a translation invariant topology if and only if 1 and 2 hold.
(b) Addition is continuous from $X \times X$ to X if and only if 3 holds.
(c) If multiplication by scalars is continuous (at $(0,\theta)$) from $\mathbb{R} \times X$ to X, then 4 holds.
- So, does any of this let us prove that this function you constructed is continuous? I appreciate that you are helping; I'm just lost. StatisticsMan (talk) 04:17, 10 March 2009 (UTC)
- Ok, the only possibly incorrect point is (c): it is not true taken alone, as I showed you; but it's not a problem; you can replace it with the continuity at 0 in R of the map $t \mapsto tx$, for each fixed x. Anyway, I think I understand where the mess came from: proving the sufficiency and the necessity part of a big proposition at once (it happens to me too, sometimes). In order to clarify things I suggest that you keep the two main parts of the principal theorem separate and attack each of them on its own (then, afterwards, you may have a look at the exact meaning of each axiom or combination of axioms). So, first assume that X is a TVS, that is, the vector space operations $+ : X \times X \to X$ and $\cdot : \mathbb{R} \times X \to X$ are continuous; in this case you have to build a base B of nbds of θ satisfying 1...5, using the continuity of + and $\cdot$. Notice that the set of axioms is not standard: you may find another equivalent base with slightly different properties. In fact you may first build a base B', partially satisfying the axioms, then find an equivalent base B improving B'. In any case the idea is to exploit the continuity assumptions in order to get a nbd base as rich in properties as possible. Then do part 2: now X is just a vector space, but you have the family B satisfying 1...5: you give X the topology generated by the sets in B and their translates, and you need to prove that it makes the vector space operations continuous. The continuity of the sum comes from 3, as you wrote; once you have it, to prove the continuity of the multiplication by scalars, it is useful to write $ax - a_0x_0 = (a-a_0)(x-x_0) + (a-a_0)x_0 + a_0(x-x_0)$, which reduces the task to just proving: continuity in the pair (a,x) at (0,θ); continuity in the first variable a at 0, for each fixed $x_0$; continuity in x at θ, for each fixed $a_0$. Well, it's still a bit generic, but I hope it helps. Which part is giving you problems: I) "X is a TVS $\Rightarrow$ there exists a B satisfying 1..5", or II) "X with a B satisfying 1..5 $\Rightarrow$ X is a TVS with nbd base B"? --pma (talk) 08:21, 10 March 2009 (UTC)
- Ah, maybe I see what happened. Philologically, there has been a contraction: I guess that originally point (c) stated: "If multiplication by scalars is continuous (at $(0,\theta)$) from $\mathbb{R} \times X$ to X, then 5"; and there was a further point (d) stating: "If multiplication by scalars is separately continuous in the real variable at 0, for all x, then 4".
- Last remark: to prove the continuity of $x \mapsto a_0x$ at θ, for each fixed $a_0$, you can write $a_0 = n\alpha$, with n a positive integer and $|\alpha| \le 1$, and then use the (already proven) continuity of the sum and axiom 5. Is it OK? Now you should have everything, although scattered here and there... --pma (talk) 10:21, 10 March 2009 (UTC)
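In case it helps other readers, here is one way to flesh out that last hint (my own phrasing, assuming the axioms 1-5 listed above). Given $U \in B$, pick a positive integer n with $|a_0| \le n$ and set $\alpha = a_0/n$, so $|\alpha| \le 1$. By repeated use of 3 (and since every member of B contains θ), choose $V \in B$ whose n-fold sum satisfies $V + \cdots + V \subset U$; by 5, $\alpha V \subset V$. Then for every $v \in V$,
\[
a_0 v \;=\; \underbrace{\alpha v + \cdots + \alpha v}_{n \text{ terms}} \;\in\; \underbrace{V + \cdots + V}_{n \text{ terms}} \;\subset\; U,
\]
so $a_0 V \subset U$, which is exactly the continuity of $x \mapsto a_0 x$ at θ.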
- The problem is we do not know if 5 is true yet. So, not sure if it's allowable to use it. StatisticsMan (talk) 15:17, 10 March 2009 (UTC)
- This depends on what implication you are proving. If you are proving (I), you have to find a base B of nbd's (not all nbds, just a base) with the properties 1..5; if you are proving (II), it is true by hypothesis... --84.221.208.38 (talk) 17:54, 10 March 2009 (UTC)
Unindenting because it's getting harder to read. I talked to my professor and it turns out not to be too hard; I understand it now. For part (c), either we are given that multiplication is continuous everywhere, so in particular at (0, x) for any x, or we are given continuity only at (0, theta), and by translation invariance it is also continuous at (0, x). So, for any U in B, there exists an open set O in R times X such that (0, x) is in O and the image of O under scalar multiplication is a subset of U. In particular, O contains a subset of the form (-epsilon, epsilon) times (x + V) where V is in B, and this is true because (0, x) is in there. Therefore (epsilon/2)x is in U, so x is in (2/epsilon)U, and we're done. Thanks for your help though. It was helpful to think about it in other ways. StatisticsMan (talk) 23:52, 10 March 2009 (UTC)
- Yes, it is very easy. Just do things with a bit of order. When you start a proof, ask yourself whether it is clear to you what the hypotheses are and what the thesis is. The same when you communicate with other people: make it clear what you want to prove, and under what hypotheses. Anyway, I'm glad you got it :) --pma (talk) 09:51, 11 March 2009 (UTC)
March 10
Pearson's Square namesake?
Pearson's Square, a method for blending mixtures, is not to be confused with Pearson's chi square, due to Karl Pearson, who does not appear to be connected to Pearson's Square in any of many searches. So, who is the Pearson of Pearson's Square? 69.72.68.7 (talk) 02:54, 10 March 2009 (UTC)
- Dunno, wiki doesn't seem to have an article on it ;-) I have come across this before so I guess it is notable - so why not start an article on it? Dmcq (talk) 23:36, 10 March 2009 (UTC)
- Googled a bit, found no original reference. Pearson's Square is used in this 1922 paper, so it must have been well known at that time. Karl was born in 1857 (and 65 in 1922), Egon was born in 1895 (and 27 in 1922). --NorwegianBlue talk 20:43, 11 March 2009 (UTC)
Very quick question on constant functions
Hi there - if we have a function such that $|f(x) - f(y)| \le (x - y)^2$ for all x and y, I want to show f is constant. I'm tempted to just say $\left|\frac{f(x)-f(y)}{x-y}\right| \le |x - y|$, so taking the limit as $y \to x$ we get f'(x)=0 - however, don't you have to assume differentiability to take this limit? Naturally f(x)=c is differentiable with no problems, but it seems somewhat circular to say f(x)=constant because of a property we get from f(x) being differentiable, which we know is possible because f is constant...
Does my argument need refining? Where would I start?
Thanks, Otherlobby17 (talk) 08:28, 10 March 2009 (UTC)Otherlobby17
- From the inequality, it follows that the limit defining the derivative exists and is 0. Ringspectrum (talk) 08:45, 10 March 2009 (UTC)
- Given any sequence xn converging to y, the sequence of elements of the form Δf (xn) (where Δf denotes the difference quotient of f at y) is bounded between two sequences; one sequence being the constant "0" sequence (lower bound) and the other sequence being a sequence converging to 0 (upper bound if xn > y for all n; the other case is handled similarly). By the squeeze theorem, the sequence of difference quotients converges to 0; since the sequence we chose was arbitrary, we have established that the derivative of f at y is 0. --PST 09:17, 10 March 2009 (UTC)
- well, in fact the squeeze theorem is also stated for functions, in which case there is no real need of an addendum to Ringspectrum's neat argument ;) --pma (talk) 10:59, 10 March 2009 (UTC)
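Spelled out for the OP (assuming the bound is $|f(x)-f(y)| \le (x-y)^2$, as in the question above): for $y \ne x$,
\[
0 \;\le\; \left|\frac{f(y)-f(x)}{y-x}\right| \;\le\; |y-x| \;\longrightarrow\; 0 \quad\text{as } y \to x,
\]
so by the squeeze theorem the limit defining the derivative exists, $f'(x) = 0$ for every x, and f is constant (for instance by the mean value theorem). No differentiability needs to be assumed in advance; it is produced by the estimate.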
- My argument basically proves the squeeze theorem for functions just in case it was not clear enough for the OP. I like sequences better than functions because they encode enough information when you are living in a topological space (although sequences are actually functions!). --PST 07:46, 11 March 2009 (UTC)
- Sequences encode enough information when you're in a topspace? What do you mean by that? Algebraist 20:48, 11 March 2009 (UTC)
- He means he lives in a first countable space, I suppose ;) --pma (talk) 08:54, 12 March 2009 (UTC)
- Right, but I prefer to allow all topological spaces to be in the zoo rather than just first countable ones. :) --PST 09:23, 12 March 2009 (UTC)
- So what did you mean, then? Algebraist 10:21, 12 March 2009 (UTC)
Where did the symbol (J) for irrationals on this page originate from?
Hi,
Question: At the bottom of the following page, the symbol (J) is listed for irrationals with R for Real, and Z for integers. Could someone verify, prove or provide the origination of this information?
URL: http://en.wikipedia.org/wiki/Irrational_number
Thanks, doug
Basic Natural numbers (N) · Integers (Z) · Rational numbers (Q) · Irrational numbers (J) · Real numbers (R) · Imaginary numbers (I)· 12.77.184.121 (talk) 10:20, 10 March 2009 (UTC)
- I and J are just local notations; I don't know where their story originated, and where it will end :) --pma (talk) 11:04, 10 March 2009 (UTC)
- I've never seen this notation, on Wikipedia or elsewhere. It was added to the template by an anonymous user without explanation in January, and I strongly suspect it is simply bogus, so I'll revert it. — Emil J. 11:11, 10 March 2009 (UTC)
- ...so now I know where it will end: exactly where I thought ;) --pma (talk) 11:29, 10 March 2009 (UTC)
- I've seen it before, but very rarely. $\mathbb{R}\setminus\mathbb{Q}$ is much more common. --Tango (talk) 11:33, 10 March 2009 (UTC)
I'm working on a new article about differentiation by taking logarithms. It's a bit of a spin-off of logarithmic derivative, which (oddly) mostly concerns the derivatives of logarithms; the one I've written concerns the technique of taking the natural log of both sides of some trickier functions in order to simplify them before differentiating.
Because I'm an amateur who merely takes an interest in mathematics, and this is my first article on the subject, I'd not feel right moving it to the mainspace before someone more informed had a look in. It's possible I've made mistakes, in the content and maybe in the calculations. Therefore, I'd appreciate some feedback and some review of how it's looking. At any rate, it's by no means done; there is still referencing to be done, as well as general cleanup. I appreciate the help. Best, —Anonymous DissidentTalk 15:04, 10 March 2009 (UTC)
- Just for the other ref-deskers: the article appears fit for mainspace as far as policy, notability, formatting, sources, wiki links etc. This question really is just about the math content itself, which is first year calculus (taking derivatives of quotients, powers, and compositions of elementary functions).
- For the OP: If you don't get an answer here, you might try at WT:MATH, the WikiProject Mathematics discussion page. It is probably the right place to ask if you need more assistance. Some of the math is "wrong" or "incomplete", but not horribly so. It needs somebody to check it over and correct a few small things, but overall it looks fine. The references could use some polishing too (author; find non-self published works like a paper/dead-tree calculus textbook). JackSchmidt (talk) 15:24, 10 March 2009 (UTC)
- I think the question is whether this violates Wikipedia is not a textbook. I'm not sure this technique is notable, it's just a useful trick for differentiating certain functions. --Tango (talk) 16:27, 10 March 2009 (UTC)
- As I said, the referencing is not finished. I have quite a few more reliable sources, and this is quite well documented. It might not be of the same order as the chain or product or quotient rule, but I've definitely seen it used even in textbooks. So, I'm not really sure it's just a trick; that's what I thought at first, but its wide usage and actual prescription lead me to think it could be classified as a technique. I did try, deliberately, to make sure it was not textbook-y. Perhaps I overdid it on the examples? In that regard, I was just following the example of articles like chain rule or product rule. Perhaps I'll tone those down a bit. —Anonymous DissidentTalk 20:40, 10 March 2009 (UTC)
- Note that while Wikipedia is not a textbook, some of Wikipedia's math editors are also working on the Wikibooks b:Calculus project, which is a textbook. Wikibooks is a sister project to Wikipedia, which works on textbooks. The calculus book can always use more contributors. 76.195.10.34 (talk) 20:56, 10 March 2009 (UTC)
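For readers unfamiliar with the technique under discussion, a standard worked instance (illustrative only, not taken from the draft article): to differentiate $y = x^x$ for $x > 0$, take logarithms to get $\ln y = x \ln x$; differentiating implicitly gives $\frac{y'}{y} = \ln x + 1$, hence $y' = x^x(\ln x + 1)$, a result that would otherwise be reached by first rewriting $x^x$ as $e^{x\ln x}$.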
curves within curves
What is the volume of a sphere of given radius in hyperbolic 3-space? —Tamfang (talk) 15:47, 10 March 2009 (UTC)
- On page 83 of:
- Ratcliffe, John G. (1994), Foundations of hyperbolic manifolds, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94348-0
- Exercise 3.4.5 asks to show that the volume of a 3-dimensional hyperbolic ball of hyperbolic radius r is π·( sinh(2r) − 2r ). JackSchmidt (talk) 16:27, 10 March 2009 (UTC)
- Thanks. Let me guess: in any dimension, you lop off the beginning of the Taylor series for sinh(a*r) or cosh(a*r) and scale it so that it approaches the Euclidean case for small r? How do you find a? —Tamfang (talk) 21:36, 11 March 2009 (UTC)
- That may well work, but something a little more rigorous would be preferred! If that method does work (I haven't tried to prove it), I think you need to choose a so that the coefficient of the nth term (for an n-ball) matches the fraction in the Euclidean case (and then you subtract off all the preceding terms), and put a pi out the front. --Tango (talk) 02:29, 12 March 2009 (UTC)
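For what it's worth, the general pattern behind the formula quoted above (stated here as a standard fact about constant curvature −1, not as Ratcliffe's derivation) is
\[
V_n(r) \;=\; \omega_{n-1}\int_0^r \sinh^{n-1}(t)\,dt,
\qquad
\omega_{n-1} = \frac{2\pi^{n/2}}{\Gamma(n/2)}\ \text{(the area of the unit $(n-1)$-sphere)};
\]
for n = 3 this gives $4\pi\int_0^r \sinh^2 t\,dt = \pi(\sinh 2r - 2r)$, matching the exercise, and since $\sinh t \approx t$ for small t it reduces to the Euclidean $\omega_{n-1}r^n/n$ as $r \to 0$.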
Energy loss in Reflection (intensity loss) [formula wanted please]
Move to Science desk. --Tango (talk) 17:02, 10 March 2009 (UTC)
Probability
So, I am trying to figure out the probability that something won't happen. This is in reference to a computer game, World of Warcraft. There is a boss in this game you can kill once a week. This boss has a 17% chance of dropping Item X. This implies an 83% chance to not drop it. So I would assume the chance to not drop 'Item X,' 11 times in a row would be equal to (0.83)^11 = ~0.22 = 22% chance that he won't drop it 11 times straight. My first question is this: is my math correct up to this point?
Second question: It takes 25 people cooperating to kill this boss; does this affect the drop rate or probability of attainment for an individual? (i.e. 1/25 * 22% to arrive at 0.8% from an individual's perspective..... Does perspective here, either as a group or as an individual, even play into the statistics?) --Mrdeath5493 (talk) 19:18, 10 March 2009 (UTC)
- (0.83)^11 is about 0.13 = 13%; so there is an 87% chance that it drops at least once after 11 runs. The drop rate for an individual depends on how you divide the loot. If everyone has an equal shot, then the probability of one particular person getting it is a little complicated, but is at least 87%/25 which is about 3%. It is a little higher, since it might drop twice or even eleven times. My calculations are that assuming everyone has an equal shot, the odds of an individual getting it are: $1 - \left(1 - \tfrac{0.17}{25}\right)^{11} \approx 7.2\%$.
- This assumes a person still rolls even if they already have it from a previous run. My calculation for if people stop rolling once they get it is a little fuzzier, but only comes out at 7.7%. JackSchmidt (talk) 20:42, 10 March 2009 (UTC)
- I noticed that 0.87^11 is about 0.22, so in case the drop rate was 13%, here is the calculation: $1 - \left(1 - \tfrac{0.13}{25}\right)^{11} \approx 5.6\%$,
- and the "stop rolling" version is about 5.9%. JackSchmidt (talk) 20:44, 10 March 2009 (UTC)
March 11
number of spanning trees of K_n-e
Hello. I want to use Cayley's formula to show that the number of spanning trees in the labeled graph $K_n - e$ is $(n-2)n^{n-3}$. Here e is any edge in $K_n$. Can someone point me in the right direction please? Equivalently, I want to figure out the number of spanning trees containing the edge e.--Shahab (talk) 06:23, 11 March 2009 (UTC)
- You know there are $n^{n-2}$ spanning trees altogether. Choose one at random. What is the probability that it includes e? McKay (talk) 07:37, 11 March 2009 (UTC)
- The probability is $\frac{n-1}{\binom{n}{2}} = \frac{2}{n}$.
Now what?--Shahab (talk) 07:56, 11 March 2009 (UTC) OK, I got it. Thanks for the tip. Cheers--Shahab (talk) 08:03, 11 March 2009 (UTC)
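For anyone who wants to double-check the count, here is a small verification via Kirchhoff's matrix-tree theorem (the code is mine, purely illustrative):

```python
import numpy as np

def spanning_trees_Kn_minus_e(n):
    """Count spanning trees of K_n with one edge removed, via the matrix-tree theorem."""
    A = np.ones((n, n)) - np.eye(n)         # adjacency matrix of K_n
    A[0, 1] = A[1, 0] = 0                   # delete the edge {0, 1}
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    return round(np.linalg.det(L[1:, 1:]))  # any cofactor of L counts the spanning trees

for n in range(3, 9):
    assert spanning_trees_Kn_minus_e(n) == (n - 2) * n ** (n - 3)
```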
March 12
transcendental numbers and Khinchin's constant
In the biography of Gelfond, it states that before he proved his theorem few explicit transcendental numbers were known. Did not Liouville (constructively) prove the existence of infinitely many using certain decimal expansions?
In the description of Khinchine's constant, it reads "it has not been proven for any specific real number whose full continued fraction representation is not known". Should this not read "known"? For how can the determination be made for a specific number if the digits a0, a1, ... are unknown?
Thanks for any clarification —Preceding unsigned comment added by Aliotra (talk • contribs) 03:10, 12 March 2009 (UTC)
- Yes, Liouville proved the existence of transcendental numbers (in fact of continuum-many of them). However, Liouville's proof only works for a class of numbers specifically invented for the purpose, so perhaps what is meant is that few naturally-occurring numbers were known to be transcendental. Algebraist 03:25, 12 March 2009 (UTC)
Square root of a limit = limit of square root?
If g_n is a sequence of functions in L^2 and $\lim_{n\to\infty}\int |g_n - g|^2 = 0$, is it also true that $\lim_{n\to\infty}\left(\int |g_n - g|^2\right)^{1/2} = 0$? I sure hope so because I need it to be true. I can also assume g_n and g are continuous if that matters. Thanks StatisticsMan (talk) 04:01, 12 March 2009 (UTC)
- The square root of a limit of nonnegative real numbers equals the limit of the square roots, yes. Black Carrot (talk) 06:20, 12 March 2009 (UTC)
- And the reason for that is that the square root function is continuous. — Emil J. 11:19, 12 March 2009 (UTC)
- I wish that all assertions I make are true (in mathematics) but fortunately this is not the case (otherwise there wouldn't be much fun). --PST 09:25, 12 March 2009 (UTC)
- Alright, thanks a lot. StatisticsMan (talk) 12:12, 12 March 2009 (UTC)
"Ecological" study etymology
In statistics, why is a study that deals with only aggregates called an ecological study when it doesn't necessarily relate to ecology? NeonMerlin 04:44, 12 March 2009 (UTC)
- The word "ecology" here is being used similarly, but not indentically to the word "population" (see statistical population). In aggregate studies, statisticians work with a large number of aggregates of individuals: the word population would be inappropriate to describe each individual aggregate, because (1) the statistician is not sampling individuals from each aggregate, and (2) the statistician is sampling aggregates from the collection of all aggregates. In this context "population", being used technically (i.e., meaning "that which samples are taken from"), refers to the collection of all aggregates. So some word other than "population" is needed to describe each aggregate of individuals.
- The word "ecology" is suggestive because individuals within a given aggregate tend to be highly correlated in the relevant dimensions, and form very messy, far-from-Gaussian distributions (or at least, the statisticians are not interested in the properties of the distributions within an aggregrate, so that messy distributions are expected and not problematic): for example, the example at ecological fallacy#Origin of concept could have a bimodal (at least) distribution of literacy rates within each state, for immigrants and non-immigrants. Although "ecology" is far from the perfect word to describe this, it seems to fit reasonably well. I'd guess that the usage of the word "ecology" arose because this concept arose first in biological studies (this is a guess, I really don't know), and was later generalized to other situations.
- I'd welcome discussion from others. Eric. 131.215.158.184 (talk) 08:51, 12 March 2009 (UTC)
Calculation probability for draws
In the aftermath of last night's Champions League matches, I was wondering what the probabilities of a couple of draws for the quarter-finals would be. There are 4 English teams, 2 Spanish, 1 Portuguese, 1 German. So what's the probability that the four English teams draw each other? That no English team draws another? That two of the English teams draw each other? That the two Spanish teams draw each other? And the big one: that the four English teams draw each other while the two Spanish teams draw each other. (This might perhaps be trivial maths, though I can't figure out the right formula for calculating it.) chandler · 09:37, 12 March 2009 (UTC)
- If the English teams are ABCD, the Spanish EF and the others GH, we can only get your last criterion if the draw is one of: AB CD EF GH, AC BD EF GH, AD BC EF GH. So that is 3 in 7*5*3 (A v one of 7; alphabetically first team left v remaining one of 5; alphabetically first team left v remaining one of 3; the last two play each other), so 3 in 105 or about 0.0286. -- SGBailey (talk) 10:24, 12 March 2009 (UTC)
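Since all 7·5·3·1 = 105 draws are equally likely, the other probabilities asked about can be checked by simple enumeration (a sketch; the labels A-H are mine, matching SGBailey's):

```python
from fractions import Fraction

teams = list("ABCDEFGH")        # A-D English, E-F Spanish, G Portuguese, H German
english = set("ABCD")

def pairings(ts):
    """All ways to split the teams into four unordered pairs."""
    if not ts:
        yield []
        return
    first = ts[0]
    for other in ts[1:]:
        rest = [t for t in ts[1:] if t != other]
        for sub in pairings(rest):
            yield [{first, other}] + sub

draws = list(pairings(teams))   # 105 equally likely draws

def prob(event):
    return Fraction(sum(1 for d in draws if event(d)), len(draws))

english_pairs = lambda d: sum(p <= english for p in d)   # number of all-English ties
spanish_tie   = lambda d: {"E", "F"} in d

print(prob(lambda d: english_pairs(d) == 2))                     # 9/105  = 3/35
print(prob(lambda d: english_pairs(d) == 0))                     # 24/105 = 8/35
print(prob(lambda d: english_pairs(d) >= 1))                     # 81/105 = 27/35
print(prob(spanish_tie))                                         # 15/105 = 1/7
print(prob(lambda d: english_pairs(d) == 2 and spanish_tie(d)))  # 3/105  = 1/35, as above
```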
bound degree of a region in a planar graph
Hello. Suppose we are given a planar representation of a graph G, and for each of its regions we define the bound degree to be the number of edges enclosing it. Suppose also that it is given that the bound degree of each region is even. I want to show that the graph is bipartite. A graph is bipartite exactly when it has no odd cycles, so what I want to show is that cycles enclosing more than one region are also of even length. How can I do that? Thanks--Shahab (talk) 12:39, 12 March 2009 (UTC)
What is this probability distribution?
I've recently graphed the following probability distribution:
What is it? It seems to have a fat tail, but that doesn't concern me much; it's the oddly misshapen nose at the top and the log-linear sides that have my interest. linas (talk) 14:38, 12 March 2009 (UTC)
Extract the constant term from a polynomial, with Maple9
Hi, excuse this naive question. I'm doing a computation with Maple 9. As a result I have a huge trigonometric polynomial, and I want to extract the constant term. As a mathematician, I would just integrate over [-pi, pi], but that cannot be the right way to do it in the software. How can I just make it find the constant term? Thanks --131.114.72.215 (talk) 15:31, 12 March 2009 (UTC)