Wikipedia:Reference desk/Mathematics

Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


June 20

Fields medal

How come no woman has ever won the Fields Medal? —Preceding unsigned comment added by 58.109.119.6 (talk) 08:16, 20 June 2010 (UTC)[reply]

Many people haven't got the Fields Medal. Some of them were not qualified. The rules discriminate by age, but not by gender. Bo Jacoby (talk) 11:09, 20 June 2010 (UTC).[reply]
Also, mathematics is very much a male dominated subject. In my faculty there was a single female member of the academic staff amongst 50 or so male members. Why haven't any women won Fields Medals? Well, it's just a statistical thing more than anything else. •• Fly by Night (talk) 20:48, 22 June 2010 (UTC)[reply]

A-invariant subspaces

Let V be a finite-dimensional vector space over a field K, let A ∈ Hom_K(V, V), let $V = \bigoplus_{i=1}^{k} W_i$ with A-cyclic subspaces $W_i \neq \{0\}$, and let $A_i := A|_{W_i}$ for 1 ≤ i ≤ k. Furthermore, suppose the characteristic polynomials $f_{A_i}$ are pairwise coprime. My question: is $v = \sum_{i=1}^{k} v_i$ with $v_i \in W_i$ for all 1 ≤ i ≤ k an A-generator of V iff $v_i$ is an A-generator of $W_i$ for all 1 ≤ i ≤ k? --84.62.192.52 (talk) 13:26, 20 June 2010 (UTC)[reply]

If you don't get an answer here, you might try taking this question to www.mathoverflow.com. However, if you post it there, you're going to need a little more motivation for the question. Read the FAQ – questions that aren't presented so as to interest mathematicians will be closed very quickly. 24.6.2.115 (talk) 22:37, 23 June 2010 (UTC)[reply]

Right-angled triangles in a quadrant

The "number of right triangles with nonnegative integer coordinates less than or equal to N and one corner at the origin." (A155154) seems to have a very regular graph- is there an explicit formula for this sequence? 70.162.12.102 (talk) 19:40, 20 June 2010 (UTC)[reply]

It seems very unlikely to me. There are n² triangles with the right angle at the origin, but then for the ones with the right angle elsewhere it gets pretty messy pretty fast. Any right triangle on the lattice is going to have its legs in a rational ratio, so we can think of all the triangles as scaled-up versions of the triangles with leg lengths a, b, with a, b positive and relatively prime. There are two possible orientations, so just consider the one where the right angle is closer to the x-axis than the other vertex. For given a, b there is the triangle with right angle at (a, 0) and the other vertex at (a, b), but we can also replace our "x unit" (1, 0) with any (c, d) where c, d are non-negative and not both zero. Then the "y unit" becomes (-d, c), so the vertices of the triangle become (ac, ad) and (ac - bd, ad + bc). For these points to be inside the region we want, they have to satisfy bd ≤ ac ≤ n and ad + bc ≤ n. So the problem is now counting all the values of a, b, c, d that satisfy these conditions for a given n, which is not straightforward. Rckrone (talk) 21:31, 20 June 2010 (UTC)[reply]
I agree, there probably isn't a nice formula (it is, of course, almost impossible to prove that). There may, however, be an asymptotic formula that is very accurate even for small values of n. There must be some explanation for the graph looking so regular. --Tango (talk) 00:27, 21 June 2010 (UTC)[reply]
$a_n \approx 3.1735\, n^{2.14937} \pm 7.65$ for 0 ≤ n ≤ 44. Bo Jacoby (talk) 12:44, 23 June 2010 (UTC).[reply]
Or perhaps $a_n \approx 3.1356\, n^{2.15289} \cdot 1.00303^{\pm 1}$. Bo Jacoby (talk) 08:33, 24 June 2010 (UTC).[reply]
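For anyone who wants to check or extend these values, here is a brute-force enumeration sketch in Python. It follows my reading of the sequence's definition (all three vertices at lattice points in [0, n] × [0, n], one of them the origin), not Rckrone's (a, b, c, d) parametrization:

```python
from itertools import combinations, product

def count_right_triangles(n):
    """Brute-force count of right triangles with one vertex at the origin
    and the other two at distinct lattice points in [0, n] x [0, n]."""
    pts = [p for p in product(range(n + 1), repeat=2) if p != (0, 0)]
    count = 0
    for (px, py), (qx, qy) in combinations(pts, 2):
        if px * qy - py * qx == 0:
            continue                      # collinear with the origin: degenerate
        a = px * px + py * py             # squared side lengths
        b = qx * qx + qy * qy
        c = (px - qx) ** 2 + (py - qy) ** 2
        if a + b == c or a + c == b or b + c == a:
            count += 1                    # Pythagoras holds at some vertex
    return count

print([count_right_triangles(n) for n in range(1, 8)])
```

The O(n⁴) pair loop is slow but perfectly adequate for the 0 ≤ n ≤ 44 range used in the fits above.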


June 21

Books on Randomness

Could anyone recommend some good books on randomness? More specifically, on how randomness affects everyday life and is often interpreted by people as having patterns. It needs to be at a level that a non-mathematician can understand. 'A Random Walk Down Wall Street' is a good example of the kind of book I am looking for, though it does not need to be financially based. Also, I would be interested if there are any books that really delve into how randomness and perceived patterns can be utilized or exploited – for example, casinos using the house edge against gamblers who think they have patterns and systems that beat the odds.

I searched the archive to see if there were any previous questions along this line but could not find any, although searching for randomness is a tough search. 63.87.170.174 (talk) 16:26, 21 June 2010 (UTC)[reply]

You might want to try The Black Swan by Nassim Nicholas Taleb. -mattbuck (Talk) 16:36, 21 June 2010 (UTC)[reply]
Based on the reviews at Amazon, it looks like The Black Swan is significantly worse than Taleb's earlier book, Fooled by Randomness, which I think is also more aligned to what the OP is looking for. I haven't read either yet. -- Meni Rosenfeld (talk) 17:35, 21 June 2010 (UTC)[reply]
The Drunkard's Walk: How Randomness Rules Our Lives by Leonard Mlodinow is a good non-mathematician's introduction, covering a lot of the areas you mention such as casinos. --OpenToppedBus - Talk to the driver 13:59, 22 June 2010 (UTC)[reply]

Grand Piano lid prop angles

Most modern grand pianos' lid props appear to form a 90º angle where they meet the underside of the piano's lid. It seems logical to me that the lid prop is less likely to slip at that angle because there would be a direct load transfer of the weight of the piano's lid to the support stick. That is, grand piano manufacturers intentionally use a 90º angle for safety reasons. Could someone show me the mathematics, perhaps using vector analysis, to prove my hypothesis?

The reader may want to visit http://en.wikipedia.org/wiki/Grand_Piano to see a couple of pianos that do not appear to use the 90º angle. Note the Louis Bas grand piano of 1781 and the Walter and Sohn piano of 1805. Don don (talk) 16:52, 21 June 2010 (UTC)[reply]

Thinking about it, I'm surprised any of the props meet the lid at a right angle. That means the prop would have to be supported on all sides to stop it from slipping after being knocked. If it is at an angle it could just fit into an angled recess, which sounds easier to me. I don't think one need worry much about the strength of the prop, and the force will always go straight down its axis, since the ends aren't held firmly in place. Dmcq (talk) 19:57, 22 June 2010 (UTC)[reply]
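For the original poster's benefit, here is a minimal force-resolution sketch supporting the 90º intuition. It assumes the prop is a rigid two-force member (so the force it exerts on the lid acts along its own axis, with magnitude F) and point contact with friction at the lid end:

```latex
% Let \theta be the angle between the prop axis and the normal to the
% lid's underside, and \mu the friction coefficient at the contact point.
\[
  F_{\text{normal}} = F\cos\theta , \qquad
  F_{\text{tangential}} = F\sin\theta .
\]
% The tip stays put while friction can balance the tangential (sliding)
% component:
\[
  F\sin\theta \le \mu F\cos\theta
  \quad\Longleftrightarrow\quad
  \tan\theta \le \mu .
\]
% At \theta = 0, i.e. the prop meeting the lid at 90 degrees, the sliding
% component vanishes for any \mu, so the tip cannot slip along the lid.
```

This treats only the lid end of the prop; in practice the lower end usually sits in a cup or notch, which is Dmcq's point above.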

Question continues at Wikipedia:Reference desk/Science#Piano Lid Prop angle. 94.72.242.84 (talk) 01:57, 6 July 2010 (UTC)[reply]

Which Statistics Test?

I'm trying to determine the effect taking a particular class has on student retention rates to the next grade (sophomore year). I have the number of freshman students who started and the number who went on to sophomore year (about 66% of 1600). I also have the number of those same freshmen who took this class, and how many of this subset went on (about 90% of 300). Obviously the class made a difference, but what test do I use to prove it (with significance)? It's been over five years since my last statistics class. I'm not really dealing with samples; these are the official numbers. Do I still use the z-score, right-tailed hypothesis test – even though the z-score is 9.19 before fpc, 10.18 after? Do I use fpc even though it's not really a sample? I've got to run similar tests for ten other years. 160.10.98.34 (talk) 20:47, 21 June 2010 (UTC)[reply]

Pearson's chi-square test seems appropriate here. But it looks like you have made an observational study, so keep in mind that correlation does not imply causation. -- Meni Rosenfeld (talk) 07:49, 22 June 2010 (UTC)[reply]
McNemar's test. HTH, Robinh (talk) 07:51, 22 June 2010 (UTC)[reply]
I can't see any pairing here, so I can't see why McNemar's test would be appropriate rather than Pearson's chi-square test. As for finite population correction (fpc), you use that if you're trying to estimate uncertainty in your estimate of a population parameter when your sample is a substantial fraction of the size of the population. Here, your sample is the entire population, so the fpc should be 0, as you know the proportion in the population exactly, i.e. its standard error is 0. However, I think you can still interpret statistical tests of a null hypothesis, such as Pearson's chi-square test, without assuming variability is due to sampling from a larger population. Another matter to consider as you're repeating this for ten other years is multiple comparisons. And Meni is right to remind you about not reading causation into this – you say "obviously, the class made a difference", but that's not obvious at all from what you've told us. Were the students randomly allocated to take this class? If they weren't, you can't assume those who took the class are comparable to those who didn't, e.g. maybe the more motivated students were more likely to choose this class. See self-selection. Qwfp (talk) 08:26, 22 June 2010 (UTC)[reply]
You meant "but that's not obvious at all". -- Meni Rosenfeld (talk) 08:34, 22 June 2010 (UTC)[reply]
Fixed, thanks! Qwfp (talk) 08:41, 22 June 2010 (UTC)[reply]
Thanks to all. Can I still do a chi-square test even though the 1600 number includes the 300 students who took the class? Most of the examples I've come across deal with two independent, non-overlapping groups. Just so you know, I appreciate it's correlation, not causation. We're planning on testing whether the major indicators of academic success are different for these groups too. —Preceding unsigned comment added by 160.10.98.106 (talk) 13:06, 22 June 2010 (UTC)[reply]
The first step before you conduct the test is to construct a 2×2 contingency table of non-overlapping groups, i.e. "took class", "didn't take class" vs. "went on to next year", "didn't go on to next year". That's just a couple of straightforward subtractions. Qwfp (talk) 13:45, 22 June 2010 (UTC)[reply]
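To make this concrete, here is a short scipy sketch of the test on that 2×2 table. The counts are hypothetical, back-calculated from the approximate figures in the question: 270 of the 300 class-takers went on, and 0.66·1600 − 270 = 786 of the remaining 1300 did.

```python
from scipy.stats import chi2_contingency

#                went on   didn't go on
took_class   =  [270,      30]
no_class     =  [786,      514]

chi2, p, dof, expected = chi2_contingency([took_class, no_class])
print(chi2, p)   # a vanishingly small p-value, consistent with the z-scores above
```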
The probability P1 that a student having taken the class went on has a beta distribution with α = 1 + 0.90·300 = 271, β = 1 + 0.10·300 = 31, α + β = 302, mean $\mu = \frac{\alpha}{\alpha+\beta} = 0.897351$ and standard deviation $\sigma = \sqrt{\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}} = 0.0174356$. So P1 ≈ 0.897351 ± 0.0174356. The probability that a student not having taken the class went on is P2 ≈ 0.646837 ± 0.0131106. The difference P1 − P2 = 0.250514 ± 0.0218149. Zero is μ − 11.4836σ. This difference is highly significantly different from zero. Bo Jacoby (talk) 14:47, 23 June 2010 (UTC).[reply]
Huh? How do beta distributions come into this? This is just a straightforward comparison of two binomial probabilities, i.e. Pearson's chi-squared test for a 2×2 contingency table. There's no need to give the probabilities themselves a distribution. Granted in reality the probability of going on to the next grade may vary between individuals within each group (class takers and non-takers) depending on all sorts of other factors, but there's not the information here to say anything about that; all you know are the overall probabilities for each group. Qwfp (talk) 23:55, 23 June 2010 (UTC)[reply]
If you know a probability, P, and a sample size, n, then the number of successes in the sample, i, has a binomial distribution. But our situation is that we know n and i, while P is unknown. The distribution of the continuous parameter P is not binomial, but beta, with parameters α = i + 1 and β = n − i + 1. Bo Jacoby (talk) 09:15, 24 June 2010 (UTC).[reply]
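Bo's figures for P1 are easy to reproduce with scipy's frozen beta distribution (a sketch; scipy's parameters a, b correspond to α, β above):

```python
from scipy.stats import beta

# n = 300 takers, i = 270 went on, so alpha = i + 1 = 271, beta = n - i + 1 = 31
P1 = beta(271, 31)
print(P1.mean(), P1.std())   # ~0.897351 and ~0.0174356, as quoted above
```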
Oh, I see now, sorry; you're taking a Bayesian approach, while Meni and I (and, I think, the original poster) were being frequentist. Either is fine and going to come to essentially the same conclusion here (though one doesn't usually associate Bayesian inference with phrases like "highly significantly different"). I'm still more concerned about (over)interpretation. 21:38, 24 June 2010 (UTC)
The difference between the frequentist and the bayesian approach is obvious when the sample is very small. If the sample size n = 0, (and then of course i = 0 too), then the beta distribution gives P ≈ 0.5±0.3, which makes sense, while the frequentist approach gives P = 0/0, which does not make sense. Bo Jacoby (talk) 14:41, 25 June 2010 (UTC).[reply]
True, but with no data the Bayesian posterior distribution is the same as the prior distribution, so it depends on which prior you choose. You're implicitly assuming a uniform prior on the probability, but there are other possible choices even if we stay with 'uninformative' priors. See prior probability#Uninformative priors. Qwfp (talk) 21:15, 25 June 2010 (UTC)[reply]
The continuous uniform distribution f(P)dP = dP for 0≤P≤1 is the limiting case for large values of N of the uniform discrete distribution for I=0,...,N. (Here N is the size of the population, and I is the number of successes in the population). The discrete uniform distribution is the correct choice of uninformed prior distribution in the finite case, and so the continuous uniform distribution is the correct choice in the limiting case. This is the beta distribution for α=β=1 : P ≈ 0.5±0.3 . Bo Jacoby (talk) 06:36, 26 June 2010 (UTC).[reply]


June 22

Euclidean Geometry

A friend asked me to prove that the shortest distance between two points is covered by a straight line. I told him that I couldn't, because this was an axiom of Euclidean geometry. Am I right? 173.179.59.66 (talk) 05:54, 22 June 2010 (UTC)[reply]

No, actually, that has been used as an axiom or definition (for example, by Archimedes), but not by Euclid. In his 20th proposition in The Elements, Euclid proves that any two sides of a triangle add up to more than the third side. It follows from this that a straight line is shorter than a route made up of a sequence of straight lines in different directions. I don't know whether or not Euclid also proves that a straight line is shorter than a curved path; I don't think he would have been able to do it rigorously. --Anonymous, 06:41 UTC, June 22, 2010.
Okay, much thanks. 173.179.59.66 (talk) 06:52, 22 June 2010 (UTC)[reply]
Modern mathematicians use the calculus of variations to show that straight lines have the shortest path property. This also allows one to show that, for example, the shortest path along a sphere between two points is a great circle. HTH, Robinh (talk) 07:46, 22 June 2010 (UTC)[reply]
If you take the length of a smooth arc $\gamma\colon [a,b]\to\mathbb{R}^n$ to be $L(\gamma)=\int_a^b |\gamma'(t)|\,dt$, then the inequality is just $|\gamma(b)-\gamma(a)| = \left|\int_a^b \gamma'(t)\,dt\right| \le \int_a^b |\gamma'(t)|\,dt = L(\gamma)$. The analogous inequality follows if you use the more general notion of the total variation of the arc (then no regularity assumption is needed on $\gamma$). --pma 19:35, 22 June 2010 (UTC)[reply]

Kepler's First Law (Antiderivative)

In the process of deriving Kepler's First Law, a certain antiderivative needs to be evaluated. My book gives the solution as an arcsine of something, but I've been having trouble reproducing the result. It seems that every math program I try (as well as my own attempts) yields a messy combination of imaginary numbers and logarithms. Can anyone help me solve this antiderivative? —Preceding unsigned comment added by 173.179.59.66 (talk) 10:45, 22 June 2010 (UTC)[reply]

I haven't checked where that equation comes from but are you sure about that r on its own in the denominator? That would be where your log comes from. Dmcq (talk) 11:22, 22 June 2010 (UTC)[reply]
Are you sure it isn't arcsec? 129.67.37.143 (talk) 11:28, 22 June 2010 (UTC)[reply]
Sorry about my comment about logs; the expression is okay. Try differentiating arcsin(a + b/r) and you should get a similar expression that you can rearrange into what you want. Dmcq (talk) 12:31, 22 June 2010 (UTC)[reply]
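Carrying out Dmcq's suggestion (a and b here stand for whatever constants appear in your book's arcsine answer):

```latex
\[
  \frac{d}{dr}\arcsin\!\left(a+\frac{b}{r}\right)
  = \frac{-b/r^{2}}{\sqrt{1-\left(a+b/r\right)^{2}}}
  = \frac{-b}{r\sqrt{r^{2}-(ar+b)^{2}}}
  \qquad (r>0),
\]
```

which has the lone r outside the square root in the denominator and a quadratic in r under it, matching the general form of the integrand.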

World Cup group stages - how likely is it that drawing of lots would be required?

I've been following the 2010 FIFA World Cup and there's been some discussion about scenarios that might lead to some of the groups being decided by drawing lots. This seems to me to be an unfair way to make the decision, and I'd like to know the prior probability (i.e. not from where we are now, but from the beginning of the tournament) that this might happen.

There are four teams in each group, each playing each of the others once, and the first two qualify for the next round. FIFA uses the following criteria to rank teams in the Group Stage.[1]

  1. greatest number of points in all group matches;
  2. goal difference in all group matches;
  3. greatest number of goals scored in all group matches;
  4. greatest number of points in matches between tied teams;
  5. goal difference in matches between tied teams;
  6. greatest number of goals scored in matches between tied teams;
  7. drawing of lots by the FIFA Organising Committee.

I guess the other thing we'd need to know is the general distribution of scores within the group stages, which we can find by looking at the group stage results from the last three world cups in 2006, 2002 and 1998 (before that there were fewer teams, which might possibly affect results - best to eliminate any variables we can).

Anyone care to have a go at working out the likelihood that one or more groups would have to be decided by drawing lots? Presumably you'd do it by running some kind of simulation, but I know my statistical knowledge isn't up to it. --OpenToppedBus - Talk to the driver 12:07, 22 June 2010 (UTC)[reply]
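Here is a rough Monte Carlo sketch of the kind of simulation being asked for. The big assumptions are flagged in the comments: goals are modelled as iid Poisson with a guessed mean of 1.35 per team per match (ignoring team strengths), and the tie-break logic is simplified to checking criteria 1–6 at the 2nd/3rd-place boundary only:

```python
import numpy as np

rng = np.random.default_rng()

def table(teams, results):
    """Points, goal difference and goals scored per team, restricted to the
    matches among `teams` (criteria 1-3 overall; 4-6 when `teams` is a tied subset)."""
    stats = {t: [0, 0, 0] for t in teams}
    for (a, b), (ga, gb) in results.items():
        if a in stats and b in stats:
            if ga > gb:
                stats[a][0] += 3
            elif gb > ga:
                stats[b][0] += 3
            else:
                stats[a][0] += 1
                stats[b][0] += 1
            stats[a][1] += ga - gb
            stats[b][1] += gb - ga
            stats[a][2] += ga
            stats[b][2] += gb
    return stats

def needs_lots(goal_mean=1.35):
    """Simulate one group; True if places 2 and 3 are still tied after criteria 1-6
    (simplified: ignores partial reordering of three- or four-way ties)."""
    results = {(a, b): (rng.poisson(goal_mean), rng.poisson(goal_mean))
               for a in range(4) for b in range(4) if a < b}
    overall = table(range(4), results)
    order = sorted(range(4), key=lambda t: overall[t], reverse=True)
    second, third = order[1], order[2]
    if overall[second] != overall[third]:
        return False                              # criteria 1-3 decide it
    tied = [t for t in range(4) if overall[t] == overall[second]]
    sub = table(tied, results)                    # criteria 4-6
    return sub[second] == sub[third]

trials = 100_000
p_group = sum(needs_lots() for _ in range(trials)) / trials
print(f"per group: {p_group:.4f}; "
      f"at least one of 8 groups: {1 - (1 - p_group) ** 8:.3f}")
```

A better model would fit the goal distribution to the 1998–2006 group-stage results, as the original poster suggests.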

If every game in the group resulted in an identical draw – e.g. all 1-1 or all 0-0 – then whatever happens you get a four-way tie. You either draw lots, or start using fewest yellow/red cards, or choose alphabetically, or ... -- SGBailey (talk) 13:16, 22 June 2010 (UTC)[reply]
Of course, whatever criteria you have there's a possibility of a tie. My concern is that there's too high a possibility of a tie using the current criteria.
But that's only based on my perception. What I'm looking for is someone to say, "actually, based on a normal distribution of results, that'll only happen once every 50 years" – in which case I'd probably think it's fine. Or alternatively, "statistically, you're likely to have to decide something by drawing lots once in every other tournament", in which case I would think FIFA were remiss in not adding an extra criterion based on number of cards, number of corners, number of times teams have hit the woodwork, etc., any of which would be fairer than just picking names out of a hat. --OpenToppedBus - Talk to the driver 13:48, 22 June 2010 (UTC)[reply]
To the best of my knowledge it's happened once in the history of the World Cup, in 1990, for Group F to be precise. The Republic of Ireland and the Netherlands came 2nd and 3rd respectively after a draw, though both qualified, as that was back in the days of 6 groups where the 4 best 3rd-placed teams qualified. It has never actually affected whether someone qualified at World Cup level, though it may have happened in other competitions. - Chrism would like to hear from you 19:45, 22 June 2010 (UTC)[reply]
I disagree that any of those methods would be better than drawing lots. The object in soccer is very clear: score more goals than the other team and you win. If you use other criteria to decide who advances, like how many corners a team takes, that adds new incentives. Then it becomes somewhat of a different game: a contest to score goals and not allow corner kicks. That's not really what the game is supposed to be. As it stands, it's more valuable to score a goal than to prevent the other team from scoring one, which seems a little bit questionable to me, but whatever. I guess they want to encourage more aggressive play.
As for drawing lots, I don't see what's unfair about it, even if it's a really crappy way to lose. Also, sorry this doesn't really address the question at all. Rckrone (talk) 22:43, 23 June 2010 (UTC)[reply]

June 23

hint needed for apparently difficult problem

Hello all... I have a problem which I have been grappling with for some time. Let b be a positive integer and consider the equation z = x + y + b, where x, y, z are variables. Suppose the integers {1, 2, ..., 4b+5} are partitioned into two classes arbitrarily. I wish to show that at least one of the classes always contains a solution to the equation.

I have tried using induction on b. I have solved the case b = 1 entirely, but I cannot see how to use the induction hypothesis to prove the result. The more I think of it, the more I feel that a different approach to the problem is needed, but I can't figure out what. It is sort of a special case of a research problem, which has been solved in a more general way. I have little experience of doing research on my own, and so will be glad if anyone can offer me any advice or hints. Thanks - Shahab (talk) 07:06, 23 June 2010 (UTC)[reply]

Do x, y and z need to be different? -- Meni Rosenfeld (talk) 07:26, 23 June 2010 (UTC)[reply]
No.-Shahab (talk) 07:34, 23 June 2010 (UTC)[reply]
It looks like a pigeonhole principle problem to me but I haven't figured it out yet. Dmcq (talk) 08:33, 23 June 2010 (UTC)[reply]
The other thing I can see is that if you consider x + x + b, you tend to need to put the 0 mod 3 and 1 mod 3 ones into separate sets, and then the 2 mod 3 ones have to be stuck in somewhere and cause problems. Dmcq (talk) 09:48, 23 June 2010 (UTC)[reply]
Here is an outline of a solution. Let's call the two classes A and B and let's try to split the integers from 1 to 4b+5 between A and B so that neither A nor B contains a solution to z = x + y + b.
Let's put 1 in A. Then if b+2 is also in A we have x=1, y=1 and z=x+y+b all in A. So we must put b+2 in B.
Now, if 3b+4 is also in B, we have x=b+2, y=b+2 and z=x+y+b all in B. So we must put 3b+4 in A.
Now we have 1 and 3b+4 both in A. If 4b+5 is in A then we have ... what? Or if 2b+3 is in A then we have ... what? And if we put 2b+3 and 4b+5 both in B, along with b+2 which is already in B, then we have ... what?
(There may be a more elegant solution that uses induction on b and/or the pigeonhole principle, but I can't see it.) Gandalf61 (talk) 14:45, 24 June 2010 (UTC)[reply]
That seems plenty elegant to me, thanks; I wish I'd spent a bit more time on it and got something like that. Dmcq (talk) 21:22, 24 June 2010 (UTC)[reply]
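Gandalf61's case analysis can also be checked exhaustively for small b. A brute-force sketch over all 2^(4b+5) two-colourings (fast for b ≤ 2; b = 3 takes noticeably longer):

```python
from itertools import product

def has_solution(cls, b):
    """True if the class cls (a set of integers) contains x, y, z with
    z = x + y + b (x and y need not be distinct)."""
    return any(x + y + b in cls for x in cls for y in cls)

def check(b):
    """Verify every 2-colouring of {1, ..., 4b+5} has a monochromatic solution."""
    n = 4 * b + 5
    for colouring in product((0, 1), repeat=n):
        A = {i + 1 for i in range(n) if colouring[i] == 0}
        B = {i + 1 for i in range(n) if colouring[i] == 1}
        if not (has_solution(A, b) or has_solution(B, b)):
            return False, (sorted(A), sorted(B))   # counterexample
    return True, None

for b in (1, 2):
    print(b, check(b)[0])   # True, True
```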

dy

In one of my textbooks it says What is meant by the —Preceding unsigned comment added by 76.230.251.114 (talkcontribs) 17:01, 23 June 2010

Generally the notation means the same thing as . --Trovatore (talk) 17:50, 23 June 2010 (UTC)[reply]
I bet your textbook says meaning . Bo Jacoby (talk) 19:33, 23 June 2010 (UTC).[reply]

Solving equations like this

How do you solve equations like this for y:

$\frac{dy}{dx} = xy,$

where the derivative of y with respect to x is equated to some function of y and x? —Preceding unsigned comment added by 203.22.23.9 (talk) 22:55, 23 June 2010 (UTC)[reply]

See separation of variables. --Tango (talk) 23:00, 23 June 2010 (UTC)[reply]
I'm not sure how rigorous this is, but we can multiply both sides by dx and then divide both sides by y. This gives a dx on one side and a dy on the other side, so we integrate to give
$\int \frac{dy}{y} = \int x\,dx.$
Evaluating these two indefinite integrals gives ln(y) = ½x² + c, where c is the constant of integration. We can solve for y by exponentiating both sides to the base e. Writing k for e^c to simplify things, we have
$y = k e^{x^2/2}.$
We can check this solution, and show that it is right:
$\frac{dy}{dx} = kx\,e^{x^2/2} = xy.$
•• Fly by Night (talk) 13:48, 24 June 2010 (UTC)[reply]
From what I remember, the method that is normally taught is the integrating factor method. Rewrite the equation as
$\frac{dy}{dx} - xy = 0.$
Now, our integrating factor is $e^{\int -x\,dx} = e^{-x^2/2}$. Multiply through and we get
$e^{-x^2/2}\,\frac{dy}{dx} - x\,e^{-x^2/2}\,y = 0.$
This is actually the derivative of a product. Simplifying, we get
$\frac{d}{dx}\!\left(e^{-x^2/2}\,y\right) = 0.$
This can be easily integrated and rearranged to get
$y = k\,e^{x^2/2}.$
Readro (talk) 14:16, 24 June 2010 (UTC)[reply]
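Both derivations are easy to confirm with a computer algebra system; a quick sympy check (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
print(sp.dsolve(sp.Eq(y(x).diff(x), x * y(x)), y(x)))
# Eq(y(x), C1*exp(x**2/2)), matching both methods above
```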

See also separation of variables. 75.57.243.88 (talk) 21:45, 24 June 2010 (UTC)[reply]

It's not "also", because Tango already mentioned it. -- Meni Rosenfeld (talk) 15:02, 25 June 2010 (UTC)[reply]

June 24

Constrained solutions of a linear equation

Hello... My question today is as follows. Consider the 4-variable equation v + x + y - z = 4. If I desire to find solutions to it, I can easily do so by assigning arbitrary values to x, y and z and solving for v. But what I wish for is to find solutions where the variables all belong to the set {1,2,3}. How do I do that systematically? Also, if I generalize the problem, keeping my equation as v + x + y - z = b with b any positive even integer, and search for solutions within {1,2,...,n}, is there a systematic way to do that? Thanks. - Shahab (talk) 06:26, 24 June 2010 (UTC)[reply]

There's a fairly obvious simple systematic way to do that: three nested loops for each of x, y and z in 1…n; solve for v = b - x - y + z; output x, y, z, v if v is in [1, n]. That scales as n³, so it should be perfectly feasible for values of n up to several hundreds or thousands on modern CPUs when implemented in any decent programming language or numerical computing environment, and you could do it with pencil and a single sheet of paper for n = 3 (19 solutions I reckon when b = 4. OK, I admit I used a computer even for that). I'm sure that simple algorithm can be made at least a bit more efficient, but with modern computing power it's probably not worth the extra mental and programming effort unless you're interested in larger values of n. Qwfp (talk) 09:50, 24 June 2010 (UTC)[reply]
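For the record, the three nested loops Qwfp describes, as a Python sketch:

```python
def solutions(b, n):
    """All (v, x, y, z) in {1, ..., n}^4 with v + x + y - z = b:
    loop over x, y, z, solve for v, and range-check it."""
    sols = []
    for x in range(1, n + 1):
        for y in range(1, n + 1):
            for z in range(1, n + 1):
                v = b - x - y + z
                if 1 <= v <= n:
                    sols.append((v, x, y, z))
    return sols

print(len(solutions(4, 3)))   # 19, as counted above
```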
(edit conflict) If your variables are constrained to be members of a finite set, then a brute-force solution is to create all the tuples of the appropriate size that can be formed from that set, and test each one of them in turn to see if it is a solution. So if v, x, y and z have to come from the set {1,2,...,n} then you construct each of the n⁴ 4-tuples (p, q, r, s) with members drawn from this set, and then check to see if p + q + r - s = b. If you want a more efficient algorithm, then you can reduce the size of the search space by exploiting the symmetries of your equation - the result for (p, q, r, s) is the same as the result for (q, p, r, s), (r, p, q, s) etc. - or the properties of your candidate set - if solutions have to come from {1,2,...,n} then p + q + r must lie between b + 1 and b + n, so max(p, q, r) ≥ (b + 1)/3 and min(p, q, r) ≤ (b + n)/3. Gandalf61 (talk) 10:04, 24 June 2010 (UTC)[reply]
Also 3 ≤ p+q+r ≤ 3n. Bo Jacoby (talk) 14:28, 24 June 2010 (UTC).[reply]

Simple formula

Not a homework question. If 0.82(s1 - p) = 0.72(s2 - p), then s2/s1 = what? I have not been able to work out a solution. I'd be interested in how a solution is obtained as well as the answer. s1, s2 and p must all be greater than zero, and s1 > p and s2 > p. Thanks. 92.24.186.235 (talk) 09:33, 24 June 2010 (UTC)[reply]

  1. Simplify the equation to 41s1 − 36s2 − 5p = 0.
  2. The inequality s1>p implies that 0 = 41s1 − 36s2 − 5p > 41s1 − 36s2 − 5s1 = 36s1 − 36s2.
  3. The inequality p>0 implies that 41s1 − 36s2 > 0.
  4. So 1 < s2/s1 < 41/36.
I hope it is correct now. Bo Jacoby (talk) 13:18, 24 June 2010 (UTC).[reply]
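A quick numerical spot-check of these bounds (a sketch; the sampling ranges are arbitrary):

```python
import random

# Draw random p > 0 and s1 > p, solve 0.82(s1 - p) = 0.72(s2 - p) for s2,
# and confirm 1 < s2/s1 < 41/36 on every draw.
for _ in range(100_000):
    p = random.uniform(0.01, 10.0)
    s1 = p + random.uniform(0.01, 10.0)
    s2 = p + 0.82 * (s1 - p) / 0.72
    assert 1 < s2 / s1 < 41 / 36, (p, s1, s2)
print("bounds hold for all sampled cases")
```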

Probability Distributions

Hello. If the probability of a computer chip failing quality-control testing is 0.015, then what is the probability that one of the first three chips off the line will fail? Do I use a geometric or binomial distribution? Thanks in advance. --Albert (talk) 17:27, 24 June 2010 (UTC)[reply]

You can use either if you use it correctly. But it's best not to consider distributions at all, just basic probability: the probability that a chip passes is 0.985. If the chips are independent, the probability that all three pass is $0.985^3 \approx 0.956$, so the probability that at least one fails is $1 - 0.985^3 \approx 0.044$.
Unless you meant the probability that exactly one fails, in which case use binomial. -- Meni Rosenfeld (talk) 18:24, 24 June 2010 (UTC)[reply]

June 25

Probability question

n identical shapes (think of them as cut out of paper), each of area S, are placed independently and uniformly at random over a region of area A, overlapping as necessary. What is the probability, p, that every point in the region will be covered by at least one shape? I am only interested in cases where n will be very large (millions or billions, say), before p becomes non-zero, and a decent approximation would be fine. Edge effects can be ignored. I assume p does not depend on the (non-pathological) shape of the region being covered, but it's not obvious to me whether it depends on the (non-pathological) shape of the covering shapes. If it does, assume circles. —Preceding unsigned comment added by 81.151.34.16 (talk) 03:49, 25 June 2010 (UTC)[reply]

Would you clarify "...uniformly at random over a region of area A"? A possible interpretation is: points $x_1, \dots, x_n$ are chosen independently and uniformly on the region $A$, and the congruent shapes are $x_i + S_0$ for $i = 1, \dots, n$ (having assumed the origin lies in the fixed shape $S_0$). This allows some of them to partially stick out of $A$. Or are you imposing $x_i + S_0 \subseteq A$ (this should make things more complicated)? And are you also considering rotations, or just translates (the simpler option)?
Thanks for your reply. I'm afraid I do not understand your notation "the congruent shapes are $x_i + S_0$". As I mentioned, I am not concerned about what happens at the edges, including whether shapes can partially overlap the boundary. The shapes are so small compared to the containing region that it's irrelevant for my purposes. I assume that there is a limiting distribution that holds as the covering shapes get indefinitely small, and this limiting distribution is really what I'm after. If the shape of the covering pieces matters, and it matters whether we consider rotations, then assume circles. 86.135.29.110 (talk) 14:12, 25 June 2010 (UTC).[reply]
When S out of A square centimeters are covered, each cm² is covered on average S/A times. When n shapes are placed, the average cm² is covered nS/A times. The actual number of times has a Poisson distribution. The probability that some particular cm² is uncovered is $e^{-nS/A}$. The average uncovered area is $L = A e^{-nS/A}$. Bo Jacoby (talk) 07:27, 25 June 2010 (UTC).[reply]
Does this lead to an answer to the original question? 86.135.29.110 (talk) 14:12, 25 June 2010 (UTC).[reply]
Yes, I think so. The dependence on shape enters here. Consider the case where each shape is a square and the shapes are placed on a square grid like a chess board. Then the average number of uncovered squares is L/S, and the answer to the original question is
$p = e^{-L/S} = \exp\!\left(-\tfrac{A}{S}\, e^{-nS/A}\right).$
Bo Jacoby (talk) 19:18, 25 June 2010 (UTC).[reply]
Are you assuming that the squares are always placed exactly on the grid lines? If so, this won't work (I mean, it isn't in the spirit of the original question). The squares can be placed anywhere. 86.185.77.226 (talk) 19:29, 25 June 2010 (UTC).[reply]
The shape does matter, unfortunately. This can be seen by taking as the basic shape of area S two circles of area S/2 joined by a line. When rotations are allowed and the distance between the circles is large, this can be considered almost as two independent circles of area S/2. If we split up the original area A in two, then if n circles of area S have probability p of fully covering area A, the same can be said of n circles of area S/2 covering area A/2. So the probability of the two-circle combinations of area S covering area A is p².
I think we should just consider the circles or squares form of the original question. I'll imagine the circle form as spray painting: what is my chance of covering the object completely? I've protected the area around it, so I can go evenly up to the edges. Of course spray paint is rather more uniform and the paint won't form exact circles, but we're in maths land here. Dmcq (talk) 08:51, 25 June 2010 (UTC)[reply]
p.s. I had been wondering if this could be applied to how long till Jackson Pollock completely covered every spot of the canvas but since he dribbled bits around rather than using drops I suppose not :) Dmcq (talk) 08:56, 25 June 2010 (UTC) [reply]
I don't quite follow this. I'm not sure how, in the two-circles-joined-by-a-line case, you arrange that the two circles always fall in different halves of A. In any case, in my problem we can assume that the extent (as well as the area) of the covering shapes is negligibly small compared to the region A (otherwise there are numerous edge-related complexities which I specifically want to ignore), so I don't think this argument can apply. 86.185.77.226 (talk) 18:26, 25 June 2010 (UTC).[reply]
I wasn't saying they would fall in different halves of A, just that, allowing rotations, the two halves of the long dumbbell would act as two independent circles to all intents and purposes when covering the area. And I believe Bo Jacoby's formula would apply generally when the area has a reasonable chance of being fully covered. For n shapes of area S in an area A you would have a probability of something like exp(-K·(A/S)·exp(-nS/A)) of completely covering the area, where K is some constant depending only on the shape. This does depend on some big assumptions, the main one being that as the flecks of uncovered space get separated they are small and each is wholly covered with probability S/A by each added shape. I wouldn't have thought the difference between a circle and a square would make a big difference to the outcome. Dmcq (talk) 19:49, 25 June 2010 (UTC)[reply]
I wasn't saying they would fall in different halves of A, just that, allowing rotations, the two halves of the long dumbbell would act as two independent circles to all intents and purposes when covering the area. I understand that, but wouldn't this just tell us that p(A,S,n) = p(A,S/2,2n)?? Why would there be a problem with that? (By the way, I'm not particularly arguing that the formula shouldn't be shape-dependent, I just can't see how your argument demonstrates that it is.) 86.185.77.226 (talk) 21:02, 25 June 2010 (UTC).[reply]
It does tell you that, but you can then split that 2n lot into two halves, and (S/2)/(A/2) = S/A. Each half has probability p(A/2,S/2,n) of being completely filled, which will be the same as p(A,S,n). With two of those areas stuck together, the probability of both being filled is p(A/2,S/2,n)·p(A/2,S/2,n), and this should be the same as p(A,S/2,2n). However, if shape doesn't matter, this last should be the same as p(A,S,n). So we have p(A,S,n) = p(A,S,n)². Dmcq (talk) 21:25, 25 June 2010 (UTC)[reply]
When you talk about splitting A into two halves, each of area A/2, how are those halves defined? I've tried the exp(-K·(A/S)·exp(-nS/A)) formula against simulations with square covering shapes and, while there is always the possibility that I made a mistake, the results do not look good. If I haven't made a mistake then the formula looks definitely wrong. How confident do you feel about it? 86.174.161.139 (talk) 23:08, 25 June 2010 (UTC).[reply]
Nothing special, just chop in half. It shouldn't matter if A is a square or rectangle or circle if it is significantly larger than S. What did you simulate? Dmcq (talk) 23:18, 25 June 2010 (UTC)[reply]
I'm sorry if I'm being slow, but I just don't see it. If you chop A in half, then, in order for your logic to work, half of the dumbbell ends need to fall into one half, and half into the other, don't they? How is that arranged? A typical simulation was placing 5 x 5 squares (anywhere) on a 100 x 100 grid. The results I get are nothing like a shape that could be produced by the proposed formula (it feels too wrong to be just due to the fact that this is a long way off an insignificantly small S). However, placing 1 x 1 squares on a 20 x 20 grid, for example, gives fairly plausibly similar results to the formula with K = 1. This suggests something is going wonky with the logic once it becomes possible for the squares to overlap. I am not 100% certain about any of this, by the way! 86.174.161.139 (talk) 23:53, 25 June 2010 (UTC).[reply]
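For comparison, here is roughly the simulation being described, as a Python sketch (integer-grid placements approximate the continuous problem, and wrap-around edges stand in for "ignore edge effects"):

```python
import numpy as np

rng = np.random.default_rng()

def fully_covered(n, grid=100, side=5):
    """Place n side x side squares uniformly at random (integer positions,
    toroidal edges) on a grid x grid board; True if every cell is covered."""
    board = np.zeros((grid, grid), dtype=bool)
    for x, y in zip(rng.integers(0, grid, n), rng.integers(0, grid, n)):
        ix = np.arange(x, x + side) % grid
        iy = np.arange(y, y + side) % grid
        board[np.ix_(ix, iy)] = True
    return board.all()

def p_cover(n, trials=200):
    return sum(fully_covered(n) for _ in range(trials)) / trials

for n in (800, 1200, 1600, 2400):
    print(n, p_cover(n))    # empirical coverage probability to set against the formula
```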
I'm pleased that your 1x1 simulation confirmed my analysis. How did you treat the edges in your 5x5 simulation? Bo Jacoby (talk) 06:49, 26 June 2010 (UTC).[reply]
I think Bo's reasoning may be correct for the 1×1 case but break down for larger shapes, because the probabilities of two nearby squares being covered (or uncovered) are not independent. For example, if there are only two squares left to cover and they are neighbours, there's a good chance that both will be covered at once; if they are far apart, this is clearly impossible. I also notice that Bo's answer above to the original question takes the form of the cumulative distribution function of a Gumbel distribution, implying the distribution of the number of shapes placed before every square is covered would follow this distribution. Qwfp (talk) 08:52, 26 June 2010 (UTC)[reply]
For the dumbbell, the two parts would be in the same half practically all the time. Try an A4 sheet with small circles, and then two halves of an A4 sheet with dumbbells of circles 1/√2 the radius: I was thinking the smaller circles on the A5 sheet should behave, to all intents and purposes, like the bigger circles on the A4 sheet. I wasn't trying to make the two bits go into different halves; normally they'd be in the same half. As to the reasoning about the little bits remaining, the logic is that when there is a reasonable chance of the whole area being covered, the specks would tend to be far apart relative to the size of the shapes. The big assumption is that a speck is likely to be either completely covered or completely missed; my guess was this doesn't make a big difference. I had a quick look at paint coverage and a couple of other things but didn't see anything close, though I agree with Bo Jacoby that someone has probably looked at something similar. Also, you get a nicer formula, I think, if you define N = A/S, i.e. the number of shapes you'd need if they had no overlap. The formula then is p(N,n) = exp(-K·N·exp(-n/N)), and solving for n you get n = N·log(K·N/log(1/p)), so it is dominated by N·log(N). So the smaller the drops, the more paint you have to spray to be absolutely sure of covering every single point – quite a lot of paint, in fact. I appreciate that you think the simulations disagree with the formula, but I have no idea from your description what it is that you were looking for or what you saw. Dmcq (talk) 10:16, 26 June 2010 (UTC)[reply]
A study of the effects of bombing in [1] might give something; I don't have access. Anything which references it might also be useful. Percolation is the study of when the covered area becomes solid; that is independent of n, but might, I guess, relate to the shape effect K. Dmcq (talk) 12:48, 26 June 2010 (UTC)[reply]
Counting dust or bacteria is also related [2], but perhaps more to the percolation problem. Dmcq (talk) 13:10, 26 June 2010 (UTC)[reply]
Thanks for those, Dmcq. After a bit of forward and backward citation chasing with the help of the Science Citation Index, this paper looks most promising: Miles, R. E. (1969). "The Asymptotic Values of Certain Coverage Probabilities". Biometrika. 56 (3): 661–680. JSTOR 2334674. I'm fortunate enough to have access to it, but I don't have time to read it properly right this minute, and to be honest its format is a bit mathematical for me, so I thought I'd just post the ref in case someone else can 'decode' it first. Qwfp (talk) 15:24, 26 June 2010 (UTC)[reply]

Linear or non-linear time series

How do you tell if a time series is linear or non-linear? Is there a formula? Thanks 92.28.242.52 (talk) 14:12, 25 June 2010 (UTC)[reply]

Or, if it's an easier question to answer, how do you measure the extent to which a time series is non-linear or chaotic? Thanks 92.15.15.76 (talk) 09:43, 26 June 2010 (UTC)[reply]

A time series that is linear with time will have the form x(t) = at + b. If there are errors or uncertainties in the observations x(t) then there are various methods for estimating the "best" values for the parameters a and b - the simplest method is ordinary least squares, but others are mentioned in our linear regression article. Once you have a linear model, then you can test the goodness of fit between model and observations using various correlation tests. Note that non-linear is not the same as chaotic - a time series of daily mid-day temperatures over the course of several years will be non-linear, but not chaotic. Gandalf61 (talk) 11:44, 26 June 2010 (UTC)[reply]
Thanks, you've described a straight line (possibly with noise also) but you have not answered either of the two questions. 92.15.13.228 (talk) 16:12, 26 June 2010 (UTC)[reply]
If you compare the goodness of fit of a linear model with that of a quadratic model, which uses up a degree of freedom, and you get a better fit with the linear model, then that's a good indication that linear is a good model. Dmcq (talk) 14:14, 26 June 2010 (UTC)[reply]
Thanks, why? 92.15.13.228 (talk) 16:12, 26 June 2010 (UTC)[reply]

Assuming the data contains a random component, there is likely no way to tell for certain -- you can only tell probabilistically. For example, regress y on a constant, x, and x^2. If the coefficient on x is statistically significant while the coefficient on x^2 is statistically insignificant, then you have evidence that the model is more likely to be linear than it is to be quadratic. If you want to test higher-order non-linearities, you can include x^3, x^4, etc. This is a better approach than comparing the goodness of fit for a couple of reasons: (1) without doing additional math, you don't know whether the improvement in the goodness of fit is statistically significant, (2) R^2 is guaranteed to increase when you add the quadratic term, so you must use an adjusted R^2 -- but, there are several (or none) to choose from depending on what you intend to do with the data and whether your dependent variable is discrete or continuous.Wikiant (talk) 16:17, 26 June 2010 (UTC)[reply]
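A sketch of that regression test (using statsmodels; the variable names and the example data are mine):

```python
import numpy as np
import statsmodels.api as sm

def quadratic_term_test(y):
    """Regress the series on a constant, t and t^2; an insignificant t^2
    coefficient is evidence for a linear-in-time trend, as described above."""
    t = np.arange(len(y), dtype=float)
    X = sm.add_constant(np.column_stack([t, t ** 2]))
    fit = sm.OLS(np.asarray(y, dtype=float), X).fit()
    return fit.params, fit.pvalues     # pvalues[2] belongs to the t^2 term

# Example: a noisy straight line should give a large p-value for the t^2 term.
rng = np.random.default_rng(0)
y = 2.0 + 0.5 * np.arange(200) + rng.normal(0, 1, 200)
print(quadratic_term_test(y))
```

Note this only tests polynomial non-linearity of the trend; detecting chaos is a different problem, closer to the Lyapunov-exponent question raised below.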

Would calculating or estimating the Lyapunov exponent be a way? 92.15.13.228 (talk) 16:20, 26 June 2010 (UTC)[reply]

MUSIC INVERSION AS A MATHEMATICAL PATTERN

I have lost a beautiful video demonstrating the subject material. I recall one example included Rimsky-Korsakov's Coq d'or, displaying the bird's happy prediction contrasting with its gloomy prediction – the identical notes, just inverted, evoking the contrasting emotion. The video was likely from a university lecture.


Any help locating such a video would be immensely appreciated. Padraeg Sullivan 23:32, 25 June 2010 (UTC) —Preceding unsigned comment added by PADRAEG (talkcontribs)

June 26

Vector calculus

I found a question in my textbook that really confuses me: find a C¹ function f: R³ → R³ such that f takes the vector i + j + k emanating from the origin to i - j emanating from (1,1,0), and takes k emanating from (1,1,0) to k - i emanating from the origin.

I am confused because I thought vectors are displacable, so it doesn't matter where it originates from. 173.179.59.66 (talk) 08:35, 26 June 2010 (UTC)[reply]

I guess that "ij emanating from (1,1,0)" means (ij) − (i + j). Bo Jacoby (talk) 10:06, 26 June 2010 (UTC).[reply]
Why is that? 173.179.59.66 (talk) 11:24, 26 June 2010 (UTC)[reply]
This is an interesting question. It sounds like we should think of the tangent bundle Tℝ³. We have two vectors based at two points, and a function ƒ : ℝ³ → ℝ³. The first vector, written above as i + j + k, is based at the origin, and we want it to be mapped to the vector i - j based at (1,1,0). This means that ƒ(0,0,0) = (1,1,0) and the differential at the origin takes i + j + k to i - j, i.e.
$\mathrm{d}f_{(0,0,0)}(i + j + k) = i - j.$
Practically, you need to evaluate the Jacobian matrix of ƒ, and then evaluate it at x = y = z = 0. You'll also need to do the same for the second vector. You want ƒ(1,1,0) = (0,0,0) and
$\mathrm{d}f_{(1,1,0)}(k) = k - i.$
Then you just need to check the differentiability of ƒ, and prove it's C¹. •• Fly by Night (talk) 11:11, 26 June 2010 (UTC)[reply]
Oh, I don't really need a solution to the problem...but if you care, there's a slightly easier but much less elegant approach. 173.179.59.66 (talk) 11:20, 26 June 2010 (UTC)[reply]
I see... so you mean that you thought that all vectors were unmovable? Well, given a manifold (just think of the plane if you like), there is something called a tangent bundle. This is a fibre bundle whose fibres are tangent spaces. The tangent space at a given point is the space of all vectors based at that point that are also tangent to the manifold. So in the case of the plane, the tangent space at (1,0) is the space of all vectors based at (1,0). Let's say we have a map from the plane to the plane. If it carries (1,0) to (0,1) then the differential takes all of the vectors based at (1,0) to the space of vectors based at (0,1). The differential gives a linear map between the tangent space to the plane at (1,0) and the tangent space to the plane at (0,1). So vectors are very much movable. More generally, if f : M → N is a differentiable map between two manifolds, M and N, then the differential
$\mathrm{d}f_p \colon T_pM \to T_{f(p)}N$
is a linear map between tangent spaces. So in some sense it actually moves vectors from one manifold onto another manifold. •• Fly by Night (talk) 11:30, 26 June 2010 (UTC)[reply]

Contingent events

What techniques do actuaries or others use to determine the probability of contingent events?--220.253.96.217 (talk) 13:53, 26 June 2010 (UTC)[reply]

Have you had a search of Wikipedia using the search box at the top of the page? Try sticking in 'contingent events'. Bayes' theorem may also be of use. Dmcq (talk) 14:08, 26 June 2010 (UTC)[reply]
I think the concept you're asking for is conditional probability. 75.57.243.88 (talk) 17:19, 26 June 2010 (UTC)[reply]

Limits

I want to show that a function tends to zero as x tends to zero. Let

$f(x) = \frac{e^{-1/x^2}}{x}.$

How do I show that ƒ(x) → 0 as x → 0? I've tried L'Hôpital's rule and power series expansion, but I keep falling flat. For example:

$\frac{e^{-1/x^2}}{x} = \frac{1}{x}\sum_{n=0}^{\infty} \frac{(-1)^n}{n!\,x^{2n}}.$

But that doesn't exactly tell me much. L'Hôpital's rule just gives higher order denominators with each and every iteration. Any ideas? •• Fly by Night (talk) 14:55, 26 June 2010 (UTC)[reply]

I would solve it using the Landau notation. I'm strapped for time at the moment, so I'm afraid I can't show you how. 76.229.193.242 (talk) 15:02, 26 June 2010 (UTC)[reply]
The substitution y = 1/x² simplifies the limit to $\lim_{y\to\infty} \sqrt{y}\, e^{-y}$. You should be able to solve the latter easily with whatever method you are used to.—Emil J. 15:16, 26 June 2010 (UTC)[reply]
That is, the manipulation is only valid for $x > 0$, but if you show that limit to be zero, then $\lim_{x\to 0^-} f(x)$ is also zero because f is an odd function.—Emil J. 15:21, 26 June 2010 (UTC)[reply]
Easy as that! How foolish I feel now. Thanks EmilJ. •• Fly by Night (talk) 15:33, 26 June 2010 (UTC)[reply]