Wikipedia:Reference desk/Mathematics
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
July 1
Triangle with all angles less than 180°
I know that, in elliptic geometry, it's possible to have a triangle with three right angles — thus adding up to 270° rather than the Euclidean 180°. Is there any system in which it's possible to have a triangle with less than 180° between its angles? I've looked through Hyperbolic geometry, but it's been a few years since I last took any maths, so I can't understand whether this is such a system. Nyttend (talk) 04:00, 1 July 2009 (UTC)
- Yes, triangles having angles totalling less than 180 degrees characterises hyperbolic geometry. I'm surprised our article doesn't make that clear - I'll take a look at it. --Tango (talk) 04:07, 1 July 2009 (UTC)
- I've added a bit about hyperbolic triangles to the lede of hyperbolic geometry. Somebody may want to review it. It would also be good if someone found a reference, I haven't had a chance to look for one but any geometry textbook that touches on non-Euclidean geometry will do. --Tango (talk) 04:15, 1 July 2009 (UTC)
- [ec] Actually, it does, in the Escher section; I missed it. However, your assurance made me look more carefully and confidently, so thanks :-) Nyttend (talk) 04:16, 1 July 2009 (UTC)
- I missed it too! I think it is good to mention it early on, it's a pretty key characteristic. --Tango (talk) 04:20, 1 July 2009 (UTC)
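The angle deficit can be seen concretely with a short Python sketch (not from the original thread). For an equilateral hyperbolic triangle with side a, the hyperbolic law of cosines reduces to cos A = cosh a/(cosh a + 1), so each angle is below 60° and the sum is below 180°, approaching 180° only as the triangle shrinks:

```python
import math

def equilateral_hyperbolic_angle_sum(a):
    """Angle sum, in degrees, of an equilateral hyperbolic triangle with side a.

    The hyperbolic law of cosines, cosh c = cosh a cosh b - sinh a sinh b cos C,
    reduces for three equal sides to cos A = cosh a / (cosh a + 1).
    """
    A = math.acos(math.cosh(a) / (math.cosh(a) + 1))
    return 3 * math.degrees(A)

print(equilateral_hyperbolic_angle_sum(0.001))  # just under 180: tiny triangles look Euclidean
print(equilateral_hyperbolic_angle_sum(1.0))    # noticeably less than 180
print(equilateral_hyperbolic_angle_sum(5.0))    # the sum shrinks toward 0 as the triangle grows
```

Note how the angle sum depends on the size of the triangle, unlike in Euclidean geometry where it is always exactly 180°.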
MathPsy and QuantPsy: Difference
What is the difference between Mathematical Psychology and Quantitative Psychology, if any? —Preceding unsigned comment added by Ultraluna (talk • contribs) 18:01, 1 July 2009 (UTC)
- I can't speak to Psychology, but the distinction also exists in Economics. In Mathematical Economics, one builds mathematical models to describe economic phenomena. What results from the models are (generally) qualitative solutions (e.g., an increase in X accompanies a decrease in Y). The focus is on mathematically representing the dynamics of a process so as to better understand the process rather than predicting what outcome the process will yield. In Quantitative Economics (we call it Econometrics), one fits data to mathematical models in an attempt to obtain quantitative solutions (e.g., an increase in X of one unit accompanies a decrease in Y of 10 units). The focus here is more on measuring the outcome of a process. Wikiant (talk) 18:28, 1 July 2009 (UTC)
Thanks for your response. That helps, albeit somewhat. Without demeaning Econometricians in any way, what I understand, then, is that Mathematical Economics is more important than Econometrics. The former builds the model which the latter works on: like an Architect and a Mason+Civil Engineer. —Preceding unsigned comment added by Ultraluna (talk • contribs) 04:51, 2 July 2009 (UTC)
Solve for z
Hi, I'd like guidance on solving the following equation for z,
The presence of the term makes this trickier; normally you might be able to extract the z term through factorisation and isolate it that way. I know I probably need to factorise the left-hand expression in some way, but I don't really know any factoring techniques that apply here. Thanks. 94.171.225.236 (talk) 20:51, 1 July 2009 (UTC)
It's just a quadratic equation
where
Michael Hardy (talk) 21:10, 1 July 2009 (UTC)
- Understood, thank you! Can I just ask why c isn't equal to xy? Wouldn't all terms lacking z imply that they constitute the constant term? 94.171.225.236 (talk) 21:21, 1 July 2009 (UTC)
- Because Michael Hardy made a typo. Algebraist 21:25, 1 July 2009 (UTC)
Sorry. Typo. xy is correct. Michael Hardy (talk) 01:41, 2 July 2009 (UTC)
- Is there a better way than using the formula (where are roots of the polynomial) in order to achieve a factorisation? The roots in quadratic formula form are, if my calculations are correct, something like this:
- Which seems unnecessarily complicated, considering this is an exercise. I haven't tested to see whether the whole expression for the above polynomial expands to its original form, but I'm fairly certain there's a more elegant approach to this. 94.171.225.236 (talk) 08:07, 2 July 2009 (UTC)
The equation
has the solutions
Your calculation was almost correct. Bo Jacoby (talk) 09:15, 2 July 2009 (UTC).
- I believe the term should be in the above equations. -- Tcncv (talk) 04:24, 7 July 2009 (UTC)
- I agree. —Tamfang (talk) 00:59, 22 July 2009 (UTC)
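The original equation was posted as an image and is lost here, so the following Python sketch uses a hypothetical quadratic of the same flavour — one whose constant term is xy, as established in the thread — namely z² + (x + y)z + xy = 0, to show how the quadratic formula recovers the clean factorisation (z + x)(z + y):

```python
import math

# Hypothetical stand-in for the lost equation: a quadratic in z whose
# constant term is x*y, as discussed in the thread:
#     z**2 + (x + y)*z + x*y = 0
# which factors as (z + x)(z + y) = 0.
def solve_quadratic(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)  # assumes real roots
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

x, y = 3.0, 5.0
roots = solve_quadratic(1.0, x + y, x * y)
print(sorted(roots))  # [-5.0, -3.0], i.e. z = -x and z = -y
```

Whenever the quadratic-formula roots come out as simple expressions in the other variables, that is the signal that a direct factorisation exists.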
July 2
The empty function
Suppose f:A->0, where 0 is the empty set and A is non-empty. Using vacuous truth, we can prove f is a bijection.
Surjectivity: "For all y that belong to 0, there exists an x in A such that f(x)=y"
Since there aren't any y in 0, this proposition is automatically satisfied by vacuous truth.
Injectivity: "For all x1 and x2 in A that aren't equal, f(x1) isn't equal to f(x2)"
Since for all x in A, f(x) isn't even defined, again this proposition is satisfied.
I'm not an expert in anything, but this "thing" shows every set has "0 elements". Is there something wrong with my reasoning? Or is it just the strangeness of vacuous truth? —Preceding unsigned comment added by Standard Oil (talk • contribs) 07:47, 2 July 2009 (UTC)
- When we say f:X->Y we require that f have a value for every element of X. That means if f:A->emptyset, then A must also be empty. Because if A has an element x, that would give f(x) is an element of the empty set. So there are no functions like you describe (i.e. with A non-empty). 208.70.31.206 (talk) 08:39, 2 July 2009 (UTC)
You are misusing the phrase "vacuous": if we have a predicate φ(x), where x ranges over X, then ∀x.φ(x) is true when X is empty, even if φ(x) is not true for any x. Instead, if one supposes f:A->0, where 0 is the empty set and A is non-empty, then one can derive 0=1, because the premises are contradictory. This does not establish that 0=1 by vacuous truth, although it does establish that 0=1 for every f that satisfies the premises. — Charles Stewart (talk) 09:13, 2 July 2009 (UTC)
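The vacuous-truth side of this can be poked at directly in code (a small sketch, not from the thread): Python's all() over an empty iterable returns True, which is exactly why the surjectivity clause holds when the codomain is empty, while the real obstruction is that no value can be assigned to any element of A at all.

```python
def is_surjective(f, domain, codomain):
    # "for every y in the codomain there exists x in the domain with f(x) == y"
    # all() over an empty iterable is True, so this holds vacuously
    # whenever the codomain is empty.
    return all(any(f[x] == y for x in domain) for y in codomain)

f = {}          # the empty assignment: no values are defined anywhere
A = {1, 2, 3}   # a non-empty would-be domain
empty = set()

print(is_surjective(f, A, empty))   # True -- there is no y to check
print(all(x in f for x in A))       # False -- f is undefined on A, so f is not a function A -> {}
```

The second check is the one that fails: there is simply no function from a non-empty A into the empty set, so the vacuously-true properties never get a function to apply to.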
Safety Stock Formula
Why does safety stock vary with SQRT(Lead Time) & not Lead Time? —Preceding unsigned comment added by Bharat Kantharia (talk • contribs) 09:01, 2 July 2009 (UTC)
- Because the safety stock level is determined by the standard deviation of total demand during the lead time. If we model the demand per unit time period as a normally distributed random variable with variance σ² (and variance is independent of time) then the total demand over a lead time of n periods is a normally distributed random variable with variance nσ² (see sum of normally distributed random variables). So the standard deviation of total demand over the lead time is sqrt(n)σ, which is proportional to sqrt(n). Gandalf61 (talk) 09:17, 2 July 2009 (UTC)
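The sqrt(n) scaling described above is easy to confirm by simulation. A Python sketch with made-up parameters (not from the thread): summing demand over 9 periods should roughly triple the standard deviation, not multiply it by 9.

```python
import random
import statistics

random.seed(0)  # make the simulation reproducible

def demand_std_over_lead_time(n_periods, sigma=2.0, trials=20000):
    """Standard deviation of total demand over n_periods, with per-period
    demand drawn from a normal distribution with standard deviation sigma."""
    totals = [sum(random.gauss(0.0, sigma) for _ in range(n_periods))
              for _ in range(trials)]
    return statistics.stdev(totals)

one = demand_std_over_lead_time(1)
nine = demand_std_over_lead_time(9)
print(nine / one)  # close to 3 == sqrt(9): std dev scales with sqrt(lead time)
```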
I don't understand why the sign changes when grouping polynomials for factoring.
Hi, I am doing some self study of precalculus.
When factoring polynomials by grouping, you are supposed to change the sign of each term in the group if there is a minus sign before the group. This seems weird to me because I don't see how the act of adding parentheses changes the expression.
To see what I am talking about look at Factorization#Factoring_by_grouping, and see how the signs change in the second group.
I can't find any resources that explain why the signs change, what am I missing here? Thanks for any insight or pointers to more info. -- 99.129.153.2 (talk) 14:10, 2 July 2009 (UTC)
- Take the second term of the second group, for example. Adding 234x² is the same operation as subtracting minus 234x², so if you include the term in a group which is itself subtracted (has a minus before its left bracket) you need to change that term's sign. This is because parentheses cause the preceding plus or minus to be applied to all terms included:
- –2 – 7 = (–2) + (–7) = –(2 + 7)
- so
- 30 – 2 – 7 = 30 – (2 + 7)
- and similarly
- 30x – 70y + 13z = 30x – 70y – (–13z) = 30x – (70y – 13z)
- HTH --CiaPan (talk) 14:44, 2 July 2009 (UTC)
- If you work backwards (expand out the brackets) you'll see that the minus sign gets distributed over the bracket and changes all the signs. When you are grouping the terms together you are doing the inverse of expanding brackets so you have to do the inverse of changing all the signs, which is just changing all the signs again. --Tango (talk) 15:15, 2 July 2009 (UTC)
- 10 - 3 + 2 = 9, but 10 - (3 + 2) = 5, so you get an idea how parentheses can change the meaning of an expression when there is subtraction involved. To group the terms in the first expression without affecting the value, we have to consider how the negative implied by subtraction will distribute over the whole term in parentheses, so 10 - 3 + 2 = 10 + (-3 + 2) = 10 - (3 - 2). Notice how the sign of every term inside the parentheses flips when we stick a (-) outside the parentheses. In other words adding -3 becomes subtracting +3 and adding +2 becomes subtracting -2. Rckrone (talk) 17:23, 2 July 2009 (UTC)
...and here's an example with a minus sign on the inside becoming a plus sign on the outside:
- 1000 − (200 − 10) = 1000 − 200 + 10.
Say you have $1000 in your checking account. Something that normally costs $200 is on sale for $200 − $10. After you pay for it with your debit card, your checking account balance is $1000 minus the price, and the price is $200 − $10, so your account balance is now $10 MORE than it would be if you'd paid the regular price. Hence plus rather than minus. Michael Hardy (talk) 18:02, 2 July 2009 (UTC)
Thanks everyone I think I've got my head wrapped around this now. Great examples! -- 99.129.153.2 (talk) 19:08, 2 July 2009 (UTC)
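All of the sign-flip identities in this thread can be spot-checked mechanically; a throwaway Python sketch:

```python
# Subtracting a group is the same as subtracting each of its terms,
# flipping every sign inside the parentheses.
assert 30 - 2 - 7 == 30 - (2 + 7) == 21
assert 10 - 3 + 2 == 10 - (3 - 2) == 9
assert 1000 - (200 - 10) == 1000 - 200 + 10 == 810

# The same holds with symbols, checked here at a few sample values:
for x, y, z in [(1, 2, 3), (-4, 5, -6), (0, 7, 2)]:
    assert 30*x - 70*y + 13*z == 30*x - (70*y - 13*z)
print("all identities hold")
```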
July 3
Development of calculus
Why did calculus develop as late as it did? I would have thought that, given how easy it is to derive the derivatives of most functions and how elementary questions like "what's the rate of change of this quantity?" are, calculus should have been "discovered" thousands of years ago. I certainly pondered what were basically calculus problems long before I knew about calculus; I'm surprised that the ancient Greek mathematicians didn't do the same. --99.237.234.104 (talk) 07:58, 3 July 2009 (UTC)
- As can be seen here, some of the basic concepts of differential and integral calculus floated around for millennia; it's just only in the past few hundred years that rigorous axioms and a solid branch of mathematics (calculus) have emerged. —Anonymous DissidentTalk 10:49, 3 July 2009 (UTC)
- (e/c) They did (Archimedes for instance). They did not develop the general theory, which is probably related to the fact that they did not think about functions in the algebraic way we are accustomed to. — Emil J. 10:55, 3 July 2009 (UTC)
- I knew that certain aspects of calculus were around for a long time, but I'm surprised that they didn't develop further, considering the utility of calculus in solving mathematical problems. The complexity of what Archimedes and other mathematicians of the time managed to derive--like the surface area and volume of a sphere--is amazing; I'd never understand how they came upon such complicated proofs. That's why I'm surprised that nobody developed something as simple and intuitive as the basic concepts of calculus. --99.237.234.104 (talk) 11:19, 3 July 2009 (UTC)
- A systematic development of calculus can't get very far without modern algebraic notation and the synthesis of geometry and algebra in analytic geometry. Diophantus started the development of algebraic notation, but even apparently simple advances like plus, minus and equals signs didn't emerge until the 16th century. Descartes didn't develop the key insight that an equation describes a curve and vice versa until the early 17th century. Major paradigm shifts like calculus affect our perspective so much that it can be difficult to appreciate how difficult they were to develop in the first instance. Gandalf61 (talk) 13:44, 3 July 2009 (UTC)
Discovering calculus is a harder problem than it appears to be because when you learn calculus, you can see only the easy part. The easy part is to answer all the questions after you know the questions. Michael Hardy (talk) 23:47, 3 July 2009 (UTC)
- However, recall that Hellenistic science was abruptly halted by the Roman expansion around the Mediterranean. For the story of Hellenistic science I suggest Lucio Russo's wonderful book "The Forgotten Revolution". --pma (talk) 06:36, 4 July 2009 (UTC)
Method to solve 3-variable equation
x=(y/(z^y))
If I know x and z, what's the best way to obtain a value for y? I assume it can't be solved algebraically because I end up with y and log(y) in any rearrangement which can't be combined in a single term to make it y=f(x,z). If Newton's method is to be used then I'm not sure how to differentiate it or apply the method in this case. 86.163.186.102 (talk) 08:55, 3 July 2009 (UTC)
- You can express the solution as y = −W(−x ln z)/ln z, where W is the Lambert W function, if that's any help. — Emil J. 11:02, 3 July 2009 (UTC)
- (ec) I don't have the time to explain this with much depth right now, but for any integer n, y = −Wn(−x ln z)/ln z for your case. This is assuming a non-zero log(z). Wn(x) is the n-th branch of the Lambert W function. Hope this helps. I expect someone will be able to come along shortly and give something more elaborate and explained. —Anonymous DissidentTalk 11:05, 3 July 2009 (UTC)
- If you want to solve it numerically using Newton's method, you need to express the equation as f(y) = 0. In your case f(y) = y/z^y − x is one way, but f(y) = y − x·z^y is simpler. To differentiate it, you need to treat x and z as constants and use the normal rules of differentiation. The result is f′(y) = 1 − x·z^y·ln z, and the Newton iteration is y_{n+1} = y_n − f(y_n)/f′(y_n). -- Meni Rosenfeld (talk) 11:37, 3 July 2009 (UTC)
- Thanks everyone. Since using W is iterative too but is more complicated I just used Newton's method with your f(y) and f'(y) and it's working great. 86.163.186.102 (talk) 11:56, 3 July 2009 (UTC)
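For the record, one way the Newton iteration described above can be written out in Python (a sketch, not from the thread; the starting guess and tolerance are arbitrary choices):

```python
import math

def solve_for_y(x, z, y0=1.0, tol=1e-12, max_iter=100):
    """Solve x = y / z**y for y via Newton's method on f(y) = y - x*z**y."""
    y = y0
    for _ in range(max_iter):
        f = y - x * z**y
        fprime = 1.0 - x * z**y * math.log(z)
        step = f / fprime
        y -= step
        if abs(step) < tol:
            return y
    raise RuntimeError("Newton's method did not converge")

y = solve_for_y(x=0.3, z=2.0)
print(y, y / 2.0**y)  # the second number recovers x = 0.3
```

Like all Newton iterations this can fail for a poor starting guess (e.g. where f′ vanishes), but for well-behaved x and z it converges quadratically.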
Mininmum Sample size for correlation study
I know of a formula for determining the minimum sample size for a prevalence study, which most research studies use. Is there any formula to determine minimum sample size for a correlation study, such as research into a correlation between birth weight and fetal cord leptin concentration? Tunmisadej (talk) 19:43, 3 July 2009 (UTC)
- If you're doing homework or trying to replicate established procedures, look at the textbook/published literature. Otherwise, I can't think of a formula off-hand, but in general the best way to approach these sorts of problems is to work backwards. Figure out what sort of confidence level you want to have (0.05, 0.01, 0.001, etc.) in your results, and then using worst-case estimates for the various other parameters apply the equation you're going to use to analyze the completed study. You can then predict how the confidence level varies for various sample sizes. The sample size you want is the one that will give you at least the desired confidence level. A little extra work will get you information on how sensitive your confidence interval is to fluctuations in the various parameters. -- 76.201.158.47 (talk) 21:00, 4 July 2009 (UTC)
- There is no formula, indeed there cannot be, as that would imply the existence of a threshold at which the correlation test began to work, and above which it did not work any better. What you probably want is a power calculation for correlation test.
This tells you how many subjects you will need to test in order to have a certain likelihood (80% is common) to detect an effect of a certain size (which you have to provide). If you were looking for a correlation you think will be around 0.3 in magnitude, and want to find it 80% of the time it is really there, you would need around 85 subjects:
Using R:
> library(pwr)
> pwr.r.test(r=0.3, power=0.80, sig.level=0.05, alternative="two.sided")
          n = 84.7
          r = 0.3
  sig.level = 0.05
      power = 0.8
alternative = two.sided
Tim bates (talk) 11:58, 11 July 2009 (UTC)
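For readers without R, essentially the same sample size can be reproduced in plain Python with the standard Fisher z-transformation approximation, n ≈ ((z₁₋α/₂ + z_power)/atanh(r))² + 3. This is an approximation to, not the exact computation behind, pwr.r.test:

```python
import math
from statistics import NormalDist

def n_for_correlation(r, power=0.80, alpha=0.05):
    """Approximate sample size to detect a correlation of magnitude r
    with the given power, two-sided test at significance level alpha."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_power = nd.inv_cdf(power)           # about 0.84 for power = 0.80
    return ((z_alpha + z_power) / math.atanh(r)) ** 2 + 3

print(n_for_correlation(0.3))  # about 85, matching the pwr.r.test output above
```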
Connected component
How does one know the existence of a connected component (i.e., a maximal connected subset) in a topological space? Put another way, does one need Zorn's lemma to show the existence? (Our connected space article doesn't address this question; maybe it should, unless this is a trivial matter.) A somehow related question: the article says "The components in general need not be open"; true, but this pathology almost never happens in practice, I believe. What guarantees that connected components are open? -- Taku (talk) 23:38, 3 July 2009 (UTC)
- The connected component of the point x is simply the union of all connected subsets containing x. It's not hard to show that this set is connected. Alternatively, the connected components are the equivalence classes of the relation x~y iff there is a connected set containing both x and y. Neither of these characterizations require heavy machinery, and they certainly don't require choice.
- In the rationals, the connected components are not open. Surely Q doesn't count as a pathology? Anyway, the most obvious condition that forces components to be open is that there be only finitely many of them. Algebraist 00:14, 4 July 2009 (UTC)
- Yeah, taking union. Why didn't I think of that. Anyway, thank you. You answered a lot more than I asked :) -- Taku (talk) 01:34, 4 July 2009 (UTC)
- A kind of followup question. This is probably trivial too, but how do we know that there is a connected set containing a given point x at all? (Obvious, say, in a metric space, but in general.) -- Taku (talk) 01:50, 4 July 2009 (UTC)
- {x} Algebraist 02:02, 4 July 2009 (UTC)
- Right :) -- Taku (talk) 02:25, 4 July 2009 (UTC)
- Probably the most marvellously succinct answer I've ever seen given on a refdesk, Algebraist :D Maelin (Talk | Contribs) 03:22, 4 July 2009 (UTC)
Let me expand slightly on Algebraist's (sufficient) response. In a topological space, the components are always closed since the closure of a connected set is connected. Thus Algebraist's remark is justified. Secondly, the perhaps more general requirement for the components of a space to be open, is that the space be locally connected. Local connectedness is both a necessary and sufficient requirement for the components of any open subspace of the space in question to be open.
Similarly, there may be only one connected set containing a point. For instance, in the space of rationals (as mentioned above), there exists no connected subspace having more than one point. Thus each point is contained in precisely one connected subspace.
With regards to the necessity of the axiom of choice/Zorn's lemma to justify the existence of connected components, I think that Taku was comparing the situation with that in ring theory. More specifically, the truth of the idea that any proper ideal in a ring is contained in at least one (proper) maximal ideal requires the application of Zorn's lemma.
To conclude, I think that the word "pathology" has no specific meaning and depends on the context in which it is used. For instance, I do not consider any space to be pathological (because I am a point-set topologist :)), but on the other hand, people may consider the Bug-eyed line to be a pathological example of a manifold. Some people may even consider the Vitali set to be pathological. To take a completely different view point, pathological examples are often the most interesting. In particular, if a theorem does not hold unless one rules out specific pathologies, it is probably less natural than a theorem which holds for a class, including every pathology. --PST 04:40, 4 July 2009 (UTC)
- Ah, that's very nice (and complete solution): locally connected-ness is necessary and sufficient. Thanks a lot. (It's just so much quicker to simply ask than going through topology books myself. Not to mention that the library is closed for the July 4-th). As for Zorn's lemma, actually, I was thinking of cases of irreducible components, since I thought I read somewhere that one uses Zorn's lemma to construct a maximal irreducible subset (i.e., irreducible component]). As for "pathology", I used the term because to me connected components that are not open are counterintuitive. I guess one could say a topological space that is not locally connected is not intuitive: In fact, locally connected has this: "In fact the openness of components is so natural that one must be sure to keep in mind that it is not true in general". -- Taku (talk) 12:51, 4 July 2009 (UTC)
- Careful: local connectedness is not necessary for openness of components. It's necessary for openness of components of every open set. Algebraist 15:03, 4 July 2009 (UTC)
- Right :) so not exactly the complete solution, then. -- Taku (talk) 02:00, 5 July 2009 (UTC)
- Actually, if you read my response carefully, I noted this - "Local connectedness is both a necessary and sufficient requirement for the components of any open subspace of the space in question to be open." Perhaps Taku did not notice the wording because it is easy to miss.
- Yes, I know. Where do you think I cribbed it from? Algebraist 02:44, 5 July 2009 (UTC)
- Taku's statement, "so not exactly the complete solution, then" suggested that he believed that I had not correctly noted the equivalence in my post. I understood that you were addressing Taku (and not me), and I then addressed Taku to clarify the confusion. --PST 11:42, 5 July 2009 (UTC)
July 6
Del
Why is where c is constant and X is the identity function (i.e. X(Xn) = Xn)? 76.67.76.234 (talk) 04:23, 6 July 2009 (UTC)
- I find myself trying to guess what the question means. Your X with an arrow seems to indicate a vector in Rn, and the nabla would usually mean the gradient operator, so I'm thinking
- is the gradient—a vector field—of the scalar-valued function ƒ. If g is the identity function then you'd write
and then
- would be a scalar-valued function, so you could take its gradient, which would be a vector-valued function. But what you'd get is
- And if that's what you meant then your functional equation X(Xn) = X would not make sense. So I'm wondering what you have in mind. Michael Hardy (talk) 05:31, 6 July 2009 (UTC)
- also note , if you want the "2" --pma (talk) 05:56, 6 July 2009 (UTC)
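The "2" alluded to above — the gradient of x·x being 2x — can be checked numerically with central finite differences. A small Python sketch (not from the thread):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def grad(f, x, h=1e-6):
    """Central finite-difference approximation to the gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

f = lambda v: dot(v, v)   # f(x) = x . x = |x|^2
x = [1.0, -2.0, 0.5]
print(grad(f, x))         # approximately [2.0, -4.0, 1.0], i.e. 2x
```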
Isomorphism
I have a small question related to group isomorphism which has been bothering me for some time. My intuition says that isomorphic (normal) subgroups of a group G have the same structure, and so when we take the factor groups we should have the two factor groups isomorphic. Yet this is not true. In Z we have two isomorphic subgroups nZ and mZ, where n ≠ m. But clearly the two factor groups Z/nZ and Z/mZ are not isomorphic. Shouldn't isomorphic structures behave in the same way? Can someone please explain. Thanks--Shahab (talk) 16:23, 6 July 2009 (UTC)
- Your intuition is miscalibrated. nZ and mZ are isomorphic as groups, so their behaviour as groups is the same. But you aren't considering them as just groups: you're considering them as subgroups of Z. They are not isomorphic as subgroups of Z (which would mean there was an automorphism of Z which restricted to an isomorphism between them). If they were, then the quotients would indeed be isomorphic, as one would expect. For an example where this works, consider Z×Z and the subgroups Z×{0} and {0}×Z. These are isomorphic not just as abstract groups but as subgroups, and indeed the quotient is isomorphic to Z in both cases. Algebraist 16:34, 6 July 2009 (UTC)
- In other words, what counts is not only the subgroups being isomorphic objects, but also, the way they are put into the ambient group. You can find plenty of analogous situations in all mathematics. One for all: any simple closed curve is homeomorphic to a circle, but it may be embedded in R3 in many ways. --pma (talk) 17:02, 6 July 2009 (UTC)
- Thank you both--Shahab (talk) 17:30, 6 July 2009 (UTC)
- Another example: Q and Z are isomorphic as sets (they have the same cardinality), but not as subsets of R. R\Q is very different from R\Z (\ is set-theoretic difference). -- Meni Rosenfeld (talk) 13:04, 7 July 2009 (UTC)
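The distinction can be made concrete with a tiny computation (a sketch, not from the thread): 2Z and 3Z are isomorphic as abstract groups via x ↦ 3x/2, yet as subgroups of Z they have different numbers of cosets, so the quotients differ.

```python
# 2Z and 3Z are isomorphic as abstract groups: phi(x) = 3x/2 is an
# addition-preserving bijection from even integers to multiples of 3.
phi = lambda x: 3 * x // 2   # exact for even x
sample = range(-10, 11, 2)   # a window of 2Z
assert all(phi(a + b) == phi(a) + phi(b) for a in sample for b in sample)

# But as subgroups of Z they sit differently: 2Z has 2 cosets in Z while
# 3Z has 3, so the quotients Z/2Z and Z/3Z cannot be isomorphic.
print(len({k % 2 for k in range(100)}))  # 2
print(len({k % 3 for k in range(100)}))  # 3
```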
question about computer library programs for solving partial differential equations
On someone elses behalf, please see Wikipedia:Reference_desk/Science#Simulate_semiconductor - question 2. —Preceding unsigned comment added by 83.100.250.79 (talk) 21:53, 6 July 2009 (UTC)
July 7
Take pity on the statistics illiterate
I have performed a linear regression in Excel on the dataset I collected experimentally (see below), which spat out the following statistics:
slope (mn) = -1.720582243
y intercept (b) = 9.918110731
standard error for slope (SEn) = 0.284159964
standard error for y intercept (SEb) = 0.988586199
coefficient of determination (r2) = 0.148034996
standard error for y estimate (SEy) = 6.635638125
F statistic / F observed value (F) = 36.66275495
degrees of freedom (df) = 211
regression sum of squares (ssreg) = 1614.323183
residual sum of squares (ssresid) = 9290.687293
I have 2 hypotheses (at least I think I do)
- The independent variable and the dependent variable have some positive or negative correlation (slope !=0)
- (null) The independent variable and the dependent variable are not correlated (slope = 0)
For α = .05 I would like to know if I can reject my null hypothesis. I also want to know the likelihood of making a type II error given the below dataset (β = .05? what do you think is appropriate?), but I am not sure if I am asking the question correctly. I have a table here which might be helpful if I knew what to do with it. While it's one thing to get an answer, what I am really interested in is understanding the theory behind it, so that for any similar fit-to-a-line case, given the appropriate statistics, I can confidently decide whether to accept or reject the null hypothesis. (I'm not so interested in how to calculate the numbers as much as how to apply them.) If anyone can direct me to resources that will help me in this endeavor I would appreciate it. I tried reading the WP articles, but I couldn't figure out if and how they are applied to my case, and Simple Wikipedia didn't have all the articles. Also, there might be things I don't know about yet that I likely failed to take into account, like distribution type(?).
One thing about this data set I gave below is that there is a hidden variable (right term?) which is independent of the current independent variable. The dependent variable may also have a correlation with the hidden variable. The problem is that the hidden variable has a non-constant signal to noise ratio which is proportional to another factor (an experimental error which unfortunately cannot be eliminated). What this means is that for any given point, the signal to noise ratio varies but the points are roughly evenly distributed in the below data set. Will this affect anything? As a side question, when I have a data set where the dependent variable has a signal to noise ratio which is proportional to the independent variable what should I do? Is there anything I can do? I appreciate any help. 152.16.15.144 (talk) 02:28, 7 July 2009 (UTC)
Independent variable | Dependent variable |
---|---|
1 | 25.29725806 |
1 | 25.19327419 |
1 | 24.89158065 |
1 | 38.37648387 |
1 | 43.24045161 |
1 | 2.672193548 |
1 | 1.080225806 |
1 | 6.004677419 |
1 | 0.991322581 |
1 | 22.55169355 |
1 | 24.70856452 |
1 | 50.8436129 |
1 | 3.249870968 |
1 | 15.8688172 |
1 | 16.6713871 |
1 | 17.48212903 |
1 | 13.4363871 |
1 | 10.32890323 |
1 | 7.838086022 |
1 | 6.430483871 |
1 | 2.504774194 |
1 | 11.89580645 |
1 | 14.13787097 |
1 | 9.653354839 |
1 | 5.079769585 |
1 | 7.854914611 |
1 | 4.517450572 |
1 | 8.758021505 |
1 | 5.535249267 |
1 | 5.52836129 |
1 | 3.04296371 |
1 | 5.572699491 |
1 | 5.208481183 |
1 | 4.733164363 |
1 | 3.121344086 |
1 | 3.796580645 |
1 | 4.292361751 |
1 | 2.484580645 |
1 | 3.350993548 |
1 | 3.446548387 |
1 | 5.803467742 |
1 | 6.386572581 |
1 | 3.54125 |
2 | 1.534629032 |
2 | 0.114096774 |
2 | 12.69280645 |
2 | 24.59641935 |
2 | 19.31422581 |
2 | 6.393193548 |
2 | -0.471645161 |
2 | 1.459354839 |
2 | -0.400741935 |
2 | 1.429693548 |
2 | 7.022032258 |
2 | 2.890951613 |
2 | 10.23980645 |
2 | 13.70490323 |
2 | 3.858548387 |
2 | 4.442193548 |
2 | 10.66690323 |
2 | 7.737096774 |
2 | 1.286 |
2 | 1.301612903 |
2 | 2.211794355 |
2 | 1.884 |
2 | 1.136105991 |
2 | 3.449322581 |
2 | 1.903182796 |
2 | 1.196236559 |
2 | 3.046072106 |
2 | 4.296149844 |
2 | 5.391397849 |
2 | 1.94026393 |
2 | 4.501032258 |
2 | 1.413891129 |
2 | 7.305297114 |
2 | 2.544771505 |
2 | 3.618387097 |
2 | 1.834032258 |
2 | 3.124204301 |
2 | 2.959930876 |
2 | 0.796467742 |
2 | 0.959883871 |
2 | 1.34433871 |
2 | 5.019489247 |
2 | 3.893682796 |
2 | 2.597610887 |
3 | 16.34480645 |
3 | 0.629580645 |
3 | 1.067532258 |
3 | 28.70645161 |
3 | 0.398258065 |
3 | 0.113 |
3 | 0.454064516 |
3 | 1.075096774 |
3 | 0.860919355 |
3 | 5.275290323 |
3 | 5.302580645 |
3 | 10.04935484 |
3 | 2.613419355 |
3 | 1.395483871 |
3 | 2.874096774 |
3 | 2.744516129 |
3 | 1.996516129 |
3 | 1.063516129 |
3 | 0.785913978 |
3 | 0.612479839 |
3 | 0.501387097 |
3 | 0.494193548 |
3 | 1.653935484 |
3 | 1.316580645 |
3 | 1.122396313 |
3 | 1.871954459 |
3 | 2.647991675 |
3 | 2.802688172 |
3 | 1.325630499 |
3 | 3.578619355 |
3 | 2.20890121 |
3 | 2.052003396 |
3 | 3.66438172 |
3 | 2.255084485 |
3 | 1.416462366 |
3 | 2.288365591 |
3 | 3.04906682 |
3 | 0.893790323 |
3 | 0.556735484 |
3 | 0.863870968 |
3 | 2.55391129 |
3 | 1.700349462 |
3 | 1.069405242 |
4 | 14.23806452 |
4 | 0.657870968 |
4 | -0.110032258 |
4 | -0.245322581 |
4 | 0.582370968 |
4 | 9.728467742 |
4 | 3.702290323 |
4 | 0.858516129 |
4 | 0.522688172 |
4 | 1.476096774 |
4 | 1.709806452 |
4 | 0.40883871 |
4 | 0.558032258 |
4 | 0.931935484 |
4 | 0.47640553 |
4 | 1.337709677 |
4 | 0.699612903 |
4 | 1.503563748 |
4 | 1.905180266 |
4 | 2.2289282 |
4 | 2.988903226 |
4 | 1.814428152 |
4 | 2.907045161 |
4 | 1.17172379 |
4 | 1.383106961 |
4 | 1.688239247 |
4 | 2.451136713 |
4 | 1.517903226 |
4 | 1.823483871 |
4 | 2.026693548 |
4 | 0.405435484 |
4 | 0.760954839 |
4 | 1.009725806 |
4 | 0.976827957 |
4 | 1.04469086 |
4 | 1.079899194 |
5 | 11.59672581 |
5 | -0.581967742 |
5 | 0.082258065 |
5 | 1.232306452 |
5 | 5.375903226 |
5 | 4.49766129 |
5 | 5.74083871 |
5 | 0.425096774 |
5 | 0.667634409 |
5 | 0.71083871 |
5 | 0.37163871 |
5 | 2.051096774 |
5 | 0.717806452 |
5 | 0.574602151 |
5 | 0.964979839 |
5 | 3.095552995 |
5 | 1.451516129 |
5 | 1.398924731 |
5 | 2.345578748 |
5 | 2.306580645 |
5 | 1.026510264 |
5 | 1.158669355 |
5 | 2.976706989 |
5 | 2.725729647 |
5 | 0.975075269 |
5 | 1.706139785 |
5 | 1.394781106 |
5 | 0.439758065 |
5 | 0.622812903 |
5 | 0.860209677 |
5 | 1.160564516 |
5 | 1.003333333 |
5 | 0.938316532 |
6 | 11.6323871 |
6 | 0.313129032 |
6 | 0.734387097 |
6 | 4.214887097 |
6 | 3.015870968 |
6 | 3.516967742 |
6 | 0.530752688 |
6 | 1.127258065 |
6 | 1.198878648 |
6 | 1.397887097 |
7 | 0.7575 |
7 | 4.220516129 |
7 | 1.431096774 |
8 | 4.253354839 |
- The low value of your coefficient of determination, 0.148034996, tells you that the dependent variable and the independent variable have very little correlation i.e. the data is a very poor fit to its regression line. You would need a coefficient of determination much closer to 1 before you could conclude that there was any significant correlation. Actually, if you plot the data you can tell by eye that there is almost no correlation - the statistics just give a quantitative confirmation of this. Without a coefficient of determination close to 1, the slope and y intercept values are meaningless - a linear regression line can be calculated for any set of data, but that does not mean that the data points will lie close to the line.
- It appears that the "noise" from experimental error is swamping any effect that you are trying to detect. You could perhaps try to filter out this noise by looking at the mean of the dependent variable values for each value of the independent variable, 1 to 8, and see if there is any correlation in this set of mean values. You probably need to get some more data points for values 7 and 8. Gandalf61 (talk) 09:10, 7 July 2009 (UTC)
- I'm not sure a correlation of -0.385 is that low; it depends entirely on context. Spearman's rank correlation is -0.515. The p-value for testing either against the null hypothesis of zero correlation is <0.0001. But I'd agree it's useful (essential, even) to plot the data, and plotting the mean, or the median, of the y values for each value of x is a good idea. Both the mean and the median clearly decrease over the first 3 or 4 values of x, but for larger x it's pretty flat (though it's hard to be sure for x>5 as the number of observations decreases rapidly). That shows that the straight-line fit from linear regression is unlikely to be appropriate. Also the variance decreases strongly with increasing x, which again means linear regression is unlikely to be appropriate (at least not without some transformation). What is appropriate depends a lot on your scientific objectives, which you don't tell us.
- Wikipedia isn't really designed for learning basic statistics, and there's not much on Wikibooks or Wikiversity about regression as yet. There are a few external links at the end of the statistics article that might be useful. Neither would I recommend doing statistical analysis in Excel; there are plenty of statistical packages available, quite a few at no cost. Qwfp (talk) 17:23, 7 July 2009 (UTC)
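- For anyone wanting to reproduce this kind of check themselves, the quantities discussed above are easy to compute without Excel. A minimal sketch in plain Python (the toy data and function names are my own, purely for illustration; this is not the questioner's data):

```python
from collections import defaultdict
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient; r**2 is the
    coefficient of determination for a simple linear fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

def group_means(xs, ys):
    """Mean of y for each distinct x value - the noise-filtering
    plot suggested above."""
    groups = defaultdict(list)
    for x, y in zip(xs, ys):
        groups[x].append(y)
    return {x: sum(v) / len(v) for x, v in sorted(groups.items())}

# Toy data for illustration only:
xs = [1, 1, 2, 2, 3, 3, 4, 4]
ys = [5.0, 7.0, 4.0, 5.0, 2.5, 3.5, 2.0, 3.0]
r = pearson_r(xs, ys)
print("r =", round(r, 3), " r^2 =", round(r * r, 3))
print(group_means(xs, ys))
```

Plotting the group means against x, rather than relying on r^2 alone, is what reveals whether the trend flattens out for large x.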
Probability in card games
What is the probability that one will win "money" in Microsoft's basic Solitaire program, a version of the Klondike card game? In this game, one pays $52 at the beginning of a game and earns $5 per card stacked in the four piles on the uppermost level, so to win "money" one must put up at least eleven cards. I realise that the odds of winning the game I describe are altogether unknown, so I'm not asking for that. Nyttend (talk) 04:18, 7 July 2009 (UTC)
- I have no idea how to find that (without prohibitive computational cost), but just to clarify - are you assuming perfect play or something else? Is the strategy optimized for winning the game or for having a positive balance? -- Meni Rosenfeld (talk) 09:35, 7 July 2009 (UTC)
- Sorry, but I'm not familiar with "perfect play" or the strategy details that you ask. I mean playing it strictly as the computer game is generally set up: one card is drawn at a time, and the player knows only the cards that are face-up in this picture. Unlike in this picture, which depicts something similar to Microsoft's "Standard" mode that counts points and permits reshuffling, I'm asking about Microsoft's "Vegas" mode, which does not permit reshuffling. Nyttend (talk) 13:13, 7 July 2009 (UTC)
- The game involves a player who needs to decide which moves to play. The outcome of the game depends on the choices the player takes. You can't ask "what is the probability of such-and-such outcome" without any reference to how the player chooses his moves.
- "Perfect play" means just that - the player chooses, at every step, the best move; the one that maximizes whatever it is the player wishes to maximize. The player can choose to shoot for having the highest probability possible of winning the game fully. The best move would then be the one that maximizes the probability of winning. Alternatively, the player can choose to maximize the probability of finishing the game with a positive amount of cash.
- Of course, unless the game is trivial, humans can't play perfectly, and neither can contemporary computers. So you'll have to be clear about how exactly the player should act. A systematic way to choose a move in any given situation is called a strategy.
- You can also ask what is the probability that for a random game, there will be some sequence of moves which results in positive cash. Note that this sequence of moves absolutely cannot be found in actual play, because the player knows the location of only some of the cards.
- For more background information about this you should read up on game theory, noting that you have here a single-player, non-deterministic game with imperfect information, in extensive form.
- If you have an algorithm which passes for a strategy, the best way to estimate the probability you want is by running a simulation. -- Meni Rosenfeld (talk) 16:32, 7 July 2009 (UTC)
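- To illustrate that last point: a full Klondike engine is far too long to give here, but the Monte Carlo harness itself is tiny and is the same for any strategy. In the hypothetical sketch below, the stand-in "game" is simply whether the top card of a shuffled deck is an ace (chosen only because its exact probability, 4/52, lets you sanity-check the harness); a real simulation would replace play_stub_game with a function that plays one dealt Vegas-mode game under the chosen strategy and reports whether it finished in the black.

```python
import random

RANKS = "A23456789TJQK"
SUITS = "SHDC"
DECK = [r + s for r in RANKS for s in SUITS]

def play_stub_game(rng):
    """Stand-in for a real Klondike engine plus strategy.
    'Wins' iff the top card of a shuffled deck is an ace,
    whose exact probability is 4/52."""
    deck = DECK[:]
    rng.shuffle(deck)
    return deck[0][0] == "A"

def estimate_win_probability(play, trials=200_000, seed=0):
    """Monte Carlo estimate of P(win) for any one-game strategy."""
    rng = random.Random(seed)
    wins = sum(play(rng) for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    p = estimate_win_probability(play_stub_game)
    print(f"estimated {p:.4f}, exact {4/52:.4f}")
```

The standard error of such an estimate shrinks like 1/sqrt(trials), so a few hundred thousand simulated games pin the probability down to a few tenths of a percent.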
- From my OR with that game, I would say that you cannot give a probability that you will win $ without giving the number of games you will be playing. Your odds of a positive balance after 1 game will be radically different from the odds of a positive balance after 100 games. I strongly suspect that this game has a losing expectation, and your best bet is to not play. 65.121.141.34 (talk) 13:57, 7 July 2009 (UTC)
- Nyttend probably means that one game is played. -- Meni Rosenfeld (talk) 16:32, 7 July 2009 (UTC)
- It is going to depend on your skill level, though. I just played 3 games and made a profit in one of them (and a net loss overall). That suggests (rather imprecisely) that the probability of me making a profit is 33%. I am quite certain I am not the best solitaire player in the world, so I expect better players have a significantly higher probability, and very likely a positive expectation (I made a net loss of about $40, so just three more cards per game would give me a net profit). --Tango (talk) 18:33, 7 July 2009 (UTC)
- Thanks for the input! I failed to say 'assuming that I make what is always the best decision, given what I know', and simply trying to get as much "money" as possible. Of course, decisions could ultimately be bad despite looking good (getting $5 for a ♠4 could be bad because it doesn't give me anywhere to put a later-appearing ♥3), so I just meant 'as good as I could know'. Can't find a source for this right now, but I believe that it's named Klondike for an enterprising storekeeper during the Klondike Gold Rush, who discovered that the average card game loses money. Thus I play on the computer, not with real money :-) Nyttend (talk) 20:14, 7 July 2009 (UTC)
- By the way, I should note — I left this comment here because I remember reading in a junior high school textbook that probability theory was first developed because of card games. How is it possible to have probability for some card games but not for this? I can't imagine a card game where human decisionmaking isn't a significant part of the game. Nyttend (talk) 20:16, 7 July 2009 (UTC)
- Ok, so we're talking about perfect play, but "trying to get as much 'money' as possible" is still not well-defined enough. Are you trying to maximize the expectation of the money, the probability of having positive money after one game, or something else? Note that the expected gain being positive has little to do with the probability of positive balance being greater than half (mean vs. median).
- I find it unlikely that the enterprising storekeeper would have found a perfect strategy and calculated its expectation, so I'm guessing he measured the average of a typical player. Of course, "a typical player" is far from being rigorously defined, so not much can be said about it math-wise.
- I can't answer the last question fully, but I think it mostly has to do with the number of decisions involved and the choice of which probability to try computing. -- Meni Rosenfeld (talk) 20:56, 7 July 2009 (UTC)
- Of course on the storekeeper; I'm sure that it was just quickly obvious that most games got less than eleven cards put back up. As far as my goal — I want to get at least eleven cards up, so that I "make money": of course, more is better, but this question is meant to ask "what percentage [or proportion] of games could be expected to result in any gain?" Nyttend (talk) 22:42, 7 July 2009 (UTC)
- You're still not very clear. The goal 'maximize the chance of making money in one game' may not be consistent with 'more is better'. Which do you actually mean? Algebraist 23:12, 7 July 2009 (UTC)
- I'm saying that my ultimate goal is to make any "money": therefore, if I can put up 11 cards, I'll not run any risks afterward. If I wanted to maximise my "money", I might put down the previously-mentioned ♠4 so as to make a place for the later-appearing ♥3, in hopes that I might use that card later; since I don't want to run any risks, I'll not consider doing that. However, I'm not simply going to stop the game once I get my eleventh card: if a card appears that I could put up as the twelfth, I'll not leave it down simply because I have eleven and don't care anymore. In short — although I'll keep trying to make money after I get into the black, I'll not do anything that puts me back into the red. I brought this up to make it clear that my behaviour after getting the eleventh card is (as far as I can see) irrelevant to the original question. Nyttend (talk) 00:18, 8 July 2009 (UTC)
Inverse of a Matrix Mod 2
I have a square matrix with binary entries and I want to find its multiplicative inverse mod 2. I think (and correct me if I am wrong) this means finding the inverse using operations mod 2, so every time I add or multiply, I need to do those operations using mod 2 arithmetic. The problem is that my matrices are large, such as 9x9 or 16x16, so doing them by hand is not even an option. My question is: is there some way to do this in MATLAB or Mathematica? Are there any built-in functions, packages, or programs/m-files that I can use to do this quickly in either program? What about PARI? I know very little about PARI, so if someone knows how to do it in PARI, any help with the commands will be appreciated. Thanks! -Looking for Wisdom and Insight! (talk) 08:37, 7 July 2009 (UTC)
- Why not just take the inverse and reduce mod 2? 98.210.252.135 (talk) 08:46, 7 July 2009 (UTC)
Because the inverse under the usual addition and multiplication may not even have ANY integer values so taking mod 2 doesn't even make sense.-Looking for Wisdom and Insight! (talk) 08:58, 7 July 2009 (UTC)
- But if the matrix is invertible mod 2, the denominators will all be odd (since they must divide the determinant), so those just go away when you reduce mod 2. Alternatively, you can multiply through by the determinant to get the adjugate and clear the fractions first. But the suggestion below sounds better anyway. 98.210.252.135 (talk) 17:13, 7 July 2009 (UTC)
- In Mathematica, where A is your matrix:
Inverse[A, Modulus -> 2]
- Obviously there are also other ways to do this (e.g., using the adjugate of A taken mod 2, which is not efficient). -- Meni Rosenfeld (talk) 09:28, 7 July 2009 (UTC)
You can always use straightforward Gaussian elimination in the finite field GF(2). 208.70.31.206 (talk) 02:46, 8 July 2009 (UTC)
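To sketch the Gaussian-elimination suggestion concretely (a minimal implementation in Python; the function name is my own), one row-reduces the augmented matrix [A | I] with all arithmetic mod 2, where subtracting one row from another becomes XOR:

```python
def inverse_mod2(A):
    """Invert a square 0/1 matrix over GF(2) by Gauss-Jordan
    elimination on [A | I]. Raises ValueError if A is singular mod 2."""
    n = len(A)
    # Build the augmented matrix [A | I], reducing entries mod 2.
    M = [[A[i][j] % 2 for j in range(n)] + [int(i == j) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a pivot row with a 1 in this column.
        pivot = next((r for r in range(col, n) if M[r][col]), None)
        if pivot is None:
            raise ValueError("matrix is singular mod 2")
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate this column from every other row; over GF(2),
        # row subtraction is bitwise XOR.
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    # The right half of the reduced matrix is the inverse.
    return [row[n:] for row in M]

A = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
print(inverse_mod2(A))  # [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
```

This is O(n^3), so 9x9 or 16x16 matrices are instantaneous; a production version would pack rows into machine-word bitsets, but that is unnecessary at this size.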
Understanding the basic quadratic formula proof
I was trying to follow the proof for the quadratic formula by the completing of the square method. I can see how x^2 + (b/a)x + c/a = 0 is matched with x^2 + 2hx, so that 2h = b/a, but I don't see how h = b/2a. My reasoning would be that you would divide both sides by 2a, and when you divide by a fraction you invert the denominator, so I get a different value for h. Where am I going wrong, if at all? --Dbjohn (talk) 19:12, 7 July 2009 (UTC)
- Where did 2a come from? If 2h = b/a, then h = b/2a. Algebraist 19:21, 7 July 2009 (UTC)
Okay, I see where I went wrong; my mind was getting mixed up with someone else's explanation that used different letters. Anyway, the mistake I was making is that you need to get "h" alone and find out what h equals, and h = b/2a. Okay, I can follow the rest of it now.--Dbjohn (talk) 20:49, 7 July 2009 (UTC)
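For anyone following along, the step in question is just that x^2 + (b/a)x matches the pattern x^2 + 2hx when 2h = b/a, so one divides by 2 (not by 2a) to get h = b/(2a). The standard derivation in full:

```latex
\begin{aligned}
ax^2 + bx + c &= 0 \qquad (a \neq 0)\\
x^2 + \frac{b}{a}x &= -\frac{c}{a}\\
% completing the square: x^2 + 2hx = (x+h)^2 - h^2 with h = b/(2a)
\left(x + \frac{b}{2a}\right)^{2} &= \frac{b^2}{4a^2} - \frac{c}{a}
  = \frac{b^2 - 4ac}{4a^2}\\
x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{aligned}
```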
Solving the differential equation
What is the geometrical shape of a curve (in a plane) which would gather parallel beams of light (for example) in a single point? Trying to answer this question, I get to this differential equation (assumptions are: the light beams are parallel to the x-axis, are coming from +inf, and would be focused in A(p,0)):
y(1 - (y')^2) = 2(x - p)y', with initial condition y(0) = 0.
Although I know the parabola of the form y^2 = 4px is an answer, I don't know how to solve this equation and whether there is another solution to it or not. Re444 (talk) 22:38, 7 July 2009 (UTC)
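Whatever the exact form of the differential equation, the focusing property itself can be checked numerically. A small sketch in Python (my own normalization: the parabola y^2 = 4px, whose focus is the point A(p,0)); it reflects an incoming horizontal ray about the tangent line at a point of the curve and verifies that the reflected ray is aimed at the focus:

```python
from math import isclose

def reflect(d, t):
    """Reflect direction vector d about the line with direction t."""
    dot_dt = d[0] * t[0] + d[1] * t[1]
    tt = t[0] ** 2 + t[1] ** 2
    return (2 * dot_dt * t[0] / tt - d[0],
            2 * dot_dt * t[1] / tt - d[1])

def focuses(p, y0):
    """Check that a ray parallel to the x-axis, hitting y^2 = 4px at
    height y0, reflects through the focus (p, 0)."""
    x0 = y0 ** 2 / (4 * p)           # point on the parabola
    tangent = (1.0, 2 * p / y0)      # since dy/dx = 2p / y there
    ray = reflect((-1.0, 0.0), tangent)  # incoming ray travels in -x
    # Reflected direction is parallel to the vector from (x0, y0)
    # to (p, 0) iff their 2D cross product vanishes.
    cross = ray[0] * (0 - y0) - ray[1] * (p - x0)
    return isclose(cross, 0.0, abs_tol=1e-9)

print(all(focuses(p, y0) for p in (0.5, 1.0, 2.0) for y0 in (0.3, 1.0, -2.5)))
```

Working the cross product out symbolically shows it is identically zero for every p > 0 and y0 != 0, which is the classical focusing property of the parabolic mirror.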
July 8
Raise a matrix to a matrix power
Can a matrix be raised to the power of another matrix? NeonMerlin 02:38, 8 July 2009 (UTC)
- I have never heard of such an operation, and I can't yet think of any useful way of giving meaning to it. Why do you ask? Algebraist 02:41, 8 July 2009 (UTC)
- Well, A^n obviously makes sense for matrix A and integer n, and there is a matrix exponential exp(A) defined as the obvious power series in A, so why not A^B = exp(B log A) for some suitable "log A" that comes out the right shape? 208.70.31.206 (talk) 02:50, 8 July 2009 (UTC)
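- A hedged sketch of that suggestion in Python with NumPy (function names are my own; the truncated series used here converge only when A is close enough to the identity, and "log A" is multi-valued in general, so A^B = exp(B log A) is one possible convention rather than a standard definition):

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via its Taylor series (fine for small norms)."""
    S = np.eye(len(M))
    T = np.eye(len(M))
    for k in range(1, terms):
        T = T @ M / k
        S = S + T
    return S

def logm(M, terms=80):
    """Matrix logarithm via log(I + X) = sum (-1)^(k+1) X^k / k,
    valid when the spectral radius of X = M - I is below 1."""
    X = M - np.eye(len(M))
    S = np.zeros_like(X)
    P = np.eye(len(M))
    for k in range(1, terms):
        P = P @ X
        S = S + ((-1) ** (k + 1)) * P / k
    return S

def matrix_power_matrix(A, B):
    """A^B := exp(B log A), one way to give the expression meaning."""
    return expm(B @ logm(A))

# Sanity check on diagonal matrices, where A^B should reduce to
# entrywise powers: diag(a1, a2)^diag(b1, b2) = diag(a1^b1, a2^b2).
A = np.diag([1.2, 0.8])
B = np.diag([2.0, 3.0])
print(matrix_power_matrix(A, B))  # ~ diag(1.44, 0.512)
```

On the diagonal example this reduces to 1.2^2 = 1.44 and 0.8^3 = 0.512, which the series reproduce; for non-commuting A and B the definition still makes sense but loses familiar identities like A^(B+C) = A^B A^C.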