Wikipedia:Reference desk/Mathematics: Difference between revisions
→secure hash: new section

Hopefully she's impressed <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/69.165.137.230|69.165.137.230]] ([[User talk:69.165.137.230|talk]]) 17:08, 23 October 2010 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->

::If she knows calculus, she will quickly spot your naive error: your assumption that "any complicated math is calculus." Once you are outed thusly, she will see through your charade and will probably be unimpressed by you (and will likely consider your stunt a poor indicator of character). To avoid this eventuality, you should educate yourself first by reading our articles on [[algebra]] and [[calculus]] (and your textbooks) and work out as many practice problems as you can. Girls (or guys) who would be impressed by ''[[wikt:pseudointellectual|pseudo-calculus]]'' are of dubious quality. You should stick to ''actually knowing'' things; this will impress everybody, including those girls who ''actually know calculus''. [[User:Nimur|Nimur]] ([[User talk:Nimur|talk]]) 18:03, 23 October 2010 (UTC)

== secure hash ==

how to securely hash any two numbers each over 100 digits long into a number between 1 - 10 with equal distribution/probability. The easier the hash is to explain/follow, the better. this is not homework. thank you! [[Special:Contributions/84.153.221.42|84.153.221.42]] ([[User talk:84.153.221.42|talk]]) 18:41, 23 October 2010 (UTC)
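The question received no reply in this revision, but one standard construction (a sketch of my own, not from the thread; the function name is hypothetical) is to hash a canonical encoding of the pair with a cryptographic hash such as SHA-256 and reduce the result mod 10. Because 2^256 is not a multiple of 10 the distribution is not exactly uniform, but the bias is on the order of 10/2^256, which is negligible for any practical purpose.

```python
import hashlib

def hash_to_1_10(a: int, b: int) -> int:
    # Encode the pair unambiguously (the ":" separator keeps (12, 3) distinct
    # from (1, 23)), hash with SHA-256, and reduce the 256-bit digest to 1..10.
    data = f"{a}:{b}".encode()
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") % 10 + 1

# Two hypothetical 100+-digit inputs
print(hash_to_1_10(10**100 + 7, 10**120 + 11))
```

The output is deterministic for a given pair, and for attacker-chosen inputs predicting or steering the bucket is as hard as breaking SHA-256, which is what "securely" usually demands here.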
Revision as of 18:41, 23 October 2010
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
October 17
human values
what are the human values that we learn through maths?????????? —Preceding unsigned comment added by Chharish775 (talk • contribs) 04:08, 17 October 2010 (UTC)
- Are you asking about the philosophy of mathematics? One subtopic is whether mathematics is invented or discovered. This is a subject of Paul Ernest's work. -- 124.157.218.5 (talk) 05:01, 17 October 2010 (UTC)
- sorry kid, not much. —Preceding unsigned comment added by 92.224.205.247 (talk) 12:42, 17 October 2010 (UTC)
- Agree with that. However you can learn a lot about human values using maths. For instance by sorting the Wikipedia articles by page views we get a good indication that humans are more interested in film actresses than the United States, and that Facebook is about as important as sex. Maths hardly rates in people's consciousness though Albert Einstein, Isaac Newton and Stephen Hawking seem to be reasonably famous. And using statistics and game theory you can work out to some extent what people really value as opposed to what they say is important. Dmcq (talk) 16:11, 17 October 2010 (UTC)
- It could be said that doing mathematics teaches a person to think rationally, to make logical deductions, to assess facts neutrally, to be patient and assiduous in one's work, etc. From this perspective, many human values could be learned through mathematics. —Anonymous DissidentTalk 20:59, 17 October 2010 (UTC)
- It's also obviously beautiful. Michael Hardy (talk) 00:36, 18 October 2010 (UTC)
- For me, the importance of rigor, and the value of deductive reasoning in discovering truth. 67.158.43.41 (talk) 08:28, 18 October 2010 (UTC)
- Many conflicts can be easily resolved if you understand the isomorphisms between your situation and the other side's. Also humility - in mathematics, it makes much less sense to pretend to understand something you don't than in many other occupations.
- So I disagree with 92 and Dmcq, and also with G. H. Hardy who I suspect would have said none and been proud of it. -- Meni Rosenfeld (talk) 11:39, 18 October 2010 (UTC)
October 18
Displacement → Velocity
If you have a time-dependent displacement vector, how do you get the velocity vector?
Consider for example the time dependent displacement vector
115.178.29.142 (talk) 03:07, 18 October 2010 (UTC)
- If you're asking this, you probably know that the derivative is the slope of a curve at some point, i.e., the rate of change of a function at that point. If your function is displacement, what's another term for rate of change of position? Feezo (Talk) 03:26, 18 October 2010 (UTC)
- Take a look at the velocity vector article. — Fly by Night (talk) 08:53, 18 October 2010 (UTC)
trouble with differentiating something using only first principles
Hi all,
I have this function f(x) = cos (sin x) and I am told to differentiate it using first principles only. That means that I cannot bring in the chain rule and I will have to go through the different trig formulas.
I tried doing this but I had trouble arriving at the answer that I want (-sin (sin x) cos x). Can somebody please teach me how to do this?
Thanks a bunch! —Preceding unsigned comment added by 164.67.84.95 (talk) 05:40, 18 October 2010 (UTC)
- We have f(x + h) = cos(sin(x + h)) = cos(sin x cos h + cos x sin h).
- For the derivative, we take the difference quotient [cos(sin(x + h)) − cos(sin x)] / h.
- As A goes to zero, cos(A) goes to 1 and sin(A) goes to A; we apply this for h, to get: [cos(sin x + h cos x) − cos(sin x)] / h.
- Expanding again, we have cos(sin x + h cos x) = cos(sin x) cos(h cos x) − sin(sin x) sin(h cos x).
- As h goes to 0, h cos(x) goes to 0, so that cos(h cos(x)) goes to 1 and sin(h cos(x)) goes to h cos(x): the quotient becomes [cos(sin x) − h cos(x) sin(sin x) − cos(sin x)] / h = −sin(sin x) cos x,
- which is the result we were looking for. —Anonymous DissidentTalk 06:36, 18 October 2010 (UTC)
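The result −sin(sin x) cos x can be sanity-checked numerically by comparing it against a symmetric difference quotient (this is an editorial illustration, not part of the original thread):

```python
import math

def f(x):
    return math.cos(math.sin(x))

def derivative_claimed(x):
    # The result derived above: f'(x) = -sin(sin x) * cos x
    return -math.sin(math.sin(x)) * math.cos(x)

def derivative_numeric(x, h=1e-6):
    # Symmetric difference quotient approximates the limit as h -> 0
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [0.0, 0.5, 1.3, 2.7]:
    assert abs(derivative_claimed(x) - derivative_numeric(x)) < 1e-6
print("derivative check passed")
```

The agreement to six decimal places at several sample points is strong evidence the algebra above is right.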
- Another way (depending on precisely which first principles are allowed) is to convert the trig formulas into exponential ones using Euler's formula. This is basic if you take the usual Taylor series for sin, cos, and the exponential function as their definitions, since Euler's formula is an immediate corollary of these definitions. Wolfram Alpha can do these manipulations (see the "alternate form" entry). From there you have a couple of terms of the form c·e^(inx). You'll require the sum of differentiable functions to be the differential of the sum, and scaling functions scales the derivative, both of which are arguably very basic, and, if not, their proofs are very short. You can compute the derivative of these terms directly. First, there's the approximation
- e^(nh) = 1 + nh + ...,
- where the ...'s give terms with the degree of n and h at least 2. Thus (e^(n(x + h)) − e^(nx)) / h = e^(nx)·(e^(nh) − 1) / h = e^(nx)·(n + ...), so
- d/dx e^(nx) = n·e^(nx).
- Now you have the desired derivative in terms of exponentials. If you want, you can convert back to trig, most likely using very mundane transformations, which will give the usual result. IMO the advantage of this method over the above is that you get as a lemma the derivative of e^(nx) and you avoid some of the more hand-wavey "function goes to V as h goes to 0" arguments (though not all, particularly the initial approximation above--which does, however, fit perfectly with the intuition behind derivatives). The disadvantage is that it's less basic, and less direct. —Preceding unsigned comment added by 67.158.43.41 (talk) 09:28, 18 October 2010 (UTC)
Thank you all! Really appreciate your help! —Preceding unsigned comment added by 164.67.84.28 (talk) 15:19, 18 October 2010 (UTC)
- Note the connection with #The limit of sin x below. Specifically, Anonymous Dissident's "As A goes to zero, cos(A) goes to 1 and sin(A) goes to A" is a subtler statement than it appears. He knows that h is linear in the denominator, so he is expanding sin and cos to first order in their argument. -- 124.157.218.53 (talk) 17:58, 21 October 2010 (UTC)
Statistics
Four random variables were cast 5 times, and these were the results:
X | 1 | 2 | 3 | 4 | 5 | 6 |
---|---|---|---|---|---|---|
A | 96 | 98 | 100 | 97 | 96 | ? |
B | 26.1 | 24.9 | 27.1 | 24.2 | 21.8 | ? |
C | 21.7 | 21.9 | 24.5 | 19.8 | 20.6 | ? |
D | 20.7 | 21.0 | 21.9 | 19.1 | 20.2 | ? |
NOTE THAT THEY ARE CORRELATED. What is the probability, on the sixth trial, that A > B + C + D + 44.6 ?
--220.253.253.75 (talk) 09:58, 18 October 2010 (UTC)
- With so few samples, and absent any information about the process that created the values, it seems to be futile to try to model their interrelations. I'd just compute X = A − B − C − D for each column, assume (for want of a better option) that this was normally distributed, and estimate the mean and variance. –Henning Makholm (talk) 15:53, 18 October 2010 (UTC)
If we could actually assume a normal distribution, maybe we could say something. Michael Hardy (talk) 19:11, 18 October 2010 (UTC)
The condition (A > B + C + D + 44.6) was true 0 times and false 5 times. The probability, on the sixth trial, that A > B + C + D + 44.6, is 1/7. Bo Jacoby (talk) 21:39, 18 October 2010 (UTC).
- That reasoning would seem to imply that we should expect that A > B + C + D + googol on the next try with probability 1/7. That does not look like a useful estimate to me. Do you have any particular statistical model in mind for your calculation? –Henning Makholm (talk) 22:41, 18 October 2010 (UTC)
- You are right. You sketched a sophisticated model but provided no estimate. The question was if A > B + C + D + 44.6, not if A > B + C + D + googol. The statistical model for my calculation is this. Let the number of items in the population be N, the number of items in the sample be n, the number of true answers in the population be K, and the number of true answers in the sample be k. The conditional probability for K knowing N and n and k is
- P(K | N, n, k) = P(k | N, n, K)·P(K) / Σ_K P(k | N, n, K)·P(K),
- where
- P(k | N, n, K) = C(K, k)·C(N − K, n − k) / C(N, n)
- is the hypergeometric distribution, and the prior probabilities are uniformly P(k) = 1/(n + 1) and P(K) = 1/(N + 1)
- for k = 0..n and K = 0..N. So the general formula is
- P(K | N, n, k) = C(K, k)·C(N − K, n − k) / C(N + 1, n + 1).
- Substituting k = 0, N = 6, n = 5 you get the results P(K = 0) = 6/7 and P(K = 1) = 1/7, so the probability that the sixth trial satisfies the condition is 1/7.
- Bo Jacoby (talk) 06:21, 19 October 2010 (UTC).
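Bo's posterior can be computed exactly with a few lines of Python (an editorial sketch; the formula P(K | N, n, k) = C(K, k)·C(N − K, n − k) / C(N + 1, n + 1) is the one stated in his comment):

```python
from math import comb
from fractions import Fraction

def posterior_K(N, n, k):
    # P(K | N, n, k) = C(K, k) C(N - K, n - k) / C(N + 1, n + 1),
    # nonzero only for K = k .. k + (N - n). Exact arithmetic via Fraction.
    denom = comb(N + 1, n + 1)
    return {K: Fraction(comb(K, k) * comb(N - K, n - k), denom)
            for K in range(N + 1)
            if comb(K, k) * comb(N - K, n - k) > 0}

post = posterior_K(N=6, n=5, k=0)
# With k = 0 successes in 5 of 6 trials, the chance the unseen trial
# is a success equals P(K = 1).
print(post)
```

Running this gives P(K = 0) = 6/7 and P(K = 1) = 1/7, matching the 1/7 quoted above (and Laplace's rule of succession, (k + 1)/(n + 2)).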
- My last sentence was unproductively snarky and should have been left out, sorry. What I mean is that by reducing the observations to a yes-no question you're throwing away information that intuitively ought to be useful, which is cruder than I think the OP was expecting (and had some right to expect). –Henning Makholm (talk) 06:50, 19 October 2010 (UTC)
- Yes, we should use more information to estimate the probability. The sad fact is that none of us did. I hoped that someone smarter than me would provide the ultimate answer, but that has so far not happened. The expression X=A−B−C−D assumes the 5 values X=(27.5 30.2 26.5 33.9 33.4), which are obviously all less than 44.6. All we need to know is the number of values (S0=5), their sum (S1=151.5) and the sum of squares (S2=4635.31). Let's assume that the numbers are samples from a normal distribution. In order to compute the distribution for the unknown parameters (μ,σ | S0, S1, S2) we need the known distribution (S1, S2 | S0, μ, σ) but we also need prior distributions for μ and σ.
- But uniform prior distributions are not defined as continuous normalized probability distributions. So here I am stuck. I think that the standard way of approaching the problem is to use the Student's t-distribution. The section Prediction interval#Unknown mean, unknown variance treats the traditional approach to this problem. Not being an expert on frequentist statistics, I am not the right person to pursue this task. Bo Jacoby (talk) 18:39, 19 October 2010 (UTC).
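The prediction-interval route mentioned above can be sketched numerically (an editorial illustration; SciPy is assumed available, and the normality assumption is exactly the one flagged as questionable in the thread):

```python
import math
from scipy import stats

x = [27.5, 30.2, 26.5, 33.9, 33.4]   # observed values of X = A - B - C - D
n = len(x)
mean = sum(x) / n                     # 30.3
s = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))  # sample std dev

# For a new observation from the same normal distribution,
# (X_new - mean) / (s * sqrt(1 + 1/n)) follows a t distribution with n-1 df.
t_stat = (44.6 - mean) / (s * math.sqrt(1 + 1 / n))
p = stats.t.sf(t_stat, df=n - 1)     # P(X_new > 44.6), i.e. P(A > B + C + D + 44.6)
print(round(p, 4))
```

Under these assumptions the probability comes out a bit under 1%, far smaller than the 1/7 from the yes/no counting argument, because it uses how far below 44.6 the five observations actually sit.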
how to refute someone's life work nicely
I already have sponsorship based on my first draft and am almost done typesetting my paper, but how can I pull the trigger on someone, now 70, whose name is intimately associated with what I just disproved? There have been doubts, to be sure; his work, while having great mind share, has been coming apart seriously at the seams. But it is still a bedrock, and now I'm about to publish a disproof.
How can I do that humanely? It can only be a terrible feeling for him if he reads my paper. What have I ever done that is equivalent to providing a leading model in a discipline for decades, and being synonymous with that model? Nothing. What right do I have to do that to a man's life's work? Should I even do it? 93.186.23.237 (talk) 17:56, 18 October 2010 (UTC)
- Not to worry. Stick to the facts and keep it non-personal. If you're right, the blame belongs to truth itself and you're only a messenger; if you're wrong, the face with egg on it will be yours anyway.
- By the way, if whatever you're "disproving" is something people widely consider to be proved, rather than just a conjecture, it would be prudent to adopt a humbler attitude to the possibility that your disproof, and not the original proof, is the one with a subtle flaw not visible to its creator. –Henning Makholm (talk) 18:42, 18 October 2010 (UTC)
- Important proofs that last a long time but then are disproved are quite interesting - sometimes more so than the original proof was! But yes I wouldn't count my chickens and I'd keep it non-personal. I don't think you need worry much and they might like a copy to check out themselves. Dmcq (talk) 19:35, 18 October 2010 (UTC)
- Write the person in question requesting his comments on your work. Then the two of you are working together in finding the truth rather than fighting one another in order to be right. Bo Jacoby (talk) 21:18, 18 October 2010 (UTC).
- Agreed - contact the person, try to enlist their help. Even if they aren't interested in helping you, at least you have had the courtesy to let them know what you are planning to publish, so they hear it from you and not from a third party. And if you believe you have a disproof or counterexample to an open conjecture, then the originator of the conjecture should find this just as interesting as a proof. Gandalf61 (talk) 12:52, 19 October 2010 (UTC)
- Notice that your use of the word "disproof" automatically got a number of the commenters to assume that you have found an error in a claimed proof. If you mean that you have given a negative answer to an open question put by this person, don't worry; this is reasonably common, and does not at all crush a person in the view of colleagues. Important conjectures stimulate mathematical progress; and both negative and positive answers may stimulate further progress. (For instance, why did it take so long to find a counterexample? Is the statement true under some circumstances, but not under others? What is the most natural way to frame the conditions under which the statement is true?)
- Please remember that a person is not proven to be in error because an open question put by that person has been answered in the negative! If (s)he has made a clear conjecture, i.e., declared him/herself to guess it to be true, then that guess is proved to be wrong; that's all. JoergenB (talk) 18:56, 19 October 2010 (UTC)
The limit of sin x
In calculus textbooks it is often asserted that the limit of sin h as h tends to zero is h, so that for small h, sin h ≈ h. But the limit of h as h tends to zero is zero. Why then is dividing sin h by h meaningful as h tends to 0, whilst 0/0 is not meaningful? jftsang 19:48, 18 October 2010 (UTC)
- The notation "lim_{h→0} sin h = h" is formally meaningless. It is just a sloppy shorthand for lim_{h→0} (sin h)/h = 1. You're allowed to divide by h because the expression inside the lim operator is only ever required to make sense for small but nonzero h.
- You may have a rule that says lim (f(h)/g(h)) = (lim f(h)) / (lim g(h)). This is not a definition of what the left-hand side means; just a useful short-cut for calculating it that can be used iff the right-hand side makes sense (that is, the limits exist and lim g(h) ≠ 0). If these premises are not true, the rule is invalid and unhelpful, but it is still possible that you can evaluate the LHS from first principles (i.e. the ε–δ definition). –Henning Makholm (talk) 20:14, 18 October 2010 (UTC)
- (edit conflict) This is a very good question, but be careful with your wording. The limit of a function is a number, a constant, not another function. When you say "the limit of sin h as h tends to zero is h," what you mean is that lim_{h→0} (sin h)/h = 1. The ratio between the two functions approaches 1, so sin h is a good approximation to h if h is small. (The statement "lim_{h→0} sin h = h" doesn't make sense.) The reason a limit like this is meaningful is that we are discussing the behavior of the ratio (sin h)/h as h approaches 0, but h is still nonzero; we are not concerned with what happens when h actually equals 0. See also L'Hôpital's rule. —Bkell (talk) 20:19, 18 October 2010 (UTC)
- Another way to explain this behavior is to note that the function x is a first-order approximation of sin x for x near zero, and "near" zero all the correction terms required to reconcile x and sin x are "small". For instance, sin 0.01 = 0.0099998... which is ridiculously close to the input, 0.01. The infinite series definition for sin x is
- sin x = x − x³/3! + x⁵/5! − x⁷/7! + ...
- If we plug in a value near zero, the cube of that value is *very* near zero, so those infinitely many correction terms which use higher and higher exponents nearly cancel to zero. Plugging this series in to find the limit of (sin x)/x as x → 0 gives 1 immediately. 67.158.43.41 (talk) 21:38, 18 October 2010 (UTC)
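The series argument can be checked numerically: (sin x)/x should agree with the truncated series 1 − x²/6 + x⁴/120 up to the first omitted term (an editorial illustration, not part of the thread):

```python
import math

# (sin x)/x approaches 1; dividing the series by x gives
# (sin x)/x = 1 - x^2/6 + x^4/120 - x^6/5040 + ...
for x in [0.5, 0.1, 0.01]:
    ratio = math.sin(x) / x
    series = 1 - x**2 / 6 + x**4 / 120
    # The discrepancy is dominated by the first omitted term, so it is far below x^6
    assert abs(ratio - series) < x**6
    print(x, ratio)
```

As x shrinks, the printed ratio visibly tends to 1, which is the precise content of "sin x goes as x near zero".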
- So, Henning Makholm and Bkell are quite right... but I'm really interested in what precisely your textbook states. If it does claim that the limit of sin h is h when h tends to zero, without any further qualification, then I suspect that it is erroneous. (Alas, this would not be the first time there is an error in school textbooks for mathematics!) However, you may have mis-read it.
- Could you please give an exact quote (with all qualifications)? JoergenB (talk) 21:56, 18 October 2010 (UTC)
- All good textbooks on introductory calculus begin the series of theorems on limits by presenting two trivial theorems. The first states that the limit of a constant is that constant. The second states that the limit of a polynomial (or any function with a denominator that doesn't contain a variable) can be found by direct substitution. This second theorem tells me that the limit of sin h as h approaches zero is zero, not h.
- It is true that as h gets smaller, sin h can be approximated by h. But that statement is not the same as saying the limit of sin h as h approaches zero is h (which would not be compatible with the second of the two trivial theorems.) Dolphin (t) 22:10, 18 October 2010 (UTC)
- The second "theorem" you mention is actually the definition of continuity in the general case, and requires someone to show sin x continuous before it shows lim_{h→0} sin h = 0. But that's a small quibble. 67.158.43.41 (talk) 22:41, 18 October 2010 (UTC)
- jftsang, was your question prompted by Anonymous Dissident's answer to the #trouble with differentiating something using only first principles question above in which he wrote "As A goes to zero, ... sin(A) goes to A"? -- 119.31.121.84 (talk) 00:22, 19 October 2010 (UTC)
- Partially, yes. Bkell's response above clarifies the issue for me, but given that, is it still justifiable to say "As A goes to zero, ... sin(A) goes to A"? 80.46.78.71 (talk) 19:09, 22 October 2010 (UTC)
- I hope you get a better answer before this archives, but as with JoergenB, I'd be very interested in seeing an explicit quote from a textbook that says "... sin(x) goes to x". When speaking about such things, I say that "As x goes to zero, sin(x) goes as x", meaning that lim_{x→0} sin(x)/x = 1. Likewise, I say "As x goes to zero, cos(x) goes as 1 − x²/2". I don't know how formal or proper such language is. -- 58.147.59.32 (talk) 03:33, 23 October 2010 (UTC)
- If you have to ask then no. In other words: The statement "as A goes to zero, sin(A) goes to A" is technically meaningless. If either you or your correspondent doesn't have a very solid grasp of the matter, then using it will only cause confusion and should be avoided. However, people who understand something very well can allow themselves to be sloppy, because they know what they can be sloppy about and how to correct the sloppiness. So in a conversation between mathematicians, "as A goes to 0, sin(A) goes to A" will be understood to mean lim_{A→0} sin(A)/A = 1 and can be used if it's easier to say. -- Meni Rosenfeld (talk) 13:56, 24 October 2010 (UTC)
Combinatorics
Yesterday I did a math contest and one of the questions I didn't get. It said something like: one marble is placed in a bag with the label "1", 2 with label "2", ..., and 50 with label "50", such that there are 1275 marbles in the bag. How many must be taken out of the bag to guarantee at least 10 with the same number? How should I have solved this? Thanks. 24.92.78.167 (talk) 21:58, 18 October 2010 (UTC)
- Well, what's the largest number of marbles you can take from the bag and have at most 9 of each number? One more than that will guarantee 10. Algebraist 22:02, 18 October 2010 (UTC)
- Note also that the question is not about probability at all. It is purely a combinatorics problem: What is the least number N such that every N-element subset of your set of marbles contains 10 equinumbered ones? –Henning Makholm (talk) 22:17, 18 October 2010 (UTC)
- The worst case scenario is that you pick all the marbles numbered 1 through 8, and 9 marbles of each of the numbers 9 through 50 (414 marbles in all). When you take one more marble, the 415th, you are guaranteed to have 10 with the same number. Bo Jacoby (talk) 00:48, 21 October 2010 (UTC).
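The worst-case count is small enough to compute directly (an editorial sketch of the pigeonhole argument above):

```python
# Labels 1..8 have fewer than 10 marbles each, so in the worst case you draw
# all of them; from every other label you can draw at most 9 without reaching 10.
worst_case = sum(min(label, 9) for label in range(1, 51))
answer = worst_case + 1   # one more draw forces a 10th marble of some label
print(worst_case, answer)  # 414 415
```

This also confirms the bag size: sum(range(1, 51)) is the 1275 stated in the question.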
Test guessing Probability
Today in class we were talking about a certain multiple choice test, on which there is no guessing penalty. The teachers advised us to guess the same letter on all of the problems we could not solve or could not get to, to maximise correct guesses, which intuitively makes sense since a, b, c, and d will each be right about 1/4 of the time and guessing the same letter for each more or less guarantees that ~1/4 of the guesses will be right whereas guessing different letters creates a chance that none of them are right. However I'm having trouble proving this mathematically, since if I guess randomly each guess has a 1/4 chance of guessing right. How does guessing the same each time increase my chances? Thanks. 24.92.78.167 (talk) 22:57, 18 October 2010 (UTC)
- It wouldn't. Maybe the idea is to avoid wasting time thinking about your guess.--RDBury (talk) 23:06, 18 October 2010 (UTC)
- It may be that the test was deliberately constructed such that the correct answers are equidistributed among the four positions for even small ranges of the questions. So if you can only do, for example, the first 60 of 100 questions, guessing the remaining 40 with the same letter would guarantee that at least 9 of them hit correctly; whereas guessing randomly would risk getting none right. This would make sense if you're concerned about optimising for the worst mark you can possibly get, which is probably a reasonable assumption in many cases.
- One can imagine other, desperate, situations where a different strategy would be rational. For example, for some reason you might need 75% correct answers, to the extent that you don't care about the difference between 0% and 74% because either will get you killed / thrown out of school / disowned by your family. Then guessing randomly would give a small but nonzero chance of reaching the threshold, whereas marking all the same would be certain loss. –Henning Makholm (talk) 23:21, 18 October 2010 (UTC)
- If you guess completely randomly (which is hard for a person to do, by the way), the number of correct answers you get would follow a Binomial distribution, with p=0.25 and n = however many questions you guess on. If the teacher distributed the correct answer completely randomly among a, b, c, and d, then guessing the same letter each time would also give you a binomial distribution with the same parameters. However, unless the teacher made a point of distributing the correct answer randomly, I can guarantee that he did not. Like I said, humans aren't good at doing things randomly. If the teacher noticed that a lot of the correct answers were a, he would intentionally make a couple questions where a isn't the correct answer. It's unlikely that the teacher would make each answer correct exactly 1/4 of the time, but you can be pretty sure that the distribution is not random. I don't think that guessing the same answer each time will increase your Expected value for the number of correct answers, but it will almost certainly (unless the teacher really, really did things truly randomly) decrease the Variance. In other words, you'd be leaving less up to chance. From the perspective of an economist, people are Loss averse; the value of the points you might potentially lose are probably worth more than the value of the points you might potentially gain (there are exceptions, as Henning Makholm points out). In that respect, decreasing your uncertainty is a smart move, and it is indeed in your interest to choose the same answer each time when you guess. Buddy431 (talk) 02:54, 19 October 2010 (UTC)
- I disagree. It is very probable that the teacher will have "favorite letters" which will be the correct answer in a higher proportion. If you don't know what these letters are, sticking with a single letter will usually increase your variance, not decrease it. -- Meni Rosenfeld (talk) 09:10, 19 October 2010 (UTC)
- I would expect, given that the teacher explicitly gives this advice, that he/she has deliberately made sure that the cumulative counts of each letter in questions 1 through n is close to n/4 for all n. (That may be too big an assumption). –Henning Makholm (talk) 10:10, 19 October 2010 (UTC)
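The variance disagreement in this thread is easy to settle by simulation (an editorial sketch; the 40%-favourite answer key is hypothetical, chosen to model Meni's "favorite letters" scenario where the guesser does not know the favourite):

```python
import random

random.seed(0)
LETTERS = "abcd"
N_Q, TRIALS = 40, 20000

def make_key():
    # Hypothetical biased teacher: 'a' is correct 40% of the time, the rest 20% each
    return random.choices(LETTERS, weights=[4, 2, 2, 2], k=N_Q)

def score(key, guesses):
    return sum(g == k for g, k in zip(guesses, key))

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

# Strategy 1: pick one letter at random (you don't know the favourite), use it throughout
fixed_scores = [score(make_key(), random.choice(LETTERS) * N_Q) for _ in range(TRIALS)]
# Strategy 2: guess each question independently and uniformly
random_scores = [score(make_key(), random.choices(LETTERS, k=N_Q)) for _ in range(TRIALS)]

m1, v1 = mean_var(fixed_scores)
m2, v2 = mean_var(random_scores)
print(m1, v1, m2, v2)
```

Both strategies average about 10 correct out of 40, but the fixed-letter variance is far larger (roughly 19 versus 7.5 in expectation), supporting Meni's point: against a biased key whose bias you don't know, sticking to one letter increases variance rather than decreasing it.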
- If the teacher deliberately makes the answers 'random' as in about the same number for each choice of a,b,c,d rather than actually random you can take advantage of this by counting the various answers you give and choose the one with the minimum frequency for all the rest. Of course that's exactly the opposite answer to if the teacher has favourite answers and doesn't check for an equal number each ;-) Dmcq (talk) 10:32, 19 October 2010 (UTC)
- The obvious next step is to analyze the correct answer distribution in the teacher's past exams. Many teachers post their previous exams (and answer keys) online, making such analysis easy. If he doesn't, you may want to become friends with a fraternity brother; fraternities often keep copies of past exams. Buddy431 (talk) 14:31, 19 October 2010 (UTC)
- One factor that may influence this is situations where the real answer falls into a neat pattern with the distractor answers (to take a simple example, if the answers to a run of questions were compass directions, it would be natural [and intuitive for the student as well as the teacher] to use A North, B East, C South, D West each time, but there are plenty of similar situations). If one of the answers in that pattern is never actually correct for whatever reason (but still sensible to have there to catch people out), it would skew the distribution of correct answers.
- In fact, I'm almost certain that the vast majority of people setting these sorts of tests are much more concerned about making the wrong answers plausible enough to catch out the people who've misunderstood the material/been lazy about checking their working thoroughly/etc. than not leaving themselves open to the tiny minority of test takers who are applying probabilistic analysis to try and improve their grade marginally. --81.153.109.200 (talk) 20:10, 19 October 2010 (UTC)
- User:Buddy431 is dead right that the most direct answer to your question is the binomial distribution, but you might want to look into the idea of expectation value as well.--81.153.109.200 (talk) —Preceding undated comment added 20:16, 19 October 2010 (UTC).
Oops, I probably should have mentioned that this is a standardized test, so there's no advantage to be had from guessing from past tests or favorite answers or probabilistic analysis. 24.92.78.167 (talk) 02:46, 21 October 2010 (UTC)
- On a standardized test, the test publisher will be very careful to avoid any "letter bias", so the remark about the binomial distribution seems more likely to hold.
- Of course, your best chance of getting the right answer is to know the right answer. Study. 68.36.117.147 (talk) 01:48, 22 October 2010 (UTC)
October 19
Quick query on field extension K-homomorphisms and isomorphisms
Hi all,
I'm looking at K-homomorphisms L->L in the field extension L/K (i.e. homomorphisms L->L which are the identity on the subfield K), and trying to show when they must be isomorphisms. I've shown that for an algebraic extension any such homomorphism must be an isomorphism, and that in general any K-homomorphism must be injective whether or not the extension is algebraic (by looking at the kernel of the homomorphism, which is an ideal). However, I'm trying to find an example of a non-bijective K-homomorphism on a transcendental extension, and I've come up with the following, which is not even injective, so it must be wrong.
Look at K[X], and take the map F:K[X]->K[X], F(p(X))=p(1).
Then F(p(X)q(X))=p(1)q(1)=F(p(X))F(q(X)), F(p+q)=F(p)+F(q) and F(k)=k for any constant k, so F(1)=1 in particular and F is a K-homomorphism. However, first question: F is clearly not injective (F(X-1)=0=F(0)), but what's gone wrong? I can see how it's really a map F:K[X]->K in a sense, but K is a subset of K[X] so presumably that's fine (or else there's no such thing as a map K->K which is not surjective). Which part of my argument is fallacious?
And my second question is (assuming it is my example rather than my logic which is wrong!) could anyone suggest such a non-bijective K-homom. which would work, over a non-algebraic field extension?
Thank you very, very much; just brief answers will suffice, I don't want to take up too much of your time! Spalton232 (talk) 00:08, 19 October 2010 (UTC)
- K[X] is not a field. Your F doesn't extend to the field K(X), since F(X-1)=0, so there's nothing F(1/(X-1)) can be. The map from K(X) to K(X) given by F(X)=X<sup>2</sup> should be a non-surjective homomorphism. Algebraist 00:12, 19 October 2010 (UTC)
More complex number trigonometry
1. Simplify
2. Show that
For the first one I have multiplied the numerator and denominator by a wide array of assorted rubbish (including the complex conjugate of 1-cos2x+isin2x), but I have no way of knowing if it's the simplest form. I wouldn't have a clue how to do the second one. --MrMahn (talk) 04:08, 19 October 2010 (UTC)
- For the first one, "wide array of assorted rubbish" does not sound very systematic. I would notice that for . A narrow array of systematic rubbish to try would then be , , , and . One of these should end up with something you can express with a single term. –Henning Makholm (talk) 06:35, 19 October 2010 (UTC)
- If everything else fails, the algorithm I proposed in response to your previous question should work for the second one, if you set and unfold the exponential using Euler's formula. Or (easier in practice) set and rewrite the two sides as after which only algebra is left. –Henning Makholm (talk) 06:21, 19 October 2010 (UTC)
- (After ec)
- Try this way:
- Did you try to substitute for sine and cosine of x? For the right-hand side use Euler's formula.
- Try this way:
- HTH. --CiaPan (talk) 06:24, 19 October 2010 (UTC)
related to the question above
This is related to my question about refuting a famous open conjecture. Should the proof be titled "A disproof of" ___ conjecture or "A proof that" __ conjecture "is false"?
How should that be capitalized in the title? (title case or just initial capital only). 92.229.12.100 (talk) 11:40, 19 October 2010 (UTC)
- I would go with the first option, shorter titles are more catchy. As for capitalization, it does not really matter. If you are going to publish it, the journal will impose their house style anyway.—Emil J. 12:24, 19 October 2010 (UTC)
- Actually, in the second title the "A proof that" part is redundant. You can call it just "__ conjecture is false".—Emil J. 12:33, 19 October 2010 (UTC)
- So would you use "The ___ is false" or "Disproof of ____" / "A disproof of _____" ? 92.229.12.100 (talk) 14:03, 19 October 2010 (UTC)
- Here are some examples of titles people have used in such circumstances: A counterexample to Borsuk's conjecture (Borsuk's conjecture), Two complexes which are homeomorphic but combinatorially distinct (Hauptvermutung), Disproof of the Mertens conjecture (Mertens conjecture), On Hamiltonian circuits (Tait's conjecture), and A disproof of a conjecture of Pólya (Pólya conjecture). Algebraist 14:54, 19 October 2010 (UTC)
- Personally (perhaps because I'm a little older?), I dislike both alternatives. "A counterexample to" in my ears is much better. One reason is that a conjecture wasn't stated as a theorem. When you write "A disproof", in my (older) ears it sounds a bit like "Well, this guy thought that (s)he had more or less proved that statement; but I'm going to show that (s)he was wrong". A properly stated conjecture is nothing like a loosely "claimed fact". The person(s) putting it forward are not proven to have made any kind of mistake if the conjectured statement is proved to be false.
- If you provide a counterexample to a "famous open conjecture", and your example is a correct counterexample, and correctly motivated, then your own fame is ascertained anyhow. The professionals will go for the mathematical content, not for the title; and in fact a title which does not overstate the result is probably the better choice.
- Actually, there are even more understated formulations, which sometimes are better, like "A negative answer to a question by...". It does happen that a mathematician thinks (s)he did not make a conjecture, just pose an open question; and in such cases may be a bit irritated even over a title like "A counterexample to a conjecture by...". (I'm not even trying to imagine the reaction to a title like "A disproof of..." in such a situation.) You probably should check the original formulation of the problem, before you decide of the title.
- If the statement first was stated as a theorem, but later the proof was found insufficient, then "A disproof..." could be appropriate, I think. JoergenB (talk) 18:41, 19 October 2010 (UTC)
Rectification
What is the origin of the term Rectification? What is the connection between the original sense of the term (action of setting right) and the action of cutting of corners of a polyhedron? --İnfoCan (talk) 20:12, 19 October 2010 (UTC)
- The original sense of the term is not setting right, but setting straight (it is derived from Latin rectus). This is where another geometric term, rectification of curves, comes from. I don't quite understand how (or whether) this is related to cutting polyhedrons.—Emil J. 14:43, 20 October 2010 (UTC)
- The link he gave, Rectification (geometry), explains quite clearly how the word applies to cutting polyhedrons. Use of the word is quite old, like most solid geometry. I don't think it's likely we can discover exactly who first used it this way. But it makes sense to think that it is derived from the use in rectification of curves; the diagram for a rectified curve suggests cutting off. 68.36.117.147 (talk) 01:43, 22 October 2010 (UTC)
- I see no such explanation in the article. Can you point to it more precisely? –Henning Makholm (talk) 02:41, 22 October 2010 (UTC)
how does arXiv work, or help with peer review?
Hi,
The arXiv article says "the arXiv is not peer-reviewed", so how does putting an article on arXiv before publication help, and how do you use arXiv for peer review? If I put a paper up, will I just get emails from people who happen to be interested, and is that what it is good for? Or should I myself email people who might be interested, linking my arXiv paper, in the hope that they will give me valuable feedback? Or how is arXiv supposed to be used?
Basically what I'm saying is that I'm kind of fuzzy about the difference between me->arXiv->Journal versus me->Journal... thanks! 92.230.234.134 (talk) 20:29, 19 October 2010 (UTC)
- It doesn't help with peer review. It won't help to get your paper published. It's just a big electronic library for people to put their papers in, in order to aid the dissemination of knowledge. More people will get to read your paper, and that will make your work known to a wider base. Hopefully that will increase possible collaborations and citations, both of which are good for a long and successful academic career. — Fly by Night (talk) 20:44, 19 October 2010 (UTC)
- Some journals have integrated arXiv into their submission systems. E.g. if you want to submit a new paper to one of the Physical Review journals, all you have to do is give your arXiv preprint number. Putting a paper on the arXiv may get you feedback from other authors who will advertise their papers to you (asking you to cite their paper). It helps make your paper more visible and will lead to your paper being cited a lot more. That in turn will help you get your next paper published, because a referee will, of course, check if the sort of research you are doing is of interest. Having published a few papers with zero citations doesn't help here. Count Iblis (talk) 03:51, 20 October 2010 (UTC)
- Two reasons to post on the arxiv: 1) peer review can take a looong time, but if you post something on the arxiv it appears the next (working) day, 2) in contrast to most journals, the arxiv is accessible freely to all. (Virtually) no quality control is applied to arxiv postings - lots of stuff on there is wrong/cranky, but it's a great way to get your work out there. Tinfoilcat (talk) 18:43, 20 October 2010 (UTC)
October 20
How can I prove that if one takes the floor function of an exponential random variable it yields a geometric distribution? I've read in many places that it is really true, but I cannot find a proof, no matter how much I search. Thanks. --Belchman (talk) 20:51, 20 October 2010 (UTC)
- Interpret the variable as time (in, say, minutes): since the distribution is memoryless, after a minute has passed, the chance of the event occurring in the second minute is the same as the chance it had, at the start, of occurring in the first minute. That the distribution of floors is geometric, with parameter equal to that per-minute probability, follows immediately. --Tardis (talk) 21:40, 20 October 2010 (UTC)
- You can also use the direct approach: Let <math>Y = \lfloor X \rfloor</math>. Then
- <math>\Pr(Y = n) = \Pr(n \le X < n+1) = e^{-\lambda n} - e^{-\lambda (n+1)} = e^{-\lambda n}\left(1 - e^{-\lambda}\right).</math>
- If you take <math>p = 1 - e^{-\lambda}</math> then this is <math>(1-p)^n p</math>, which is a geometric distribution. -- Meni Rosenfeld (talk) 09:13, 21 October 2010 (UTC)
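A quick simulation agrees with the derivation above (the rate λ = 0.7 is an arbitrary assumed value; nothing here depends on it):

```python
import math
import random

lam = 0.7               # assumed rate of the exponential distribution
p = 1 - math.exp(-lam)  # success probability of the resulting geometric

random.seed(1)
N = 200_000
counts = {}
for _ in range(N):
    n = math.floor(random.expovariate(lam))
    counts[n] = counts.get(n, 0) + 1

# Empirical frequency of floor(X) = n versus the geometric pmf (1-p)^n * p
for n in range(4):
    print(n, counts.get(n, 0) / N, round((1 - p) ** n * p, 4))
```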
- Thank you. Brilliant answers. --Belchman (talk) 18:47, 21 October 2010 (UTC)
October 21
Formula for converting solid volume to liquid volume
This may sound like a strange question, but it has bugged me for a long time:
Is there a mathematical formula for predicting the change in volume of a substance after it changes state?
For example, if I were to take a 5'x5' block of ice and melt it, how would I estimate the volume of liquid water I would have?
thanks in advance —Preceding unsigned comment added by 209.167.165.2 (talk) 03:00, 21 October 2010 (UTC)
- This is really more of a chemistry/physics question, but the ideal gas law is a good approximation to allow you to predict gas volume given the number of particles (equivalently, initial/solid mass or weight), atmospheric pressure, and gas temperature. There are of course more refined approximations if you're really interested, but it doesn't seem like that's what you're after. There are probably similar formulas for other states; I purposefully forgot the chem I learned :). 67.158.43.41 (talk) 05:11, 21 October 2010 (UTC)
- Water is different from most other substances in that it expands when it freezes; a block of ice is approximately 10% larger, by volume, than the water it froze from (see Properties of water#Density of water and ice, in which this is explained in terms of density). So your 5′ × 5′ × 5′ block of ice (I assume you meant to have three dimensions there), which has a volume of 125 cubic feet, would melt into about 114 cubic feet of water. (Keep in mind that the volume of liquid water expands a bit as it heats up, too.) These calculations are specific to water, though—other substances behave differently. Most substances shrink when they freeze. —Bkell (talk) 05:31, 21 October 2010 (UTC)
- Is Coefficient of thermal expansion relevant, or does that only apply when there's no change of state? Rojomoke (talk) 11:18, 21 October 2010 (UTC)
- The thermal expansion coefficient assumes that the state is the same. It would not make sense to have one for state changes, since the phase change takes place without any change in temperature. –Henning Makholm (talk) 14:32, 21 October 2010 (UTC)
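Bkell's roughly-10% figure turns into a one-line estimate. A sketch, where the density values are approximate handbook numbers assumed here, and the block is read as 5'x5'x5':

```python
# Approximate densities near 0 degrees C, in g/cm^3 (assumed handbook values)
rho_ice, rho_water = 0.9167, 0.9998

ice_volume = 5 * 5 * 5  # cubic feet, reading the question as a 5'x5'x5' block
water_volume = ice_volume * rho_ice / rho_water  # mass is conserved on melting

print(round(water_volume, 1))  # about 114.6 cubic feet
```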
Bachmuth representation
I'm currently reading a paper of Chein ([1]), and he states that if F is the free group on three generators (a,b,c), and Φ is the free metabelian group on three generators, then we can regard a, b and c as generators of Φ, and we have a mapping sending
which induces a representation of Φ (where the s<sub>i</sub> and t<sub>i</sub> are commuting indeterminates in some ring R). Perhaps I'm being dense, but I don't see how this gives a homomorphism. I'm assuming there's something I'm not understanding about multiplication in Φ. The problem I'm having is with the (1,2) entry of the image of, say, ab, which is , but in the product of the images of a and b, the (1,2) entry is .
Chein says that Magnus has proved this, but Magnus' paper is in German, and I've not managed to find an English translation. I've checked other sources, and they all agree that this map is a representation, but I just can't see it. Any help would be great, thanks, Icthyos (talk) 11:54, 21 October 2010 (UTC)
- The multiplication reminds me of a semi-direct product. Not sure if it's just a coincidence, or not. — Fly by Night (talk) 12:12, 21 October 2010 (UTC)
- No coincidence. If R is a ring, then the matrix group is isomorphic to the semidirect product , where acts on R by multiplication.—Emil J. 15:04, 21 October 2010 (UTC)
- What makes you believe that the (1,2) entry in the image of ab is ?—Emil J. 13:18, 21 October 2010 (UTC)
- (e/c) Anyway: any function from {a,b,c} to any group G extends to a unique homomorphism f from F to G (that's the definition of a free group). In order to get a homomorphism from Φ to G, you need f to factor through the quotient map F → Φ, which happens if and only if Ker(f) includes F'', so one would need to check that. Alternatively, since Φ is the free metabelian group, it would suffice to check that G (or just the range of f as a subgroup of G) is metabelian. I can't tell how difficult is this to establish in your case, in fact, I don't even understand the description of the mapping. (What is an "indeterminate in a ring"? And when you write that " are commuting", do you only mean that si commutes with ti for every i=1,2,3, or that any pair of elements of {s1,t1,s2,t2,s3,t3} commutes?)—Emil J. 13:37, 21 October 2010 (UTC)
- Oh, I see what you mean about the (1,2) entry, I don't know what I was thinking. Your statement about the unique extension of the homomorphism was one of the first things I learned about free groups, it should've been obvious. I agree that the map is not defined in a clear way. Another source I have (Lyndon & Schupp's "Combinatorial Group Theory", Chapter 1.4) defines the and as being 'independent invertible elements' in a commutative ring, R, which isn't really much better. Thanks for pointing out my mistake; I shall attempt to soldier on! Icthyos (talk) 14:12, 21 October 2010 (UTC)
- I had a look on Chein's paper, and the description there is easier to understand: the elements are apparently taken in the rational function field (or rather, its subring ). The only important point is that the ring is commutative (and the si are invertible). The mapping goes into the group G of matrices of the form , where x,y are elements of the ring, and x is invertible. Then the existence of the homomorphism is clear, since G is metabelian: when the matrices are multiplied, the (1,1) entry of the result is just the product of the individual (1,1) entries, which are taken in a commutative ring, hence G' only contains matrices of the form . The group of such matrices is commutative (they are multiplied by adding the y's), hence G'' is trivial.—Emil J. 14:32, 21 October 2010 (UTC)
- Oh, I see what you mean about the (1,2) entry, I don't know what I was thinking. Your statement about the unique extension of the homomorphism was one of the first things I learned about free groups, it should've been obvious. I agree that the map is not defined in a clear way. Another source I have (Lyndon & Schupp's "Combinatorial Group Theory", Chapter 1.4) defines the and as being 'independent invertible elements' in a commutative ring, R, which isn't really much better. Thanks for pointing out my mistake; I shall attempt to soldier on! Icthyos (talk) 14:12, 21 October 2010 (UTC)
- Could you please provide the reference to Magnus'es paper? JoergenB (talk) 13:25, 21 October 2010 (UTC)
- The reference to Magnus is [2], and Chein says Magnus proves that the above is a representation in Theorem 6.
- He actually says that he proved it to be a faithful representation. I gather that the faithfulness is the hard part.—Emil J. 14:38, 21 October 2010 (UTC)
- Actually, I found the Mathematische Annalen 111... and there Magnus is discussing a (superficially) quite different representation (working with finite upper left hand corners of certain upper triangular infinite matrices, seemingly). This could be slightly confusing.
- I do not understand your discussions about in which rings to find the representations. In the definition on page 606, Chein just talks about commuting indeterminates (i. e., elements like the variables x and y in a polynomial expression such as ), without complicating matters by embedding them in a ring of high enough transcendence degree; right? JoergenB (talk) 15:07, 21 October 2010 (UTC)
- When one just refers to something as commuting indeterminates in a context asking for a ring, it is understood that one takes the ring of polynomials in said indeterminates (or simply the free commutative ring in these indeterminates, i.e., polynomials over Z). I'm not embedding anything anywhere, I'm using directly that polynomial ring (more or less: in this particular case, one obviously needs the si to have inverses, so plain is not enough). He actually explicitly refers to "" on the next page.—Emil J. 15:26, 21 October 2010 (UTC)
- Good; I agree with that. Ichtyos, did you essentially mean the same thing with
- "which induces a representation of (where the are commuting indeterminates in some ring R)"? JoergenB (talk) 19:36, 21 October 2010 (UTC)
- Yes; I was essentially just quoting Chein, but that's what I took him to mean. Icthyos (talk) 11:19, 22 October 2010 (UTC)
- Thanks for the discussion, both of you! Icthyos (talk) 11:19, 22 October 2010 (UTC)
What is Mathematica doing?
So, I am putting an improper integral on a quiz, , and I plugged it into Mathematica to make sure I didn't make any mistakes. The answer Mathematica gave me was , which is what I got. But, I then simplified , whereas finding the numerical form of this in Mathematica apparently picks a complex cube root of unity here. So, I tried plotting the function from -8 to -2 and Mathematica shows nothing at all, Plot[x^(-1/3), {x, -8, -2}]. Wolfram Alpha shows two graphs, one labeled real part and one labeled imaginary part. What's the deal? StatisticsMan (talk) 16:52, 21 October 2010 (UTC)
- Just a guess, but it's probably doing the principal value. Basically, you can define a cube root function that behaves well except for when z is on the negative real axis. The value of the function on the axis depends on how you define it, but it would have to be one of the complex values. You might try breaking up the interval and using something like where x is negative.--RDBury (talk) 17:41, 21 October 2010 (UTC)
- (ec) Well, if you want to make sense for nonzero complex numbers (you may not care, but how would Mathematica know that?), and get the real branches for both positive and negative , you need cuts in both the upper and the lower half-plane. It seems reasonable for Mathematica to always use the same single cut (say, infinitesimally below the negative real axis) for all non-integral powers. That's the best you can do for almost all exponents anyway. If you want whenever is a possible value, then would need to be discontinuous on the entire real line. –Henning Makholm (talk) 18:08, 21 October 2010 (UTC)
- The problem is that x–1/3 is multivalued. Mathematica is using one of the complex roots as a solution. You should be able to tell it to work over the reals, or to assume that x is a real variable. — Fly by Night (talk) 17:49, 21 October 2010 (UTC)
- The IEEE floating point standard treats the problem by having a special function <code>rootn(x,n)</code> where <code>n</code> has to be an integer. If you put in <code>pow(-1,1/3)</code> it would give an error. Dmcq (talk) 22:32, 21 October 2010 (UTC)
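Python exhibits the same principal-branch convention RDBury describes, which may make the Mathematica result less surprising. A sketch of the phenomenon (not of Mathematica's internals):

```python
import math

x = -8.0

# Raising a negative number to a fractional power follows the principal
# branch, so the answer is a complex cube root rather than -2
principal = complex(x) ** (1 / 3)
print(principal)  # close to 1 + 1.732j, i.e. 2*exp(i*pi/3)

# The real cube root has to be requested explicitly
real_root = math.copysign(abs(x) ** (1 / 3), x)
print(real_root)  # -2.0 (up to rounding)
```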
what do we know about large primes?
we know they are odd. What other properties do they have? (other than the obvious one!) —Preceding unsigned comment added by 85.181.49.255 (talk) 17:31, 21 October 2010 (UTC)
- All of the cool properties of division/multiplication. If you reverse the digits, it will not be divisible by 9. If you add the digits up (recursively, until only one digit is left), it will not be 9. See divisibility rule for more. -- kainaw™ 17:55, 21 October 2010 (UTC)
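Kainaw's digit-sum observations, as a small sketch (104729 is just a sample prime; the properties hold for any prime greater than 3):

```python
def digital_root(n):
    # Sum decimal digits repeatedly until a single digit remains
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

p = 104729  # a sample prime
print(p % 3)                           # nonzero: no prime > 3 is divisible by 3
print(digital_root(p))                 # 5; never 9 for a prime > 3
print(int(str(p)[::-1]) % 9 == p % 9)  # True: reversal preserves value mod 9
```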
- What kind of "property" are you looking for? Something like Fermat's little theorem? Or deterministic primality tests? Number theory is full of things that have been proved to be true specifically for primes. They are too numerous to list. –Henning Makholm (talk) 17:58, 21 October 2010 (UTC)
- For a large prime, it is true that no positive integer other than 1 and itself divides it. That's a cool property shared by all large primes. Also, no large, odd prime is divisible by 3, 5, 7, 11, or 13 (saying they are odd is just saying they are not divisible by 2). StatisticsMan (talk) 18:08, 21 October 2010 (UTC)
- For that matter, there are no large even primes that are divisible by 3, 5, 7, 11, or 13 either (I have a wonderful proof of this, which unfortunately is WP:OR). –Henning Makholm (talk) 18:12, 21 October 2010 (UTC)
- No large prime is divisible by any positive integer other than 1 and itself; not just 3, 5, 7, 11, or 13. If it were then it wouldn't be prime! — Fly by Night (talk) 13:42, 22 October 2010 (UTC)
- Well, the fact that only 1 and itself divide it is what I meant above with "other than the obvious one". What I'm really interested in is what Makholm has said: "Number theory is full of things that have been proved to be true specifically for primes. They are too numerous to list." Surely he could have listed like 10 of them!! I am really interested in things that are going to be true for large primes, but not for every large number, other than the fact that they have two factors... 85.181.49.255 (talk) 18:49, 21 October 2010 (UTC)
- For a large prime, it is true that no positive integer other than 1 and itself divides it. That's a cool property shared by all large primes. Also, no large, odd prime is divisible by 3, 5, 7, 11, or 13 (saying they are odd is just saying they are not divisible by 2). StatisticsMan (talk) 18:08, 21 October 2010 (UTC)
- So things like Fermat's little theorem, Wilson's theorem, Goldbach's conjecture, the prime number theorem, or the fact that the order of every finite field is a power of a prime? You can find more in Category:Prime numbers. —Bkell (talk) 19:08, 21 October 2010 (UTC)
- OK, let us list 10 random prime properties:
- Fermat's little theorem.
- Wilson's theorem.
- Euclid's lemma
- Primes are the only nonzero numbers that can be the characteristic of a field.
- Primes are the only numbers such that two finite groups with this many elements are necessarily isomorphic.
- Eisenstein's criterion
- Only if p is a prime will the p-adic numbers be a field (rather than a ring).
- Quadratic reciprocity
- Touchard's congruence
- –Henning Makholm (talk) 02:35, 22 October 2010 (UTC)
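The first two items in the list are easy to check numerically for small primes. A sketch (the sample primes are arbitrary):

```python
import math

def fermat_holds(p):
    # Fermat's little theorem: a^(p-1) = 1 (mod p) for all a with 1 <= a < p
    return all(pow(a, p - 1, p) == 1 for a in range(1, p))

def wilson_holds(p):
    # Wilson's theorem: (p-1)! = -1 (mod p)
    return math.factorial(p - 1) % p == p - 1

for p in [2, 3, 5, 7, 11, 13, 101]:
    assert fermat_holds(p) and wilson_holds(p)

# Both fail for a composite such as 9
print(fermat_holds(9), wilson_holds(9))  # False False
```

Wilson's theorem is an exact characterization of primality; Fermat's is not, since Carmichael numbers pass it for every base coprime to them.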
- The Finite group article gives infinitely many counterexamples to #5: Every group of order pq is cyclic when q < p are primes with p-1 not divisible by q. Matthew Auger (talk) 11:07, 22 October 2010 (UTC)
- Ooops! –Henning Makholm (talk) 14:58, 22 October 2010 (UTC)
- The Finite group article gives infinitely many counterexamples to #5: Every group of order pq is cyclic when q < p are primes with p-1 not divisible by q. Matthew Auger (talk) 11:07, 22 October 2010 (UTC)
- They are hard to factor, which is the basis of RSA encryption. They get rarer and rarer as a consequence of the prime number theorem. 67.158.43.41 (talk) 22:23, 21 October 2010 (UTC)
- Actually primes are very easy to factor. If you give me a large prime I'll factor it for you immediately, even without a computer. :-) —Bkell (talk) 22:39, 21 October 2010 (UTC)
- I bow down to you. I'm not worthy. :) --Kinu t/c 22:42, 21 October 2010 (UTC)
- Haha, whoops. I of course meant numbers "near" them are hard to factor, e.g. pq for large primes p and q. Interestingly, though, primality testing is "easy" inasmuch as it can be done in time polynomial in the number of digits of the input using the AKS primality test. That is, you can test whether or not a given large number is prime "quickly". 67.158.43.41 (talk) 00:19, 22 October 2010 (UTC)
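For the record, the test used in practice is usually Miller-Rabin rather than AKS. A sketch, relying on the known result that this fixed base set gives a deterministic answer for all numbers below roughly 3.3x10^24:

```python
def is_prime(n):
    # Miller-Rabin; these fixed bases are deterministic for n < ~3.3e24
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

print(is_prime(2**61 - 1))  # True: a known Mersenne prime
print(is_prime(2**61 + 1))  # False: divisible by 3
```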
- I bow down to you. I'm not worthy. :) --Kinu t/c 22:42, 21 October 2010 (UTC)
- Actually primes are very easy to factor. If you give me a large prime I'll factor it for you immediately, even without a computer. :-) —Bkell (talk) 22:39, 21 October 2010 (UTC)
- Bertrand's postulate is an interesting property, there's another prime below 2p. 22:38, 21 October 2010 (UTC)
- The primes contain arbitrarily long arithmetic progressions, which usually occur for very large primes; see the Green–Tao theorem. —Anonymous DissidentTalk 22:48, 21 October 2010 (UTC)
- According to http://primes.utm.edu/largest.html, some large primes are Mersenne primes.
- —Wavelength (talk) 01:04, 22 October 2010 (UTC)
- Depends on your definition of 'large'. There may be no 'large' Mersenne primes if they are finite in number. Dmcq (talk) 10:01, 22 October 2010 (UTC)
All primes are small, because most primes are bigger. Bo Jacoby (talk) 12:25, 22 October 2010 (UTC).
If you take all the prime numbers, square them, take the reciprocals of those squares, subtract each from one, and then multiply all the results together, you get exactly 6/π<sup>2</sup>:
<math>\prod_p \left(1 - \frac{1}{p^2}\right) = \frac{6}{\pi^2}.</math> Count Iblis (talk) 15:01, 22 October 2010 (UTC)
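This Euler product converges slowly but visibly. A numeric sketch (the sieve limit of 10^6 is an arbitrary cutoff):

```python
import math

def euler_product(limit):
    # Multiply (1 - 1/p^2) over all primes p <= limit, found by a simple sieve
    is_composite = [False] * (limit + 1)
    prod = 1.0
    for p in range(2, limit + 1):
        if not is_composite[p]:
            prod *= 1 - 1 / p**2
            for m in range(p * p, limit + 1, p):
                is_composite[m] = True
    return prod

approx = euler_product(10**6)
print(approx, 6 / math.pi**2)  # both about 0.607927
```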
The OP's question may have been better than I thought at first. I understand it to ask for properties of each prime individually, that is, something you can do with a single large number that happens to be prime. Most of what people suggest here are interesting properties of the set of all prime numbers in general. I really had to reach to fill out the 10 properties I quote above without including properties that are trivial rephrasings of each other (and one of them even turned out erroneous, stupid me). It seems to be hard to get very far when you cannot even use the fundamental theorem of arithmetic (which is a collective property of all the primes, rather than a property of each prime individually). –Henning Makholm (talk) 15:18, 22 October 2010 (UTC)
- If the OP's question was really, what is the practical use of knowing that some medium-large number (with, say, hundreds of digits) is prime, then the best answer may be to point to the use of prime numbers in various public-key cryptography schemes. The most significant mathematical underpinnings of these seem to be variations on Fermat's little theorem, the theory of finite fields, and in particular the assumed difficulty of computing discrete logarithms. –Henning Makholm (talk) 15:33, 22 October 2010 (UTC)
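To illustrate the discrete-logarithm point, here is a toy Diffie-Hellman exchange. The modulus 2^127 - 1 is a known Mersenne prime; the base g = 3 and all sizes are assumptions for illustration, not recommendations for real cryptography:

```python
import random

p = 2**127 - 1  # a known Mersenne prime
g = 3           # assumed base for the illustration

random.seed(0)
a = random.randrange(2, p - 1)  # Alice's secret exponent
b = random.randrange(2, p - 1)  # Bob's secret exponent

A = pow(g, a, p)  # sent in the clear; recovering a from A is a discrete log
B = pow(g, b, p)  # likewise for b

# Both sides compute g^(a*b) mod p without ever transmitting a or b
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
print(shared_alice == shared_bob)  # True
```

The security rests exactly on the assumed difficulty of computing a from g^a mod p, the discrete-logarithm problem mentioned above.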
October 22
Polynomial degree vs order
Reading both Wikipedia and Wolfram I see that order and degree can be used interchangeably. But I was fairly certain that when I researched the same thing about 5 years ago the definitions were slightly different. In the next paragraph I do not state a fact, I am simply stating my previous (most likely erroneous) understanding.
I remember the "degree" being defined as the exponent of the highest order term, so a parabola would be 2nd degree. The order (and this is where I'm confused) was defined as the number of variables in the particular class of polynomial, so a general parabola would be 3rd order. A general 2nd degree polynomial of the form a*x^2 + b (no linear term), would be 2nd order.
Was it once defined like this or must I have been getting bad information?
196.211.175.2 (talk) 09:56, 22 October 2010 (UTC) Eon
The words degree and order are used interchangeably, according to degree of a polynomial. Bo Jacoby (talk) 12:37, 22 October 2010 (UTC).
- I don't recall seeing the definition of order you described. But it's very possible that this definition is common in a particular place or you've seen it used by a particular author. -- Meni Rosenfeld (talk) 13:58, 22 October 2010 (UTC)
I've always used the word "degree" for things like the exponent in x3, and said that the degree of a polynomial is the exponent of the highest-degree (not highest-order) term. "Order", on the other hand, I use for differential operators; thus one can speak of a second-order differential equation. I think that's the way I was taught to use those terms in school. But I know that in some eras the word "order" has been used the way I use the word "degree", and that some people are not fastidious. Michael Hardy (talk) 18:43, 22 October 2010 (UTC)
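For what it's worth, computer algebra systems follow the "degree" usage above; a quick check in Python with SymPy (the coefficient-count reading of "order" here is only the OP's recollected definition, not a standard one):

```python
from sympy import symbols, degree, Poly

x = symbols('x')
p = Poly(x**2 + 3, x)        # a parabola with no linear term
print(degree(p))             # 2: the exponent of the highest-degree term
print(len(p.all_coeffs()))   # 3: coefficients (a, b, c) of a general quadratic,
                             # one possible reading of the OP's "order"
```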
typesetting a long stream of numbers in LaTeX?
How do I typeset 114813069527425452423283320117768198402231770208869520047764273682576626139237031385665948631650626991844596463898746277344711896086305533142593135616665318539129989145312280000688779148240044871428926990063486244781615463646388363947317026040466353970904996558162398808944629605623311649536164221970332681344168908984458505602379484807914058900934776500429002716706625830522008132236281291761267883317206598995396418127021779858404042159853183251540889433902091920554957783589672039160081957216630582755380425583726015528348786419432054508915275783882625175435528800822842770817965453762184851149029376 so that it wraps and stuff? It just goes off the edge of the page now, so that only one line is visible... 84.153.179.167 (talk) 13:52, 22 October 2010 (UTC)
- Insert \allowbreak in places where you want to allow it to break. You'd probably also need to mark the paragraph with \raggedright, as there will be no stretchable spaces on the lines.—Emil J. 14:14, 22 October 2010 (UTC)
- If you want to allow a line break after every digit, and if you need to do this repeatedly, here's a macro:
\def\longnumber#1{\raggedright$\dolongnumber#1\stop}
\def\dolongnumber#1{\ifx\stop#1$\else#1\allowbreak\expandafter\dolongnumber\fi}
\longnumber{1148130695274254524232833201177...965453762184851149029376}
- —Emil J. 14:28, 22 October 2010 (UTC)
- Now I wonder: does anyone have a clue why the syntax highlighter coloured the first occurrence of "#1" black, the second one pink, and the other three blue?—Emil J. 15:19, 22 October 2010 (UTC)
- I think it is confused by the (apparently) unpaired dollar signs. The anomaly changes somewhat if I replace them with \begin{math} and \end{math}, and goes away if I remove them completely. –Henning Makholm (talk) 15:40, 22 October 2010 (UTC)
- I see. Thank you.—Emil J. 16:40, 22 October 2010 (UTC)
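A package-based alternative to the hand-rolled macro, assuming the seqsplit package is available in your TeX distribution:

```latex
% in the preamble:
\usepackage{seqsplit}
% in the body, \seqsplit allows a line break after every character of its argument:
\seqsplit{114813069527425452423283320117768198402231770208869520047764273682576626139237}
```

(the full digit string from the question would go in the argument).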
if you had to brute-force a number but could be told some of the digits
If you had to brute-force a number but could be told some of the digits, would you rather have the n most significant or the n least significant digits - or wouldn't it make a difference? —Preceding unsigned comment added by 93.186.23.239 (talk) 14:07, 22 October 2010 (UTC)
- If you do not know (or suspect) anything else about the number, it makes no difference. If you do, it depends on the other expected properties of the number.—Emil J. 14:37, 22 October 2010 (UTC)
- all right, you're trying to factor a large composite number into its exactly two large random primes, and I'm telling you about the n most or least significant digits of the larger of them. Is the only reason you would prefer the former that the last digit is odd, so you already have slightly more information about it? Or would you have any other reason to prefer one over the other? Thank you.
- You also know that it is coprime with 5 (if the digits are decimal digits). This is already a good reason (just oddness of primes essentially halves your search space if given the most significant digits), but I would also prefer the most significant digits because then the brute force requires me to sieve primes out of an interval, which is somewhat easier than to sieve them from the set of numbers with a given ending (however, this is only relevant if you are really going to find the factors by brute force; if you are using a more sophisticated factoring algorithm, it may have quite different preferences as to which extra information on the factors is more useful, and it may well happen that it cannot use either of them).—Emil J. 15:37, 22 October 2010 (UTC)
- I am struggling to see how you can say anything useful about the factorisation of the number if you are only told some of its digits and that it is a semiprime. For example, if you know the number is 1231x, it could be 13 x 947 = 12311, or 7 x 1759 = 12313 or 109 x 113 = 12317 or 97 x 127 = 12319. Gandalf61 (talk) 16:04, 22 October 2010 (UTC)
- No, that wouldn't be an example of the kind the OP is describing. An example may be that you need to factor 12317 and know that one of the factors is 11x. –Henning Makholm (talk) 16:19, 22 October 2010 (UTC)
- Yes (OP here, my IP has changed though). If you were trying to factor 5725797461 would you rather know that the larger of them begins 75... or ends ...679 -- you only get to learn one of these things. Extrapolating, would you rather learn the first or the last hundred digits of a 1000-digit prime factor of an even larger composite number you were trying to factor into its exactly two large prime factors? 84.153.222.131 (talk) 18:03, 22 October 2010 (UTC)
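To make this concrete: the OP's example factors as 5725797461 = 75659 × 75679 (both prime, easily confirmed by trial division), and either kind of hint cuts the brute-force search down to a thin arithmetic slice. A rough Python sketch:

```python
N = 5725797461  # = 75659 * 75679, both prime

# Hint A: the larger factor is a 5-digit number beginning "75".
# Search the interval [75000, 76000), odd candidates only:
from_prefix = [d for d in range(75001, 76000, 2) if N % d == 0]
print(from_prefix)   # [75659, 75679] -- both prime factors happen to begin with 75

# Hint B: the larger factor ends in "679".
# Only every 1000th number qualifies, so step by 1000:
from_suffix = [d for d in range(679, 100000, 1000) if N % d == 0]
print(from_suffix)   # [75679]
```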
Quotients
I'm looking for a command in either Maple or Maxima to compute the quotient of a polynomial ring by a gradient ideal. I can do some simple examples by hand, but I don't know a general method. For example, let ƒ(x,y) = x^2 + y^k where k is a positive integer. I have
R[x,y]/⟨∂ƒ/∂x, ∂ƒ/∂y⟩ = R[x,y]/⟨2x, ky^(k−1)⟩ ≅ span{1, y, ..., y^(k−2)}.
Provided that the gradient vector field has an algebraically isolated zero at x = y = 0 the result will be a finite dimensional real vector space. Algebraically isolated zero means that the zero is isolated when x and y are considered as complex variables, and not just real. For example, can anyone calculate the vector space (called the local algebra) when ƒ(x,y) = x^3 + xy^k?
More importantly, does anyone know a command in either Maple or Maxima to compute the quotient? — Fly by Night (talk) 15:10, 22 October 2010 (UTC)
- I know what an ideal is, and I know what a gradient is, but what is a "gradient ideal"? Neither article so much as mentions the other word. –Henning Makholm (talk) 15:46, 22 October 2010 (UTC)
- The thing in the denominator of my quotients above. For a function ƒ : R^n → R the gradient ideal of ƒ is given by ⟨∂ƒ/∂x_1, …, ∂ƒ/∂x_n⟩, the ideal generated by the first-order partial derivatives of ƒ.
- It should really be thought of as an ideal in the ring of analytic function germs, but you can get away with thinking of R[x,y] instead. It's quite a common term. Google Scholar returns about 140 hits. — Fly by Night (talk) 15:54, 22 October 2010 (UTC)
- It confused me that it looked as if "gradient ideal" was linked to something that ought to explain that particular construction. Also, I'm not used to the angle bracket notation here; the texts I grew up with used round parentheses for "the ideal generated by such-and-such". –Henning Makholm (talk) 16:09, 22 October 2010 (UTC)
- Ah, I see. Looking around on Google seems to show a mixture of rounded and angled brackets for the gradient ideal. Maybe it's not common in traditional ring theory, but the gradient ideal is an application of ring theory to another subject. So you'd have non-ring-theorists using rings and ideals. Maybe that would explain the non-familiar notation. — Fly by Night (talk) 17:59, 22 October 2010 (UTC)
- In your example : When you quotient out (xy), all mixed terms drop out, so you have, R(1,x,y,x²,y²,x³,y³,...) left. Now quotient out (3x²+y^k). Then you can rewrite y^(k+1) as -3x²y which drops out because xy=0. Similarly, x³=x(-1/3)y^k=0. Then you're left with R(1,x,x²,y,y²,...,y^k), but x² is (-1/3)y^k, so your final space is R(1,x,y,y²,...,y^k).
- This is hardly an example of a general method, of course. And the multiplicative structure of the quotient ring is lost when you write it simply as a vector space, but I suppose that is what you want? –Henning Makholm (talk) 16:09, 22 October 2010 (UTC)
- It's a method to calculate the index at a singular point of a gradient vector field. You work out the quotient and then make a multiplication table of the spanning elements. You fill it in, modulo the gradient ideal. Then you find a linear map that takes the class of the determinant of the Hessian of ƒ to 1. The simplest one would send [det(Hessƒ)] to 1 and every other class to 0. This map changes the k-by-k multiplication table into a k-by-k matrix, whose signature is the index of the singular point of the gradient vector field. See here if you're interested. (N.B. They use round brackets for ideals in this article.) — Fly by Night (talk) 18:12, 22 October 2010 (UTC)
- You might want to try the SINGULAR CAS. I believe that it specialises in this sort of calculation.--Salix (talk): 19:44, 22 October 2010 (UTC)
- I've just looked at the website, and it looks perfect. Thanks a lot Salix. I shall download it now. — Fly by Night (talk) 20:04, 22 October 2010 (UTC)
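For readers without SINGULAR: any system with Gröbner bases can exhibit the local algebra by listing the standard monomials, i.e. those divisible by no leading monomial of the basis. A rough sketch in Python with SymPy for ƒ(x,y) = x^2 + y^k with k = 4 (the search box size is an ad-hoc choice):

```python
from itertools import product
from sympy import symbols, groebner

x, y = symbols('x y')
f = x**2 + y**4                       # the example f(x,y) = x^2 + y^k with k = 4

# Groebner basis of the gradient ideal <df/dx, df/dy>
G = groebner([f.diff(x), f.diff(y)], x, y, order='grevlex')
leads = [g.as_poly(x, y).monoms(order='grevlex')[0] for g in G.exprs]

def standard(m):
    """A monomial spans the local algebra iff no leading monomial divides it."""
    return not any(all(m[i] >= l[i] for i in range(2)) for l in leads)

box = 10                              # ad-hoc bound, large enough for this example
basis = [m for m in product(range(box), repeat=2) if standard(m)]
print(basis)   # [(0, 0), (0, 1), (0, 2)]: the classes of 1, y, y^2; dimension k-1
```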
Scenario: Traveling to Pandora as seen in James Cameron's movie Avatar
I've been trying to figure out how long it would take to get to the fictional moon Pandora from Earth in the movie Avatar—at a specific faster-than-light speed—but the unit conversions have been tedious. My problem is based on a few assumptions and definitions. I've laid them out as follows:
- Assumptions & Considerations:
- Assume that faster-than-light travel is possible.
- Either assume that time dilation does not occur or is not a concern.
- Either assume that I'm not killed by the speed or that it is not a concern.
- Assume there is no need for acceleration/deceleration time.
- Definitions:
- The distance from Earth to Pandora is 4.37 light years.
- One year is 31,557,600 seconds.
- The speed of light is 186,282.397 miles per second.
- Givens:
- The speed I'm traveling at is 5 trillion miles per nanosecond.
- Problems:
- I would like the speed converted to miles per second—no rounding please.
- I would like to know how long it would take me to get there—no rounding please.
- I would like a real world comparison as to how quick that time is (example, 0.25 seconds is the blink of an eye).
--Melab±1 ☎ 19:02, 22 October 2010 (UTC)
- Google is neat for unit conversions: 4.37 light years / 5e12 miles/ns, assuming short-scale trillions. –Henning Makholm (talk) 19:16, 22 October 2010 (UTC)
- Thank you, but I need all three of my problems answered and I also do not know if Google's unit conversions use the definitions I have defined. --Melab±1 ☎ 19:48, 22 October 2010 (UTC)
- Well, OK. 5 trillion is usually 5×10^12, and this occurs 10^9 times per second, giving an exact speed of 5×10^21 miles per second. At your definition, a light year is 5.87862537×10^12 miles (for a standardised definition, see light year). That makes a total distance of 2.56895929×10^13 miles, to a level of accuracy that's only true if Pandora was exactly 4.37 light years away. Divide the latter by the former, and you get 5.13791857 nanoseconds. That's less than the time it takes light to cover two metres, to give an example. That really short period of time is quite hard to quantify in everyday terms. Hope I've done this right, - Jarry1250 [Who? Discuss.] 20:23, 22 October 2010 (UTC)
- I agree with the above calculations. Mine are... 5 trillion miles per nanosecond = 5*10^12 miles / (10^-9 sec) = 5*10^21 miles / sec = 5,000,000,000,000,000,000,000 miles / sec. 4.37 light years * 31,557,600 seconds / year * 186,282.397 miles / second = 5,878,625,371,567.2 miles * 4.37 = 25,689,592,873,748.664 miles. Distance = rate * time, so time = distance / rate (using the nonrelativistic approximation you seem to want). Here, travel time is 25,689,592,873,748.664 miles / (5*10^21 miles / sec) = 0.0000000051379185747497328 sec = 5.1379185747497328 nanoseconds. For scaling, about 5.14 nanoseconds in a second is the same ratio as about 2.5 minutes in a millennium. It should be noted that you're asking us to use wayyyyy too many significant figures in these calculations, and the speed is incredibly high. 67.158.43.41 (talk) 05:00, 23 October 2010 (UTC)
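The arithmetic above is easy to double-check mechanically; a minimal sketch using the OP's own definitions:

```python
# Using the definitions given: one year = 31,557,600 s, c = 186,282.397 mi/s
light_year_miles = 31_557_600 * 186_282.397    # ~5.8786e12 miles
distance = 4.37 * light_year_miles             # Earth to Pandora, in miles
speed = 5e12 / 1e-9                            # 5 trillion miles per nanosecond -> mi/s

print(speed)                    # ~5e21 miles per second
print(distance / speed * 1e9)   # travel time: ~5.1379 nanoseconds
```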
- But faster-than-light travel isn't possible. Traveling exactly at the speed of light isn't possible either, unless you have zero mass or infinite energy: your relativistic mass increases with your speed, so the faster you go the more energy you need to go faster. Close to the speed of light you would need almost infinite amounts of energy to accelerate your almost infinite mass. See this section of the Special Relativity article. Some theories hypothesise particles with imaginary mass (tachyons) travelling faster than the speed of light. Why don't you just assume the speed to be s, and the distance to be d? — Fly by Night (talk) 10:01, 23 October 2010 (UTC)
Generalization of ordinals
In ZFC, ordinals are equivalence classes of sets, since every set can be well-ordered in ZFC. In ZF+~AC, there are sets that cannot be well-ordered. Is there any way to organize these sets into equivalence classes that provide some notion of size similar to the ordinals? 99.237.180.170 (talk) 19:07, 22 October 2010 (UTC)
- I'd quibble with the way you put it. There's no such thing as "sets in ZFC" or "sets in ZF+~AC"; there are only sets, and AC is either true of them, or it is not.
- That said, yes, you can certainly get objects corresponding to cardinality, working in ZF alone. What you do is take equivalence classes up to the existence of a bijection. Unfortunately these are proper classes, so you use the Scott trick to cut them down to sets. --Trovatore (talk) 19:13, 22 October 2010 (UTC)
- I think your terminology quibble is more Platonic than the mainstream mathematical culture makes it fashionable to be. But in that case, if you accept that the ZF axioms are true, then both of ZFC and ZF¬C will be consistent and thus have models. And the elements of such a model's universe are usually known as "sets" in that model, even though you may prefer to reserve the word for something more metaphysical. –Henning Makholm (talk) 19:31, 22 October 2010 (UTC)
- Well, those objects are sets in the sense of the model, not in the sense of the theory. It's a critical distinction. --Trovatore (talk) 21:08, 22 October 2010 (UTC)
- Like Henning Makholm, I stand by my original use of the word "sets." I don't see what you mean by "sets in the sense of the theory." To me, sets are objects described by the axioms of a set theory. I did consider using cardinality as you suggested, but I was hoping for something more specific. Maybe something with another axiom added to ZF¬C might work better for my purposes. Any more suggestions? 76.67.74.196 (talk) 22:04, 22 October 2010 (UTC)
- The correct phrasing is ZFC proves that every set can be wellordered, not every set can be wellordered in ZFC. --Trovatore (talk) 23:17, 22 October 2010 (UTC)
- I suspect what you mean is the correct phrasing is
- "{Every set can be well-ordered} in ZFC
- rather than
- "Every set can be {well-ordered in ZFC}.
- (The latter being meaningless.) Michael Hardy (talk) 16:57, 23 October 2010 (UTC)
- No, I really don't. The mathematical standard is to talk like a Platonist whether you actually are one or not. This saves an immense amount of grief when it comes to things like expositions of the incompleteness theorems.
- It's far cleaner in such cases to use the standard language, where a flat assertion of a statement is the same as the claim that the statement is true (disquotational truth) rather than provable. Then you can always make some sort of disclaimer afterwards about what interpretation you really have in mind. --Trovatore (talk) 17:49, 23 October 2010 (UTC)
Shoelace formula
Why does the shoelace formula work? --75.33.217.61 (talk) 19:59, 22 October 2010 (UTC)
- Our article Shoelace formula explains (um, no it doesn't, but one easily sees that it is the case) how the formula can be expressed as a sum of cross products where each can be seen to be ±2 times the area of the triangle formed by the origin and two successive vertices.
- The easiest case to convince oneself of is when the polygon is convex and the origin lies inside it. Draw lines from each vertex to the origin; this cuts the polygon into a number of triangles that each corresponds to a cross product in the formula, and all of the cross products have the same sign.
- For general polygons, the easiest approach may be to cut the polygon into triangles, and then show by induction on the number of triangles that the formula works and that the sum of the cross products is positive if the vertices are taken in clockwise (or is it anticlockwise? Never mind, one of them will turn out to work) order around the polygon. The base case for a single triangle is just a matter of calculation (the area of triangle ABC is ((B−A)×(C−A))/2 up to sign, and the numerator can be rewritten using the distributive and anticommutative laws for the cross product, to give just the terms in the shoelace formula). In the induction case, add one triangle that shares one or two edges with the polygon you already have; the cross products corresponding to the shared edges will cancel each other out. –Henning Makholm (talk) 20:20, 22 October 2010 (UTC)
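The cross-product sum described above translates directly into code; a minimal sketch (the vertex order determines the sign):

```python
def shoelace_area(pts):
    """Signed area of a simple polygon, vertices given as (x, y) pairs.
    Each term x1*y2 - x2*y1 is the cross product for one edge, i.e.
    +/- 2 times the area of the triangle formed by the origin and that edge."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return s / 2.0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]        # anticlockwise
print(shoelace_area(square))                     # 1.0
print(shoelace_area(list(reversed(square))))     # -1.0
```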
Reducing Transformation Arrays
Hi all, I'm looking for a way to reduce N 'transformation arrays' to a single transformation array without running through each individual transformation. Here's an example:
Given the alphabet, alpha = "abcd" and the transformation array, t1 = [4, 3, 2, 1]:
f(alpha, t1) = "dcba"
given t2 = [4, 2, 3, 1]
f(f(alpha, t1), t2) = "acbd"
So the first transformation we did was [4, 3, 2, 1] and the second was [4, 2, 3, 1] and the result was [1, 3, 2, 4]
I'm looking for a function, let's call it g(t1, ..., tN) to compute this. For instance:
g([4, 3, 2, 1], [4, 2, 3, 1]) = [1, 3, 2, 4]
Ideally I'd like for g(t1, ..., tN) to have a time complexity of O(n). If I actually do each of the transformations I can achieve O(n*m) where m is the length of the alphabet; thus, a time complexity of this, or worse, won't really help.
I'm not entirely sure if such a function can (does?) exist, but any ideas would be much appreciated!
Thanks!
24.147.249.232 (talk) 20:59, 22 October 2010 (UTC)
- You cannot do better than O(nm); otherwise you don't have time to read all of the inputs!
- (By the way, what you're referring to as a "transformation" is commonly known as a permutation) –Henning Makholm (talk) 21:06, 22 October 2010 (UTC)
- Oh. Good point! I suppose a better question would be what's the optimal way to compute g(...)? I was hoping for a method where I could apply simple operations to each element, but that seems unlikely. The best method I've come up with, thus far, is (apologies for the poor pseudocode formatting!):
- for(i in 0..m)
-     for(j in 0..n)
-         ndeck[j] = deck[permutations[i][j]]
-     for(j in 0..n)
-         deck[j] = ndeck[j]
- Which is okay, but could be a lot faster if the last copy step didn't need to occur. Thus, perhaps my original question should have asked whether it's possible to do that innermost step in-place.
- 24.147.249.232 (talk) 22:16, 22 October 2010 (UTC)
- Is this really the critical part of your entire program? Remember, premature optimization is the root of all evil. If it is critical, I would suggest unrolling the outer loop twice and shifting back and forth between two arrays:
- for(i in 0,2,4,...,m-1)
-     for(j in 0..n)
-         decka[j] = deckb[permutations[i][j]]
-     for(j in 0..n)
-         deckb[j] = decka[permutations[i+1][j]]
- Alternatively, you might also try to trace each element backwards through the permutations:
- for(j in 0..n)
-     k = j
-     for(i in m..0)
-         k = permutations[i][k]
-     out[j] = k
- which seems to be faster under a unit cost model, but is likely to have horrible cache locality. –Henning Makholm (talk) 23:00, 22 October 2010 (UTC)
- Does the last copy step need to occur? It obviously depends on what goes on to be done with it and how modularised your code as a whole is, but if you've got space in memory for both deck and ndeck, can't you just carry on using ndeck once you're on the other side of the loop?
- Thinking about it from another direction, would some recursion help? If you had a function to work out the end product of applying two re-orderings sequentially (even on a dummy "initial order" which you could then sub in your real data for at the end), you could just keep piling in pairs: t1 and t2, t3 and t4 ... up to tN-1, tN. Then you do it again to (the result from doing t1t2) and (the result from doing t3 and t4), etc. until you finally end up at the eventual answer. Obviously this is easiest to work for N a power of 2, but it's not difficult for some other N. Then again, that's probably only any practical use in speeding things up if you've got access to a parallel architecture where you can do all the initial pairs in parallel and then gradually bring them together. (So you use N/2 processes in your first go round, then N/4, then N/8 etc.) I'm rusty on all this, but it feels like it ought to be possible. --81.153.109.200 (talk) 10:36, 23 October 2010 (UTC)
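Both the backward-tracing idea and the pairwise-recursion idea reduce to composing two permutations at a time; a minimal sketch in Python (0-based indices, so the question's [4, 3, 2, 1] becomes [3, 2, 1, 0]):

```python
def apply_perm(seq, t):
    """Apply one permutation: result[j] = seq[t[j]]."""
    return ''.join(seq[j] for j in t)

def compose(p, q):
    """One permutation equivalent to applying p first, then q:
    deck2[j] = deck1[q[j]] = deck0[p[q[j]]]."""
    return [p[j] for j in q]

t1 = [3, 2, 1, 0]                          # 0-based [4, 3, 2, 1]
t2 = [3, 1, 2, 0]                          # 0-based [4, 2, 3, 1]
g = compose(t1, t2)
print(g)                                   # [0, 2, 1, 3], i.e. 1-based [1, 3, 2, 4]
print(apply_perm(apply_perm("abcd", t1), t2))  # acbd
print(apply_perm("abcd", g))                   # acbd
```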
GMAT scoring system
This page seems as good as any for my question, so here goes: On the math portion of the GMAT[3] I scored 50/95th percentile. Had I answered the last question correctly (it was not very hard; I had made my choice and was ready to press Next), was it probable that I might've raised my score? Thanks. 67.243.7.240 (talk) 21:31, 22 October 2010 (UTC)
- I presume that by "50/95" you mean a score of 50 at the 95th percentile. You can see in the link you provided how an extra mark would increase the percentile score, assuming that you get one mark per correct answer. 92.24.178.5 (talk) 11:03, 23 October 2010 (UTC)
Logic - deduction of axioms
Is it possible to deduce from and ? I'm studying on a logic theory course and I've been asked whether it is possible to deduce the third axiom we use from the first two? I strongly believe it isn't because otherwise we wouldn't need to take it as an axiom, but I'm not sure how to go about showing it. I know of most of the basic theorems by now (deduction, completeness etc), but I can't see any obvious way to get there. Could anyone please suggest anything? Thank you, 62.3.246.201 (talk) 23:20, 22 October 2010 (UTC)
- A typical way to prove that one axiom does not follow from the others, would be to exhibit a non-standard interpretation of the connectives that makes the two latter axioms (K and S, among friends) into tautologies but the first one into a non-tautology (while tautologies are still closed under whatever inference rules you have, such as modus ponens). This should be particularly easy because the two axioms you assume do not mention ¬ at all, so the only thing you need from an alternative truth table for ¬ is that it makes into a non-tautology. –Henning Makholm (talk) 23:39, 22 October 2010 (UTC)
I'm sorry, I don't quite follow - what do you mean by 'exhibit a non-standard interpretation of the connectives'? Thank you for the help :) 131.111.16.20 (talk) 11:53, 23 October 2010 (UTC)
- Okay, let's take it more slowly then. A general method to show that some wff A cannot be proved from B, C, D is to define some set S of "special" well-formed formulas such that
- B, C, D are all in S
- A is not in S
- For each rule of inference (such as modus ponens, which I suspect is actually the only one you have at the moment), if the rule allows you to infer a wff from wffs in S, then the wff you infer is also in S.
- Then it's quite easy to show that no formal proof can ever contain a wff outside S, and in particular there's no formal proof that concludes A. (This ought to remind you of how soundness of the proof system was shown, where S was taken to be "all logically valid wffs").
- I assume that by this point you have seen truth tables for ¬ and → and learned how to use them to determine whether a given formula is a tautology. My suggestion is then to invent your own crazy truth tables for the symbols, and then let S be all of those wffs that would have been "tautologies" under your crazy truth tables. –Henning Makholm (talk) 15:11, 23 October 2010 (UTC)
- The axioms describe properties of the logical symbols and . If you can find an interpretation that is true under two of the axioms, but not under a third, you have shown that the third does not follow from the other two. Try e.g. a three-valued logic. --Stephan Schulz (talk) 12:06, 23 October 2010 (UTC)
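To illustrate the method with two-valued tables (assuming the third axiom is the usual (¬p → ¬q) → (q → p); the post's exact formulas did not survive formatting): keep the ordinary table for →, but replace the table for ¬ by something crazy, say ¬p = p. Axioms K and S contain no ¬, so they stay tautologies under any table for ¬, while the third axiom becomes (p → q) → (q → p), which fails at p false, q true. A sketch:

```python
from itertools import product

IMP = {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 1}  # ordinary truth table for ->

def axiom3(p, q, NOT):
    # (not p -> not q) -> (q -> p): the usual third axiom (an assumption here)
    return IMP[(IMP[(NOT(p), NOT(q))], IMP[(q, p)])]

standard_not = lambda p: 1 - p
crazy_not = lambda p: p          # a non-standard interpretation of negation

def tautology(f, NOT):
    return all(f(p, q, NOT) for p, q in product((0, 1), repeat=2))

print(tautology(axiom3, standard_not))  # True: it really is a tautology
print(tautology(axiom3, crazy_not))     # False: fails under the crazy table
```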
October 23
the four-color theorem doesn't seem right to me
I can always come very close to producing a counter-example. What is the probability that the four-color theorem really is wrong, as it seems to me? If there is a simple, intuitive proof, maybe you could tell me it? 84.153.222.131 (talk) 15:40, 23 October 2010 (UTC)
- Have you read our Four color theorem article? Basically, there is no known proof that is small enough to be checked by hand, but several independent computer-assisted proofs and at least one computer-verified formal proof have been constructed, and it is pretty implausible that they are all flawed.
- I cannot imagine any meaningful sense in which one could assign a definite "probability" (other than 0 or 1) to the theorem being untrue. –Henning Makholm (talk) 16:00, 23 October 2010 (UTC)
- See Bayesian inference for a paradigm in which assigning such probabilities makes sense. Robinh (talk) 16:10, 23 October 2010 (UTC)
- I said "I cannot imagine any meaningful sense in which...". Pulling numbers out of one's ass and pretending that they have anything to do with the subject under discussion does not count. –Henning Makholm (talk) 16:53, 23 October 2010 (UTC)
There is a relatively simple proof of the weaker Five color theorem. It does take several big steps to fully understand, though. If you want to give it a shot, here's a list of things you can look at:
- Understand the translation of the map coloring problem into that of coloring graphs. This is explained in this section of the Four color theorem article. If you aren't already familiar with graph theory, you might want to play with those a little; a famous theorem that might help you get acquainted with graphs is the condition under which an Eulerian path exists (though that is not crucial to understanding the proof of the five color theorem).
- Study the Euler characteristic for planar graphs. If a graph is planar, then we can count not only edges and vertices, but regions enclosed by the edges as well. It turns out that these three quantities satisfy a nice rule, V-E+F=2. The proof of this fact is outlined in that article.
- Figure out why the Euler characteristic implies E ≤ 3V − 6 (assuming the planar graph has at least 3 vertices). This follows algebraically from the facts that every edge borders at most two regions and every region is bounded by at least 3 edges, so 3F ≤ 2E; substitute that into V − E + F = 2.
- From this fact, prove that every planar graph must have some vertex of degree 5 or less.
- If you are sufficiently well versed in proof by induction, a proof that every planar graph can be colored with 6 colors should follow naturally, knowing that last fact. Turning this into a proof that every planar graph can be colored with 5 colors requires a clever trick, which is explained in the article on the five color theorem. Why can't the same trick be used to prove the Four color theorem?
HTH. --COVIZAPIBETEFOKY (talk) 16:29, 23 October 2010 (UTC)
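Steps 4 and 5 above can be turned directly into an algorithm. A minimal sketch (the graph, function name, and vertex labels are mine, not from the thread): repeatedly find a vertex of degree ≤ 5, remove it, color the rest recursively, then give it one of 6 colors unused by its neighbours.

```python
def six_color(adj):
    """Six-color a planar graph given as {vertex: set of neighbours}.

    Mirrors the induction in the thread: a planar graph always has a
    vertex v of degree <= 5; remove v, color the remaining graph, then
    put v back and pick a color none of its <= 5 neighbours uses.
    """
    if not adj:
        return {}
    # A vertex of degree <= 5 is guaranteed to exist if adj is planar.
    v = next(u for u in adj if len(adj[u]) <= 5)
    rest = {u: adj[u] - {v} for u in adj if u != v}
    coloring = six_color(rest)
    used = {coloring[u] for u in adj[v]}
    coloring[v] = next(c for c in range(6) if c not in used)
    return coloring

# The octahedron graph: planar, every vertex has degree 4
# (vertices 0/5, 1/3, 2/4 are the opposite pairs).
octahedron = {
    0: {1, 2, 3, 4}, 1: {0, 2, 4, 5}, 2: {0, 1, 3, 5},
    3: {0, 2, 4, 5}, 4: {0, 1, 3, 5}, 5: {1, 2, 3, 4},
}
coloring = six_color(octahedron)
assert all(coloring[u] != coloring[w] for u in octahedron for w in octahedron[u])
```

Getting from 6 colors down to 5 needs the recoloring trick (Kempe chains) described in the five color theorem article; the same trick breaks down at 4 colors, which is exactly why the four-color proof is so much harder.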
Help me impress my friend?
Hey I'm trying to pretend I know something about calculus to impress my friend. Technically that's why I want the answer to this question, it's not (my) homework.
So..
(1 ÷ ⅔) × sqrt(X) × constant = (3/2)X
What is the value of constant?? It's from a first year calculus course, I've already taken it, but I don't know where to start on this question. —Preceding unsigned comment added by 174.89.36.96 (talk) 16:07, 23 October 2010 (UTC)
- This is not calculus. It is algebra. You need to isolate the X on the right. Multiply both sides by 2/3 and you'll have a very simple formula. However, the "constant" is not a constant. -- kainaw™ 16:28, 23 October 2010 (UTC)
- (e/c) This isn't calculus, it's algebra. There is no differentiation or integration involved. Writing k for your constant, your problem gives you (1 ÷ ⅔) × sqrt(x) × k, i.e. (3/2)·sqrt(x)·k.
- You claim that this is equal to 3x/2, i.e. (3/2)·sqrt(x)·k = (3/2)·x.
- But this doesn't really make sense. You see, x is usually thought of as a variable. That means you would solve the equation for x and not k. (k was a constant after all, and was fixed.) A more likely solution would be x = k².
- If there's something you forgot to mention then let us know. It will change the problem and hence change the answer. — Fly by Night (talk) 16:30, 23 October 2010 (UTC)
Ok thanks a lot guys, I asked if there are more details, but that seems to be the whole question. For now I guess this is resolved, I was hoping I was missing some advanced calculus technique that would let me find a genuine constant.
Hopefully she's impressed —Preceding unsigned comment added by 69.165.137.230 (talk) 17:08, 23 October 2010 (UTC)
- If she knows calculus, she will quickly spot your naive error: your assumption that "any complicated math is calculus." Once you are outed thusly, she will see through your charade and will probably be unimpressed by you (and will likely consider your stunt a poor indicator of character). To avoid this eventuality, you should educate yourself first by reading our articles on algebra and calculus (and your textbooks), and work out as many practice problems as you can. Girls (or guys) who would be impressed by pseudo-calculus are of dubious quality. You should stick to actually knowing things; this will impress everybody, including those girls who actually know calculus. Nimur (talk) 18:03, 23 October 2010 (UTC)
secure hash
How do I securely hash two numbers, each over 100 digits long, into a number between 1 and 10 with equal distribution/probability? The easier the hash is to explain/follow, the better. This is not homework. Thank you! 84.153.221.42 (talk) 18:41, 23 October 2010 (UTC)
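One standard recipe for this kind of question (a sketch, not an answer from the desk, and the function name and encoding are my own choices): feed an unambiguous encoding of the two numbers to a cryptographic hash such as SHA-256, then reduce the digest modulo 10. Because 2^256 is not a multiple of 10 the distribution is not perfectly uniform, but the bias is on the order of 10/2^256, which is negligible for any practical purpose.

```python
import hashlib

def hash_to_bucket(a: int, b: int) -> int:
    """Hash two (arbitrarily large) integers into a bucket from 1 to 10."""
    # The separator makes the encoding unambiguous, so (12, 3) and
    # (1, 23) hash differently.
    data = f"{a}|{b}".encode("ascii")
    digest = hashlib.sha256(data).digest()
    # Interpret the 256-bit digest as an integer and fold it into 1..10.
    return int.from_bytes(digest, "big") % 10 + 1

# Two numbers with more than 100 digits each:
x = 10**120 + 7
y = 10**115 + 13
print(hash_to_bucket(x, y))  # deterministic value in 1..10
```

"Secure" here means it is infeasible to work backwards from the bucket to the inputs or to craft inputs that hit a chosen bucket more often than chance; for hiding small or guessable inputs you would additionally need a secret salt or a keyed hash (e.g. HMAC).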