Wikipedia:Reference desk/Mathematics
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
October 10
Number Theory
What is the maximum integer that divides all numbers of the form p^4 - 1 where p can be any prime greater than 5? In how many ways can you solve this question?
- Can we have evidence of you trying to solve this question please? Have you tried particular examples to limit the number and then tried using modular arithmetic? Dmcq (talk) 12:09, 10 October 2009 (UTC)
- This is not homework. Still, I can solve it through factorisation: I find the prime factors of p - 1 and p + 1 and find the answer. I still want to know whether there are other ways to do it, please. —Preceding unsigned comment added by 58.161.138.117 (talk) 12:22, 10 October 2009 (UTC)
- So you mean this: "For any prime p > 5, 2^4 divides the number p^4 - 1 = (p - 1)(p + 1)(p^2 + 1), for all the three factors are even, and either the first or the second is divisible by 4. Either p - 1 or p + 1 is divisible by 3, since p is not. Finally, if neither p - 1 nor p + 1 is divisible by 5, then p is either 2 or 3 (mod 5), so p^2 + 1 is divisible by 5, and in any case 5 enters in the factorization too. Thus 240 divides all p^4 - 1, and it is the maximum, for (7^4 - 1)/240 and (11^4 - 1)/240 are relatively prime". --84.221.80.207 (talk) 13:42, 10 October 2009 (UTC)
- Yes. —Preceding unsigned comment added by 58.161.138.117 (talk) 14:30, 10 October 2009 (UTC)
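The value 240 can also be sanity-checked by brute force; a quick sketch in Python (the upper bound 1000 is an arbitrary choice, not part of the discussion):

```python
from math import gcd

def is_prime(n):
    """Trial-division primality test; fine for a quick check."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# gcd of p^4 - 1 over all primes 5 < p < 1000 (gcd(0, x) == x starts it off)
g = 0
for p in range(7, 1000):
    if is_prime(p):
        g = gcd(g, p**4 - 1)

print(g)  # 240
```

The gcd already stabilises at 240 after p = 7 and p = 11, matching the proof above.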
Do the intuitionists agree that an equation of the form (x − a)(x − b) = 0 has two solutions only?
HOOTmag (talk) 18:32, 10 October 2009 (UTC)
- They certainly would agree that it does not have a set of three different roots. The harder thing is to show that every solution is either equal to a, or is equal to b. I fear that would come down to how the real numbers themselves are treated. If a and b are not equal but also not separated, then it does not seem possible to prove intuitionistically that every root of the quadratic is equal to one of them or the other. But I could be wrong about that last point. — Carl (CBM · talk) 18:48, 10 October 2009 (UTC)
- Since they certainly agree that every number not equal to zero has an inverse, they must agree that the product of two given numbers which are not equal to zero is not equal to zero, so they must agree that for every x not equal to a nor to b, (x-a)(x-b) is not equal to zero. Hence Q.E.D., right? HOOTmag (talk) 19:39, 10 October 2009 (UTC)
- In symbolic terms, you are arguing that from the assumption (x-a)(x-b)=0, you can prove
- (*) ¬(x ≠ a ∧ x ≠ b)
- I agree that seems fine intuitionistically. But ¬(x ≠ a) does not imply that x = a intuitionistically (when a and x are real numbers), so (*) will not suffice for the thing we are trying to prove, namely that x=a or x=b. I am not a true expert in intuitionism, so there may be some alternative method of proof, but I don't see it myself.
- Heuristically, if you could prove intuitionistically that (x-a)(x-b)=0 implies "x=a or x=b", this would mean that from the knowledge that x satisfies (x-a)(x-b)=0, you could effectively determine which one of a or b the number x is equal to. But if a and b are distinct real numbers that are not separated, I cannot see how such a decision could be possible. But this is just a heuristic argument. And, by changing what one means by "x=a", one might be able to work around this. — Carl (CBM · talk) 21:29, 10 October 2009 (UTC)
- P.S. Note that if you add an apartness condition on a and b, then everything is fine. If a is apart from b, and (x-a)(x-b)=0, then (by the key "comparison property" of apartness), either x is apart from a or x is apart from b. If x is apart from a then (x-a) is apart from zero, so not equal to zero, in which case we get x-b = 0. Similarly if x is apart from b then x-a = 0. But when a is not apart from b, I do not see how to start the proof. When I say "separated" above it is a synonym for "apart". — Carl (CBM · talk) 21:35, 10 October 2009 (UTC)
- Look, when I say that every equation of the form (x − a)(x − b) = 0 has two solutions only, I simply mean:
- (1) For every x equal to a or to b: (x − a)(x − b) is equal to zero.
- (2) For every x other than a,b: (x − a)(x − b) is not equal to zero.
- The intuitionists surely accept (1). My question is about whether they accept (2). Apparently, they do, because they surely agree that the product of two given numbers which are not equal to zero is not equal to zero. Why? because they surely agree that the set of numbers we're talking about (e.g. the rational numbers, or the real numbers, etc.) constitutes a field (hence constitutes an Integral domain, which consequently has no Zero divisor), Right?
- HOOTmag (talk) 23:09, 10 October 2009 (UTC)
- Yes, they would accept that "if x ≠ a and x ≠ b then (x-a)(x-b) ≠ 0". But when you say "(x-a)(x-b) = 0 has only two solutions", I hear the stronger statement "if (c-a)(c-b) = 0 then c = a or c = b". So I am thinking of the contrapositive of the statement you seem to be thinking of. The statement I am thinking of, as far as I can tell, is not intuitionistically provable. — Carl (CBM · talk) 23:43, 10 October 2009 (UTC)
- PS I think the reason I instantly headed for the contrapositive is that the thing you just said doesn't really make sense from an intuitionistic perspective. You said:
- (1) For every x equal to a or to b: (x − a)(x − b) is equal to zero.
- (2) For every x other than a,b: (x − a)(x − b) is not equal to zero.
- But from an intuitionistic perspective there is no reason that every x must fall into one of these two categories. There are some x for which we cannot assert x = a, and simultaneously cannot assert x ≠ a. So an intuitionist would not think that the thing you said actually means that there are only two solutions; it only means that there are only two solutions among those values of x that either equal one of a, b or else are not equal to a and not equal to b. — Carl (CBM · talk) 23:48, 10 October 2009 (UTC)
- As to your first comment: Apparently, for all statements A, B, C which don't include any sign of negation or of other connectives, if the statement "If A then (B or C)" is classically provable, then it is also intuitionistically provable, because the word "or" is interpreted by the intuitionists in a way "weaker" than how the other mathematicians interpret it (i.e. the intuitionistic "or" is the intuitionistic negation of the "and", which is a "weaker" negation).
- As to your second comment, it's a matter of definition: What do I mean by "only"? When I say to the intuitionists: "A if and only if B", I want them to interpret this: "(1) A if B; (2) not-A if not-B". Similarly, when I say to them: "a and b are the only solutions", I want them to interpret this: "(1) a and b are solutions; (2) any number other than a,b is not a solution". Note that the way I define the word "only" doesn't influence the way they define the word "not".
- HOOTmag (talk) 00:57, 11 October 2009 (UTC)
- Even in classical mathematics, when I say, "1 and 2 are the only elements of the set A", I mean "1 ∈ A and 2 ∈ A and, for every c, if c ∈ A then c = 1 or c = 2". As I was saying, this simply involves a contrapositive compared to the statement you seem to think of. But the sentence you are now asking about is not what an intuitionist would mean by "an equation of the form (x − a)(x − b) = 0 has two solutions only", which was the question you originally asked. I do not believe an intuitionist would agree with that quoted sentence, nor would they agree that your later clarification is actually equivalent to it, because your later clarification does not rule out other solutions that are neither equal nor not-equal to a or to b, and so does not actually prove to an intuitionist that there are only two solutions. You are free to change your question, of course, but I was answering the original one.
- The underlying point of asking whether intuitionists agree with a sentence is that you have to let them interpret that sentence the way that they usually would. For example, no intuitionist would agree that A ∨ ¬A is true in general, but they would always agree that ¬¬(A ∨ ¬A) and ¬(A ∧ ¬A). So the way you want them to interpret ↔ is simply not the way that they would interpret it.
- As a side note, the intuitionistic negation of a sentence φ is the sentence φ → ⊥, and A ∨ B is not intuitionistically equivalent to ¬(¬A ∧ ¬B) nor to ¬¬(A ∨ B). The intuitionistic "or" is usually considered stronger than the classical one, because an intuitionistic proof of an "or" statement must explicitly prove one of the two sides, unlike a classical proof, which does not need to show which of the two disjuncts is true. So the intuitionistic "or" is true in fewer cases. Of course I guess you could call this "weaker" but that is the opposite of the terminology in the literature. — Carl (CBM · talk) 01:29, 11 October 2009 (UTC)
- No, I agree with you that any "Or" which is true in fewer cases is a "stronger" Or. I just define the "Or" as a negation of "And", the intuitionists being free to define the negation as they like. Similarly, I don't define: "A is equivalent to B" as I define: "A if and only if B", because when I say: "A is equivalent to B", I mean: "(1) A if B; (2) B if A", whereas when I say: "A if and only if B", I mean: (1) A if B; (2) Not-A if Not-B". Here again, the intuitionists are free to define the negation as they like.
- Back to my original question: when I say: "a and b are the only solutions of the equation", I simply mean: "(1) a and b solve the equation"; (2) any number other than a,b does not solve the equation". I'm sure that every intuitionist would accept both (1) and (2), with respect to my original equation.
- Back to the way you had understood my question: To sum up your opinion, the statement "if (c − a)(c − b) = 0 then c = a or c = b", is not intuitionistically provable. Here I let you interpret (intuitionistically) the "Or" - as you wish, and now let me "sharpen" your opinion summed up above: Assume we have a set such that: (1) a and b belong to the set; (2) any element other than a,b doesn't belong to the set (note that in "my" language I would say that the set contains a and b "only"). Now, your opinion would be as follows: the intuitionists are not convinced (i.e. can't prove) that any element belonging to the set is a or b. Right?
- HOOTmag (talk) 19:19, 11 October 2009 (UTC)
- Yes. — Carl (CBM · talk) 19:30, 11 October 2009 (UTC)
- I guess your opinion will not change if I ask you again about any finite set (and not only about a "pair"-set). According to that, the Axiom of Choice wouldn't have been intuitionistically provable from ZF, even if this axiom had been limited to finite sets only; right? HOOTmag (talk) 20:08, 11 October 2009 (UTC)
- The same problem happens for any finite set. Given that A = {a, b} is a set of real numbers and that c ∈ A, one cannot prove that, in general, c = a ∨ c = b. Further, one cannot prove that there is any natural number r such that there is a bijection from A to {1,2,3,...,r}. — Carl (CBM · talk) 20:47, 11 October 2009 (UTC)
- Again, how about intuitionistically proving the Axiom of Choice from ZF, had this axiom been limited to finite sets only? HOOTmag (talk) 21:12, 11 October 2009 (UTC)
- The axiom of choice has a complex relationship with intuitionistic logic. In intuitionistic type theory, the full axiom of choice is intuitionistically valid. In some intuitionistic systems of set theory, however, the axiom of choice implies the law of the excluded middle, and the proof only uses the axiom of choice for a two-element set each of whose elements has (in the classical sense) at most two elements. See Diaconescu's theorem and the much more thorough SEP entry.
- So I don't want to make any general claims about the axiom of choice or variants thereof without first establishing exactly what underlying axioms are in play. I was hoping that my comment about bijections went far enough in the spirit of AC. — Carl (CBM · talk) 22:32, 11 October 2009 (UTC)
- You most definitely can prove intuitionistically that A = {a, b} and c ∈ A implies c = a ∨ c = b; in fact, this is pretty much the definition of {a, b}. — Emil J. 11:08, 12 October 2009 (UTC)
- Sure, if {a, b} is an abbreviation for {x | x = a ∨ x = b} then you can prove the latter from the former. I was using the notation {a,b} to stand for {x | ¬(x ≠ a ∧ x ≠ b)}. In any case I was talking about generalizations of sets of the latter form. I'll watch my notation in the future. The second claim, about the lack of a bijection, applies to both the case I was thinking of and the one you were thinking of. — Carl (CBM · talk) 11:36, 12 October 2009 (UTC)
- So, in your opinion, it's not intuitionistically provable that {x ∈ R | (x - a)(x - b) = 0} = {a, b}, right?
- P.S. this has been my original question, or - at least - what I meant. HOOTmag (talk) 17:13, 12 October 2009 (UTC)
- Yes. — Carl (CBM · talk) 17:47, 12 October 2009 (UTC)
- Can the intuitionists ever prove the equality of any two sets, one of which is defined by the "{}" with the "|", the other one being defined by the "{}" without the "|"? HOOTmag (talk) 19:25, 12 October 2009 (UTC)
- Of course. {x ∈ N | x ≤ 1} = {0, 1}, or for a less trivial example, {x ∈ N | x^2 = x} = {0, 1}. — Emil J. 09:01, 13 October 2009 (UTC)
- Generally, the zero is not considered as belonging to N. Anyway, in Carl's opinion, it's not intuitionistically provable that {x ∈ R | (x - a)(x - b) = 0} = {a, b}, right? HOOTmag (talk) 10:43, 13 October 2009 (UTC)
(outdent) Zero is a natural number, under any non-obfuscated definition of natural number.
Anyway, I can't speak for Carl, but it is intuitionistically provable that {x ∈ N | (x - a)(x - b) = 0} = {a, b} for natural numbers a, b, and even the analogous statement over the rationals, while it apparently is not provable that {x ∈ R | (x - a)(x - b) = 0} = {a, b}. The difference stems from the fact that reals do not have decidable equality. — Emil J. 11:31, 13 October 2009 (UTC)
- Hang on. It is actually provable that {x ∈ R | x(x - 1) = 0} = {0, 1}, and more generally, it is provable that a < b implies {x ∈ R | (x - a)(x - b) = 0} = {a, b} (due to the intuitionistically provable fact that a < b → ∀x (a < x ∨ x < b)). What is apparently not provable is the statement ∀a ∀b ({x ∈ R | (x - a)(x - b) = 0} = {a, b}), since we do not know that a and b are apart or equal. — Emil J. 11:37, 13 October 2009 (UTC)
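For readers following along, the a < b case can be written out in full (my own write-up of the argument just sketched, not part of the original discussion):

```latex
\text{Assume } a < b \text{ and } (x-a)(x-b) = 0.\\
\text{Cotransitivity of the order gives } a < x \,\lor\, x < b.\\
\text{Case } a < x:\quad x - a \text{ is apart from } 0, \text{ hence invertible, so }
  x - b = (x-a)^{-1}(x-a)(x-b) = 0, \text{ i.e. } x = b.\\
\text{Case } x < b:\quad \text{symmetrically } x = a.\\
\text{Hence } \{x \in \mathbb{R} \mid (x-a)(x-b) = 0\} = \{a, b\}.
```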
- Yes. This is what I was trying to say in my posts dated 21:29, 10 October 2009 and 21:35, 10 October 2009. I need to be more clear somehow. — Carl (CBM · talk) 13:46, 13 October 2009 (UTC)
- Your posts were perfectly clear, this was just me not paying enough attention. — Emil J. 13:55, 13 October 2009 (UTC)
There is a distinction between {x ∈ N | φ(x)} and {x ∈ R | φ(x)} in intuitionistic mathematics that is not present in classical mathematics. In classical mathematics, one freely identifies the set of natural numbers with its image inside the real numbers. In intuitionistic mathematics, the types matter. The natural numbers are usually treated as a decidable set: for any two natural numbers a, b, either a = b or a ≠ b. This is not true for real numbers; it is possible for two real numbers to be neither equal nor not-equal, and moreover it is possible for them to be not-equal but have no positive distance between them. Only when a and b are real numbers with that latter property do I see any difficulty in proving that {x ∈ R | (x - a)(x - b) = 0} is the same set of real numbers as {a, b}. — Carl (CBM · talk) 11:43, 13 October 2009 (UTC)
Here's a sort of counterexample to the original statement (x - a)(x - b) = 0 → x = a ∨ x = b. First, observe that it is equivalent to ab = 0 → a = 0 ∨ b = 0. For definiteness, I'll assume a definition of reals as sequences {a_n}_{n∈N} of rationals such that |a_n − a_m| ≤ 2^(−n) + 2^(−m), with a = b iff |a_n − b_n| ≤ 2^(1−n) for all n. Let P be a decidable predicate on natural numbers (meaning ∀n (P(n) ∨ ¬P(n))), and define by induction
- a_n = 0 if P(k) holds for all k ≤ n, and a_n = 2^(1−k) if k ≤ n is the least number such that ¬P(k).
It is easy to see that a = {a_n} is a real number, and a = 0 implies ∀n P(n). Likewise, let Q be another decidable predicate on natural numbers, and define b_n from Q in the same way.
Again, b = {b_n} is a real, and b = 0 implies ∀n Q(n). On the other hand, ∀n ∀m ¬(¬P(n) ∧ ¬Q(m)) implies a_n b_n = 0 for every n, which easily gives ab = 0. All in all, if we could intuitionistically prove ab = 0 → a = 0 ∨ b = 0, then we could also intuitionistically prove
- ∀n ∀m ¬(¬P(n) ∧ ¬Q(m)) → ∀n P(n) ∨ ∀n Q(n),
and we can't do that. — Emil J. 12:34, 13 October 2009 (UTC)
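The rational-sequence construction above can be played with concretely. Below is a small Python sketch of the sequence a_n; the particular predicate P is an arbitrary example of mine, not part of the original post:

```python
from fractions import Fraction

def a_seq(P, n):
    """a_n = 2^(1-k) for the least k <= n with P(k) false; 0 if P holds up to n."""
    for k in range(n + 1):
        if not P(k):
            return Fraction(2) ** (1 - k)
    return Fraction(0)

# Hypothetical decidable predicate: P(k) fails exactly at k = 5.
P = lambda k: k != 5

a = [a_seq(P, n) for n in range(12)]

# Regularity condition from the post: |a_n - a_m| <= 2^(-n) + 2^(-m)
for n in range(12):
    for m in range(12):
        assert abs(a[n] - a[m]) <= Fraction(2)**(-n) + Fraction(2)**(-m)

# Since P fails at k = 5, a_n = 2^(1-5) = 1/16 from n = 5 on, which eventually
# violates |a_n| <= 2^(1-n); so this a is not equal to 0, as the argument predicts.
assert a[11] == Fraction(1, 16) and a[11] > Fraction(2)**(1 - 11)
```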
And to make it even more concrete: let T be whatever formal theory we want the statement to be formalized in. Assume that T is recursively axiomatizable, has the disjunction property, proves ab = 0 → a = 0 ∨ b = 0, and supports the argument above as well as a modicum of arithmetic; we will show that T is inconsistent.
By Gödel's diagonal lemma, we can find two arithmetical formulas P(x), Q(x) such that P(n) is equivalent to the negation of
- "n is a code of a T-proof of ∀k P(k) and there is no m < n which is a code of a T-proof of ∀k Q(k)",
and Q(n) is equivalent to the same thing with P and Q reversed. Clearly P and Q are decidable (in both the recursion-theoretic and intuitionistic meanings). Since no number can code proofs of two distinct formulas, we can prove ∀n ∀m ¬(¬P(n) ∧ ¬Q(m)). By the reasoning above, T can also prove ∀n P(n) ∨ ∀n Q(n), and since it has the disjunction property, it proves ∀n P(n) or ∀n Q(n). In metatheory, we can find the smallest m which codes a T-proof of either of these two sentences. If m is a proof of ∀n P(n), then ¬P(m) is true. Since ¬P(m) is a decidable (in the recursion-theoretic sense) property, it is also provable, hence T is inconsistent. The other case is symmetric. — Emil J. 13:52, 13 October 2009 (UTC)
October 12
Sphere volume and surface area
So, years ago I noticed that the volume of a sphere is (4/3)πr^3 and the derivative of this, with respect to r, is 4πr^2, which is the surface area of a sphere. Is there some reason for this, or is it just coincidence? Another easy example to try, which does not have this relationship, is a cube: the derivative of the volume s^3 is 3s^2, while the surface area is 6s^2. And many other surfaces rely on more than one variable. StatisticsMan (talk) 02:51, 12 October 2009 (UTC)
- It's not a coincidence. Imagine the sphere is like an onion, made up of lots of shells. You can get the volume of the sphere by adding up the volumes of the shells, which is roughly (for a thin shell - exactly for an infinitesimally thin shell) the surface area at that radius times the thickness. When you change the radius by a small amount you remove the outermost shell; that means the volume reduces by the volume of that shell, which is the surface area of the sphere times an infinitesimal thickness, dr. --Tango (talk) 02:59, 12 October 2009 (UTC)
- For a cube, if you take 2r = s, you get a volume of 8r^3 and a surface area of 24r^2. Also, the derivative of 8r^3 is 24r^2. So it can make a difference which parameterization of the family of cubes you use. There is some information about which shapes make this possible at [1] and [2]. You can find a lot more with google. — Carl (CBM · talk) 03:09, 12 October 2009 (UTC)
- Indeed - using the OP's parameterisation the derivative of volume is the area of 3 sides, since rather than a nested family of cubic shells you have a family formed by gradually adding more and more 3-sided caps; the 2D analogue would look something like this:
 _____
 ____ |
 ___ ||
 __ |||
 _ ||||
 .|||||
- Bear in mind that the gaps between each cap should be infinitesimal, so there wouldn't be any gaps on the left or bottom sides. Does that make any sense? --Tango (talk) 03:35, 12 October 2009 (UTC)
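Tango and Carl's point about the two parameterisations can be checked with a crude central difference (step size and tolerances are my own choices):

```python
def deriv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

s = 1.7          # arbitrary side length
r = s / 2        # half-side, as in Carl's substitution s = 2r

# Side-length parameterisation: dV/ds = 3s^2, only half of the surface area 6s^2.
assert abs(deriv(lambda t: t**3, s) - 3 * s**2) < 1e-6

# Half-side parameterisation: V = 8r^3, dV/dr = 24r^2 = the full surface area 6(2r)^2.
assert abs(deriv(lambda t: 8 * t**3, r) - 24 * r**2) < 1e-6
assert 24 * r**2 == 6 * (2 * r)**2
```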
Let's see if this way of looking at it sheds some light: Say the sphere is expanding. The rate at which the surface moves outward is the rate at which the radius r increases, so it is dr/dt. Now:
- (the rate at which the boundary moves) × (the size of the boundary)
- = the rate at which the volume increases.
For example, suppose at some instant:
- the boundary has an area of 20 square feet; and
- the boundary is moving at 3 feet per minute.
Then:
- 20 square feet × 3 feet per minute
- = 60 cubic feet per minute.
That's how fast the volume is increasing. But of course the volume V increases at a rate of dV/dt. The size of the boundary is the surface area A. So:
- A × dr/dt = dV/dt = (dV/dr) × (dr/dt).
Canceling dr/dt from both sides we get:
- A = dV/dr.
So no, it's not a "coincidence"; it's just what we should expect. Michael Hardy (talk) 05:08, 12 October 2009 (UTC)
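Michael Hardy's conclusion A = dV/dr is easy to confirm numerically as well (the radius and step size here are arbitrary picks of mine):

```python
import math

def volume(r):
    return 4 / 3 * math.pi * r**3   # V = (4/3) pi r^3

def area(r):
    return 4 * math.pi * r**2       # A = 4 pi r^2

r, h = 2.0, 1e-6
dV_dr = (volume(r + h) - volume(r - h)) / (2 * h)  # central difference

assert abs(dV_dr - area(r)) < 1e-6  # dV/dr agrees with the surface area
```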
- And if SM wishes some more evidence of it not being a coincidence, let's mention some facts on the variation of area and volume, along the lines of what Carl said. Let A be a bounded convex body in R^3; let E be the three-dimensional unit Euclidean ball, so A + rE is the set of points at Euclidean distance at most r from A (e.g. for a sphere of radius R it's a sphere of radius R + r, and for a cube, it's a cube with smoothed edges). Its volume can be written as a third-degree polynomial in r:
- vol(A + rE) = vol(A) + area(A)·r + c·r^2 + (4π/3)·r^3,
- where
- c = the surface integral of the mean curvature of A.
- For instance, if A is a sphere of radius R, its surface area is 4πR^2; its mean curvature is constant, equal to 1/R at any point, and the surface integral of it is just 4πR, so we get the expansion of (4π/3)(R + r)^3, as it has to be. Note also that the convex body A need not be smooth; its mean curvature can be defined as a measure, and c is just the total mass of it. For instance, if A is a polyhedron the curvature is concentrated on the edges, and one has
- c = (1/2) Σ_i ℓ_i·θ_i,
- where ℓ_i is the length of the i-th edge, and θ_i is the angle between the outer normal directions of the corresponding adjacent faces. --pma (talk) 08:02, 12 October 2009 (UTC)
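For a cube of side s, pma's polynomial can be cross-checked against a direct decomposition of A + rE: the box itself, six face slabs, twelve quarter-cylinders along the edges, and eight sphere octants at the corners. A sketch (function names are mine):

```python
import math

def steiner_volume(s, r):
    """vol(A + rE) via the polynomial; for a cube, c = (1/2) * sum of l_i * theta_i
    = (1/2) * 12 * s * (pi/2) = 3*pi*s (12 edges of length s, normal angle pi/2)."""
    V, S = s**3, 6 * s**2
    c = 0.5 * 12 * s * (math.pi / 2)
    return V + S * r + c * r**2 + 4 / 3 * math.pi * r**3

def decomposed_volume(s, r):
    """The same volume, built piece by piece."""
    box = s**3
    slabs = 6 * s**2 * r                         # a slab of thickness r on each face
    edge_pieces = 12 * (math.pi * r**2 / 4) * s  # quarter-cylinders along the edges
    corner_pieces = 4 / 3 * math.pi * r**3       # 8 octants make one full ball
    return box + slabs + edge_pieces + corner_pieces

assert math.isclose(steiner_volume(2.0, 0.5), decomposed_volume(2.0, 0.5))
assert math.isclose(steiner_volume(1.0, 0.1), decomposed_volume(1.0, 0.1))
```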
Note also the two-dimensional case. The area of a circle is A = πr^2 and the circumference 2πr equals dA/dr. Bo Jacoby (talk) 08:08, 12 October 2009 (UTC).
- I like to think of it in terms of a ball that is repainted. An extremely thin new layer is added, whose volume is the ball's surface area times the layer's thickness: dV/dr = surface area. —Anonymous DissidentTalk 11:01, 12 October 2009 (UTC)
Ring Extension
I have two questions related to ring extensions. A ring extension R' of a ring R is a ring R' which contains R (or a ring isomorphic to it) as a subring. The rings are commutative and with identity in my questions.
- Given a ring R and a nonmonic polynomial over it, is it always possible to extend R so that it contains a root of the polynomial? If so, how? The canonical way does not always work: during the adjoining of the root we might kill some elements of the ring. For example, if I want to adjoin the root of 2x-1 defined over Z/(4) to it, then the ring (Z/(4))[x]/(2x-1) doesn't seem to work, as the process of adjoining the inverse of 2 actually kills 2 itself (since now 2 ∈ (2x-1)). In fact, if ab = 0, adjoining a^-1 always kills b.
- Regarding the last sentence: if a ring has no zero divisors, and so is an integral domain, will adjoining a^-1 then never kill any element? If so, why? Hence, is it possible to obtain the fraction field of an integral domain just by adjoining the inverses, and not through the usual procedure of taking equivalence classes?
Thanks.--Shahab (talk) 04:50, 12 October 2009 (UTC)
- I'm still thinking about the first question, but for the second how would you construct the inverses? As the roots of polynomials of the form ax-1? I guess you could do that, but I think the standard way with equivalence classes is easier. For an infinite ring you would have to adjoin an infinite number of elements and (I think), a priori, the result of that depends on the order you adjoin them in. You would need to prove that the order doesn't matter in this case (or define a specific order, which would be rather arbitrary). --Tango (talk) 05:08, 12 October 2009 (UTC)
- Why can't I add them all at once by considering R[x,y,z,...]/(ax-1, by-1, cz-1, ...)? And even if I add them one at a time, doesn't the third isomorphism theorem ensure that (R/(a))/([b]) (here [b] stands for b+(a)) is isomorphic to R/(a,b), so that the order won't matter? I know the standard way is easier; my question was only whether this way is valid or not.--Shahab (talk) 05:16, 12 October 2009 (UTC)
- I don't know, maybe you can do that... I don't think the 3rd isom. thm. works - you have to apply it an infinite number of times, which complicates matters (it can probably be done, but I expect there are technicalities to deal with - there usually are). User:Algebraist will know, I'm sure he'll be along soon. --Tango (talk) 05:26, 12 October 2009 (UTC)
- The expression R[x,y,z,...]/(ax-1, by-1, cz-1, ...) is meaningful and behaves the way you expect it to; in particular the "order" does not matter, as there is no "limiting" operation going on here -- the set {x,y,z...} is being adjoined "all at once". I'm not sure what you are asking about the third isomorphism theorem, but whatever it is Tango is probably right that it can't be used here. Eric. 131.215.159.109 (talk) 11:10, 12 October 2009 (UTC)
- How do you define R[x,y,z,...]? I would define it as (...((R[x])[y])[z])..., although it is easy enough to prove that the order doesn't matter for that (it does have to be proven, though), so once you've done that I suppose you might be able to mod out by the entire ideal at once. --Tango (talk) 11:50, 13 October 2009 (UTC)
- R[x,y,z,...] is the ring of multivariate polynomials over R with variables x, y, z, .... There's no need to do it one variable at a time (and it's rather counterproductive). — Emil J. 11:54, 13 October 2009 (UTC)
- Ok, how do you (rigorously) define "multivariate polynomials over R with variables x, y, z, ..."? By far the easiest way I can see is to define it one variable at a time. For a finite number of variables you can do it all at once, but for an infinite number it gets more difficult. I could probably do it, but it would be a mess. --Tango (talk) 12:34, 13 October 2009 (UTC)
- For example, as in polynomial ring#The polynomial ring in several variables, except that the field can be any ring, and since we need an infinite number of variables, we have to add the condition that all but finitely many coordinates of a multi-index are zero. — Emil J. 12:42, 13 October 2009 (UTC)
- Or equivalently: if V is a set of variables, let M be the free commutative monoid with generators V (which is just the direct sum of |V| copies of (N,+)), then R[V] can be defined as the monoid ring R[M]. — Emil J. 12:51, 13 October 2009 (UTC)
- My query regarding the third isomorphism theorem was this. Let R be a ring and I be an ideal containing some other ideal J. Let R' be the ring R/J. Now by the 3rd isomorphism theorem the ideal I corresponds with an ideal I' in R', and moreover R/I is isomorphic to R'/I'. Now if we consider a, b to be any elements in R and R' = R/(a), then, since (a,b) and ([b]) are corresponding ideals, R/(a,b) is isomorphic to R'/([b]). In other words, first killing a then killing b is the same as killing a and b together. Hence I concluded that the order won't matter even if we adjoin inverses one at a time. What I was asking Tango is whether this is correct or not.--Shahab (talk) 12:58, 13 October 2009 (UTC)
- For the first question, your example of 2x-1 defined over Z/(4) is a good one. Suppose R is an extension of Z/(4) containing a root r. Then 2r = 1 and so 0 = (2+2)r = 2r + 2r = 1 + 1 = 2. --Matthew Auger (talk) 05:50, 12 October 2009 (UTC)
- Yes. In fact, the ring (Z/(4))[x]/(2x-1) is isomorphic to Z[x]/(2x-1, 4) (i.e. what we get when we make 2x-1 = 0 and 4 = 0 in Z[x]). Now 2x-1 = 0 implies 4x = 2; 4 = 0 implies 4x = 0, which implies 2 = 0; and finally 2 = 0 together with 2x = 1 implies 1 = 0, so essentially we end up with the zero ring. This is true in general from what you pointed out. So I guess the answer to the first part is no. Thanks--Shahab (talk) 06:12, 12 October 2009 (UTC)
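The collapse can even be exhibited by an explicit witness that 1 lies in the ideal (4, 2x - 1) of Z[x]: 4·x^2 - (2x + 1)(2x - 1) = 1. A quick check with bare-hands polynomial arithmetic (helper names are mine):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists [c0, c1, ...]."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_sub(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

# 4*x^2 is x^2 times the generator 4; (2x+1)(2x-1) is a multiple of the generator 2x-1.
four_x2 = poly_mul([0, 0, 1], [4])   # x^2 * 4 = [0, 0, 4]
prod = poly_mul([1, 2], [-1, 2])     # (2x + 1)(2x - 1) = [-1, 0, 4]
witness = poly_sub(four_x2, prod)

assert witness == [1, 0, 0]  # the constant polynomial 1, so the quotient ring is zero
```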
- For the second question, first part, the answer is yes. The usual homomorphism from R to R[x]/(ax-1) is injective if a is not a zero divisor. For the second part, the answer is yes. It requires a little bit of care to adjoin an arbitrary number of elements, but it is fundamentally no different from adjoining a finite number of elements. You will probably find localization of a ring of interest. You can show that the adjoining-elements method is equivalent to localizing, but it is tedious. Eric. 131.215.159.109 (talk) 11:02, 12 October 2009 (UTC)
- Thanks--Shahab (talk) 05:16, 13 October 2009 (UTC)
Cardinality
What is the cardinality of the set of infinite strings of real numbers? NeonMerlin 06:23, 12 October 2009 (UTC)
- Define "infinite". If you mean sequences of real numbers indexed by natural numbers then it is ℵ₀^(2^ℵ₀), which I think is the same as 2^(2^ℵ₀), Beth-two. --Tango (talk) 06:35, 12 October 2009 (UTC)
- Tango, can you explain? My instinct is that ℝ^ℕ should have the cardinality of the continuum. I'm not sure the following construction works, but it might go something like this - consider a function from the real line to the set of all countable sequences of real numbers: take the decimal expansion of the 1st coordinate to be all the even digits, the decimal expansion of the 2nd to be all those digits divisible by 3 that are not even, and so on - the decimal expansion of the nth real number to be all those digits in the decimal expansion of your original number divisible by the nth prime number not previously taken, etc, etc. RayTalk 06:54, 12 October 2009 (UTC)
- I've got it upside down, haven't I? I meant (2^ℵ₀)^ℵ₀, which is, indeed, the cardinality of the continuum. --Tango (talk) 06:59, 12 October 2009 (UTC)
- Yeah, I think we've got it now. By the way, thanks for introducing me to cardinal arithmetic - I was familiar with the standard arguments in real analysis, but had never gone further. RayTalk 07:08, 12 October 2009 (UTC)
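Written out with the usual laws of cardinal exponentiation, the arithmetic behind the corrected answer (my notation, not from the thread) is:

```latex
\left| \mathbb{R}^{\mathbb{N}} \right|
  = \left( 2^{\aleph_0} \right)^{\aleph_0}
  = 2^{\aleph_0 \cdot \aleph_0}
  = 2^{\aleph_0}
  = \mathfrak{c},
\qquad\text{whereas}\qquad
\left| \mathbb{N}^{\mathbb{R}} \right|
  = \aleph_0^{\,2^{\aleph_0}}
  = 2^{2^{\aleph_0}}
  = \beth_2 .
```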
Curve fitting
Hi, when fitting a curve to a set of data, why is the 'sum of least squares' minimized? What will happen if we instead minimize Sum(abs(y_i - f(x_i)))? And why is this quantity not a goodness-of-fit parameter?
thanks! Re444 (talk) 06:44, 12 October 2009 (UTC)
- For various reasons, Least squares is the most common (summarizing the lead of the article - least squares fitting corresponds to the maximum likelihood estimator for most common models of noise). The method you describe is actually known as Least absolute deviation, and is also used in certain scenarios. The article has a lot more. Good luck! RayTalk 07:16, 12 October 2009 (UTC)
- To expand: There are two main reasons that least-squares methods are so popular:
- If your measured data deviates from the "curve", because of additive Gaussian noise, then the least squares solution corresponds to the maximum likelihood (ML) solution to the problem (the Least absolute deviation solution corresponds to the ML solution if the noise is Laplacian). Since Gaussian noise assumptions are reasonable in many practical scenarios (thanks to the Central limit theorem), using least square approach is often the "right" choice.
- The least-squares solution is relatively easy to compute for some families of curves, for example when the candidate functions f depend linearly on the fitting parameters.
- That said, one is not limited to just the least-squares or the least-absolute-deviation methods. Several other goodness-of-fit measures, such as other ℓp norms or even general convex/non-convex functionals, can be and have been used depending upon the application. Abecedare (talk) 07:50, 12 October 2009 (UTC)
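For point 2, a linear least-squares fit has a closed form; a minimal sketch with made-up data (numpy's lstsq solves exactly this minimization):

```python
import numpy as np

# Linear least squares in closed form: fitting f(x) = c0 + c1*x to
# invented sample points. np.linalg.lstsq minimizes sum((A @ c - y)**2),
# which is exactly the least-squares criterion discussed above.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.2, 2.8])     # made-up data
A = np.vstack([np.ones_like(x), x]).T  # design matrix [1, x]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # [intercept, slope]
```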
- And note that p=1 or p=∞, although quite natural, in general give a minimization problem without uniqueness, because the corresponding norms are not uniformly convex. Lack of uniqueness is annoying if one wants to assign a solution (here f) that depends continuously on the data (here the x_i and y_i). What makes life with the L2 norm so nice is that the dependence of the minimizer on the data is even linear. pma --131.114.72.230 (talk) 10:01, 12 October 2009 (UTC)
- I'll try to give a naive reason. The sum of squares represents, roughly, the error between the estimated curve and the data. Minimizing it ensures that the curve we end up with has very little error. However, the sum of the absolute values of y_i - f(x_i) also represents the error, and minimizing it should intuitively achieve the same end. In practice, the method of least squares is usually preferred to absolute deviation because the sum of the absolute values does not stress the magnitude of each individual error. So with the absolute-value method you may end up with one big error in one of the deviations and almost no other errors. It is usually better to have a curve with several small errors than a curve with one large one: small errors are reasonably bound to occur due to noise, but it isn't easy to explain a large error away. To penalize large absolute errors, they are squared before the minimization. This is of course only a naive explanation. The mathematical reasons are as given above.--Shahab (talk) 08:01, 12 October 2009 (UTC)
- The least squares method can be solved with calculus while least total error (absolute values) can't. However, with the advent of the simplex method and modern computers it's perfectly feasible to do least total error if you want to. Least squares will go out of its way to accommodate outlying data points, which may be something you want or something to avoid depending on the situation.--RDBury (talk) 15:02, 12 October 2009 (UTC)
The mean value of, say, 1 and 3 is x = (1+3)/2 = 2. The sum of squares of deviations, (x−1)² + (x−3)², is minimized for x = 2, while the sum of absolute deviations |x−1| + |x−3| is constantly = 2 for 1 ≤ x ≤ 3. So the minimum sum of absolute deviations does not select the mean value. Bo Jacoby (talk) 15:25, 12 October 2009 (UTC).
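Bo Jacoby's example can be checked numerically; a small sketch (the grid is an arbitrary choice of mine, the data values are the ones from the example):

```python
# Numerical check of the example: for the data {1, 3}, the sum of
# squared deviations has a unique minimum at the mean x = 2, while the
# sum of absolute deviations is flat (equal to 2) on all of [1, 3].
data = [1, 3]
sq = lambda x: sum((x - d) ** 2 for d in data)
ab = lambda x: sum(abs(x - d) for d in data)
xs = [1 + i / 100 for i in range(201)]  # grid on [1, 3]
best = min(xs, key=sq)
print(best, sq(best), ab(1.0), ab(2.0), ab(3.0))
```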
Thanks a lot every body! Re444 (talk) 23:04, 14 October 2009 (UTC)
Interesting sounds
As we all know, a pure sine tone with frequency ν can be produced by sending the waveform y(t) = A·sin(2π·ν·t)
to a speaker. More interesting sound effects are obtained if the argument of the sine function is replaced by a less trivial function of t, e.g. a polynomial, or if an even more complicated expression y(t) is chosen. I am seeking suggestions for interesting sounds. I would appreciate all individual suggestions, as well as links to external (WWW) documents discussing different waves. --Andreas Rejbrand (talk) 10:33, 12 October 2009 (UTC)
- Note I changed y(x) to y(t) in your equation, since the x didn't make sense in that place. Define "interesting"? Are you just trying to produce nice sounds or is there an engineering application? A few examples from electrical engineering: amplitude modulation and frequency modulation as heard on your radio. A chirp signal (I'll blue-link it to chirp now) A·sin[at + bt²] is often used in interferometry and acoustic signal processing to determine the frequency response of a system over a given frequency range. I'm sure there are other examples. Zunaid 13:22, 12 October 2009 (UTC)
- No, I am merely interested in sounds that sound interesting. I will try your examples. --Andreas Rejbrand (talk) 17:25, 12 October 2009 (UTC)
- You will want to investigate synthesis techniques. Refer to Miller Puckette's "Theory and Technique of Electronic Music". Note that simple models, such as compositions of periodic functions, generally don't create interesting sounds over time. You need sounds that vary in timbre, amplitude and phase over time to really get things going. Subtractive and additive synthesis models are a good place to start. —Preceding unsigned comment added by 94.171.225.236 (talk) 19:19, 12 October 2009 (UTC)
- Try Shepard tone -- SGBailey (talk) 08:38, 13 October 2009 (UTC)
- This is what I call an interesting sound: A⋅sin(ω⋅(sin(t)⋅t)). --Andreas Rejbrand (talk) 20:08, 18 October 2009 (UTC)
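For anyone wanting to hear such waveforms, here is a minimal sketch that renders the last formula to a WAV file using only the standard library (the sample rate, amplitude, duration and the value of ω are arbitrary choices of mine):

```python
import math
import struct
import wave

# Render the waveform A*sin(w*(sin(t)*t)) from the comment above to a
# 16-bit mono WAV file. RATE, DUR, A and W are arbitrary, not anything
# specified in the thread.
RATE, DUR, A, W = 44100, 3.0, 0.5, 440.0
samples = [A * math.sin(W * math.sin(t) * t)
           for t in (i / RATE for i in range(int(RATE * DUR)))]
with wave.open("interesting.wav", "wb") as wav:
    wav.setnchannels(1)   # mono
    wav.setsampwidth(2)   # 16-bit samples
    wav.setframerate(RATE)
    wav.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                             for s in samples))
```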
No solutions to equation similar to FLT
Hi all,
I'm looking for the easiest/quickest way to show a² + b² = 3c² has no solutions in ℚ other than a = b = c = 0, unless I'm being stupid, in which case somebody please point out a solution! (This is indirectly related to a question paper but not an actual question on it, just me trying to satiate my own curiosity.)
Thanks very much :) —Preceding unsigned comment added by 82.6.96.22 (talk) 15:57, 12 October 2009 (UTC)
- First, show that if it has a nonzero rational solution, then it also has a nonzero integer solution, and furthermore one where gcd(a, b, c) = 1 (which in particular implies that a and b can't be both even). Then consider what is the remainder of both sides modulo 4. — Emil J. 16:06, 12 October 2009 (UTC)
- (edit conflict) It might go something like this. There are two cases: either a and b are both divisible by 3, or neither is (because the right hand side has to be divisible by 3). If they are both divisible by 3, then you have a contradiction, because you have an even number of powers of three on the left hand side and an odd number of powers of 3 on the right hand side. If neither is, then one of a², b² must be 1 mod 3, and the other must be 2 mod 3. This is impossible, since there are no numbers whose squares are 2 mod 3 (1 mod 3 squared is just 1 mod 3, but so is 2 mod 3 squared). RayTalk 16:08, 12 October 2009 (UTC)
- Oops. I realized I addressed the question in ℤ. Yes, first you must show a rational solution implies an integer solution, which is not difficult :) RayTalk 16:09, 12 October 2009 (UTC)
- If c is even, you do have an even number of powers of 3 on the right-hand side. — Emil J. 16:57, 12 October 2009 (UTC)
- Oh, I see, you mean that the maximal power of 3 which divides the RHS has odd exponent. But then it does not follow so trivially that the exponent for the LHS has to be even if both a and b are divisible by 3. You have to apply a generalization of the reasoning used in the case when they are not divisible (to rule out the possibility that a = 3^k·c, b = 3^k·d, where 3 does not divide c, d, but it divides c² + d²). — Emil J. 17:23, 12 October 2009 (UTC)
- Right. Oops :) RayTalk 17:50, 12 October 2009 (UTC)
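The mod-3 argument and the absence of small solutions are easy to verify by brute force; a quick sketch (the search bound of 50 is arbitrary):

```python
# Quick numerical backing for the argument above: squares take only the
# values 0 and 1 mod 3, and an exhaustive search over a small box finds
# no positive integer solution of a^2 + b^2 = 3*c^2.
squares_mod_3 = {(n * n) % 3 for n in range(100)}
sols = [(a, b, c)
        for a in range(1, 50) for b in range(1, 50) for c in range(1, 50)
        if a * a + b * b == 3 * c * c]
print(squares_mod_3, sols)  # {0, 1} []
```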
Brilliant, thanks very much all :) —Preceding unsigned comment added by 82.6.96.22 (talk) 16:24, 12 October 2009 (UTC)
Markov chain-like model
In our article on Markov chain, it states that you do not need to know the past to predict the future: everything is recorded in the present state. What is the name of a similar model in which the past is required to be known to predict the future? Is there a Markov model which considers the "present" to be the last few states, but not the entire history? -- kainaw™ 19:27, 12 October 2009 (UTC)
- Such a random process can be treated as a Markov chain by redefining it slightly, so that what was the last few states becomes the present. This is how Markov text generators work, for example: a process that was Markov on individual words wouldn't be able to generate real-looking text, but one that's Markov on the level of sequences of a few words can. Algebraist 19:36, 12 October 2009 (UTC)
- Isn't that just a "Markov chain of order m" as defined in the article? -- Meni Rosenfeld (talk) 20:47, 12 October 2009 (UTC)
- Damn, I knew I should've looked at the article more closely. Algebraist 20:59, 12 October 2009 (UTC)
- Thanks. I didn't read the whole article. I read the beginning and it makes it very clear that Markov chains do not look at past states. I didn't expect that halfway down it would sneak in that sometimes they do look at past states. -- kainaw™ 23:25, 12 October 2009 (UTC)
- Note that Markov chains of order m are not Markov chains, any more than Gaussian integers are integers. Our articles often mention variations on their main subject. -- Meni Rosenfeld (talk) 20:16, 13 October 2009 (UTC)
- I don't see why it's a departure. To turn your Markov chain of order m into a chain of order 1, you can just expand what you define as the current state to include the information about the last m choices. Rckrone (talk) 17:56, 14 October 2009 (UTC)
- Of course you can reduce MCOOm to normal Markov chains. That does not mean they are the same thing. Mathematics is full of concepts which can be easily reduced to simpler or more common ones, but are still introduced because they are interesting, offer additional insight, and are a more natural way to think of some problems. -- Meni Rosenfeld (talk) 21:18, 14 October 2009 (UTC)
- Any Markov chain of order m is isomorphic to some Markov chain of order 1. There's no stronger form of identity than that. You said "Markov chains of order m are not Markov chains" but that seems to be false since they form a subset, or at the very least there's a natural embedding. On the other hand some Gaussian integers aren't integers. A more apt comparison might be to say that all integers are Gaussian integers. Rckrone (talk) 22:12, 14 October 2009 (UTC)
- I mean I understand what you mean, that the conventional way to think about Markov chains of order m breaks the "rules" of Markov chains. But the truth is that the distinction is entirely superficial, since structurally they still fall into the definition of a Markov chain. Rckrone (talk) 22:18, 14 October 2009 (UTC)
- [ec] My point with the Gaussian integers example was to highlight a tricky usage of language. The name "Gaussian integer" seems to suggest that they are a special case of integers, which is of course false - i is a Gaussian integer which is not an integer. Likewise, the name "Markov chain of order m" might suggest that they are a special case of Markov chains, which is also false - in a typical MCOO 2, X_{n+1} depends on X_{n−1} even given X_n, which makes it not a Markov chain.
- kainaw's statement "sometimes [Markov chains] do look at past states" suggested he was fooled by this, and I wanted to clarify that Markov chains never look at past states.
- The existence of an embedding of MCOOm in Markov chains is irrelevant to that point. -- Meni Rosenfeld (talk) 22:31, 14 October 2009 (UTC)
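The embedding discussed in this thread can be sketched in a few lines: an order-2 chain on {0, 1} recast as an ordinary (order-1) Markov chain whose state is the pair of the last two values (the transition probabilities below are made up):

```python
import random

# Order-2 chain on {0, 1} as an order-1 chain on pair-states
# (x_{n-1}, x_n). p_one gives P(next = 1 | last two values); the
# numbers are invented for illustration.
p_one = {(0, 0): 0.1, (0, 1): 0.9, (1, 0): 0.5, (1, 1): 0.2}

def step(state):
    """Advance the pair-state chain one step; depends only on `state`."""
    prev, cur = state
    nxt = 1 if random.random() < p_one[(prev, cur)] else 0
    return (cur, nxt)

state = (0, 0)
path = []
for _ in range(10):
    state = step(state)
    path.append(state[1])
print(path)  # a sample trajectory of the original process
```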
What is a Mexican pyramid?
If you've seen them, they look sort of like an Egyptian pyramid without its top. Actually, the structures are more like steps, but I'm saying the shape would be like an Egyptian pyramid as seen from a distance, with the top part removed to make a smaller pyramid.Vchimpanzee · talk · contributions · 19:31, 12 October 2009 (UTC)
- Frustum. —JAO • T • C 19:35, 12 October 2009 (UTC)
- Thank you. I looked under pyramid (geometry) and didn't see anything.Vchimpanzee · talk · contributions · 19:45, 12 October 2009 (UTC)
- ... and if you want to read a non-mathematical article, we have Step pyramid. Dbfirs 07:19, 13 October 2009 (UTC)
October 13
Vectors and matrix algebra dot products et al
I have the X, Y, Z co-ordinates of 3 points in space for which I need to determine the following
1. The co-ordinates of the centre of a circle which passes through all 3 points
2. The co-ordinates of the point from 1. above, translated normal to the surface described by the original 3 points by a distance X.
It has been a very long time since I've used any vector or matrix so I'm a little muddled.
Any help would be great.
- Solve first the special case where Z=0 for all three points. The general case is translated into the special case by changing coordinates. Bo Jacoby (talk) 07:03, 13 October 2009 (UTC).
- ... I don't think the OP was wanting the projection onto the XY plane? You have three vectors: OX, OY and OZ. The centre of the circle lies at the intersection of the perpendicular bisectors of the sides of the triangle XYZ, so you need to find vectors perpendicular to two sides (say XY and XZ) and passing through the middle of these sides, then find where these lines intersect. This is rather messy, so someone else will probably come up with a simpler method. For part 2, the normal to the plane of the circle is given by the vector product of any two vectors in the plane of the circle, such as XY and XZ (subtract co-ords to get these); find the vector product, divide by its modulus, multiply by distance x, then add the result to the answer to part 1. Dbfirs 07:35, 13 October 2009 (UTC)
- ... (later) ... If you are happy with a "magic formula" for the centre of the circle in part 1, it is given by the vector
where
... but I recommend that you work through the long method to help your understanding of vector equations of lines. The vector equation of each perpendicular bisector is given by
b = OM + kp where O is the origin, M is the mid-point of a side, p is any vector perpendicular to the side, and k is an arbitrary constant (to be determined when you find where the lines intersect). Dbfirs 07:52, 13 October 2009 (UTC)
- (edit conflict). I was suggesting to solve a special case, not a projection. The center (X,Y) of the circle has the same distance to the three points (X_A,Y_A), (X_B,Y_B), and (X_C,Y_C), so (X−X_A)² + (Y−Y_A)² = (X−X_B)² + (Y−Y_B)² = (X−X_C)² + (Y−Y_C)². Solving these two equations gives expressions for the two unknowns X and Y. First expand the squares: X² − 2X·X_A + X_A² + Y² − 2Y·Y_A + Y_A² = X² − 2X·X_B + X_B² + Y² − 2Y·Y_B + Y_B² = X² − 2X·X_C + X_C² + Y² − 2Y·Y_C + Y_C². Then subtract X² + Y², giving −2X·X_A + X_A² − 2Y·Y_A + Y_A² = −2X·X_B + X_B² − 2Y·Y_B + Y_B² = −2X·X_C + X_C² − 2Y·Y_C + Y_C². Now the equations are linear and the solution is straightforward. Bo Jacoby (talk) 08:39, 13 October 2009 (UTC).
- Yes, I see. I suppose it is possible to extend this 2-D solution to 3-D, but the OP's heading suggested that they wanted a vector solution. Dbfirs 10:32, 13 October 2009 (UTC)
- The center of the circle is a symmetric function of the 3 points. Your formula does not look symmetric. Can that be helped? Bo Jacoby (talk) 16:10, 13 October 2009 (UTC).
- . — Emil J. 16:20, 13 October 2009 (UTC)
- Thanks. I did make one error above, I should have said p is any vector perpendicular to the side in the plane of the triangle. I'm sure there should be a neater vector method for part 1, but I can't see it. My method is good for understanding vectors, but seems too long. There is a co-ordinate geometry formula for 3-D like Bo Jacoby's formula. I suppose I could derive it using my vector method if I had the patience, but it is a good exercise for the OP if he or she wants to understand vectors. Are there any experts who can see a short-cut? Dbfirs 16:44, 13 October 2009 (UTC)
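For readers who just want a working recipe, here is a sketch of both parts in vector form (function names are mine; the circumcentre formula used is the standard one with u = B − A, v = C − A, n = u × v):

```python
import numpy as np

# Part 1: the centre of the circle through A, B, C is
#   A + ((|u|^2 v - |v|^2 u) x n) / (2 |n|^2),
# with u = B - A, v = C - A and n = u x v.
def circle_centre(A, B, C):
    A, B, C = (np.asarray(p, dtype=float) for p in (A, B, C))
    u, v = B - A, C - A
    n = np.cross(u, v)
    return A + np.cross(u.dot(u) * v - v.dot(v) * u, n) / (2 * n.dot(n))

# Part 2: translate that centre by dist along the unit normal of the
# plane through A, B, C.
def offset_centre(A, B, C, dist):
    A, B, C = (np.asarray(p, dtype=float) for p in (A, B, C))
    n = np.cross(B - A, C - A)
    return circle_centre(A, B, C) + dist * n / np.linalg.norm(n)

print(circle_centre((0, 0, 0), (2, 0, 0), (0, 2, 0)))  # [1. 1. 0.]
```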
An Almost-Primality Test
Is there any general way known to determine whether a number that fails a test for being prime is a product of only two primes, without actually checking up to the number's cube root for a factor? Is it just as efficient--so far as is known--to find a factor as to determine that the number has at least three factors, knowing that it has at least two? There are fast algorithms for determining whether a number is prime or not without actually finding a proper divisor. Once this is done with an affirmative (composite) result, I am left to find a factor in what may be a very long time, when all I really want is the knowledge that there are or are not only two factors.Julzes (talk) 10:00, 13 October 2009 (UTC)
For example, right now my computer is checking the factorization of 1111101010111100101111100000100011563 and it looks like it will take a week.Julzes (talk) 10:06, 13 October 2009 (UTC)
I have a further question on this. I'm curious to know in which two consecutive bases the above number is first prime, so if anybody wants to run a search on that I would appreciate it. If it's likely that no such pair exists, I would like a good explanation also. My inclination is toward believing that such a pair probably does exist, but I haven't actually reasoned it out quantitatively.Julzes (talk) 10:10, 13 October 2009 (UTC)
It is prime in both base 591 and 593, but aside from its being prime in base 2 and prime if we also allow it to be considered as in base 1, that's the closest I've come to what I'm looking for.Julzes (talk) 10:17, 13 October 2009 (UTC)
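The cube-root observation in the question can be made concrete: a composite n with no prime factor up to n^(1/3) must be a product of exactly two primes, since three factors each above the cube root would exceed n. A naive illustration (far too slow for 103-digit numbers, where one would use a fast primality test plus ECM as discussed below):

```python
# Naive semiprime test built on the cube-root observation.
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_semiprime(n):
    """True iff n is a product of exactly two (not necessarily distinct) primes."""
    if n < 4 or is_prime(n):
        return False
    d = 2
    while d * d * d <= n:            # only search up to the cube root
        if n % d == 0:               # d is the least factor of n
            return is_prime(n // d)  # n = d * (n // d)
        d += 1
    return True  # composite with no factor <= n^(1/3): exactly two primes

print([m for m in range(2, 30) if is_semiprime(m)])
# [4, 6, 9, 10, 14, 15, 21, 22, 25, 26]
```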
- Would I be right to say that your number 1111101010111100101111100000100011563 is given by
- 1,610,516,746,593,217,788,093,321,222,359,707,265,648,849,509,955,413,660,168,610,383,205,686,470,714,500,112,576,660,497,611,201,902,703 in base 10?
- And if so, where did you get your desire to factorise such an exotic number? ~~ Dr Dec (Talk) ~~ 17:30, 13 October 2009 (UTC)
Without checking all 103 digits, it looks the same. I'm actually not so interested in the factorization as in knowing it has two prime factors and not three. I'm just following through on some research in great detail whose impetus will seem rather strange. It comes from reading the h2g2 Entry number of the article on the constellation Reticulum as a base-thirteen number. Got some rather nice arithmetical stuff from doubling the base conversion process. In this base-two case, the most interesting thing was the string of almost primes running from bases 29 through 33, along with the fact that before base 29 there are only the primes for the cases of 'base 1', base 2, base 7, and an almost-prime at base 16 (Every other smaller base has at least three prime factors). I've made an extensive set of listings for the other bases up to base 96, and I'm right now just in the process of extending the list for base 2.Julzes (talk) 23:06, 13 October 2009 (UTC)
- The Dickman-de Bruijn function gives an indication of how likely a random number x is to have no prime factors above x^(1/u) (originally misstated as x^u) for given u. For u=3 it's about 5%. For a 103-digit number with 3 factors you could probably crank out a complete factorization with the elliptic curve method in a practical amount of time. 75.62.0.233 (talk) 23:41, 13 October 2009 (UTC)
What you just said doesn't quite make sense. I have no problem with the last sentence ('practical' is an irrelevant concept here anyway), but x obviously cannot have a factor greater than x³.Julzes (talk) 00:22, 14 October 2009 (UTC)
- Sorry, misstated, see correction. Also this article. 01:16, 14 October 2009 (UTC)
Well, the number almost certainly has only two factors, but I'm still waiting.Julzes (talk) 09:16, 14 October 2009 (UTC)
Still waiting. And now I've got my comp also simultaneously running for bases 1611 and 1813. It looks like I'm going to have at least two to add to the alpertron.com records list. There is something wrong with my Adobe Acrobat, so I haven't been able to open that article. I appreciate it, though. I guess it's unlikely anyone will come up with an almost-primality test any time soon, but perhaps there is a way to quicken things. I could probably declare the base-1563 case done, as it has no factor smaller than 35 digits (the borderline I need to consider), but I may as well wait it out and see just how large its smallest factor is.Julzes (talk) 04:31, 16 October 2009 (UTC)
Vectors in meteorology
How are vectors used in meteorology?Accdude92 (talk) (sign) 13:36, 13 October 2009 (UTC)
- Wind velocity would be a vector (well, a vector field, strictly speaking - a vector for each point in the atmosphere). There will be other uses too, but that is the most obvious one. --Tango (talk) 16:40, 13 October 2009 (UTC)
- Any scalar field (e.g. temperature, air pressure, moisture, …) gives rise to a vector field; namely the gradient vector field. The gradient vector field has many useful physical interpretations. ~~ Dr Dec (Talk) ~~ 17:40, 13 October 2009 (UTC)
percent comparisons
Suppose I start with 100. Then 120 is 20% greater, 150 is 50%, 200 is 100%, and 300 is 200% greater. But 300 is also 3 times 100. So 200% greater is also 3 times greater. Is this seeming discrepancy merely a language artifact, or is there something deeper here? --Halcatalyst (talk) 19:13, 13 October 2009 (UTC)
- "200% greater" is "3 times as great" and logically it would also be "2 times greater", but language isn't always logical and often uses "times greater" and "times as great" as synonyms (but only when no percent signs are involved); see this Q&A or this one, for instance. Certainly just a language quirk. —JAO • T • C 19:24, 13 October 2009 (UTC)
- "200% of" is "2 times", "200% greater" is "3 times". I think "X times greater" is ambiguous and should be avoided. --Tango (talk) 19:32, 13 October 2009 (UTC)
- Agreed, though it seems to be pretty much a lost cause. Even worse (to my ear) are such usages as "three times smaller" (meaning one-third as big, probably). AndrewWTaylor (talk) 09:54, 14 October 2009 (UTC)
- I recall hearing "100% smaller" once, I have no idea what that was meant to mean (from context it didn't mean "reduced to zero", I think it was probably intended to mean "half", but why the speaker ever thought it did, I have no idea...). --Tango (talk) 10:23, 14 October 2009 (UTC)
Some probability questions...
1. I have a bag with 6 balls numbered one to six inside. What is the probability that I will pull the balls out in numerical order?
2. What is the probability if I do that again but this time I label the first ball as 1st (so ball marked "1" is correct) but placed sixth (so ball marked "6" is also correct); the second ball as 2nd (so ball marked "2" is correct) and placed fifth (so ball "5" is also correct)? And so on (3rd ball out must be either no.3 or number 4; 4th ball out must be either ball no.4 or no.3... etc to the end)
3. What is the probability for exercises 1 and 2 if I increase the number of balls to 7?
In case I am accused of asking a homework question, I'll tell you why I ask. I watch poker on TV and there are usually either 6 or 7 players at the table. Just for fun I try to predict the order in which people will be knocked out of the game. I used to do this by just writing down their placing. But lately I have been trying to "double" my chances (though I'm sure that isn't the truth in practice) by allowing myself to be correct using the two labels rather than just the one. I sense I may not have asked this question entirely clearly so please ask if you need me to clarify. --79.72.61.252 (talk) 23:38, 13 October 2009 (UTC)
- For question 1, there are 720 permutations of the 6 balls and only one of them will satisfy the requirement, so the probability of getting that permutation (based on usual assumptions about drawing randomly) is 1/720. I don't understand question 2. In a given hand of poker, seat position has a real effect in one's winning chances--you can't assume independent draws. 75.62.0.233 (talk) 23:47, 13 October 2009 (UTC)
- Thanks for your answer to question 1. That is a much lower probability of success than I would have expected.
- To clarify question 2, I am bringing out the numbers again but this time they can be in this order:
- 1st ball can be either ball 1 or 6
- 2nd ball can be either ball 2 or 5
- 3rd ball can be either ball 3 or 4
- 4th ball can be either ball 4 or 3
- 5th ball can be either ball 5 or 2
- 6th ball can be either ball 6 or 1
- So given that there are now two correct balls at each stage what are my chances of succeeding at bringing them out in an order that is correct? —Preceding unsigned comment added by 79.72.61.252 (talk) 11:56, 14 October 2009 (UTC)
- The chance of the 1st ball being right is 2/6 = 1/3. If that ball is right, the chance of the 2nd being right is 2/5. If that is right, the chance for the 3rd is 2/4 = 1/2. After that only one of the options will be possible, since the other will have already been used, so the chances for the 4th, 5th and 6th balls are 1/3, 1/2 and 1. To get the chance of them all being right we multiply those together. That gives us 2/180 = 1/90. That is quite a lot more likely than the 1/720 we had before (in fact, it is 8 times as likely, that is 2³, since each of the first 3 balls is twice as likely to be right). --Tango (talk) 12:11, 14 October 2009 (UTC)
- I don't think this is right. Once you know the value of the first ball, there is only one ball left out of the five remaining that can follow it. So we'd get the same probabilities as before, except for the first ball, which would be twice as likely to be correct. So the probablility of getting one of those sequences is 1/360 (or 2/720 - note there are 720 permutations and only two are correct). Readro (talk) 12:28, 14 October 2009 (UTC)
- No, it would only be two permutations if the series had to be either 1,2,3,4,5,6 OR 6,5,4,3,2,1, however this allows for more permutations than that, such as 6,2,4,3,5,1 etc. --79.72.61.252 (talk) 14:08, 14 October 2009 (UTC)
- So, therefore, if there were seven balls, the chance of pulling them out in the correct order is 1 in 7! = 5040. The chance of the numbers being correct according to the second system is 1 in 5040/8 = 630. More generally, where N is the number of balls, the odds are 1 in N! and 1 in N!/[2^INT(N/2)]. Warofdreams talk 16:03, 14 October 2009 (UTC)
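The counts in this thread can be brute-forced. The acceptance rule below is my reading of the discussion (not a quoted definition): the ball drawn k-th counts as right if it is number k or number N+1−k, which is consistent with 6,2,4,3,5,1 qualifying for N = 6.

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

# Count draw orders where the k-th ball (1-indexed) is number k or N+1-k.
# This acceptance rule is my reading of the thread, not a quoted definition.
def acceptable_orders(n):
    ok = lambda p: all(b in (k, n + 1 - k) for k, b in enumerate(p, 1))
    return sum(ok(p) for p in permutations(range(1, n + 1)))

for n in (6, 7):
    good = acceptable_orders(n)
    print(n, good, Fraction(good, factorial(n)))  # 6: 8/720 = 1/90; 7: 8/5040 = 1/630
```

Both N = 6 and N = 7 give 8 acceptable orders, matching 1/90 and the 5040/8 = 630 figure above.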
- Thanks everyone. --79.72.61.252 (talk) 11:40, 16 October 2009 (UTC)
October 14
Constructor
A constructor has a 0.80 probability of making $70000, a probability of loosing $20000, and a probability of breaking even. What is the expected value? —Preceding unsigned comment added by 24.0.239.50 (talk) 02:00, 14 October 2009 (UTC)
- You have to know all three probabilities. See expected value#Examples for some examples of how to compute expected value. 66.127.53.204 (talk) 02:38, 14 October 2009 (UTC)
- Although you can find a range for the expected value. At the low end, breaking even has a probability of 0, and at the high end, it has a probability of 0.20. Rckrone (talk) 04:52, 14 October 2009 (UTC)
- If you assume that the three cases you give are all possibilities you can use the fact that the probabilities would then sum to 1 in order to give the expected value as a function of one of the two probabilities that are not given. Taemyr (talk) 05:05, 14 October 2009 (UTC)
- Probably a bit much for this, but for a constructor doing one project at a time it can make more sense to use the geometric mean to give the expected net worth rather than the normal arithmetic one. This is explained in Rate of return. Dmcq (talk) 09:19, 14 October 2009 (UTC)
- I'm not sure how the geometric mean applies there. I could imagine wanting to find the expected value of a joint probability distribution of the profit/loss across a bunch of different projects, but that's more complicated than the geometric mean. Alternatively, see Kelly criterion for an approach to treating each project as a straight gamble and deciding how much to bet on it given a specified bankroll. 66.127.53.204 (talk) 10:13, 14 October 2009 (UTC)
- Yes, that was what I was saying: if the constructor does one project at a time, that gives a more useful estimate of the value. I don't see why you say that and then say you're not sure how the geometric mean applies there. Dmcq (talk) 21:50, 14 October 2009 (UTC)
- There is some missing data here. What is the probability of tightening $20000? -- Meni Rosenfeld (talk) 21:05, 14 October 2009 (UTC)
If x, (0≤x≤0.20), is the untold probability of loosing $20000, then the expected value is μ = 0.80·$70000−x·$20000. So $52000≤μ≤$56000. The geometric mean exp(0.80·log(70000)+x·log(−20000)+(0.20−x)·log(0)) makes no sense to me. Bo Jacoby (talk) 08:09, 16 October 2009 (UTC).
- The constructor would start off with some amount of money like $60000 and end up with, say, $130000 or $40000. If the constructor started with $20000 or less then bankruptcy is a possible outcome. Since bankruptcy stops any gain for years and years, that should mean the value of a deal where that is a possible outcome is zero; in fact one would count it as a low amount, as there's always a way back from it. Dmcq (talk) 01:53, 17 October 2009 (UTC)
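The range derived above is easy to check numerically. Here x is the unknown probability of losing $20000, and the remaining probability (0.20 − x) of breaking even contributes nothing:

```python
# Expected value with an unknown probability x of losing $20000;
# the break-even case contributes 0 regardless of its probability.
def expected_value(x):
    return 0.80 * 70000 - x * 20000

print(expected_value(0.00))  # 56000.0 (upper end: no chance of a loss)
print(expected_value(0.20))  # 52000.0 (lower end: no chance of breaking even)
```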
Polynomial ideals
Hi. I've got this problem, from an Algebraic Geometry book:
Consider the polynomial and the ideal . Compute the dimension of as a complex vector space.
- So, I've done the problem, and I'm pretty sure the answer is 3, but I think there's something I'm missing. I just used the 3 polynomials to compute a Gröbner basis, and although the process is slightly arduous, the basis turns out to be quite simple: . From that, I can see that the only 3 elements of the standard monomial basis of C[x,y] that aren't in the ideal generated by the leading terms are 1, x and y.
Here's my question: Is there any simpler way to do this? In particular, is it possible to get any mileage from the fact that the polynomials generating the ideal are one polynomial and its two first order partial derivatives? I don't know why the book author would pose the question that way unless there's some way to use that information. I hope this question makes sense. -GTBacchus(talk) 03:02, 14 October 2009 (UTC)
- Isn't xy also not in the ideal of leading terms, making the vector-space dimension 4, or am I getting confused here?
- I can't imagine any easier way to tackle this problem. It feels like much the kind of computational problem Gröbner bases are meant for. When I see the ideal ⟨f, ∂f/∂x, ∂f/∂y⟩, my instinct is to look for multiple roots of f, but I don't see that helping you any here. What strikes me as odd is that this problem seems to have hardly any connection with algebraic geometry. Since the ideal I isn't radical, the space C[x,y]/I is not isomorphic to the affine coordinate ring (or whatever it's called) of Z(I), as Z(I) is the single point (0, 0), which has affine coordinate ring C.
- Out of curiosity, what book are you using?
- (Disclaimer: I am very much a novice at algebraic geometry and could easily be overlooking a deeper connection.) Eric. 131.215.159.109 (talk) 05:04, 14 October 2009 (UTC)
- Thanks for catching my silly error. There are 3 quadratic monomials in C[x,y], as you say. (There are also 3 types of people: those who can count, and those who can't. D'oh!)
The book is Introduction to Algebraic Geometry, by Brendan Hassett (ISBN 978-0-521-69141-3). I think I see what you mean about the lack of interesting geometry in the problem. We're in chapter 2, which is just introducing Gröbner bases; varieties and coordinate rings are defined in chapter 3. -GTBacchus(talk) 05:45, 14 October 2009 (UTC)
- Ah, that explains it. Skip ahead to the interesting stuff. My own book (Robin Hartshorne's Algebraic Geometry) has the opposite problem -- it's ridiculously dense, I can scarcely make any headway at all. Otherwise a good book though. Eric. 131.215.159.109 (talk) 05:51, 14 October 2009 (UTC)
- moreover he can play shakuhachi... ;-) --pma (talk) 20:35, 16 October 2009 (UTC)
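The counting step in this thread (vector-space dimension = number of "standard" monomials, those lying outside the ideal generated by the Gröbner basis leading terms) is easy to mechanize. The leading terms x^2 and y^2 below are a hypothetical stand-in, since the basis from the book's exercise isn't reproduced here; with them the standard monomials are 1, x, y, xy, giving dimension 4 as concluded above:

```python
from itertools import product

# Standard monomials x^a * y^b: those divisible by no leading monomial.
# Leading monomials are given as exponent pairs, e.g. (2, 0) means x^2.
# [(2, 0), (0, 2)] is a hypothetical example, not the exercise's basis.
def standard_monomials(lead_monos, bound=10):
    return [(a, b)
            for a, b in product(range(bound), repeat=2)
            if not any(a >= i and b >= j for i, j in lead_monos)]

print(standard_monomials([(2, 0), (0, 2)]))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The `bound` simply caps the search; it must exceed the degrees involved for the count to be meaningful (a finite count requires a zero-dimensional quotient).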
Newton's Method
If I'm trying to use Newton's method to find the zero of a function over an interval, is there any way I can get an upper bound on the number of iterations I'll need to use to get to within a fixed accuracy? Like, if I were to use the bisection method on an interval [a,b], and I wanted to get the zero to within 10^-5, I could just solve |b - a|/2^n <= 10^-5, and that would tell me the maximum number of iterations n I'd need to use; I want something like that for Newton's method. I've looked at the Wikipedia page, but don't see anything like that.Deadlyhair (talk) 04:21, 14 October 2009 (UTC)
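The bisection bound described in the question can be written out directly (a small sketch):

```python
from math import ceil, log2

# Iterations n of bisection needed so that (b - a) / 2**n <= tol
def bisection_iterations(a, b, tol):
    return max(0, ceil(log2((b - a) / tol)))

print(bisection_iterations(0.0, 1.0, 1e-5))  # 17, since 2**17 = 131072 >= 10**5
```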
- It depends on the behavior of the function and its derivatives. You can even concoct examples where it oscillates forever between two values or stays stuck on one value: see Newton's_method#Counterexamples. But for reasonable functions where the first derivative doesn't change sign between guesses, it converges much faster than bisection. There's tons of analysis of its rate of convergence in numerical analysis texts. 66.127.53.204 (talk) 04:48, 14 October 2009 (UTC)
- If you really need an upper bound on the number of steps then it's best to just stick with the bisection method. Even in cases where you know Newton's method converges, it can converge arbitrarily slowly.--RDBury (talk) 13:13, 14 October 2009 (UTC)
- The usual assumption for the Newton method on an interval is that the function be smooth and convex (or concave) in the interval. In that case one gets a quadratic rate of convergence (so n iterations give an error less than exp(-C·2^n)). Usually one starts with the bisection method (which has linear convergence, i.e. error less than exp(-C·n)), till one gets close enough to the zero, so that the conditions for the quadratic convergence of the Newton iteration hold, and then one switches to it. --pma (talk) 19:40, 14 October 2009 (UTC)
- I'm fairly certain this is actually exp(-C·2^n). See also my response below. -- Meni Rosenfeld (talk) 20:59, 14 October 2009 (UTC)
- Yes sorry you (and 195) are right! --pma (talk) 21:24, 14 October 2009 (UTC)
- Just to give you an idea on what the bound may look like - if f is sufficiently smooth and x_0 is sufficiently close to the root r, then ε_(n+1) ≈ C·ε_n^2, so ε_n ≈ (1/C)·(C·ε_0)^(2^n). Solve for n to get your bound - the important part of the result will be n ≈ log_2 log(1/δ), where δ is your error goal. -- Meni Rosenfeld (talk) 20:59, 14 October 2009 (UTC)
- Newton's method has local quadratic convergence. Quadratic means that (roughly) the number of correct decimal places doubles with every iteration, and local means that this only holds once you are close enough to the solution. What close enough means is unfortunately not quite as easy to determine. You do *not* need concavity or convexity for Newton's method; however, a zero slope at the root (where the curve crosses the x-axis) will harm you (giving only linear convergence). There are many ways to make Newton's method globally convergent, but they all will result in a linear convergence rate away from the solution and a quadratic rate close to it. So an estimate based on the hybrid bisection/Newton suggested above is likely about the best you can expect. 195.128.250.90 (talk) 21:08, 14 October 2009 (UTC)
The short answer to the OP's question is 'no'. If he is in the lucky situation where bisection works (continuity and change of sign over an interval), and he wants an upper bound on the number of iterations needed, then why use Newton's method? Bisection has better worst-case behavior than Newton's method. Bo Jacoby (talk) 16:13, 15 October 2009 (UTC).
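To see the "doubling of correct digits" concretely, here is a small sketch using f(x) = x^2 − 2 (smooth and convex on [1, 2], so the conditions discussed above hold):

```python
# Newton iteration on a function f with derivative df, starting at x;
# records the size of each step, which tracks the error once converging.
def newton(f, df, x, tol=1e-12, max_iter=50):
    steps = []
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        steps.append(abs(step))
        if steps[-1] < tol:
            break
    return x, steps

root, steps = newton(lambda t: t * t - 2, lambda t: 2 * t, 2.0)
print(root)   # ~1.4142135623730951
print(steps)  # roughly 0.5, 8e-2, 2e-3, 2e-6, 2e-12: the exponent doubles each time
```

Only about five iterations reach machine precision, against the seventeen that bisection needs for a mere 10^-5 on a unit interval.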
Thank you all for your help.Deadlyhair (talk) 17:23, 16 October 2009 (UTC)
Russell's paradox
I am browsing some articles about paradoxes. Can Arthur Prior's resolution for Liar paradox "This statement is false", that every statement includes an implicit assertion of its own truth, be applied to Russell's and other related paradoxes, such as Barber paradox and Grelling-Nelson paradox?
I think it would be "Every set silently includes itself as a member." or something for Russell's paradox. And if it can, what are the weak points of that way of resolution? Like sushi (talk) 12:42, 14 October 2009 (UTC)
- Your proposal can't help: Just think about whether Russell's set contains itself as a member. If it does contain itself - as you propose for every set (hence for Russell's set as well) - then that contradicts Russell's definition of the set, as a set which doesn't contain any set containing itself as a member. HOOTmag (talk) 13:08, 14 October 2009 (UTC)
- I think what the article "Russell's paradox" has is "a set containing exactly the sets that are not members of themselves" (Is it the same?). What I think about it is (as in the case of Liar paradox) if every set silently contains itself, "the sets that are not members of themselves" should be rewritten as "the sets that are not members of themselves and are members of themselves", which does not exist. So the set does not contain anything (But as every set silently contains itself, it does contain itself, or if "exactly" means not even itself, it does not exist).
- Like sushi (talk) 02:02, 15 October 2009 (UTC)
- While such an inclusion would resolve the problem of Russell's paradox, i.e. it would directly tell us that the set in question does not exist, it will not resolve the implications of Russell's paradox, since we already knew that Russell's set doesn't exist. The problem is that there isn't any direct way from the definition of a set to determining whether a description of a set corresponds to a set that can exist. While your resolution would resolve Russell's set, it would not resolve all such paradoxical sets, or if it does it would dramatically reduce the expressive power of our language. This because of Gödel's incompleteness theorems. Taemyr (talk) 04:56, 15 October 2009 (UTC)
- Also I think that either your proposed resolution would be unsound, or it would break stuff, depending on what you mean by the inclusion being silent. Because either you do stuff like saying that the empty set contains a member, i.e. itself, or you say that every set is a member of itself is an assertion ∀S(S ∈ S) without actually including the relevant element. Taemyr (talk) 05:04, 15 October 2009 (UTC) could someone fix my tex-code?
- I think the first of Gödel's incompleteness theorems may be dealt with in a similar way.
- The part that seems to be the explanation of it is:
-
- "For each consistent formal theory T having the required small amount of number theory, the corresponding Gödel sentence G asserts: “G cannot be proved to be true within the theory T”. If G were provable under the axioms and rules of inference of T, then T would have a theorem, G, which effectively contradicts itself, and thus the theory T would be inconsistent. This means that if the theory T is consistent then G cannot be proved within it. This means that G's claim about its own unprovability is correct; in this sense G is not only unprovable but true. Thus provability-within-the-theory-T is not the same as truth; the theory T is incomplete.
- If G is true: G cannot be proved within the theory, and the theory is incomplete. If G is false: then G can be proved within the theory and then the theory is inconsistent, since G is both provable and refutable from T."
- If we assume that every sentence silently asserts itself to be provable to be true,
- G should be rewritten as “G can be proved to be true and cannot be proved to be true within the theory T”.
- Then it says "A and not A", and is false. That means it is provable to be false. And G is false for the italic part in “G can be proved to be true and cannot be proved to be true within the theory T”. (though it is false in "silent part").
- Like sushi (talk) 10:06, 15 October 2009 (UTC)
- When you say "That means it is provable to be false", you are assuming that T is complete - that T can prove all true statements such as the statement "G is false". But this is the whole point of Gödel's first incompleteness theorem - if T were complete then we can prove that G is true and at the same time we can prove that G is false, therefore T is inconsistent. Conversely, if T is consistent, then T cannot be complete. Gandalf61 (talk) 10:22, 15 October 2009 (UTC)
- Not so; we do not need a complete system in order to prove that "Not(A and not A)" is a theorem, even without any information on what A is. The problem is that the statement "G and G is provable in T" is a stronger statement than just "G". So by assuming that every statement includes the addendum "and this is provable in T" we get in a situation where we don't just have to prove the statement, but we also have to prove the provability of the statement. Which is easy in the meta-language, you just produce the proof. But that is the meta-language, which is a different system. Taemyr (talk) 10:36, 15 October 2009 (UTC)
- Also note that the statement Gödel produces is a statement that is equivalent to G. It is not G itself; the common thought at the time was to avoid paradox by having systems that were incapable of referring to themselves. One of the things Gödel notes is that this is impossible if the system is capable of stating any truths about arithmetic. The actual statement G is purely an arithmetic statement. It just happens to be constructed in a way so that G is true if and only if G cannot be proven. Taemyr (talk) 10:49, 15 October 2009 (UTC)
- @Like Sushi, look: Russell's paradox assumes the following intuitive premise: For every definition, there exists the set containing just all elements satisfying that definition. Having assumed that, Russell found a definition which doesn't let the corresponding set exist! This is Russell's paradox: Assuming the existence of such a set - is a contradiction! HOOTmag (talk) 11:51, 15 October 2009 (UTC)
- The empty set does not include itself as a member, silently or otherwise. — Carl (CBM · talk) 11:44, 15 October 2009 (UTC)
- I included that as an example of how the statement "Every set includes itself as a member" would break stuff. Taemyr (talk) 11:50, 15 October 2009 (UTC)
- You're right, I missed it. Actually, this thread is making me think I should look into proposed "resolutions" of Russell's paradox. — Carl (CBM · talk) 12:15, 15 October 2009 (UTC)
- This is a different way from what I proposed before. Can the first of Gödel's incompleteness theorems be considered in this way?
- Once again, the part of explanation seems to be
-
- "For each consistent formal theory T having the required small amount of number theory, the corresponding Gödel sentence G asserts: “G cannot be proved to be true within the theory T”. If G were provable under the axioms and rules of inference of T, then T would have a theorem, G, which effectively contradicts itself, and thus the theory T would be inconsistent. This means that if the theory T is consistent then G cannot be proved within it. This means that G's claim about its own unprovability is correct; in this sense G is not only unprovable but true. Thus provability-within-the-theory-T is not the same as truth; the theory T is incomplete.
- If G is true: G cannot be proved within the theory, and the theory is incomplete. If G is false: then G can be proved within the theory and then the theory is inconsistent, since G is both provable and refutable from T."
- Gödel sentence G asserts: “G cannot be proved to be true within the theory T”.
- If G were provable (to be true), G would effectively contradict itself. So G cannot be proved (to be true). Then it can be true but unprovable, or false. G can be "false but provable to be true".
- Like sushi (talk) 08:08, 16 October 2009 (UTC)
- First of all, be very careful about what it means when you say that something is false within a formal system. Often the system is defined by what it means for something to be true or false. Gödel's incompleteness theorems can be resolved by allowing something to be true but unprovable. This means that the system is incomplete, which is a bad thing, but many feel it is a price we have to pay. It can also be false but provable, which usually means that the system is inconsistent. This can be very bad; for example, classical logic breaks down when faced with an inconsistency. Having one, you can prove any statement. However, some feel that logical systems that can reason in a good manner even in the presence of inconsistency are useful for when we are reasoning from assumptions that might be faulty. See paraconsistent logic for further treatment. However, even when we are operating in a paraconsistent logic we do not actually want inconsistency. Taemyr (talk) 08:35, 16 October 2009 (UTC)
- The variations of Russell's paradox seem to be of the form
- The <V>er that <V>s all (and only those) who don't <V> themselves
- The result is if the <V>er <V>s itself, it does not <V> itself, and if the <V>er does not <V> itself, it does <V> itself. So in either case of it <V>s or does not <V>, it may be taken to <V> and not <V>. It is in "A and not A", so false.(it is already assumed that A, and resulted in not A, this seems to be the way)
- If an assumption has some false statement as a result, it seems to be right to judge it false.
- The <V>er that <V>s all (and only those) who don't <V> themselves
- can be a false assumption in that the <V>er does not <V> all (and only those) who don't <V> themselves.
- Like sushi (talk) 08:50, 16 October 2009 (UTC)
- Yes, the barber described in Barber paradox does not exist. Russell's paradox is a paradox only if you assume that every set that can be defined must also exist. If you accept that it's possible to describe sets such that the description corresponds to no existing set then there is no paradox. However, it creates the obvious question: when you describe a set, how do you know that a set satisfying your description exists? Taemyr (talk) 09:30, 16 October 2009 (UTC)
- Sorry. About the first of Gödel's incompleteness theorems, in saying "false but is provable to be true", I had an illusion that "provable to be true" includes "true". It must be otherwise, that "true" includes "provable to be true".
- And I have no clue about how to answer "the obvious question", "how can we know that a described set exists?". Could you give me any hint about it or examples of sets that cannot exist? (Please don't just link to ZFC or something, it is too complicated for me to understand the reason)
- Like sushi (talk) 10:36, 16 October 2009 (UTC)
- Provable usually implies true. Otherwise it's usually seen as a problem with the proof system. However, if you want something to be false but provable then either you must get rid of that implication, so you must accept that provable does not necessarily imply true, or you must accept that something can be both true and false.
- Well, Russell's set, the set of all sets that are not members of themselves, is an example of a set that cannot possibly exist. Unless I am mistaken, it's not possible to create a general method to determine whether or not a proposed set exists. Taemyr (talk) 11:16, 16 October 2009 (UTC)
- I have a suspicion about one of what are written in paraconsistent logic.
- Having a single inconsistency, can we really prove any statement?
- "if P and its negation are both assumed to be true, then P is assumed to be true, from which it follows that at least one of the claims P and some other (arbitrary) claim A is true. However, if we know that either P or A is true, and also that P is not true (that ¬P is true) we can conclude that A, which could be anything, is true"
- My suspicion is about "if we know that either P or A is true, and also that P is not true (that ¬P is true) we can conclude that A,... is true".
- P is not true, but is also true. so "P or A" is already true just with P, so A can be either true or false.
- Like sushi (talk) 12:18, 16 October 2009 (UTC)
- Disjunctive syllogism, that is going from (A or B) and Not B, to A, is valid in classical logic. So the inference you quote is valid in classical logic.
- For making a system paraconsistent it's fairly common to choose to get rid of disjunctive syllogism, for precisely the reasons you quote. See Paraconsistent_logic#Tradeoff. The problem is that when you do so you obtain a weaker system, so you can prove less in it. Taemyr (talk) 12:31, 16 October 2009 (UTC)
- By the way, about Gödel's incompleteness theorems, I think saying "Being true does not guarantee provability" is the easiest way to present it to non-specialists.
- Like sushi (talk) 12:45, 16 October 2009 (UTC)
- Given the theory T is consistent, then G is unprovable and true. Does that not mean G has been proven? If it means that G has been proven, then if T is consistent there is an inconsistency.
- Or if so, G would be provable to be true and unprovable to be true, "A and not A", so is false. Then it must be false but provable to be true. —Preceding unsigned comment added by Like sushi (talk • contribs) 01:19, 17 October 2009 (UTC)
- Like sushi (talk) 01:07, 17 October 2009 (UTC)
- What you can prove, without assuming very much at all, is that if T is consistent, then GT is true. This is not the same as proving that GT is true, without the assumption that T is consistent.
- Nevertheless, we know that for all consistent T, GT is indeed true, and we know this precisely because we can prove the implication that if T is consistent then GT is true.
- This is more confusing to state than it is to understand — once you see it, it's completely straightforward, but it's still hard to say in a way that doesn't sound like you're pulling a fast one. --Trovatore (talk) 01:28, 17 October 2009 (UTC)
- Is it that we don't know if T is consistent, so can not prove G to be true?
- Like sushi (talk) 02:16, 17 October 2009 (UTC)
- That's a fair way of putting it for the general case, sure.
- In specific cases, we may well "know" that T is consistent, for whatever value of "know" is being considered, and we will therefore also know that GT is true. But we still won't be able to prove it from T. We may be able to formalize a proof of GT, but it won't be a proof in T; it will be a proof in some theory of higher consistency strength. --Trovatore (talk) 02:21, 17 October 2009 (UTC)
- "Given the theory T is consistent then G is unprovable and true." is not a proof in T, that G is true? (I think it may be for "T is consistent" is not in T.)
- Like sushi (talk) 02:46, 17 October 2009 (UTC)
- Exactly, this is not a proof in T that GT is true, because given that T is in fact consistent, T cannot prove that T is consistent. That's the content of Gödel's second incompleteness theorem, and this exchange here is essentially the proof of the second incompleteness theorem. --Trovatore (talk) 03:09, 17 October 2009 (UTC)
- Gödel's incompleteness theorem#Second incompleteness theorem does not have enough explanation about why "given that T is in fact consistent, T cannot prove that T is consistent"
- Why cannot it?
- Like sushi (talk) 06:29, 17 October 2009 (UTC)
- For exactly the reason you intuited in your message of 02:46! If T could prove T is consistent, then since T can also prove that, if T is consistent, then GT is true, it would follow (by modus ponens) that T could prove GT is true. But we know that, if T is in fact consistent, then T cannot prove GT. --Trovatore (talk) 06:40, 17 October 2009 (UTC)
- Well, thank you. I think I understand it now.
- Like sushi (talk) 07:19, 17 October 2009 (UTC)
- About "Having a single inconsistency, can we really prove any statement?"
- Again, in paraconsistent logic:
- "if P and its negation are both assumed to be true, then P is assumed to be true, from which it follows that at least one of the claims P and some other (arbitrary) claim A is true. However, if we know that either P or A is true, and also that P is not true (that ¬P is true) we can conclude that A, which could be anything, is true"
- My suspicion is, in this time, about "at least one of the claims P and some other (arbitrary) claim A is true". Can A not be either true or false independent of P?
- Like sushi (talk) 02:16, 17 October 2009 (UTC)
- I would strongly advise you to forget about the concepts of "true" and "false" when thinking about paraconsistent logics and Gödel's theorem, at least at first. In a classical logic, every proposition P has an opposite ¬P. Ideally, for every proposition P, you want the formal system to prove either P or ¬P, but not both—i.e. you want it to "settle every question". Such a system is said to be consistent and complete. If neither P nor ¬P is provable for some P, the system is incomplete. If both are provable, it's inconsistent. What Gödel's theorem shows is that in any formal system that contains enough arithmetic, there's a proposition P that's provable if and only if ¬P is provable, so the system is either inconsistent or incomplete. That's all. The stuff about P being true or false is interpretation on top of that, it's not something that Gödel proved. In nonclassical logics this binary pairing of P and ¬P doesn't work any more because there's no dualizing operation that exchanges them. Instead of going P ⇒ ¬P ⇒ P ⇒ ¬P ⇒ ⋯, negation goes P ⇒ ¬P ⇒ ¬¬P ⇒ ¬¬¬P ⇒ ⋯. So you no longer have a black-and-white true-and-false world of yes-no questions. Just because you have P and ¬P doesn't mean you've necessarily covered all the bases. -- BenRG (talk) 10:55, 17 October 2009 (UTC)
- For paraconsistent logics, which really have no interpretation, it might make sense to forget about truth. If for no other reason than that, once you start thinking about truth, you'll realize that paraconsistent logic is bogus. (Oh, not completely bogus — it probably does model certain aspects of commonsense reasoning that deserve to be modeled — but bogus in the sense that it's not a useful view of mathematical reality.)
- But I completely disagree with you that you shouldn't think about truth in order to understand the Gödel theorems. On the contrary, keeping track of what's actually true, separate from what's provable in T, is the most effective way to avoid getting confused. As to what "Gödel actually proved", you have to keep in mind the times — philosophers in the thirties tended to view truth with some suspicion, and it's quite likely that Gödel wanted to avoid that fight. --Trovatore (talk) 19:03, 17 October 2009 (UTC)
- Sorry, my explanation was not sufficient. In saying "in paraconsistent logic" I just wanted to show the source. I wanted to link to the Definition section, which shows the "principle of explosion" (the explanation of which I quoted) in classical logic. The question in the post above is about that.
- Like sushi (talk) 12:14, 17 October 2009 (UTC)
(Deindenting). A logic is a set of rules for what constitutes a valid inference within that logic. In Classical logic we are allowed to go from (A or P) and ¬P, to A. There really isn't anything more to say: if you create a system where Disjunctive syllogism is an invalid inference, then you create a system that can prove less. This might be a tradeoff you are willing to make if you can't know that your assumptions are not contradictory. But it is a different system. Taemyr (talk) 14:24, 17 October 2009 (UTC)
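Taemyr's point, that disjunctive syllogism is valid in classical logic, can be checked by brute force over all truth assignments (a small illustrative sketch; the function name is mine):

```python
from itertools import product

def disjunctive_syllogism_valid():
    """True iff ((A or P) and not P) -> A holds under every classical valuation."""
    return all(
        (not ((a or p) and not p)) or a  # material conditional: premises imply A
        for a, p in product([False, True], repeat=2)
    )

print(disjunctive_syllogism_valid())  # -> True
```

A paraconsistent logic rejects exactly this inference pattern, which is why such a system proves less.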
Lottery: 6 numbers out of 49, expected values in case of large jackpots
Let’s assume that one lottery ticket costs $1. I can calculate the probabilities of matching 6, 5, 4 or 3 numbers (the cases in which prizes are paid). 40% of the money paid into the lottery goes to prizes. 44% of the total prize fund goes to the main prize (for matching 6 numbers), 8% to the second prize (matching 5 numbers). There is a cumulation/jackpot (if no one wins the main prize, the money goes to the next drawing). Is there a way to calculate when (in terms of the size of the jackpot) buying a lottery ticket gives an expected value greater than zero? —Preceding unsigned comment added by 213.158.197.100 (talk) 13:28, 14 October 2009 (UTC)
- Sorry, I don't understand. Does 44% of the jackpot go to the first prize (i.e. all six numbers) and 8% of the jackpot go to the second prize (i.e. five numbers)? Why do you say that 40% goes to prizes and what are the percentages for four and three numbers? If you know the percentage of the jackpot awarded to three, four, five and six numbers then just calculate the size of the jackpot that makes the expected payoff more than $1. Zain Ebrahim (talk) 14:09, 14 October 2009 (UTC)
$100 - money paid for lottery tickets; $40 - total prize fund; $17.60 - money for the main prize (44% × $40); percentages for four and three numbers are not stated —Preceding unsigned comment added by 213.158.197.100 (talk) 14:44, 14 October 2009 (UTC)
- Since the expected winning is a function of the relative amounts paid to the third and fourth prizes, the most I can see is a range of jackpots. Use the hypergeometric distribution to calculate the probabilities. The upper bound of the range will be when 48% of the prize fund goes to the third place (i.e. the less likely) and the lower bound when 48% goes to fourth place. Zain Ebrahim (talk) 15:16, 14 October 2009 (UTC)
- Ignoring the possibility of sharing the jackpot, it should be this easy: For each $1 you invest, you can expect to win back $.40, if there's no jackpot. So the expected jackpot winning must exceed $.60, which means that the jackpot must be at least $.60/P(6), where P(6) is the probability of matching 6 numbers. To take jackpot sharing into account, we would also need to know how many people are playing. —JAO • T • C 09:54, 15 October 2009 (UTC)
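JAO's threshold can be made concrete (a sketch under the same simplifications: a 6-of-49 hypergeometric draw, no jackpot sharing, and ignoring the smaller prizes; function and variable names are mine):

```python
from math import comb

def p_match(k, picks=6, pool=49):
    """Hypergeometric probability of matching exactly k of the 6 drawn numbers."""
    return comb(picks, k) * comb(pool - picks, picks - k) / comb(pool, picks)

P6 = p_match(6)                  # 1 / comb(49, 6) = 1 / 13,983,816
break_even_jackpot = 0.60 / P6   # jackpot needed before a $1 ticket breaks even
print(round(break_even_jackpot))  # -> 8390290
```

So under these assumptions the jackpot has to exceed roughly $8.4 million before a ticket has positive expected value.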
October 15
simple formula for fractions in computer code
i am writing a simple computer program that has a draggable slider in it. this slider at its minimum can have a value of 1 and at its maximum it can have a value of 1000. now, i have another object, let's say it is a light. when off, it is black and has a value of 1, it rises through shades of grey and at complete brightness (pure white) it has a value of 100.
i want this light to interact according to the slider between the numbers 400-500. so if the slider is valued at 1 to 400, the light would be black, between 401-499 would be shades of grey and from 500 to 1000 would be pure white.
what mathematical formula, without the use of code, would i use to solve this? i want my light to interpret the numbers 1-100 the same way my slider interprets its value of 400-500 in its range of 1-1000.
i hope i made this clear enough D:
thanks! Bonusbox (talk) 01:42, 15 October 2009 (UTC)
- I'm not sure what you mean by "without the use of code". Do you mean, no "if" statements? Are you looking for a single mathematical expression that could be translated into a single line of mathematics-only code?
- In general, what you want to do is take your slider's value (1-1000) and subtract 400 from it. Then, if the result is less than 0, make it zero. If the result is greater than 100, make it 100.
- If you have MIN and MAX functions that return the smallest and largest passed values (respectively), then you could do something like this (make sure that your math operations are done in signed integers):
- lightValue = MIN(MAX(sliderValue - 400, 1), 100)
- Note that slider value 401 translates to light value 1, which you initially said was black. That's a bit different from what you said later, "between 401-499 would be shades of grey".
- Does that help? –RHolton≡– 04:29, 15 October 2009 (UTC)
- excellent. i guess code was required. in actionscript it turned out to be: lightValue = Math.min(Math.max(sliderValue - 400, 1), 100);
- all working now though. thanks! Bonusbox (talk) 21:51, 15 October 2009 (UTC)
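For reference, the same branch-free clamp reads almost identically in plain Python (a sketch mirroring RHolton's formula; the function name is mine):

```python
def light_value(slider_value):
    """Map a slider reading (1-1000) to a light value (1-100).

    Up to slider 400 the light stays at its minimum of 1 (black); at 500
    and above it saturates at 100 (white); in between it rises linearly.
    """
    return min(max(slider_value - 400, 1), 100)

print(light_value(1), light_value(450), light_value(500), light_value(1000))  # -> 1 50 100 100
```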
- Code is not strictly speaking required. You could use polynomial interpolation. Assuming the slider only gives integer values you would get a degree 999 polynomial. This might be prone to overflow though. Taemyr (talk) 09:40, 16 October 2009 (UTC)
- I was wondering if there was a strictly math-based (no code) approach (no matter how impractical). Would polynomial interpolation result in a one to one mapping? –RHolton≡– 13:30, 16 October 2009 (UTC)
- Well, let's see... You are only interested in integer values, right? Thus you have a function that is defined at 1000 points (x; y), and no other points really matter. So you can act as if you wanted to get some values between them - in such a case polynomial interpolation would give you an exact fit at those 1000 points (that is, if we could somehow avoid the rounding errors). Of course, finding a polynomial of degree 999 (and then calculating its value) can safely be considered "impractical" in the presence of such a simple alternative... --Martynas Patasius (talk) 16:50, 17 October 2009 (UTC)
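The polynomial-interpolation idea can be illustrated on a downscaled version of the problem (hypothetical small ranges chosen only so the exact-arithmetic demo stays fast; `Fraction` sidesteps the rounding errors Martynas mentions):

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Downscaled slider: values 1-10, with a clamped "light" value running 1-2.
clamp = lambda s: min(max(s - 3, 1), 2)
points = [(s, clamp(s)) for s in range(1, 11)]  # 10 points -> degree-9 polynomial
print(all(lagrange_eval(points, s) == clamp(s) for s in range(1, 11)))  # -> True
```

The fit is exact at every integer point, but between and beyond them a high-degree interpolant oscillates badly, which is another reason the min/max form wins in practice.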
October 16
cbse mathematics question papers
i want cbse 11th maths chapterwise question papers. Please help me....... —Preceding unsigned comment added by 59.92.241.69 (talk) 12:45, 16 October 2009 (UTC)
- I think "cbse" means Central Board of Secondary Education. I do not understand the rest of your request. Do you want study questions? –RHolton≡– 13:44, 16 October 2009 (UTC)
- It sounds like OP's asking for past papers? Vimescarrot (talk) 22:38, 16 October 2009 (UTC)
The Limiting Function of a Fourier Series
So given a Fourier series, we can say a lot about the function from which it was generated: things like convergence (in various norms), whether the function is even, odd, or neither, whether it is continuous or differentiable, how many times it is continuously differentiable, even things like the period of the function. My question is, given a Fourier series, is there some sort of an "inverse" operation we can do to retrieve the original function exactly? Is there any way to tell what function was used to generate the given Fourier series? My question in particular is about this
What is this converging to? Thanks! -Looking for Wisdom and Insight! (talk) 23:25, 16 October 2009 (UTC)
- I'm not exactly sure what you're asking. I mean, there exist various manipulations you can do on series to rewrite them as other series, but the series you've given seems to be a perfectly good closed-form expression for a function to me. RayTalk 02:03, 17 October 2009 (UTC)
The answer is no. For example, look at two periodic functions each with one jump discontinuity, differing from each other ONLY in that one is continuous from the right and the other from the left at that jump. They both have the SAME Fourier series. Michael Hardy (talk) 02:17, 17 October 2009 (UTC)
...but I should add that there is an inverse operation if, instead of pointwise defined functions, you look at Fourier series of certain sorts of generalized functions, or Fourier series of well-behaved measures. Michael Hardy (talk) 02:19, 17 October 2009 (UTC)
- This seems like a circular question. The function that generates the series above is g(x). In other words, the series converges to a function that generates it. If you're asking whether there is some way to generate a closed form expression for the function given the series, then the answer is probably no.--RDBury (talk) 14:22, 17 October 2009 (UTC)
"series converges to a function that generates it" is well known to be false. Michael Hardy (talk) 14:40, 17 October 2009 (UTC)
We have an article on this: Convergence of Fourier series. Michael Hardy (talk) 14:47, 17 October 2009 (UTC)
- I should have added "with assumptions about absolute convergence etc." The terms in the series given are all O(1/n4) so I kind of assumed that would be implied by the context.--RDBury (talk) 15:03, 17 October 2009 (UTC)
The fact remains (as I pointed out above) that more than one function generates the same series. Isn't that what the question asked? Michael Hardy (talk) 15:10, 17 October 2009 (UTC)
- My interpretation of the question is he wants an expression for the function that generates the series. What I should have said is that g(x) is one such function. But I think what the original poster was really looking for is a closed form expression and I doubt it exists. Not really sure why this is turning into an argument, you clearly know a lot more about the subject than I do and I've already agreed that I should have been more careful about the statement I made.--RDBury (talk) 15:30, 17 October 2009 (UTC)
OK, maybe I'm getting distracted by the language that was used. The poster used the word "inverse". The way I am accustomed to think of it, summing the series is the inverse operation. So if the question is whether the inverse operation can tell you the original function, the answer is that there are certainly limits on how far you can take that, and the question of what the limits are is complicated.
But if the poster meant "How do you sum the series?", that's another matter. In this case n√5 looks as if it might not be easy to deal with, but I'm not sure. Michael Hardy (talk) 16:59, 17 October 2009 (UTC)
Rewrite the sines and cosines as complex exponentials; the series can then be written in closed form quite easily in terms of polylogarithms. It's doubtful that it can be reduced to elementary functions, but I'm not sure. Fredrik Johansson 17:22, 17 October 2009 (UTC)
- I see your point. I was thrown by the n root 5 as well but you can expand the product into functions of . If it was n3 in the denominator then it looks like you could get a piecewise polynomial function by repeated integration of the sawtooth function, but the parity is wrong for that in this case.--RDBury (talk) 18:27, 17 October 2009 (UTC)
October 17
Quadratic forms on real matrices
Hi all - I've been given a problem to show that the map A ↦ tr(A²) is a quadratic form on Matn(ℝ), and find its rank & signature (where tr(A) denotes the trace of A).
The first part is no problem - I'm using the definition of a quadratic form where Q is a q.f. iff Q(λx) = λ²Q(x) and the map (x, y) ↦ Q(x + y) − Q(x) − Q(y) is bilinear, so that's fairly simple. However, I don't even understand how to begin finding the rank or the signature - with quadratics in x, y, z I know to write them in the form xTAx for the vector x = (x,y,z) and symmetric matrix A, but I'm in a little over my head here; how do I get started when I'm working with n×n matrices instead?
Many thanks, Mathmos6 (talk) 01:08, 17 October 2009 (UTC)
- Well you can write it out as a quadratic: If A=[a,b;c,d] then tr(A^2) = a^2 + d^2 + 2bc. The matrix of the quadratic form is just a permutation matrix: [1,0,0,0;0,0,1,0;0,1,0,0;0,0,0,1]. If A=[a,b,c;d,e,f;g,h,i], then tr(A^2) = a^2 + e^2 + i^2 + 2bd + 2cg + 2fh. JackSchmidt (talk) 04:37, 17 October 2009 (UTC)
- You can write it out as a quadratic for fixed n, but I think the OP wants it for general n. --Tango (talk) 09:54, 17 October 2009 (UTC)
If A = (aij) is an n × n matrix then, as the trace article shows, tr(A²) = Σi Σj aij aji.
This gives an explicit expression for the quadratic form. The rank and signature should follow from calculations. Find the symmetric matrix, say M, such that vMvT = tr(A²) where v = (a11, a12, …, ann).
Finding M shouldn't be all that difficult. For example, in the case where n = 2, tr(A²) = a11² + 2a12a21 + a22², and so, with respect to the ordering v = (a11, a12, a21, a22), M = [1,0,0,0; 0,0,1,0; 0,1,0,0; 0,0,0,1].
The eigenvalues are given by −1, 1, 1, and 1; the eigenvectors are (0,−1,1,0), (1,0,0,0), (0,1,1,0), and (0,0,0,1) respectively. The rank and signature follow. ~~ Dr Dec (Talk) ~~ 11:32, 17 October 2009 (UTC)
If A is symmetric and B is skew-symmetric then tr(AB)=0. Moreover, Q(A):=tr(A2) is positive definite on the space of symmetric matrices, and it is negative definite on the space of skew-symmetric matrices. So it's a non-degenerate quadratic form with signature ( n(n+1)/2 , n(n-1)/2 ). --pma (talk) 21:26, 17 October 2009 (UTC)
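pma's decomposition can be checked numerically for a small n: the basis below diagonalizes Q(A) = tr(A²), so counting signs gives the signature (a pure-Python sketch; helper names are mine):

```python
def trace_sq(A):
    """tr(A^2) for a square matrix given as a list of rows."""
    n = len(A)
    return sum(A[i][j] * A[j][i] for i in range(n) for j in range(n))

def E(n, i, j):
    """Elementary matrix with a single 1 in position (i, j)."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def add(A, B, sign=1):
    return [[a + sign * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

n = 3
sym = [E(n, i, i) for i in range(n)] + \
      [add(E(n, i, j), E(n, j, i)) for i in range(n) for j in range(i + 1, n)]
skew = [add(E(n, i, j), E(n, j, i), -1) for i in range(n) for j in range(i + 1, n)]

# Q is positive on the n(n+1)/2 symmetric basis matrices and negative on
# the n(n-1)/2 skew-symmetric ones, so the signature for n = 3 is (6, 3).
pos = sum(trace_sq(A) > 0 for A in sym)
neg = sum(trace_sq(A) < 0 for A in skew)
print(pos, neg)  # -> 6 3
```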
Good way to remember meet vs join?
I can never remember which of Meet and Join is which (and the symbols and ). Any suggestions for a good way to remember? — Matt Crypto 17:29, 17 October 2009 (UTC)
- Meet looks like intersection which looks like an 'n' and join looks like union which looks like a 'u'. Does that help?--RDBury (talk) 18:10, 17 October 2009 (UTC)
- The meet of two sets is where they meet, i.e. the intersection (and has a symbol resembling the intersection symbol). The join of two sets is the two sets joined together, i.e. the union (and again, the symbol is similar). Algebraist 18:15, 17 October 2009 (UTC)
- (ec) One way I can think of is that if you have a power set with partial order x ≤ y for x ⊆ y, then x ∧ y = x ∩ y and x ∨ y = x ∪ y.
- More generally, ∧ and ⋀ are like 'A' for "and", and ∪ and ⋃ are like 'U' for "union". Rckrone (talk) 18:28, 17 October 2009 (UTC)
- Let's add the product ∏ and the coproduct ∐. Note that cup-like symbols denote inductive limits while cap-like symbols, like the product ∏, denote projective limits. Also, it may help the mnemonic to recall that (like ∪ is for "Union") ∨ stands for the Latin conjunction vel, "or" (indeed, x is in A ∪ B iff x is in A, or in B). --pma (talk) 21:47, 17 October 2009 (UTC)
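The power-set mnemonics above can be seen directly in Python, where `&` is intersection (the meet) and `|` is union (the join):

```python
A = {1, 2, 3}
B = {2, 3, 4}

meet = A & B  # cap/wedge-like: where the sets "meet" (intersection)
join = A | B  # cup/vee-like: the sets "joined" together (union)

print(sorted(meet), sorted(join))  # -> [2, 3] [1, 2, 3, 4]
```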
Certain conformal map projections
At Talk:Dymaxion map, I posted a hastily scrawled question, which I then amended a few minutes later. This seems like a better place to find an answer than that page.
The Dymaxion map is a world map on the surface of an icosahedron (or maybe in some cases some other polyhedron?). It can unfold to make a flat map, perhaps with less distortion (except maybe along the edges and certainly at the vertices) than some other projections of the whole world, or nearly the whole world, onto a page.
The simplest way to do that would be 20 separate stereographic projections, I would think. Stereographic projections are conformal.
So that would make the Dymaxion map conformal except at the vertices and possibly along the edges.
My guess is it's not conformal along the edges.
And obviously it could not be conformal at the vertices.
Two questions:
- Is my guess about the edges right? (Maybe I'll fiddle with this and see if I can find the answer before anyone answers here; it doesn't seem like a hard question.)
- If so, a possibly harder question to answer: is there some other way to do it, i.e. not stereographic, that makes it conformal everywhere except at the 12 vertices?
Michael Hardy (talk) 23:04, 17 October 2009 (UTC)
October 18
Getting pi with a limit
I was reading Method of exhaustion, and I wanted to try out Archimedes' method for calculating pi with a limit. Finding the limit as the number of sides of a regular polygon approaches infinity of the ratio of its area to its apothem squared gave me:
lim_{n→∞} n·tan(π/n)
Obviously, this equals pi, but I have no clue how to show that. I only have a very basic understanding of limits; could someone show me how to evaluate that expression to obtain pi? Thegreenj 00:04, 18 October 2009 (UTC)
- If you know about derivatives, your limit is just the derivative of tan(πx) at x = 0, which is indeed π. That's not Archimedes' method, of course: he compared the perimeters of the inscribed and circumscribed regular n-gons, which are very close for large n, and π is in between. If n is a power of two, meaning that you double the vertices recursively, the formula for the perimeter is somewhat simpler, giving rise to Viète's formula. --78.13.138.118 (talk) 00:20, 18 October 2009 (UTC)
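The ratio of a regular n-gon's area to the square of its apothem is n·tan(π/n), and its convergence to π is easy to watch numerically (Archimedes stopped at the 96-gon):

```python
from math import tan, pi

# area / apothem^2 for a regular n-gon is n * tan(pi / n), which tends to pi
for n in (6, 96, 10_000, 1_000_000):
    print(n, n * tan(pi / n))
```

Already at n = 96 the value agrees with π to about three decimal places, consistent with Archimedes' classical bounds.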