Wikipedia:Reference desk/Mathematics: Difference between revisions
:Why does the variable have to be ''x''? Surely the labelling of a variable is unimportant. I could replace ''y'' with a heart and still have the same ODE:
::<math>a\frac{dv}{d\hearts} = b + c\frac{d^2v}{d\hearts^2}</math> <small><span style="font-family:Kristen ITC; color:#FF6600;">~~ [[User:Dr Dec|<span style="color:#006600;">Dr Dec</span>]] <span style="color:#009999;">([[User talk:Dr Dec|Talk]])</span> ~~</span></small> 19:47, 10 November 2009 (UTC)
:::even nicer! --[[Special:Contributions/84.221.208.19|84.221.208.19]] ([[User talk:84.221.208.19|talk]]) 20:44, 10 November 2009 (UTC)
== Which school are you? ==
Revision as of 20:44, 10 November 2009
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
November 4
Complete vector spaces under the uniform norm
Hi all:
I'm wondering whether the following spaces are complete under the uniform norm ||f|| = sup{|f(x)| : x in R}. I'm a little concerned that my arguments are too rough and that I'm neglecting things which could lead me to be wrong because I haven't thought it through well enough. I'd appreciate it if someone could point out areas where I'm going wrong, which I have no doubt there will be!
i) The space of bounded continuous functions f : R → R.
ii) The space of continuous functions f : R → R with f(x) → 0 as |x| → ∞.
iii) The space of continuous functions f : R → R with f(x) = 0 for |x| sufficiently large.
My thoughts:
i) I know that the space is complete on the bounded interval [0,1], and so presumably the same is true of any interval [a,b] by stretching and translating. The functions fn (a Cauchy sequence) converge to their pointwise limit on these intervals, and since the interval is arbitrary, the functions have a limit f which is continuous on any interval [a,b] and thus bounded on it too. Thus, the only manner in which f can be unbounded is as x tends to ± infinity. Suppose this were the case (WLOG take + infinity). Since f is the pointwise limit of the fn, the fn would then have arbitrarily large upper bounds as n tends to infinity. But the Cauchy property means we can make the fn uniformly as close as we like for all n > some N, so all the fn would have to tend to infinity as x tends to infinity, since they must remain as close to fN as we require by choice of N. Thus the space is complete.
ii) The functions are continuous and by their definition must be bounded, so by (i) a Cauchy sequence converges to a bounded continuous function f in the limit. Because f is in fact the pointwise limit, if f did not tend to 0 as |x| tends to infinity, then the fn would not either; so by a contradiction argument f too must tend to 0, and the space is again complete.
iii) I believe I've constructed a sequence of functions which are 'truncations' of f(x) = 1 if |x| ≤ 1, 1/|x| if |x| > 1, and which in the limit tend to this f, which is not equal to 0 for any sufficiently large |x|. The sequence is Cauchy because, as m and n tend to infinity, fm and fn agree with each other exactly for larger and larger ranges of |x|, and thus ||fm − fn|| tends to 0; so the sequence is Cauchy, and hence this is not a complete space.
Sorry if any of that doesn't make sense, please try to muddle through :) I do intend to refine this argument more when I write the question up in full, but since it's quite long I'd rather sort things out in rough before I dive in fully rigorously - I have, no doubt, made some stupid mistakes, and your help would be greatly appreciated! (And thanks for reading this far, too.) Typeships17 (talk) 02:50, 4 November 2009 (UTC)
- i) This looks right but I think you can tighten it up a bit. To show the pointwise limit f is bounded, pick any ε > 0. There exists an N such that ||fN - fn|| < ε for all n > N. Suppose fN is bounded by M, then you can show f is bounded by M + ε.
- ii) It's not true in general that the pointwise limit of a sequence of functions that all go to 0 as x goes to infinity must go to 0 as x goes to infinity. For example let gn(x) = 1 for |x| < n and gn(x) = 0 otherwise. However for any ε > 0 you can find N as before, and then some B such that |fN(x)| < ε for all |x| > B, and then you should be able to argue that |f(x)| < 2ε for all |x| > B.
- iii) This looks good.
- Rckrone (talk) 04:09, 4 November 2009 (UTC)
- Wow, I wasn't expecting an answer so quickly, thanks very much! :D I'll get to writing them up properly then! Typeships17 (talk) 04:16, 4 November 2009 (UTC)
- As a further remark, note that the space (iii) is dense in the space (ii), and different from it, so it's not complete. Both (i) and (ii) are closed subspaces of the space B(R) of all bounded functions R→R, so (i) and (ii) are complete since the latter is complete. For analogous cases, you may find useful the general result (same proof) that for any set S and any Banach space E, the space B(S,E) with the sup norm is still a Banach space (this way you get at once the completeness of a lot of norms based on the sup norm: dual and operator norms, &c). Although (iii) is not complete with the sup norm, it has another natural structure of TVS as the inductive limit (in the category TVS) of the inclusions C(K)→C(R), for K compact. This topology is locally convex, non-metrizable, and sequentially complete. --pma (talk) 08:28, 4 November 2009 (UTC)
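The Cauchy-but-non-convergent truncation sequence from (iii) can be sanity-checked numerically. A minimal sketch (the taper scheme, grid, and function names are my own choices, not from the thread): the truncations of f(x) = min(1, 1/|x|) get uniformly close to each other, yet their limit is never eventually zero.

```python
# Truncations f_n of f(x) = min(1, 1/|x|): they agree with f on [-n, n]
# and taper linearly to 0 on n <= |x| <= n+1, so each is eventually zero.
def f(x):
    return min(1.0, 1.0 / abs(x)) if x != 0 else 1.0

def f_n(n, x):
    return f(x) * max(0.0, min(1.0, n + 1 - abs(x)))

grid = [i / 10.0 for i in range(-2000, 2001)]   # sample points on [-200, 200]

def sup_dist(m, n):
    # sup-norm distance estimated on the grid
    return max(abs(f_n(m, x) - f_n(n, x)) for x in grid)

# Cauchy in the sup norm: ||f_m - f_n|| <= 1/min(m, n)
for m, n in [(10, 20), (20, 40), (50, 100)]:
    assert sup_dist(m, n) <= 1.0 / min(m, n) + 1e-12

# ...but the uniform limit f is nonzero at arbitrarily large |x|,
# so it lies outside the space of eventually-zero functions.
assert f(100.0) > 0
```

This mirrors the argument in the thread: the sup distances shrink like 1/n while the limit fails the defining property of the space.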
Two parallelogram areas theorem
When two parallelograms lie on the same base and between the same parallels, their areas are equal. I have a problem with the proof. When the figures are like this, it can be done by proving the triangles congruent (side-angle-angle), but when the figures are like this, how do we do it? Srinivas 15:32, 4 November 2009 (UTC)
- What about adding congruent triangles to both, so as to obtain congruent trapezoids. --pma (talk) 15:53, 4 November 2009 (UTC)
- (e/c) If all else fails, you can introduce sufficiently many other parallelograms in between so that the first case applies to each pair of neighbours, and use transitivity of equality. — Emil J. 15:57, 4 November 2009 (UTC)
- You can also make the same congruent triangle argument with triangles ACE and BDF. You just need to do some subtraction to get to the areas you want. Rckrone (talk) 18:56, 4 November 2009 (UTC)
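The theorem itself is easy to check coordinate-wise: with the shoelace formula, any parallelogram on base b between parallels a height h apart has area b·h, whatever the horizontal shear. A small sketch (the particular base, height, and shear values are my own):

```python
# Shoelace formula for the area of a simple polygon given as vertex list.
def shoelace(pts):
    n = len(pts)
    total = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                for i in range(n))
    return abs(total) / 2.0

base, height = 4.0, 3.0
areas = []
for shear in [0.0, 1.5, 7.0, -2.0]:   # includes shears where the tops don't overlap
    pts = [(0, 0), (base, 0), (base + shear, height), (shear, height)]
    areas.append(shoelace(pts))

# all parallelograms on the same base, between the same parallels: area b*h
assert all(abs(a - base * height) < 1e-9 for a in areas)
```

The shear value 7.0 corresponds to the harder second figure, where the congruent-triangle argument isn't immediate.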
Incompleteness theorem
The incompleteness theorem says that in any mathematical system, there are statements which can be neither proved nor disproved. Is there any way to prove that a statement is unprovable by this theorem? --75.50.49.177 (talk) 23:05, 4 November 2009 (UTC)
- I'm afraid I don't know what you're really asking. You'll probably need to phrase it more precisely before anyone can help. In the meantime you might take a look at our article on Gödel's incompleteness theorems and see if it addresses your question. --Trovatore (talk) 23:12, 4 November 2009 (UTC)
- The incompleteness theorem states that some statements are neither provable nor disprovable. However, is there any way of determining which statements those are? --75.50.49.177 (talk) 23:14, 4 November 2009 (UTC)
- For a given theory T satisfying the hypotheses, the (modified) proof of the first incompleteness theorem gives an effective way of finding a sentence GT such that, if T is consistent, then GT is true, yet neither provable nor refutable from T. Sorry I can't make it less complicated than that. All the stipulations are necessary in order to have a correct statement of the situation, so you'll just have to work your way through them. -Trovatore (talk) 23:21, 4 November 2009 (UTC)
- That says how to construct a statement which is unprovable. However, given a statement (e.g. the Riemann Hypothesis), is it possible to determine whether or not that statement is provable? --70.250.215.137 (talk) 02:41, 5 November 2009 (UTC)
- Not in general, no. Well, at least not by any fixed mechanical procedure. In technical terms, the set of theorems of T is computably enumerable but not computable. --Trovatore (talk) 02:44, 5 November 2009 (UTC)
- As Trovatore says, not in general, no. But there are some specific examples where undecidability can be proved: one of the most famous problems of the early 20th century was to prove or disprove the continuum hypothesis. This turns out to be both unprovable and un-disprovable (also known as independence) if you start from the usual axioms of set theory. The proof of independence was done in the 1960s by Paul Cohen and doesn't use the incompleteness theorem directly (it uses a complicated technique called forcing). That proof was considered a very major result. For other examples, see the article about independence that I linked to. 69.228.171.150 (talk) 05:39, 5 November 2009 (UTC)
- There are plenty of mathematical systems in which every statement can be either proved or disproved. Examples include the theory of real closed ordered fields and the theory of dense linear orderings without endpoints. There are several hypotheses in Gödel's incompleteness theorem and they are all important for the result. — Carl (CBM · talk) 03:28, 5 November 2009 (UTC)
Hmm, it occurs to me that I gave the answer about the theorems being c.e. but not computable without a proof immediately in mind, and when I thought of one, it requires T to be Sigma-1 sound. (The proof is, you can reduce the halting problem to the theorems of T -- given a Turing machine that you want to know whether it halts, just decide whether T proves that it halts. The usual hypotheses of the Gödel theorems suffice to show that, if the TM halts, then T proves that it halts, but for the converse you need that T is Sigma-1 sound. Does anyone see how to eliminate this hypothesis? I flat don't believe that the theorems of, say, PA+~Con(PA) form a computable set, even though that theory proves that certain Turing machines halt that actually don't halt.) --Trovatore (talk) 05:24, 5 November 2009 (UTC)
- As you're saying, no consistent completion of PA is computable, whether that completion is sound or not. Syntactically, you get rid of the soundness assumption by using Rosser's trick. But there is also a completely computational proof, related to the concept of a PA degree. We can prove that given any completion T of PA and any nonempty class C in Cantor space 2ω, there is an element of C computable from T. So if you take C to be a class with no computable elements, you obtain a corollary that there is no computable completion of PA. — Carl (CBM · talk) 11:25, 5 November 2009 (UTC)
- Well, unless I'm missing something, that's a little different question. I wasn't asking about completions. The question is, given a consistent r.e. theory extending PA and closed under logical consequence, can you show that that theory is not computable? --Trovatore (talk) 11:34, 5 November 2009 (UTC)
- Yes, and asking it to extend Q rather than PA is more than enough. — Emil J. 11:39, 5 November 2009 (UTC)
- Rosser's trick is one way to prove it, and the other argument by Carl works too: you just have to observe that every decidable theory has a decidable completion. The usual way to prove the theorem along the lines of your proof above is to take a recursively inseparable pair of r.e. sets instead of the halting problem. — Emil J. 11:44, 5 November 2009 (UTC)
- Why does every decidable theory have a decidable completion? --Trovatore (talk) 20:02, 5 November 2009 (UTC)
- You can do this explicitly. The language is countable, so fix an enumeration of all sentences. Then go along the list, and for each sentence φ use your decision procedure to determine which of φ and ~φ is provable from your theory so far. If one of them is, add it to the theory; if neither is, add φ. This gives a completion which is decidable by construction (since you can decide a sentence just by repeating the construction). Thus all you have to show is that if T is decidable, then T union {φ} is decidable, and this is obvious (to see if θ follows from T u {φ}, check if φ→θ follows from T). Algebraist 21:39, 5 November 2009 (UTC)
- Nice; thanks. --Trovatore (talk) 22:19, 5 November 2009 (UTC)
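Algebraist's construction can be run concretely on a toy decidable "theory": propositional consequence over two atoms, where truth tables are the decision procedure. (The formula encoding, the particular enumeration, and all names below are my own illustrative choices, not from the thread.)

```python
from itertools import product

# Formulas over atoms 'p', 'q': an atom string, ('not', f), or ('and', f, g).
ATOMS = ['p', 'q']

def ev(f, a):
    # evaluate formula f under truth assignment a (a dict atom -> bool)
    if isinstance(f, str):
        return a[f]
    if f[0] == 'not':
        return not ev(f[1], a)
    return ev(f[1], a) and ev(f[2], a)

def entails(theory, f):
    # decision procedure: f holds in every assignment satisfying the theory
    return all(ev(f, dict(zip(ATOMS, bits)))
               for bits in product([False, True], repeat=len(ATOMS))
               if all(ev(t, dict(zip(ATOMS, bits))) for t in theory))

# a small enumeration of sentences to process in order
sentences = ['p', 'q', ('not', 'p'), ('and', 'p', 'q'), ('not', ('and', 'p', 'q'))]

theory = []                     # start from the empty (decidable) theory
for phi in sentences:
    if entails(theory, phi):
        theory.append(phi)
    elif entails(theory, ('not', phi)):
        theory.append(('not', phi))
    else:                       # neither provable: add phi itself
        theory.append(phi)

# the resulting theory decides every sentence in the enumeration
assert all(entails(theory, f) or entails(theory, ('not', f)) for f in sentences)
```

With finitely many atoms the enumeration is finite and the procedure terminates; in the countable case of the post, the same loop run along a fixed enumeration yields a decidable complete extension.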
- Exactly. I think I did misread the original question. For PA (or any consistent extension of Q), the set A of formulas φ that are provable and the set B of formulas θ that are refutable are already recursively inseparable r.e. sets, that is, there is no computable set C including A and disjoint from B. Any consistent decidable extension of PA would give such a set, so there is no such extension. So PA (and Q) is "essentially undecidable". — Carl (CBM · talk) 12:48, 5 November 2009 (UTC)
November 5
Provable or Disprovable or Independent?
The above discussion about Godel's incompleteness theorem prompted a question I've had for a while: Is it true that any statement can either be proved from a set of axioms, disproved, or shown to be independent of those axioms?
Here's my reasoning: I've heard people suppose that, in addition to those possibilities, it might be undecidable whether or not something is undecidable, or undecidable whether or not it's undecidable that it's undecidable, and so on ad infinitum. However, if something is (undecidable)^n, then clearly there exists no proof of that statement, and there also exists no proof of its negation. But that means it is simply undecidable: it is independent of the axioms. It seems, then, that there is no possibility for a statement's undecidability to be undecidable-- given any problem, there are three options: we can prove it, disprove it, or give a proof of its independence. Is this valid? Do I need any conditions on a theory for this argument to apply? 140.114.81.68 (talk) 09:20, 5 November 2009 (UTC)—Preceding unsigned comment added by 140.114.81.68 (talk) 09:19, 5 November 2009 (UTC)
- If you do things this way, you have to keep careful track of where you're proving things. For example, the Goedel sentence GPA for Peano arithmetic (PA), cannot be proved or refuted in PA. Also, PA cannot prove that GPA is independent of PA.
- However the slightly stronger theory PA+"PA is consistent" can prove that GPA is independent of PA, and also that GPA is outright true.
- The thing to keep in mind is that "independent" is not a truth value. It's a statement about whether something is provable or refutable from a specific formal theory, and the theory always needs to be specified. Truth simpliciter on the other hand is a much more robust notion; the Goedel sentence of a consistent theory, though not provable in that theory, is always true, and you don't have to specify the theory that makes it true: It's just true, period. Remembering this is a good anchor when learning the Goedel stuff; it helps keep you from floating off into confusion. --Trovatore (talk) 09:35, 5 November 2009 (UTC)
- Also within any given consistent system of sufficient power there will be statements that we can not prove, nor prove the negation, nor give a proof that neither of the previous proofs exists. Taemyr (talk) 10:11, 5 November 2009 (UTC)
- If something is (undecidable)^n in T, as you say, then it is in particular undecidable in T (because if something is decidable in T, it is also provably decidable). However, the conclusion does not follow: there is no reason why we should be able to prove in T that the statement is (undecidable)^n (indeed, we never can do that, because undecidability of anything implies consistency of T). — Emil J. 11:52, 5 November 2009 (UTC)
In a certain sense, it is true that a particular sentence φ will be either provable from PA, disprovable from PA, or independent of PA. This is just tautologous. But it does not mean that "we can prove it, disprove it, or give a proof of its independence". There are many statements of interest where we can do none of those three things at present. Also, it will usually be true that the proof of independence is carried out in a metatheory rather than the theory at hand, as others have pointed out above. — Carl (CBM · talk) 13:46, 5 November 2009 (UTC)
Relation of "contiguity space" and "proximity space"
Are the usual meanings of "contiguity space" and "proximity space" essentially the same? At Talk:Contiguity space we have a suggestion that they are the same, mainly on the basis of a google result that included "proximity space or contiguity space". Essentially I am asking if the proposed redirect and renaming would conflict with established usage, or with what should be on Wikipedia. An article on proximity space already exists. If "contiguity space" is a separate idea we could still move out the stuff related to probability measures, but that would leave a very poor article. Melcombe (talk) 11:47, 5 November 2009 (UTC)
Lie Brackets
I put a question about Lie brackets here. Basically, I don't understand how they are useful and so why someone would have come up with them. —Ben FrantzDale (talk) 12:16, 5 November 2009 (UTC)
- The Lie bracket of vector fields has a nice geometric interpretation. Let M be a smooth Riemannian manifold and consider the C∞(M,R)-module of smooth vector fields over M. Let X and Y be smooth vector fields on M. We can consider the new vector field [X,Y] given by [X,Y] = XY − YX. If we flow along X a distance of ε and then along Y a distance of ε we reach a point p ∈ M; if instead we flow first along Y a distance of ε and then along X a distance of ε we reach a point q ∈ M. The difference between p and q is given by ε²[X,Y]. More formally: "The Lie bracket [X,Y] of two vector fields X and Y measures the O(ε²) gap in an incomplete quadrilateral of O(ε) arrows made alternately from εX and εY."[1] So it's measuring the extent to which movement is commutative. In our real-world experience, if we move two paces east and then two paces north we end up at the same point as if we had taken first two paces north and then two paces east. This wouldn't be the case for more exotic manifolds and more exotic vector fields. ~~ Dr Dec (Talk) ~~ 15:22, 5 November 2009 (UTC)
- ^ Penrose, R (2005), The Road to Reality: A Complete guide to the Laws of the Universe, Vintage Books, ISBN 0-099-44068-7
- Thanks. What I don't understand is how that is useful, beyond characterizing the local non-flatness of the manifold. When would I use it to solve a problem? Would it just be to answer questions about the manifold itself? Perhaps I'm just over my head and need to fill in more understanding to ask meaningful questions :-) —Ben FrantzDale (talk) 19:34, 6 November 2009 (UTC)
- Well, you've answered your own question there. The torsion and curvature tensors both use the Lie bracket in their definitions. They're both very fundamental and interesting objects in differential geometry. Asking where the Lie bracket might be useful or when you might use it to solve a problem is kind of like, to a much lesser extent, asking how addition is useful or how you might use addition to solve a problem: they're both mathematical constructions with their own uses. I know that addition is far more useful and commonplace than the Lie bracket, but I hope you get my point. ~~ Dr Dec (Talk) ~~ 19:54, 6 November 2009 (UTC)
- In addition, Lie brackets are a good way of calculating the Lie derivative of one vector field with respect to another one. Lie derivative can be intuitively understood as the speed of change of one vector field in the flow of the other one; this definition (as one of many) is formalised by the last formulae of this section. — Pt (T) 19:21, 9 November 2009 (UTC)
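The ε² "quadrilateral gap" described above can be seen numerically. A sketch with the fields X = ∂/∂x and Y = x ∂/∂y on R², for which [X,Y] = ∂/∂y (the choice of fields and step size is mine, not from the thread); both flows have simple closed forms, so no numerical integration is needed:

```python
# Flows of X = d/dx and Y = x d/dy on R^2.  Exactly: flowing along X for
# time t sends (x, y) to (x + t, y); flowing along Y sends (x, y) to (x, y + t*x).
def flow_X(p, t):
    x, y = p
    return (x + t, y)

def flow_Y(p, t):
    x, y = p
    return (x, y + t * x)

eps = 0.01
p = flow_Y(flow_X((0.0, 0.0), eps), eps)   # X first, then Y
q = flow_X(flow_Y((0.0, 0.0), eps), eps)   # Y first, then X

gap = (p[0] - q[0], p[1] - q[1])
# The gap is eps^2 [X, Y] = eps^2 * d/dy, i.e. the vector (0, eps**2).
assert abs(gap[0]) < 1e-15 and abs(gap[1] - eps**2) < 1e-15
print(gap)   # the commutator defect, (0, eps**2) up to rounding
```

Since [X,Y] = ∂/∂y is nonzero here, the two orders of travel genuinely disagree, unlike the paces-east/paces-north example with commuting coordinate fields.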
Transporting tensors on Lie groups
I have a question about parallel transport of tensors on Lie groups. Thanks. —Ben FrantzDale (talk) 18:02, 5 November 2009 (UTC)
- The exponential map gives a mapping from a Lie algebra to its Lie group. For the connexion between the two you might like to read this section of this article. Moreover, given your last question about Lie brackets, you might like to read this section of this article. Also, you might like to read this article. ~~ Dr Dec (Talk) ~~ 18:36, 5 November 2009 (UTC)
November 6
Counterexample
Could anyone suggest a nice example of a continuously differentiable function on R^2 which has only 1 stationary point - a local minimum (or maximum), but which is not a global minimum (or maximum)?
I keep looking for functions with minimum at 0,0 but I can't seem to find anything which increases locally in all 4 directions, i.e. along both axes, and yet doesn't produce any more stationary points when it eventually decreases somewhere. I found one in a paper by Ash and Sexton but it was quite unpleasant and I was hoping to find something nice and neat. Any ideas?
Thanks. 82.6.96.22 (talk) 00:50, 6 November 2009 (UTC)
- I think (x^3 - x)e^y + e^(2y) works. Rckrone (talk) 01:31, 6 November 2009 (UTC)
- As an aside, increasing along both axes doesn't guarantee a local min. For example x^4 + y^4 - (x+y)^4/4 - (x-y)^4/4 has only a saddle point at (0, 0). Rckrone (talk) 02:02, 6 November 2009 (UTC)
In polar coordinates: unless I misunderstand your question or am not thinking straight, try
That should have a local minimum at the origin but oscillate unboundedly as you get further away. 69.228.171.150 (talk) 06:10, 6 November 2009 (UTC)
- A further note. Consider a diffeo h of R^n with an open subset A of R^n itself, for instance a ball, h : R^n → A. If f : R^n → R is any differentiable function, the composition g(x) := f(h(x)) is a differentiable function whose critical set has the same structure (same number, index, nullity, level) as the critical set of f in A. Of course, the latter may be whatever we like. If the global max and min points of f belong to the boundary of A, then g also has the same sup/inf as f. So, for instance, there is a function on R^n with critical points of prescribed level, index, and cardinality, and the interval f(R^n) may also be prescribed. --pma (talk) 09:35, 6 November 2009 (UTC)
- With regards to the polar-coordinates suggestion, would that not have more than 1 stationary point though? 82.6.96.22 (talk) 14:23, 6 November 2009 (UTC)
- Oh yes, I did misunderstand. Hmm, I'll think about whether I can fix it by suitably twisting up the shape while keeping the equations reasonably neat. 69.228.171.150 (talk) 22:50, 6 November 2009 (UTC)
Try this section of the maxima and minima article. It says that ƒ : R^2 → R given by ƒ(x,y) = x^2 + y^2(1 − x)^3 has its only critical point at x = y = 0, but that this isn't a global minimum because ƒ(4,1) = −11 < ƒ(0,0) = 0. ~~ Dr Dec (Talk) ~~ 15:44, 6 November 2009 (UTC)
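The claims about that function are easy to verify directly, since the gradient can be written down in closed form. A quick check (the circle radius used for the local-minimum test is my own choice):

```python
import math

# f(x, y) = x^2 + y^2 (1 - x)^3: only critical point is a local minimum
# at the origin, yet f is not bounded below.
def f(x, y):
    return x**2 + y**2 * (1 - x)**3

def grad(x, y):
    # exact partial derivatives
    return (2*x - 3*y**2 * (1 - x)**2, 2*y * (1 - x)**3)

assert grad(0, 0) == (0, 0)      # stationary at the origin
assert f(0, 0) == 0

# local minimum: f > 0 everywhere on a small circle around the origin
r = 0.1
assert all(f(r * math.cos(2 * math.pi * k / 100),
             r * math.sin(2 * math.pi * k / 100)) > 0 for k in range(100))

# ...but not a global minimum:
assert f(4, 1) == -11
```

Setting the gradient to zero by hand confirms uniqueness: 2y(1−x)³ = 0 forces y = 0 or x = 1, and x = 1 is impossible in the first equation, leaving only the origin.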
Polynomials
When you say R[x]/<x^2+1> is isomorphic to the complex numbers, do you mean that because [x]^2+[1]=[0], where the brackets denote the respective equivalence classes, and x is the polynomial f(x)=x? Is the following correct: [x]^2=[x^2] and hence [x]^2+[1] = [x^2+1] = [0], so [x] is like the imaginary unit? Najor Melson (talk) 05:38, 6 November 2009 (UTC)
- If R is a ring and I is an ideal of R, the map sending r to the coset r + I is a surjective ring homomorphism. In particular, using your notation, [x]^2 = [x^2] is true, as are your other assertions. Consider the surjective homomorphism defined on R[x] that sends a polynomial f to its value at i (the imaginary unit). Since R[x] is a principal ideal domain, and since x^2 + 1 is irreducible over the real numbers, the kernel of this homomorphism is generated by x^2 + 1. In particular, R[x]/<x^2 + 1> ≅ C. Furthermore, since the image of x under this isomorphism is i, x does indeed "act like i" in the given quotient ring (isomorphisms preserve the behaviour of elements). Hope this helps. --PST 05:53, 6 November 2009 (UTC)
It means not only that [x]^2+[1]=[0], but more generally there is a one-to-one correspondence between these equivalence classes and the complex numbers, such that multiplication and addition of equivalence classes correspond exactly to multiplication and addition of complex numbers. Michael Hardy (talk) 16:58, 6 November 2009 (UTC)
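Michael Hardy's correspondence can be made concrete with a few lines of arithmetic. Since every class mod x^2 + 1 has a unique representative a + bx of degree < 2, we can encode classes as pairs and reduce x^2 to −1 during multiplication, then check against Python's built-in complex numbers (the encoding and names are my own sketch):

```python
import random

# Represent the class [a + b x] in R[x]/<x^2 + 1> by the pair (a, b).
def mul(u, v):
    a, b = u
    c, d = v
    # (a + bx)(c + dx) = ac + (ad + bc)x + bd x^2  ==  (ac - bd) + (ad + bc)x
    return (a*c - b*d, a*d + b*c)

# [x]^2 + [1] = [0], i.e. [x] behaves like the imaginary unit:
x_cls = (0, 1)
sq = mul(x_cls, x_cls)
assert (sq[0] + 1, sq[1]) == (0, 0)

# the correspondence (a, b) <-> a + bi respects multiplication
random.seed(1)
for _ in range(100):
    a, b, c, d = [random.randint(-9, 9) for _ in range(4)]
    p, q = mul((a, b), (c, d))
    assert complex(p, q) == complex(a, b) * complex(c, d)
```

Addition is checked the same way, componentwise, so the pair map is a ring isomorphism onto C, exactly as described above.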
ordinal strength
The ordinal strength of PA is ε0. How does one figure it out for PA+Con(PA)? Thanks. 69.228.171.150 (talk) 08:36, 6 November 2009 (UTC)
- It depends on the particular way you choose to measure ordinal strength. There is a trivial answer for measures based on computable functions, such as the smallest ordinal α such that every provably total computable function is β-recursive for some β < α: since the set of p.t.c.f. of a theory is unaffected by the addition of true Π1-sentences, PA + Con(PA) has the same ordinal as PA, ε0, in this measure. I'm afraid the answer is more complicated for measures sensitive to the Π1-fragment of the theory. For example, the Π1-ordinal measure by Beklemishev[1][2] gives ε0·2 for PA + Con(PA). — Emil J. 16:03, 7 November 2009 (UTC)
- Thanks, that's interesting. I am naive about this stuff and thought that Gentzen had shown that PA couldn't prove ε0 was well-ordered, but that adding TI(ε0) was enough to prove Con(PA), so the ordinals had to be different. The Beklemishev article looks informative and I'll try to read it. 69.228.171.150 (talk) 21:06, 7 November 2009 (UTC)
- It is true that PA cannot prove that ε0 is well-ordered, and that primitive recursive arithmetic plus transfinite induction up to ε0 proves PA is consistent. The thing that is not obvious (or perhaps non-intuitive) is that if you add Con(PA) to PA as an axiom, you obtain no new provable ordinals. This is because the two measures of logical strength, namely:
- T is stronger than S if T proves Con(S)
- T is stronger than S if the proof-theoretic ordinal of T is higher than the proof theoretic ordinal of S
- are not actually equivalent. I do not even know if there is a theorem that (2) implies (1) for some broad class of theories. — Carl (CBM · talk) 13:35, 9 November 2009 (UTC)
- TI(ε0) is a second-order statement. While one often encounters it mentioned sloppily in the context of PA, it only makes sense in second-order arithmetic, or at least arithmetic with an extra uninterpreted predicate, so properly speaking it gives an ordinal analysis of ACA0 or PA(X) rather than PA. Having said that, this is basically -ordinal analysis, which is even less sensitive to additional axioms than the computable functions approach (i.e., -analysis) I mentioned above. ACA0 + Con(PA), as well as ACA0 + T for any r.e. set T of arithmetical sentences of bounded complexity consistent with PA, has the same ordinal ε0 as PA in this setting. The reason is that TI(ε0) implies not only Con(PA), but in fact the full uniform reflection principle for PA. — Emil J. 14:08, 9 November 2009 (UTC)
- You can also consider restricted forms of TI as a schema for classes of arithmetical formulas; this works already in the first-order language. If I got the subscripts right, -TI(ε0) is not provable in any consistent extension of PA by a set of -sentences; in particular, PA + Con(PA) does not prove -TI(ε0). — Emil J. 17:06, 9 November 2009 (UTC)
- I did get the subscripts wrong, or rather, badly suboptimal. The (hopefully) correct property is that for any n ≥ 0, -TI(ε0) is equivalent to -TI(ε0), which is in turn equivalent to -RFN(PA), and consequently is not provable in any consistent extension of PA by a set of -sentences (whereas it is easy to see to be expressible as a single -sentence). In particular, already -TI(ε0) is unprovable in PA + Con(PA) (or in any consistent extension of PA by -sentences). — Emil J. 13:05, 10 November 2009 (UTC)
- Thanks, I just spotted the after-Nov-7 posts. I didn't realize that PA and PA+X could have the same ordinal if PA doesn't prove X. It is counterintuitive because of the natural picture of these theories being well-ordered as you add or take away axioms. I also hadn't thought much about how to formalize TI(ε0). The book I was reading [3] may have handwaved it, but I'll go back and see if I can figure it out. 69.228.171.150 (talk) 03:51, 12 November 2009 (UTC)
- Thanks, that's interesting. I am naive about this stuff and thought that Gentzen had shown that PA couldn't prove ε0 was well-ordered, but that adding TI(ε0) was enough to prove Con(PA), so the ordinals had to be different. The Beklemishev article looks informative and I'll try to read it. 69.228.171.150 (talk) 21:06, 7 November 2009 (UTC)
Birth and Death Chain - Markov Processes
Hi all :)
I asked this a while ago but never got an answer, hopefully someone might be able to help me this time. For a fairly simple birth-death chain (1D, with probabilities , , of moving in each direction at node i), what's the easiest way to find (*) P(Xn-> infinity as n -> infinity) - e.g. should I be looking at the expected time of something as n tends to infinity, or P(Xn<k) as n tends to infinity, or the transition matrix P^n etc? Hopefully once I know a sensible way to show (*), the actual question will be fairly simple.
Thanks a lot, Otherlobby17 (talk) 16:28, 6 November 2009 (UTC)
- Could you describe the problem in more words? For example, what is k? 69.228.171.150 (talk) 23:00, 6 November 2009 (UTC)
- I want to find it for values of k in - is there a general formula of any sort? Initially I want to show that the probability is 1 for k=2, but then I've been asked to find the formula for a general k in the positive reals - why, is that not feasible? Other than the , , formulae, I don't really have anything else to explain unfortunately, I mean it's a markov chain along a line (natural numbers, say) with probabilities p_ij of moving from i to j. Sorry but that's pretty much the whole problem! Otherlobby17 (talk) 15:06, 7 November 2009 (UTC)
- It seems you'll have to explain your notation even further. Specifically, your formula for has no mention of , and your is not given any meaning. Is one of the states, or some constant? What exactly do all of these labels mean? Nm420 (talk) 18:44, 7 November 2009 (UTC)
- I'm pretty sure what the OP means is that when the state is currently at i, pi is the probability of it moving to i+1, and qi is the probability of it moving to i-1. No other moves are allowed. Referring to the notation in Birth-death process, pi = λi and qi = μi. k is just a constant as the OP said. The answer to the question presumably depends on k. Sorry I don't have any insight into solving the problem. Rckrone (talk) 19:20, 7 November 2009 (UTC)
- That makes some sense, though I'm afraid there's little aid I can provide in this problem as well. Since the chain is irreducible, all states are either transient or recurrent; that is, the probability that is either 0 or 1. If you can show under what conditions state 0 (say) is transient, then you will be done. 97.83.155.87 (talk) 20:54, 7 November 2009 (UTC)
- Apologies, I assumed this was fairly standard notation but was obviously mistaken. Yes, at state i (i=0,1,2,...) we may move left by 1 state with probability q_i (or P_i,i-1 if you prefer to talk about the transition matrix) and right by 1 state with probability p_i (P_i,i+1) - p_i+q_i=1, so we have only these 2 options at each stage. The relation between p_i and q_i is the one given above involving k, and I want to find a solution based on the value of k, which is somewhere in (0,infinity) (I am first asked to prove the result is P(Xn-> infinity as n -> infinity)=1 for k=2, and then asked for general k, but obviously the general case would solve my first problem anyway, unless you need the result at k=2 to prove the general case) - when I mentioned p_i,j above, what I meant was the probability of moving from i to j - nonzero iff j=i±1, in this case. Does that make sense?
- Also, I follow what you say about all states being either transient or recurrent, but how does that necessarily imply that the probability that is either 0 or 1? Thanks very much all :) Otherlobby17 (talk) 20:59, 7 November 2009 (UTC)
- If a state were recurrent, then the probability of the process returning to it in a finite amount of time is 1; obviously the process cannot return to any particular state if it is growing without bound. That is the intuitive idea behind it anyhow. 97.83.155.87 (talk) 22:06, 7 November 2009 (UTC)
- If a state is recurrent then the probability of the sequence going to infinity is zero, but if all the states are transient, couldn't the probability of the sequence going to infinity still be less than 1? I don't see why it would have to be restricted to 0 or 1 either. Rckrone (talk) 05:57, 8 November 2009 (UTC)
- If all the states are transient, meaning that for any j in N the set is finite almost surely, then (by sigma-additivity) is almost surely a sequence of naturals that takes each value finitely many times, and therefore diverges to . Following 97.83's directions, I guess one should start by computing or bounding the probability to go back to 0 after time n; it's the corner coefficient in the n-fold power of the associated infinite matrix
- However it seems difficult to do it for the given . But for instance, if one finds then 0 has to be transient, as the latter value is a bound for the probability that for at least one n>0 (a more precise bound may be obtained via the inclusion-exclusion formula also). In the absence of better ideas, I'd suggest making some numerics with various choices of the parameter k (note that only depends on the first n/2 values of or so). --pma (talk) 15:28, 8 November 2009 (UTC)
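For what it's worth, the numerics pma suggests can lean on the standard transience criterion for irreducible birth-death chains on the naturals: the chain is transient (and then X_n → ∞ almost surely) iff Σ_{i≥1} (q_1⋯q_i)/(p_1⋯p_i) converges. A minimal sketch follows; since the k-dependent formula for p_i did not survive in the thread, the constant choice p_i = 0.6 below is purely a placeholder.

```python
from itertools import accumulate
from operator import mul

def transience_partial_sums(p, n_terms):
    """Partial sums of sum_{i>=1} (q_1...q_i)/(p_1...p_i) for a birth-death
    chain with P(i -> i+1) = p(i) and P(i -> i-1) = q(i) = 1 - p(i).
    The chain is transient iff this series converges -- the standard
    birth-death criterion."""
    ratios = [(1 - p(i)) / p(i) for i in range(1, n_terms + 1)]
    products = list(accumulate(ratios, mul))
    return list(accumulate(products))

# Placeholder chain: constant p_i = 0.6 (the actual k-dependent p_i from
# the question was lost above, so this choice is purely illustrative).
sums = transience_partial_sums(lambda i: 0.6, 200)
# Here q/p = 2/3, so the series is geometric with limit (2/3)/(1-2/3) = 2:
print(sums[-1])
```

Plotting the partial sums for several values of k would then indicate numerically whether the series converges, i.e. whether the chain escapes to infinity.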
Help with Logic Problem
I think I've made progress, but I'm stuck--I was hoping for a nudge in the right direction.
I'm only allowed to use the 18 valid argument forms.
Here's what I have so far
1. A≣B (premise)
2. ~(A ⋅ ~R) ⊃ (A ⋅ S) (premise) /∴ ~(B ⋅ S) ⊃ ~(A ⋅ R)
3. (A ⊃ B) ⋅ (B ⊃ A) 1 EQUIV
4. A ⊃ B 3 SIMP
5. (A ⋅ ~R) v (A ⋅ S) 2 IMP
6. A ⋅ (~R v S) 5 DIST
7. A 6 SIMP
8. B 7, 4 MP
9. ~R v S 6 SIMP
but now I'm completely stuck! I have a sneaking suspicion that ADD might help... UGH, any thoughts?
sorry for the double post, I fixed the formatting 209.6.48.226 (talk) 18:41, 6 November 2009 (UTC)
- Well, "the 18 valid argument forms" doesn't really ring a bell — you should know that which inference rules are taken as basic is something that varies by presentation. There's a good chance that the specific list of inference rules and their names/acronyms is particular to, perhaps, even just the one professor from whom you're taking the course. It's also possible that this is some more widely recognized list that someone else can help you with, just not me. But it's very far from universal, and you should be aware of that. --Trovatore (talk) 20:24, 6 November 2009 (UTC)
- I hadn't heard of 18 valid argument forms either, but there are a few ghits. 69.228.171.150 (talk) 20:40, 6 November 2009 (UTC)
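Whatever the sanctioned list of argument forms turns out to be, one can at least confirm by brute force that the argument is truth-functionally valid before hunting for a derivation. A small sketch (not part of the requested proof, just a sanity check):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

valid = True
for A, B, R, S in product([False, True], repeat=4):
    premise1 = (A == B)                                 # A ≣ B
    premise2 = implies(not (A and not R), A and S)      # ~(A · ~R) ⊃ (A · S)
    conclusion = implies(not (B and S), not (A and R))  # ~(B · S) ⊃ ~(A · R)
    if premise1 and premise2 and not conclusion:
        valid = False  # a countermodel would show up here
print(valid)
```

Since no countermodel exists, a derivation in any complete rule system must exist; the truth table also hints at the route (the premises force A, hence B, and reduce premise 2 to R ⊃ S).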
November 7
Probability question
This has nothing to do with homework, just an interesting problem that I came across. Assume there are three possible states for a man: he can be healthy, sick, or dead. The probability that he is sick the next day given that he is healthy today is 0.2 (the values don't really matter much here). The probability that he is dead the next day given he is sick today is 0.25. The probability that he is dead the next day given he is healthy today is zero. The probability that he is healthy the next day given he is sick is 0.4. Given he is healthy today, what is the probability that he will never die? My instinct tells me that the answer is zero, he has to get sick somewhere down the line, and eventually die. Rkr1991 (Wanna chat?) 06:32, 7 November 2009 (UTC)
- Yes, it's 0. Assuming the independence you may observe that the probability to be dead within 2 days is in any case at least p=0.05, so the probability to be still alive after 2n days is not larger than (1-p)n, that goes to 0 exponentially. Yours is an example of a Markov chain. --pma (talk) 07:59, 7 November 2009 (UTC)
- OK. So how do you find the answer to the slightly harder question, what is the probability that he is alive after n days given he is alive today ? Rkr1991 (Wanna chat?) 10:55, 7 November 2009 (UTC)
- (I assume you mean "given he is healthy today"). Just write the transition matrix relative to the states 1=H, 2=S, 3=D, which is
- P := [ [0.80, 0.20, 0.00], [0.40, 0.35, 0.25], [0.00, 0.00, 1.00] ] (rows and columns ordered H, S, D)
- where p_ij is the probability to pass from the state i to the state j. Then, in P^n the coefficient p^(n)_1,3 is the probability to be in 3 at time n starting from 1 at time 0. To write P^n efficiently you should diagonalize P first (you can: it has 3 simple eigenvalues: 1, 0.213, and 0.936), P = LDL^-1, P^n = LD^nL^-1. Check Markov chain for details.--pma (talk) 12:13, 7 November 2009 (UTC)
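The matrix-power computation can also be checked numerically without diagonalizing, by plain repeated multiplication. A sketch (pure Python; states 0 = healthy, 1 = sick, 2 = dead, probabilities as stated in the question):

```python
def mat_mul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# States 0 = healthy, 1 = sick, 2 = dead; dead is absorbing.
P = [[0.80, 0.20, 0.00],
     [0.40, 0.35, 0.25],
     [0.00, 0.00, 1.00]]

Pn = [[float(i == j) for j in range(3)] for i in range(3)]  # identity
for _ in range(365):
    Pn = mat_mul(Pn, P)

p_dead = Pn[0][2]      # probability of being dead within a year, starting healthy
p_alive = 1.0 - p_dead
print(p_alive)         # tiny, consistent with "he dies almost surely"
```

The survival probability decays roughly like 0.936^n (the dominant non-unit eigenvalue), so after a year it is already negligible.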
- Great, thanks. Rkr1991 (Wanna chat?) 13:47, 7 November 2009 (UTC)
- Just a quibble here: You say he has to eventually die. Not really. That he will die eventually has probability 1, but is not certain; it is possible for him to remain healthy for all time, but the probability is 0. It's a subtle distinction, but an important one in some contexts. --Trovatore (talk) 01:20, 9 November 2009 (UTC)
- Indeed. He will almost surely die. --Tango (talk) 01:24, 9 November 2009 (UTC)
Moment of inertia of a trapezoidal prism
I'm trying to put together a little tool to calculate a first approximation to the centre of gravity and moments of inertia of a conventional airliner. Most of it is pretty simple: I'll model the fuselage as a solid cylinder (though I'm tempted to try to make it more a combination of the structural mass as a cylindrical shell and the payload mass as a cylinder inside it), the engines also as cylinders, and the moments of inertia around the aircraft CG of the tail parts are dominated by the parallel axis theorem so I can neglect their contributions about their own CGs.
The challenge I'm finding is the wing. It is not small and it is close to the aircraft CG so I need its own moment of inertia. I would like to take into account the effects of wing sweep, dihedral, and taper (that is, the difference in chord between the wing tip and the wing root). I don't need anything more complex like elliptical wings or multiple taper ratios. Sweep and dihedral I'll deal with by just rotating the moment of inertia matrix once I have it. So what I need is the moment of inertia of a trapezoidal prism. But I can't find any equations for that anywhere.
From the Moment of inertia article:
The convention I am using is that x is rearward, y is out the right wing, and z is up. So the wing is a trapezoid when projected on the x-y plane, with a constant (relatively small) thickness in the z direction. I am happy keeping the density (rho) constant. I expect that the answer will be in the form of formulae for Ixx, Iyy, Izz about the principal axes (the diagonal of the matrix), with Ixz, Iyz, and Ixy zero and I will have to rotate the matrix to work in my coordinate system.
I would also like some guidance on the appropriate thickness to choose as the z-dimension. Clearly it is airfoil shaped, but I was going to approximate it as rectangular. Given the maximum airfoil thickness, what would be a rough guide to the right prism thickness? Actually, now that I think about it, in truth the height of the airfoil is proportional to the chord as well, that is it decreases from root to tip. But I am happy to neglect that. An alternative to modelling it as a rectangular section would be a diamond section, i.e. a quadrilateral with two sets of two equal sides, and the user of my tool would have to specify the maximum airfoil thickness and the location of the maximum airfoil thickness.
I know I could have asked this on the Science RD, but I hope I've taken the physics out of the question and it is now actually a math question.
Thanks, moink (talk) 07:13, 7 November 2009 (UTC)
- Have you considered numerical integration? If the parameters don't change too drastically, you can do that for various values and fit a function to the results. -- Meni Rosenfeld (talk) 06:30, 9 November 2009 (UTC)
- I could do that, but I am pretty sure there is a closed-form solution. moink (talk) 08:54, 9 November 2009 (UTC)
- Deciding I had to start deriving it myself, I googled "centroid of a trapezoid" to at least not have to derive the centroid, and found this page: [4] which has two of the three answers I need. The third can't be too terribly different from a rectangular prism. moink (talk) 08:54, 9 November 2009 (UTC)
- Looking more closely at those, they are area moments of inertia and not mass moments of inertia, but I think they are still helpful as partial solutions to the integral. moink (talk) 09:10, 9 November 2009 (UTC)
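Meni's numerical-integration suggestion is easy to sketch: a midpoint-rule double sum over the trapezoidal planform, sanity-checked against the known closed form m(a² + b²)/12 in the degenerate rectangle case. The centred, unswept planform of unit areal density below is an assumption for illustration, not the OP's actual tool.

```python
def plate_inertia(c_root, c_tip, span, n=400):
    """Midpoint-rule mass, centroid and I_zz (axis through the centroid,
    normal to the plate) for a flat trapezoidal plate of unit areal
    density.  The chord is centred on the x-axis at every spanwise
    station y, which ignores sweep -- purely a sketch."""
    dy = span / n
    cells = []
    m = sx = sy = 0.0
    for i in range(n):
        y = (i + 0.5) * dy
        c = c_root + (c_tip - c_root) * y / span   # linear taper
        nx = max(1, round(n * c / span))
        dx = c / nx
        for j in range(nx):
            x = -c / 2 + (j + 0.5) * dx
            dm = dx * dy
            m += dm
            sx += x * dm
            sy += y * dm
            cells.append((x, y, dm))
    cx, cy = sx / m, sy / m
    izz = sum(((x - cx) ** 2 + (y - cy) ** 2) * dm for x, y, dm in cells)
    return m, (cx, cy), izz

# Sanity check on the degenerate case c_root = c_tip (a rectangle a x b),
# where the exact answer is m * (a^2 + b^2) / 12:
a, b = 2.0, 5.0
m, centroid, izz = plate_inertia(a, a, b)
print(izz, m * (a * a + b * b) / 12)
```

Once the numbers agree with the closed-form trapezoid expressions from the linked page, the same routine can cross-check the full tapered case.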
Concatenations with preimages
Given a function f : X → Y, how to prove f(f −1(B)) ⊆ B and f −1(f(A)) ⊇ A for all subsets A of X and all subsets B of Y? --88.78.2.122 (talk) 08:52, 7 November 2009 (UTC)
- Just use the definitions of the preimage f −1(B) ("all that goes into B") and of the image f(A) ("where A goes"). So f(f −1(B)) ⊆ B reads "all that goes into B, goes into B" and f −1(f(A)) ⊇ A reads "A goes where also goes all that goes where A goes".
- Also notice the useful equivalence, for any subset A of X and any subset B of Y:
- f(A) ⊆ B ⇔ A ⊆ f −1(B),
- from which you can deduce both your inclusions starting respectively from f −1(B) ⊆ f −1(B) and f(A) ⊆ f(A). --pma (talk) 09:03, 7 November 2009 (UTC)
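Both inclusions are easy to sanity-check exhaustively on a small finite example. The map f below is an arbitrary choice, deliberately non-injective and non-surjective so that both inclusions are strict for some subsets:

```python
from itertools import chain, combinations

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}   # an arbitrary example map X -> Y

def image(A):
    return {f[x] for x in A}

def preimage(B):
    return {x for x in X if f[x] in B}

def subsets(S):
    s = list(S)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# f(f^-1(B)) <= B for all B, and A <= f^-1(f(A)) for all A:
ok = (all(image(preimage(set(B))) <= set(B) for B in subsets(Y))
      and all(set(A) <= preimage(image(set(A))) for A in subsets(X)))
print(ok)
# Both inclusions can be strict: f(f^-1({'c'})) is empty, and
# f^-1(f({1})) = {1, 2}.
```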
- Just for talking, you may consider the power set of X and Y, namely P(X) and P(Y), as small categories, where objects are subsets, and arrows are inclusions. Then the image map f* : P(X)→P(Y) and the preimage map f* : P(Y)→P(X) are adjoint functors, and of course the inclusions you wrote are just the counit and unit of the adjunction f* f*. (Check closure operator also).
- PS: Talking about Html, does anybody know how to get the ℘ a bit higher, and to get the star in f* a bit lower? (I got it in quite a weird way). And why can't I see ⊣ for ? --pma (talk) 11:25, 7 November 2009 (UTC)
- <span style="vertical-align:20%">℘</span> ℘ (but not abusing Weierstrass' p-function symbol for power set is a better option), ''f''<sub style="vertical-align:-80%">*</sub> f*. Note that fine-tuning like this is very font-dependent, hence you cannot do it in a way which would reliably work for all readers. As for ⊣, there is no such thing among the XHTML 1.0 named character entities[5] (Wikipedia's HTML tidy code will thus remove it even if a particular browser could support it as an extension). The character appears in Unicode on position U+22A3, hence you can get it by ⊣ (or ⊣) in HTML: ⊣⊣ (or simply input the character directly: ⊣). — Emil J. 15:03, 7 November 2009 (UTC)
- Thank you! Nice emoticon :⊣) also --pma (talk) 15:37, 7 November 2009 (UTC)
plausibility of mathematical completeness
It's the mid 1920's and you're a top mathematician being recruited to the Hilbert school. Your mission: prove Hilbert's famous contention (from "On the Infinite"):
- As an example of the way in which fundamental questions can be treated I would like to choose the thesis that every mathematical problem can be solved. We are all convinced of that. After all, one of the things that attracts us most when we apply ourselves to a mathematical problem is precisely that within us we always hear the call: here is the problem, search for the solution; you can find it by pure thought, for in mathematics there is no ignorabimus.
The subtlety of Gödel's 1931 incompleteness proof may have escaped everyone else, but still, those guys weren't slouches. My question is why could anyone ever have been convinced Hilbert's thesis was true?
Consider a simple enumeration of all the sentences φ1, φ2,... over, say, the language of PA. For each sentence φk make a red dot on the number line at the point k, and write the formula denoted by φk next to the dot. At the end of this process you have an infinite row of red dots, each labelled by a formula. Now go through again and enumerate the theorems t1, t2,... and for each ti, find the dot you made earlier for that formula and flip its color from red to blue. (If it's already blue, don't do anything; this will happen a lot because you will generate all the theorems infinitely often). Also similarly flip the color of the dot for ~ti to blue. In other words you have colored the decidable formulas' dots blue and left the undecidable ones (if there are any) red. Hilbert says that at the end of this, you will have no red dots left, just blue ones. This seems like a pretty elementary description, certainly accessible and hopefully natural to any logician of that time.
Of course we know today that there will necessarily be red dots at the end, and of course we can reasonably say that until 1931, Hilbert could reasonably harbor some hope that there wouldn't be red dots. But what could make him (or anyone) so sure that there'd be no red dots. The process is just a big combinatorial mess, some weird contraption jumping around all over the place flipping dots from red to blue, and the exact pattern of flipping depends intimately on the exact make-up of the contraption (i.e. the content of the theory being studied). You could construct axioms that resulted in only flipping the prime-numbered dots, or whatever. Was there any mathematically reasonable plausibility argument that the dots would end up all blue for a theory like PA or ZF? Did they expect too much, based on the completeteness of simpler theories like Presburger arithmetic? Or was it just a bogus emotional conviction people had back then, that's alien to us now? 69.228.171.150 (talk) 12:22, 7 November 2009 (UTC)
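The colouring process is easy to mimic for a toy, finite "theory". This is only an illustration of the combinatorics of the dot-flipping contraption, nothing remotely like real PA:

```python
def red_dots(sentences, decides):
    """Dots left red: indices whose sentence the 'contraption' never
    decides (never proves the sentence or its negation)."""
    return [k for k in sentences if not decides(k)]

sentences = list(range(1, 21))

# A complete toy theory decides everything: every dot ends up blue.
assert red_dots(sentences, lambda k: True) == []

# An incomplete one might flip only the even-numbered dots, leaving the
# odd ones red -- the outcome Hilbert's thesis rules out.
red = red_dots(sentences, lambda k: k % 2 == 0)
print(red)
```

Nothing in this finite picture favours "all blue" over "some red"; that is exactly the point of the question above.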
- While waiting for a more technical explanation, my answer to your question is: evidently, at that time it was not at all trivial. Speaking in general, it seems to me that this question reflects a common ahistorical attitude of us contemporary/modern people towards people of the past/ancient people (your post is still correct, I'm just taking the opportunity). We usually say "how could they be so naive to believe something, and not to see what for us is such a triviality". The conclusion is: "we are then smarter than them" (usually only thought), and a certain condescending air. I think I have enough evidence that our brain is not better (experiment: just ask for a formula for the solution of the third degree equation to a mathematician who doesn't know it). If today so many ideas are so easily understandable for so many people of medium intelligence, whereas once they were so difficult and just for a small elite, this seems to me the proof that the people who created these ideas, and those who prepared the ground for them, were giants, and we should have the greatest gratitude and admiration for them. --pma (talk) 14:02, 7 November 2009 (UTC)
- I'm not at all saying it should have been obvious in the 1920's that PA was incomplete. I'm just wondering why anyone would have expected it to be complete. Certainly it was an important question to investigate, but why did almost everyone expect a particular outcome instead of saying "gee, I don't know"? Also I'm talking about the mid 1920's (maybe I should have said late 1920's, since "On the Infinite" was written in 1925), not the era of Frege. The groundwork was pretty well established by then. (Actually, one exception: Emil Post anticipated the incompleteness theorem in 1921, but wasn't able to put a proof together.) There are plenty of statements today that we think are true but for which we don't have a proof (e.g. Riemann hypothesis, P!=NP, FLT until recently, etc). But we can make reasonable plausibility arguments for each of those, even if we turn out to be wrong afterwards. I'm wondering whether there was a plausibility argument for completeness. 69.228.171.150 (talk) 20:58, 7 November 2009 (UTC)
- My guess is that it was based on the notion that mathematical truth derived entirely from axiomatic proof, combined with the optimistic faith that truth is knowable ("we must have no ignorabimus"). Had Hilbert really been thinking of the question in the combinatorial light you present, I doubt he would have been so sure; it was because he invested the combinatorics with meaning that he thought it had to be this way. I have not extensively read Hilbert and am speculating a bit here, but this makes sense to me. --Trovatore (talk) 21:29, 7 November 2009 (UTC)
Calculus with imaginary constants
Is calculus with imaginary constants, say , just the same as when the constant is real? 131.111.216.150 (talk) 15:43, 7 November 2009 (UTC)
- Broadly. If you want to evaluate an integral like where f is a complex-valued function of a real variable and a and b are reals, then you can just write f as the sum of its real and imaginary parts f = u + iv, and then the integral of f is defined to be the integral of u plus i times the integral of v (as long as the integrals exist). However you do have to be a little careful with things like change of variables. Here's an example: consider the real integral . Making the substitution y = ix, so dy = i dx, doesn't change the limits, and x^4 = (y/i)^4 = y^4. So . It follows that the integral is zero... but it obviously isn't. Tinfoilcat (talk) 16:38, 7 November 2009 (UTC)
- Making that substitution certainly does change the limits, and the domain integrated over. It changes the integral from one on the real line from -∞ to ∞ to one on the imaginary line from -i∞ to i∞. Algebraist 19:04, 7 November 2009 (UTC)
- It doesn't change the "limits" if you work on the Riemann sphere, where there's only one infinity. But of course it changes the path - that's why there's no paradox here. Tinfoilcat (talk) 19:26, 7 November 2009 (UTC)
To answer the OP: Yes, you treat complex constants just the same as real constants. For example:
where c1, c2 ∈ C are constants of integration. It's usual to use z as a complex variable instead of the familiar x for a real variable. Calculating complex integrals is a bit different to calculating real ones. Given two real numbers, say x1 and x2, there's only one way to integrate a function between x1 and x2 since the real numbers form a line. When we integrate over the complex numbers we have a lot more freedom. Given two complex numbers, say z1 and z2, we can integrate a function along a path which starts at z1 and ends at z2 since the complex numbers form a plane. In practice we just make some substitutions and calculate a simple integral. Say you want to integrate ƒ(z) = 1/z around the unit circle, well the unit circle is given by the contour γ = {eit : 0 ≤ t < 2π}. So we make the substitution z = eit (and hence dz = ieitdt) and integrate for 0 ≤ t < 2π, i.e. we calculate
Notice that γ starts and ends at the same point, namely 1, but the integral is not zero (as it would be in the real case). This is where complex analysis starts to get interesting…
~~ Dr Dec (Talk) ~~ 14:21, 8 November 2009 (UTC)
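The unit-circle computation above can be reproduced numerically from the same parametrisation. A small sketch (a plain Riemann sum; the integrand (1/z)·dz/dt is constantly i on this contour, so the sum lands essentially exactly on 2πi):

```python
import math, cmath

def contour_integral(f, gamma, dgamma, n=20000):
    """Left-endpoint Riemann sum for the integral of f along the
    parametrised contour gamma(t), 0 <= t < 2*pi."""
    dt = 2 * math.pi / n
    return sum(f(gamma(k * dt)) * dgamma(k * dt) for k in range(n)) * dt

# z = e^{it}, dz = i e^{it} dt, integrand 1/z:
val = contour_integral(lambda z: 1 / z,
                       lambda t: cmath.exp(1j * t),
                       lambda t: 1j * cmath.exp(1j * t))
print(val)   # numerically 2*pi*i
```

Swapping in an analytic integrand such as z² instead of 1/z makes the same sum come out (numerically) zero, which is one way to see why 1/z is the interesting case.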
Mathematical Sequences and inductive reasoning
Use inductive reasoning to write the 10th term in each sequence below. THEN write a formula for the nth term.
a) 4, 13, 22, 31, 40...
b) 3, 5, 9, 17, 33...
c) 0, 6, 24, 60, 120...
d) 5, 10, 17, 28, 47, 82... —Preceding unsigned comment added by Whitesox1994 (talk • contribs) 18:36, 7 November 2009 (UTC)
- We can't do homework for you. And even if it is not homework, at least show us your current attempt at solving these problems. You will find some inspiration at arithmetic sequence for the first answer. Zunaid 19:23, 7 November 2009 (UTC)
- Such sequences are often best explored by calculating successive differences between the terms, sometimes continuing the process by calculating differences of the differences. A clue for d: it's growing too quickly for that to be useful.→81.153.218.208 (talk) 19:51, 7 November 2009 (UTC)
- It's useful for (d), it just won't give you the answer by itself. Do the process twice and you'll get a sequence you should immediately recognise. You can get the next term in the sequence from that, but to get a formula from it will require some cleverness. --Tango (talk) 20:18, 7 November 2009 (UTC)
Your question is imprecise and unfortunately illustrates the "sort of mathematics" schools are teaching students these days. In many branches of applied mathematics, sequences will not behave in such a simple manner, and thus basic arithmetic is of no use (as pma notes below, one usually needs a large set of data to extrapolate, and even then, other techniques must be used for extrapolation - one can never be sure that any particular extrapolation of a sequence is correct). In any case:
- (a) The successive differences are: 13 - 4 = 22 - 13 = 31 - 22 = 40 - 31 = 9. Since the difference between consecutive terms remains constant, one may conclude that "adding nine to the previous term gives the next term". Now, you should determine a formula for the nth term (which will of course contain n as a variable). Using this formula, compute the tenth term of the sequence.
- (b) The successive differences are: 5 - 3 = 2, 9 - 5 = 4, 17 - 9 = 8, 33 - 17 = 16. Writing the differences as a sequence - 2, 4, 8, 16..., we observe that the differences are powers of two. Thus the desired formula for the nth term is . Now, compute the tenth term of the sequence using this formula.
- (c) Do this by yourself using the other answers I have given you. To learn mathematics is to do it (and think about it). Furthermore, mathematics cannot be done by watching someone else do it - often one must invent new methods to tackle new questions; methods not previously known. I hope that by doing (c), and by comparing it to the solutions to (a), (b) and (d), you will appreciate this.
- (d) The successive differences are: 10 - 5 = 5, 17 - 10 = 7, 28 - 17 = 11, 47 - 28 = 19, 82 - 47 = 35. Writing the differences as a sequence - 5, 7, 11, 19, 35..., we shall now compute the differences of the differences - 2, 4, 8, 16...; a sequence which you should recognize. With (a), the successive differences remained constant - thus the formula for (a) involved only n (and not n2 or any powers of n). On the other hand, in (b), the differences were powers of 2, and thus we took the summation of all powers of 2 in our formula. Likewise, in this case, the differences of the differences are all powers of 2 - working in a similar manner to (b), the formula for the nth term is where we adopt the convention that an empty sum is necessarily zero.
Although I have given you some of the answers here, we are specifically advised not to do so. Therefore, instead of merely accepting the answers as true, you can do a couple of simple exercises - for instance, check that for the first few values of n, the nth term agrees with the formula I have given. Also, try to invent new sorts of sequences and apply the above methods to those - this will be good practice. Finally, I have not given you the actual answers for (c) and (a), so computing the nth term for these sequences should be your first task - spend at least 30 minutes reflecting on these exercises (and their solutions), if you want to benefit from them (and be able to do future homework without assistance). In the future, we probably will not give you solutions unless you show us your work. Hope this helps. --PST 02:39, 8 November 2009 (UTC)
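For (b), the verbal description above (first term 3, successive differences the powers of two) already pins down a closed form: a_n = 3 + (2 + 4 + ... + 2^(n-1)) = 2^n + 1. A quick check of that formula against the given terms:

```python
# Closed form consistent with the description of (b): first term 3 and
# successive differences 2, 4, 8, ..., so
#   a_n = 3 + (2 + 4 + ... + 2^(n-1)) = 2^n + 1.
def b_term(n):
    return 2 ** n + 1

print([b_term(n) for n in range(1, 6)])   # [3, 5, 9, 17, 33]
print(b_term(10))                         # 1025
```

The same "check the formula against the first few terms" habit is exactly the exercise recommended above.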
a) 4, 13, 22, 31, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...
b) 3, 5, 9, 17, 33, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,......
c) 0, 6, 24, 60, 120, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,......
d) 5, 10, 17, 28, 47, 82, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...... --pma (talk) 22:44, 7 November 2009 (UTC)
- Unfortunately, we are fighting a losing battle against maths teachers on this one - I think it is best to just give in as far as answering questions on the Ref Desk goes. --Tango (talk) 03:59, 8 November 2009 (UTC)
- I think it's really up to the student to be in control of his learning. Honestly, when I was in high school I just couldn't stand my English classes; I had absolutely no idea what the poems were talking about, and it just hurt to be writing essays explaining Keats. Although the education is deteriorating, it does help to separate the more able students (who can learn by themselves) from the rest, so I wouldn't say it's totally undesirable. Najor Melson (talk) 06:15, 8 November 2009 (UTC)
It is a good idea to repeatedly compute the negative differences b_n = a_n − a_{n+1}. Starting with
5 10 17 28 47 82
you get row by row
5 10 17 28 47 82
-5 -7 -11 -19 -35
2 4 8 16
-2 -4 -8
2 4
-2
Now extend the first column downwards and then compute the columns one by one in the same way. Extending by zeroes you get the answer 444. (This sequence is a polynomial of degree 5).
5 10 17 28 47 82 133 204 299 444
-5 -7 -11 -19 -35 -51 -71 -95 -145
2 4 8 16 16 20 24 50
-2 -4 -8 -14 -4 -14 -26
2 4 6 8 10 12
-2 -2 -2 -2 -2
0 0 0 0
0 0 0
0 0
0
Extending the first column in a periodic way you get 1054:
5 10 17 28 47 82 149 280 539 1054
-5 -7 -11 -19 -35 -67 -131 -259 -515
2 4 8 16 32 64 128 256
-2 -4 -8 -16 -32 -64 -128
2 4 8 16 32 64
-2 -4 -8 -16 -32
2 4 8 16
-2 -4 -8
2 4
-2
Bo Jacoby (talk) 07:57, 8 November 2009 (UTC).
- I'd simplify this method a bit. For the last case, I'd only go down 2 levels, skip worrying about the signs, and align them like so:
5 10 17 28 47 82 149 280 539 1054
5 7 11 19 35 67 131 259 515
2 4 8 16 32 64 128 256
- I also believe you made a mistake in the last term. StuRat (talk) 18:33, 8 November 2009 (UTC)
- Thank you! I now corrected my error. The idea is that the row of function values is transformed into the column of coefficients. T(5,10,17,28,47) = (5,-5,2,-2,2). And vice versa: T(5,-5,2,-2,2) = (5,10,17,28,47). Extending the column of coefficients with zeroes leads to a polynomial extension of the row of function values. T(5,-5,2,-2,2,0,0,0,0) = (5,10,17,28,47,82,149,280,539,1054). The transformation T involves subtraction only, but no multiplication (and so I was tempted to do it by hand, and made the error). The signs are essential for making T an involution, T = T^(−1). Bo Jacoby (talk) 00:20, 9 November 2009 (UTC).
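Bo Jacoby's transformation T is easy to check mechanically. A minimal sketch (the name T is from his post; the implementation is my reading of "first elements of the repeated negative-difference rows"):

```python
def T(seq):
    # Collect the first element of each row, where each row is the
    # "negative differences" (a1-a2, a2-a3, ...) of the row above.
    out, row = [], list(seq)
    while row:
        out.append(row[0])
        row = [row[i] - row[i + 1] for i in range(len(row) - 1)]
    return out

print(T([5, 10, 17, 28, 47]))     # [5, -5, 2, -2, 2]
print(T(T([5, 10, 17, 28, 47])))  # [5, 10, 17, 28, 47]: T is an involution
# Periodic extension of the coefficient column reproduces 1054:
print(T([5, -5, 2, -2, 2, -2, 2, -2, 2, -2])[-1])  # 1054
```

The sign convention really does matter here: with ordinary (positive) differences the map fails to be its own inverse.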
- Yes, but my assessment is that an involution is a bit beyond a student who needs help finding the 10th term in the sequence 4, 13, 22, 31, 40..., and I always try to tailor my answers to the target audience. StuRat (talk) 00:51, 9 November 2009 (UTC)
November 8
Equation Help
Hi-I need to solve this equation
a(dv/dy)=b+c(dv^2/dy^2) where a b and c are constants
I really don't know where to begin with this - I've tried using integrating factors, but still can't do it. Cuban Cigar (talk) 03:17, 8 November 2009 (UTC)
- You wish to solve a(dv/dy) = b + c(d²v/dy²), or, if we re-write, c(d²v/dy²) − a(dv/dy) + b = 0. Observe that this is a homogeneous linear equation with constant coefficients: cv″ − av′ + b = 0. This site should help you obtain the solution. Hope this helps. --PST 03:46, 8 November 2009 (UTC)
- Sorry PST, the equation cv″ − av′ = 0 is homogeneous, but cv″ − av′ + b = 0 is not. Bo Jacoby (talk) 14:33, 8 November 2009 (UTC).
- Well the transformation v=z+by/a should cure that. Dmcq (talk) 18:06, 8 November 2009 (UTC)
I've always liked to use the Laplace transform for problems like this. You start with a differential equation and then turn it, via the Laplace transform, into an algebraic equation. You solve that algebraic equation and then turn the solution, via the inverse Laplace transform, into the solution of the original differential equation. It's a bit hard at first, but once you begin to recognise a few objects and learn a few techniques it'll all get much easier. I got (after a couple of not-quite-right solutions):
where v(0) and v'(0) are the initial conditions for the ODE. If you don't have initial conditions then replace them with arbitrary constants. ~~ Dr Dec (Talk) ~~ 19:15, 8 November 2009 (UTC)
- Sorry Dr Dec, this doesn't take the initial data v(0) and v'(0). --pma (talk) 22:40, 9 November 2009 (UTC)
- Huh? What "doesn't take the initial data v(0) and v'(0)"? ~~ Dr Dec (Talk) ~~ 19:42, 10 November 2009 (UTC)
- I'm just saying that the expression you wrote on the RHS, when computed for y=0, doesn't assume the value that you denoted v(0). Maybe there's a typo somewhere. --pma (talk) 20:03, 10 November 2009 (UTC)
Someone was telling me that I could use an integrating factor for this; but I'm not given a value for the constants, so I'm not sure what the value of the discriminant would be. Does anyone know how to do it using the integrating factor method? Cuban Cigar (talk) 06:06, 9 November 2009 (UTC) —Preceding unsigned comment added by Cuban Cigar (talk • contribs) 06:05, 9 November 2009 (UTC)
- Did you try using that substitution I gave above? The answer should be fairly obvious after doing that. You don't need integrating factors or anything like that. Dmcq (talk) 13:05, 9 November 2009 (UTC)
- Btw, ehm, what are z and y in your substitution? --pma (talk) 22:25, 9 November 2009 (UTC)
- I used y instead of x because that's what the original poster used. The w is the new variable instead of v. The solution below has a w that is my w differentiated unfortunately but putting something in instead of w' would be the next step. Dmcq (talk) 00:06, 10 November 2009 (UTC)
- OP: follow Dmcq's hint! Then, if you also want a bit of theory to approach similar problems, remember that the general solution of a non-homogeneous linear equation (here your differential equation cv" - av' + b = 0) is the sum of (I) the general solution of the associated homogeneous equation (here it is cv" - av' = 0) plus (II) a particular solution of the non-homogeneous equation. Now you have two sub-problems, though easier.
- (I) cv" - av' = 0 is really easy. Call v'=w. Then you want w'=(a/c)w. Write the well known general solution w and then get v as (general) antiderivative of w. You will have 2 arbitrary constants (as they call'em) for the general solution, as it has to be.
- (II) a particular solution for cv" - av' + b = 0. Although there is a general method to do that, often a particular solution is found at a glance looking for particular solutions of special smart form. Here the hint is: try a v linear, that is, of the form v(x)=kx for some suitable constant k. Plug it in the equation and choose the constant k that makes kx a solution.--pma (talk) 22:10, 9 November 2009 (UTC)
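Putting (I) and (II) together gives the general solution v(y) = A + B·e^((a/c)y) + (b/a)y. A small numerical sanity check; the values of a, b, c, A, B below are illustrative, not from the thread:

```python
import math

a, b, c = 2.0, 3.0, 0.5   # illustrative constants
A, B = 1.0, -4.0          # arbitrary integration constants

def v(y):  return A + B * math.exp(a / c * y) + (b / a) * y
def v1(y): return B * (a / c) * math.exp(a / c * y) + b / a   # v'
def v2(y): return B * (a / c) ** 2 * math.exp(a / c * y)      # v''

# The ODE is a v' = b + c v'', so this residual should vanish:
for y in (-1.0, 0.0, 0.7, 2.0):
    print(abs(a * v1(y) - b - c * v2(y)) < 1e-9)  # True
```

Checking a candidate solution by differentiating and substituting back, as done here, is a cheap way to catch sign errors before trusting the answer.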
Ah yes I got it now thanks all for your help =)Cuban Cigar (talk) 10:35, 10 November 2009 (UTC)
An alternative method - Let u = dv/dy, then du/dy = d²v/dy² and you have a first order ODE to solve, rather than a second order. Once you have found u, integrate to get v. Readro (talk) 11:03, 10 November 2009 (UTC)
The original equation should have been a(dv/dy) = b + c(d²v/dy²). If we're going to criticise posters for lack of precision/rigour (see the question above on the 10th terms of some sequences, for example), then let's use the right notation. Actually, I think that some of the real mathematicians here are far too critical of naive (in their terms) posters, making it unlikely that they'll come back. Ego trip, anyone?→86.132.164.242 (talk) 13:20, 10 November 2009 (UTC)
Hi, 86... Here we are mainly virtual mathematicians. Too critical: you're possibly right, but (my) problem is I can't see any criticism towards the posters... could you explain better? The notation v′ for the derivative is fine too, no matter. And Dr Dec used the variable y just because the OP used y too in his/her first post. --pma (talk) 15:54, 10 November 2009 (UTC)
- Why does the variable have to be x? Surely the labelling of a variable is unimportant. I could replace y with a heart and still have the same ODE:
- a(dv/d♥) = b + c(d²v/d♥²) ~~ Dr Dec (Talk) ~~ 19:47, 10 November 2009 (UTC)
- even nicer! --84.221.208.19 (talk) 20:44, 10 November 2009 (UTC)
Which school are you?
I've been reading sections on mathematical logic on this page, and I wonder what the current consensus is. Be aware that I have no formal background in mathematical logic.
I tend to prefer Poincare's style, where mathematics is about imagining weird spaces and having fun, satisfying our intuition, instead of the "barren landscape of logicism". I'm not exactly an intuitionist. I don't have a preference for any particular school, and find right and wrong in every school. I'm guessing most people are Platonists? This is not surprising, since when we are thinking we visualize pictures and pretend the objects in question really exist in some other dimension. Najor Melson (talk) 03:28, 8 November 2009 (UTC)
- I went through a maths degree without really hearing about these different schools. I've only heard of them on Wikipedia. I think most mathematicians don't care about them, they just get on with it. It's only logicians that get involved with this kind of thing and they are just one small group of mathematicians. --Tango (talk) 03:56, 8 November 2009 (UTC)
- Although I agree with the above comment, I feel that its truth is a rather unfortunate fact of life. Firstly, logic and axiomatic set theory are interesting in their own right - mathematicians who claim that "they are unimportant" (these mathematicians are around, unfortunately) are ignorant people who do not appreciate the work of others. Furthermore, I am sure that it is beyond the capacity of some of these mathematicians to understand these branches (logic and intuition are different sorts of thinking in some respects). Secondly, logic is a fundamental aspect of mathematics - it can be used not only to clarify intuition, but also to develop insights which cannot be determined by mere intuition. Succinctly, as mathematicians, we should respect the work of others even if we feel that it is "not worthwhile"; there are some snobs out there who do not know anything about axiomatic set theory/logic, but still claim that it is "not important". That said, Tango is right that there are few logicians around; my point is that this does not make their work less important than that of other mathematicians. --PST 06:48, 8 November 2009 (UTC)
Yes, I agree with PST. Logic is the far opposite of intuition; I used to have a BIG problem with Cantor's concept of cardinality, but now it's like my left hand. I find intuition fails badly, especially in topology (or maybe that's because topology is the only field I'm familiar with - I'm a learner, not a professor), and many times reasoning with pure intuition leads to contradictions, so like you said we need logic to develop our theory further. However, I believe to most logicians proofs should be absolutely formal, and to me that's not only 1000+ pages long but also a "barren landscape". I think they'll sacrifice everything to make sure mathematics is contradiction free. Strangely enough I was planning to be a string theorist, and still am, but somehow got into mathematics and it just won't let go! Najor Melson (talk) 12:18, 8 November 2009 (UTC)
- I don't know any logicians who, in practice, think that "proofs should be absolutely formal". It is obvious to everyone that that path is not open to human mathematicians. There are a lot who think in principle that all our informal proofs should be sort of blueprints for formal proofs, but that's a bit different. Then there's the automated theorem proving community, but again that's not really what we're talking about.
- In practice, logicians use intuition as much as anyone. Of course it's an intuition you develop; the intuitions that you have after learning a subject are different from the ones you have at the start. --Trovatore (talk) 07:40, 9 November 2009 (UTC)
Maybe the original question has more to do with the philosophy of mathematics, rather than logic? (Igny (talk) 03:44, 10 November 2009 (UTC))
Generalization of Fermat's little theorem
What is the generalization of Fermat's little theorem for finite fields as referenced here? Thanks--Shahab (talk) 06:27, 8 November 2009 (UTC)
- They mention it here Finite_field#Analog_of_Fermat.27s_little_theorem. Rckrone (talk) 08:08, 8 November 2009 (UTC)
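For a prime q = p, the finite field GF(p) is just the integers mod p, and the analogue linked above (a^q = a for every a in F_q) reduces to the classical Fermat statement a^p ≡ a (mod p), which is quick to check; the prime 7 is only an example:

```python
p = 7  # illustrative prime; GF(7) = integers mod 7
print([pow(a, p, p) for a in range(p)])  # [0, 1, 2, 3, 4, 5, 6]: a^p = a for all a
```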
i dunno, but in my math movie Fermat unveils it with: "say hello to my leetle friend." —Preceding unsigned comment added by 92.230.70.105 (talk) 11:48, 8 November 2009 (UTC)
a trick to make a positive-expectation, but in actuality no risk of having to pay out, lotto?
The reason mathematicians don't like lotteries is that for every dollar you put in (buying tickets), your expected return is as little as 17 cents. Obviously for them, it would be no problem to play a lottery where for every dollar they put in, they would expect to get $100 back. As for me, the person who would love to provide this lottery, I would do it on the following terms: the price of a ticket is $1, the chances of winning are 1 in a quadrillion, and the payout is $100 quadrillion. Obviously the governments of the world, collectively, could insure this amount, which is 100,000 trillion, although if anyone won they might have to pay it out over a few hundred years. However, they would have no problem taking this risk, as even if every single person on Earth transferred all the cash they had into lottery tickets in this scheme, the risk of having to pay out would still be completely negligible. If even such a negligible risk would be too substantial for the governments of the world (if, in addition to cash, everyone were to also transfer EVERY asset of any kind into tickets, the risk of payout might reach 1:100), then they could increase the odds to 1 in 100 quadrillion, the payout to $10,000 quadrillion, and the time to pay out to many hundreds of years or more. That would certainly leave them at peace that the money they would receive is "free". Now, I would be very happy to take a 1 in a quadrillion risk for far less than $1, but the problem is that I am not actually in the same position as all of the world's governments, collectively. I simply can't offer the reward. So, my question: is this scheme, to "rip off" autistic mathematicians who have no sense of practicality but only mathematical definitions such as "expected return" - in this case, by making my lotto tickets more attractive than any other investment they can make on those purely mathematical grounds - salvageable in some way?
Is there some trick I can use to make my payout credible, while remaining far too unlikely for me to seriously worry about? I can't think of any tricks of this kind. Maybe there would be some way to take a very large payout, along with an astronomically small risk of this, and somehow divide it using time or some other method so that in the end the actual cash I have on hand is sufficient for the payout, while the risk of payout remains astronomically small, and at the same time, mathematically, the expected payout remains strongly positive ($100 for every $1 invested)? I am open to any practical solution, including perhaps signing contracts with all the collective governments of the world, who would take the risk of having to pay out over 10,000 years. Hell, it can be payout over 100,000 years; there are at least SOME mathematicians who would simply not see a practical objection to that, not having learned about utility and so on. As you can see this whole vein of thought is not completely serious, however I wonder if you have any thoughts on this matter which would make the thing salvageable. Thank you! 92.230.70.105 (talk) 11:29, 8 November 2009 (UTC)
- Actually I think this is a pretty good question to ask a mathematician: "Would you take the risk of having to pay out $100 trillion, with the chances of having to do so being 1 in a trillion, for $20 right now?" For me, that's a free $20 right there. I'm guessing strongly that most mathematicians have a different answer :) —Preceding unsigned comment added by 92.230.70.105 (talk) 11:41, 8 November 2009 (UTC)
- The logic, in general, holds, except for the main flaw that no-one, in whatever position, would play it. There are lots of metrics in which the system falls flat, most notably timeframe of return, and I doubt anyone would take that risk. They'd be better off trying to fool less intelligent people, for a short while. Seeing other people win is a very important part of why people play lotteries. In any case, it's much more profitable for a government to run a standard one and take the money off that. (Or, run one to cover the costs of a specific project, perhaps an Olympic bid?) - Jarry1250 [Humorous? Discuss.] 11:45, 8 November 2009 (UTC)
- Yes, time-frame is a problem. Do you have any idea for a trick that could overcome this? 92.230.70.105 (talk) 13:04, 8 November 2009 (UTC)
- The huge payout occurs so late that you expect it to be uninteresting, but your eternal soul must pay for ever. See Faust. Bo Jacoby (talk) 14:53, 8 November 2009 (UTC).
- The opening sentence says that mathematicians don't like to play the lottery; that's not true. The reasoning implies that mathematicians don't like to gamble; that's not true either. If these kinds of sweeping statements were true then most doctors (of the medical persuasion) would be tee-total non-smokers. ~~ Dr Dec (Talk) ~~ 15:04, 8 November 2009 (UTC)
- You seem to think I deduced my conclusion about mathematicians from first principles; not at all, it is my first-hand experience. 92.230.64.60 (talk) 18:09, 8 November 2009 (UTC)
- The original poster might or might not know that some mathematicians are aware of subtler concepts than expected value, such as risk (and risk aversion), decision theory, prospect theory and so on, and that they have been since the times of Daniel Bernoulli at the very least... Goochelaar (talk) 15:30, 8 November 2009 (UTC)
- I believe this strategy was already tried, only on a somewhat smaller scale. Somebody offered lottery tickets with a very small chance that one person might win a billion dollar prize. While they didn't have the billion dollars to cover the risk, they were able to convince an insurance company, Lloyd's of London, if memory serves, to cover the risk. StuRat (talk) 18:15, 8 November 2009 (UTC)
- The diminishing marginal utility of money is significant here. $100 quadrillion isn't really worth significantly more than $100 billion - you can't actually buy anything you couldn't buy with the lesser amount. That means if you work out the expected *utility* rather than the expected money, it is negative. That is why a sensible mathematician would not go for it. --Tango (talk) 22:19, 8 November 2009 (UTC)
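Tango's point can be made concrete with a log-utility (Bernoulli/Kelly-style) calculation. The ticket price, odds, and prize are the ones proposed in the question; the bankroll figure is an assumption for illustration:

```python
import math

p, prize, cost = 1e-15, 1e17, 1.0   # odds and payout from the question
bankroll = 1000.0                   # assumed wealth of the player

ev_money = p * prize - cost
ev_log = ((1 - p) * math.log(bankroll - cost)
          + p * math.log(bankroll - cost + prize)) - math.log(bankroll)

print(ev_money > 0)  # True: +$99 expected money per $1 ticket
print(ev_log < 0)    # True: expected log-wealth falls, so a log-utility bettor passes
```

The near-certain $1 loss dominates because the astronomical prize contributes only about p·log(prize) ≈ 10⁻¹⁴ to the expected log-wealth.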
- There are some pretty good articles about how to use the Kelly criterion to figure out how big a payout it takes to make buying a lottery ticket worthwhile, based on your already-available bankroll. Basically it's never worth your while unless you're already a millionaire. Try this one and it looks like a web search finds more. 69.228.171.150 (talk) 03:21, 9 November 2009 (UTC)
- That article suffers from a major flaw - it assumes you won't share the jackpot. When there is a large jackpot a lot of people buy tickets, which makes it very likely that the jackpot will be shared. You can't ignore the number of tickets sold in the calculation, even if you are only counting the jackpot and not the smaller prizes. Since the jackpot will probably be shared the expectation is probably still negative (we had this discussion not long ago and worked out you need on average 5 or 6 rollovers to get positive expectation, if memory serves - that wasn't a very precise calculation, though). With a negative expectation, the correct bet size is always zero (unless you can start your own lottery and thus bet negatively). --Tango (talk) 03:37, 9 November 2009 (UTC)
Vectors of the same magnitude
Let u and v be two elements in a normed vector space such that ||u|| = ||v||. Is there any nice Latin or Greek word for this condition, as a property of this pair of vectors? Isometric does not feel right (I want it to be a property of linear transformations, surfaces etc.). --Andreas Rejbrand (talk) 14:29, 8 November 2009 (UTC)
- But the term isometry already exists and it is a property of a linear transformation. I don't think that there will be a standard word for two vectors of the same magnitude; it's too simple of a property. The vectors u and v would be two radii of a ball of radius ||u|| = ||v||. ~~ Dr Dec (Talk) ~~ 14:56, 8 November 2009 (UTC)
- Yes, I know that, of course (that was what I meant - to me a "isometry" is a kind of linear transformation or a function between two surfaces). OK. --Andreas Rejbrand (talk) 15:06, 8 November 2009 (UTC)
- What exactly are you looking for: a name for two vectors which have the same length in a given space or the name for two vectors whose lengths are kept equal under a given linear transformation? (I don't think there's a special name for the former. As for the latter, well, u and v belong to the same eigenspace.) ~~ Dr Dec (Talk) ~~ 15:16, 8 November 2009 (UTC)
- Exactly what I wrote, i.e. two vectors with the same norm. --Andreas Rejbrand (talk) 15:24, 8 November 2009 (UTC)
- But you said that you "want it to be a property of linear transformations, ...", by "it" I assumed you meant ||u|| = ||v||. I've un-done your last edit. You shouldn't change previous comments once people have started to reply: everyone will get lost. Either restate your point or strike the mistakes and re-write using <s>strike</s> to yield <s>strike</s>, and then leave a post to say that you've changed the post after people have replied. If someone has replied while you were writing a reply then leave a (ec) at the start of your post to mean edit conflict. It helps other people follow the thread days, weeks, or even months later. ~~ Dr Dec (Talk) ~~ 15:38, 8 November 2009 (UTC)
- I said "Isometric does not feel right (I want it to be a property of linear transformations, surfaces etc.)." and by that I meant that it would not feel right to call the vectors "isometric", because "isometric" already has a very fundamental meaning in linear algebra, as an isometric linear transformation is a distance-preserving linear transformation (with a determinant of absolute value 1). I am sorry that the text was misinterpreted. I actually am a university-level math teacher, and right now I am teaching linear algebra, so I do know the topic. I just wondered if anyone knew a nice word meaning "of the same length" when it comes to vectors in a normed vector space. --Andreas Rejbrand (talk) 15:49, 8 November 2009 (UTC)
- ...and two surfaces are said to be isometric if there is an isometry between them, which is the case iff they have the same first fundamental form... --Andreas Rejbrand (talk) 15:58, 8 November 2009 (UTC)
- The parenthesis "(I want it..." is before the period of the sentence "Isometric does not...", and this supports my intended interpretation. Also, my intended interpretation makes sense. (Indeed, the word "isometric" is already defined for linear transformations and surfaces.) The other interpretation assumes that I do not know anything about math, and you should not assume that... --Andreas Rejbrand (talk) 16:00, 8 November 2009 (UTC)
- Also I am a sysop at the Swedish Wikipedia, so I know how to edit discussion pages. --Andreas Rejbrand (talk) 15:50, 8 November 2009 (UTC)
- OK, it seems like I overreacted a bit here. I am quite stressed today. --Andreas Rejbrand (talk) 16:19, 8 November 2009 (UTC)
- I never heard of a term for that. Possibly because saying "u and v have the same norm" is usually accepted as reasonably short. If for some reason you need to use it many times, and prefer not to express it by the formula ||u|| = ||v||, there should be an expression borrowed from elementary geometry that should work. (I'm not sure: how does one say when two line-segments have the same length? Congruent, equivalent, isometric? In principle isometric is not wrong, but it may be confusing, as you and Dr Dec remarked). --pma (talk) 15:51, 8 November 2009 (UTC)
- Congruent seems to be correct for line segments of the same length. Use IE or Firefox and take a look at this page. ~~ Dr Dec (Talk) ~~ 17:25, 8 November 2009 (UTC)
- I've never heard of a word, but if I were to invent one I'd go with something like "cometric". "iso-" usually mean invariant (which isn't what we want), "co-" (or "con-") means "same" ("concentric" means "same centre", "coincident" means going through the same point, "congruent" means "same shape", etc.). So, "cometric" would mean "same length". --Tango (talk) 18:57, 8 November 2009 (UTC)
- Well indeed the Greek ἴσος means equal, and in this exact meaning is used in all compounds, including scientific ones. The prefix co- is from the Latin (cum = with) and refers more to being associated with / joined together with, rather than having the same quality (think of cosine, cofactor, complement, but also the ones you quoted). "Cometric" sounds like a mixed Latin-Greek variant of symmetric (σύν = cum). --pma (talk) 19:29, 8 November 2009 (UTC). Why not then the all-Latin version commensurable... oops, it already exists ;-) --pma (talk) 19:36, 8 November 2009 (UTC)
Don't know of a standard term either; "equal length" vectors is the shortest description I can think of. Abecedare (talk) 19:44, 8 November 2009 (UTC)
- The term I have seen is "equipollent", but it is not in common use as, of course, "equal length" is a more readily understandable term for the concept. 71.182.248.124 (talk) 18:50, 9 November 2009 (UTC)
- Yes! If I had to choose a one-word term for ||u||=||v||, this would definitely be the best of all, as it refers to magnitude or intensity in general and not only to length.--pma (talk) 08:34, 10 November 2009 (UTC)
Pythagorean Theorem
Why are there so many proofs of the Pythagorean Theorem? Most other theorems have only one or two distinct proofs, yet there are several hundred proofs of the Pythagorean Theorem. --75.39.192.162 (talk) 18:40, 8 November 2009 (UTC)
- It is a very old, very simple theorem that everyone knows. That's probably the reason. It could also be something inherent about the theorem - it is so fundamental to mathematics that it crops up all over the place and can thus be proven using the tools of various different fields. One additional reason - it has so many proofs already that people think it is fun to come up with more. --Tango (talk) 18:49, 8 November 2009 (UTC)
Proof theory
As another question related to the last. I remember hearing about proof theory or something like that (I think it was at the 2007 BMC in Swansea). It's basically the metamathematics of proof. I've got a few questions to do with that:
- Is there a way to formally say that one proof is different to another?
- If so, then is there a way to assign a cardinality to the set of proofs of any given theorem?
- Finally, pushing the boat right out: is there a way of assigning a metric to the space of proofs of a given theorem?
~~ Dr Dec (Talk) ~~ 19:49, 8 November 2009 (UTC)
- If we interpret the second question as talking about known proofs, then the answers to all three questions are "yes". The real question is whether there is some natural/useful way to do it. I could say that two proofs are different if the MD5 hashes of their TeX code are different and use the Hamming distance on the MD5 hashes to define a metric. It would be a completely useless way to do it, though. If we could prove the total number of possible proofs using some useful equivalence relation that would be a fascinating result, but I'm sceptical that it can be done. --Tango (talk) 20:19, 8 November 2009 (UTC)
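For what it's worth, Tango's deliberately useless construction really is a metric on the hash values, and takes only a couple of lines (the two proof strings are made-up placeholders, not real proofs):

```python
import hashlib

def md5_bits(tex: str) -> int:
    return int(hashlib.md5(tex.encode()).hexdigest(), 16)

def hamming(a: int, b: int) -> int:
    # number of differing bits between two 128-bit hashes
    return bin(a ^ b).count("1")

proof_en = "Since a^2 + b^2 = c^2 ..."   # placeholder English proof
proof_es = "Como a^2 + b^2 = c^2 ..."    # the "same" proof in Spanish
print(hamming(md5_bits(proof_en), md5_bits(proof_en)))      # 0
print(hamming(md5_bits(proof_en), md5_bits(proof_es)) > 0)  # True: counted as "different"
```

It illustrates exactly why the definition is useless: a one-word translation makes the distance large, so the equivalence it induces has nothing to do with mathematical content.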
- No, my question was deeper than this. A proof should be language invariant, so writing it in Spanish or in English should give the same proof (from a proof theory point of view). Clearly the LaTeX code would be quite different. It's more about the methodology, the mathematical machinery, and the like. Not just the prima facie appearance of the proof. I think that 69.228.171.150's post is closer to what I had in mind. ~~ Dr Dec (Talk) ~~ 22:15, 8 November 2009 (UTC)
- I understood that. I was giving an extreme example to demonstrate how your question was flawed. What you want to know is if it is possible (it certainly is and I gave an example) but if it can be done in some natural and interesting way. --Tango (talk) 02:17, 9 November 2009 (UTC)
The question of proof identity is a vigorous topic in proof theory and there is ongoing effort to clarify the concept, but there is no satisfactory definition right now. This paper describes some of the issues. You might look at this site for some references and info. We have a related stub article, deep inference. 69.228.171.150 (talk) 22:04, 8 November 2009 (UTC)
- Great, thanks! I'll try to make some headway reading your links. Thanks again. ~~ Dr Dec (Talk) ~~ 22:23, 8 November 2009 (UTC)
- Can someone explain this to me: Tango said "If we could prove the total number of possible proofs using some useful equivalence relation that would be a fascinating result". Is this the same as saying we're trying to prove certain properties of proofs themselves, as in a meta-theory? If so, will there ever be an end to the succession of meta-theories? I find the apparent circularity very confusing, seeing as proof theory is supposed to prove things about metric spaces and we're now trying to construct a metric space out of proofs. Money is tight (talk) 01:51, 9 November 2009 (UTC)
- Yes, we're talking about metamathematics. I guess you could study metametamathematics if you wanted to, but I expect it would be classified as just part of metamathematics. For the most part, it would be indistinguishable from metamathematics because metamathematics is all about using mathematical techniques to study maths itself. --Tango (talk) 02:17, 9 November 2009 (UTC)
- Proof theory normally doesn't prove things about metric spaces. It proves things about proofs. Gödel's incompleteness theorems are probably the most famous theorems of proof theory. Of course, since (according to proof theory) proofs are mathematical objects, it's reasonable to want to assign a metric topology to them, as Declan was asking, so you can tell if two proofs are the same/almost the same/etc. As the link I gave shows, proof identity is an active research area, but complicated. 69.228.171.150 (talk) 04:25, 9 November 2009 (UTC)
Declan, you might also be interested in Hilbert's 24th problem (also mentioned in the Straßburger link above). 69.228.171.150 (talk) 04:32, 9 November 2009 (UTC)
- Excellent. I'll take a look at this in the morning. Thanks for all of your pointers; I've found them most enlightening. ~~ Dr Dec (Talk) ~~ 22:35, 9 November 2009 (UTC)
Bad assumption ?
Given the assumption that an event has a 1% probability, but the measured results show that this event occurred in 2 out of 3 trials, what is the probability that this assumption was correct ? StuRat (talk) 20:07, 8 November 2009 (UTC)
- This problem is underdetermined, as we are not given the prior probability of the assumption being correct.
If we have a prior distribution on the probability then it's a simple application of Bayes' formula. - The likelihood of the assumption being correct - which is completely different, but may be of interest nonetheless - is .
- The beta distribution is also relevant for this kind of problem. -- Meni Rosenfeld (talk) 20:35, 8 November 2009 (UTC)
My advice is to ignore prior probability. Here is an explanation of it anyway: if you put a billion normal, fair coins and one double-headed (heads/heads, i.e. comes up heads 100% of the time) coin into a giant bag and then draw a coin at random and test to see whether it is fair, getting the result: "heads, heads, heads, heads, heads" (five in a row), then what is the probability you just tested a fair coin? Obviously the normal, reasonable response is to think that the odds are overwhelming that you picked one of the billion normal ones, even though the result you got with the (very probably) normal coin only appears 1/32 of the time with a fair coin. You have this reasonable response because 1/32 is nothing compared with 1/1billion! In this case, the prior probability makes you quite sure that the coin is fair, even though without it, you would begin to suspect it is unfair. But still, this kind of devilry is irrelevant, and you should do what I do and ignore prior probability. 92.230.64.60 (talk) 21:03, 8 November 2009 (UTC)
- Why should one ignore prior probablity? Because William Feller was ignorant concerning it? Michael Hardy (talk) 05:12, 9 November 2009 (UTC)
- See Prosecutor's fallacy for the sort of problems that this reasoning leads to. Taemyr (talk) 21:08, 8 November 2009 (UTC)
- (ec) Ignoring prior probability results in miscarriages of justice (really, courts have made that mistake and thrown innocent people in jail due to giving too much weight to DNA results, etc.). If you ignore the prior probability, you get the wrong answer. It is as simple as that. If you have spent years getting data that support the assumption that the probability is 1% and then you do three more tests that seem to contradict that assumption then you shouldn't reject the assumption. However, if you just guessed the probability based on no evidence then you should reject it as soon as there is evidence that contradicts it. --Tango (talk) 21:14, 8 November 2009 (UTC)
- 92 has given some great reasons why the prior distribution must not be ignored. But I believe his point was that with this kind of low-probability events, "side-channel attacks" become significant. If I pick up a coin, toss it 20 times, and obtain heads every time, I'll begin to suspect that whoever told me that the pile has a billion fair coins lied. Of course, that too can be encoded in the prior belief and updated in a Bayesian way. -- Meni Rosenfeld (talk) 08:18, 10 November 2009 (UTC)
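For concreteness, 92's billion-coins example above works out like this under Bayes' rule. A quick sketch (the numbers are exactly the ones in that example; the code is only an illustration):

```python
# Posterior probability that we drew a fair coin, given five heads in a row.
# Prior: 10^9 fair coins and 1 double-headed coin in the bag.
n_fair = 10**9
n_double = 1

prior_fair = n_fair / (n_fair + n_double)
prior_double = n_double / (n_fair + n_double)

# Likelihood of "heads, heads, heads, heads, heads" under each hypothesis.
lik_fair = (1 / 2) ** 5      # 1/32
lik_double = 1.0             # the double-headed coin always shows heads

# Bayes' rule.
posterior_fair = (prior_fair * lik_fair) / (prior_fair * lik_fair + prior_double * lik_double)
print(posterior_fair)  # ~0.999999968 -- the coin is almost certainly fair
```

So even though five heads in a row is a 1-in-32 event for a fair coin, the overwhelming prior swamps it, exactly as described above.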
Let me add some detail to this problem. We have a large number of items (say a million) and know that precisely 1% of the objects is of the desired type. We then draw 3 items, and 2 of the 3 turn out to be the desired item. We don't know if the draw was truly random or had some bias. How can we determine that ? It certainly seems likely that there is bias in the draw, but how do we quantify the chances ? StuRat (talk) 21:20, 8 November 2009 (UTC)
- We use hypothesis testing. We choose a null hypothesis: "The draw was fair." We then choose a desired confidence level, say 95%. We then work out the probability of the event (or a more surprising one - it's important to include that, although it doesn't make much difference in this case) happening assuming that hypothesis: 3*0.01*0.01*0.99+0.01*0.01*0.01=0.000298. We then observe that 0.000298<1-0.95=0.05. That means we should reject the hypothesis and assume (until we get more evidence) that the draw was biased. In fact, it's worth noting that a single one of the desired type being drawn would be enough to reject the hypothesis in this case - the probability of at least one of them being drawn (assuming the hypothesis) is about 3%. --Tango (talk) 21:34, 8 November 2009 (UTC)
- Thanks. Can we then express that as a probability ? Something like "the probability that this was a fair draw is _____" ? StuRat (talk) 22:01, 8 November 2009 (UTC)
- If we can, it is beyond my limited statistical training. --Tango (talk) 22:02, 8 November 2009 (UTC)
- Following on from Tango's analysis, our article on Type 1 and type 2 error might help, but I'm too tired to think straight at present. Dbfirs 22:24, 8 November 2009 (UTC)
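Tango's hypothesis-test arithmetic above can be reproduced in a few lines. A sketch (the 1% assumption and the 2-of-3 observation come from the question; nothing else is assumed):

```python
from math import comb

p = 0.01   # assumed probability of the event
n = 3      # number of trials

# P(k successes in n trials) under the null hypothesis "the assumption is correct"
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# p-value: probability of the observed result *or a more surprising one*,
# i.e. 2 or 3 successes out of 3.
p_value = binom_pmf(2, n, p) + binom_pmf(3, n, p)
print(p_value)  # 0.000298

# Probability of at least one success -- about 3%, as noted above.
p_at_least_one = 1 - (1 - p)**n
print(p_at_least_one)  # ~0.0297
```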
- What about this? Surely that very much does assign probabilities that hypotheses are true, starting with prior probabilities?--Leon (talk) 08:54, 9 November 2009 (UTC)
- Excellent point. I did know that, I wasn't thinking straight! I've moved your comment down to keep things in chronological order. --Tango (talk) 22:41, 9 November 2009 (UTC)
- No, I'm afraid you were thinking completely straight (except perhaps for the part where you mentioned hypothesis testing, which is highly controversial). As I tried to explain, Bayesian inference relies on the availability of a prior distribution for the variables of interest. The choice of prior becomes less significant as more data is collected - and it can be sufficient to just choose a maximum entropy distribution - but in this case we only have 3 data points.
- Since StuRat avoided specifying his prior belief that the draw was fair - and the distribution of the bias if it exists - Bayes is inapplicable. He may be considering something simple like "the draw is fair with probability 0.5, and if it's not, it has a probability of p to come up with a special item, where p is distributed uniformly", in which case my calculations show that given the data, the probability of the draw being fair is 0.00118659. -- Meni Rosenfeld (talk) 08:10, 10 November 2009 (UTC)
- Hypothesis testing is controversial? So much for the stats modules I did at A-level... But I wasn't thinking straight - the answer I should have given was "No, we can't calculate that probability without knowing the prior probability. If you give us that, then we can do it." --Tango (talk) 17:59, 10 November 2009 (UTC)
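Meni's figure of 0.00118659 above is easy to reproduce under exactly the stated prior (fair with probability 0.5; otherwise a success probability p uniform on [0,1]) - a sketch:

```python
# P(data | fair): 2 of 3 draws special, with p = 0.01
lik_fair = 3 * 0.01**2 * 0.99

# P(data | biased): integral over p in [0,1] of 3*p^2*(1-p)
# = 3 * (1/3 - 1/4) = 1/4, done analytically
lik_biased = 0.25

# Equal prior weights of 0.5 on "fair" and "biased".
posterior_fair = (0.5 * lik_fair) / (0.5 * lik_fair + 0.5 * lik_biased)
print(posterior_fair)  # ~0.00118659
```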
wolfram alpha
this doesn't look right to me, an extra factor of n' on the first term. and when show steps is clicked it just gets worse. http://www.wolframalpha.com/input/?i=d%2Fdx[(n%27)*exp(-x^2%2F2) . (link is nowikied as wikicode gets confused by the address). Is Wolfram Alpha often mistaken? —Preceding unsigned comment added by 129.67.39.44 (talk) 22:10, 8 November 2009 (UTC)
- It seems like Wolfram|Alpha does not like the prime. This question works: "differentiate (dn/dx)*exp(-x^2/2)" [6]. --Andreas Rejbrand (talk) 22:46, 8 November 2009 (UTC)
- Now I see that it actually works with prime as well, but you need to be explicit with the independent variable: "differentiate (n'(x))*exp(-x^2/2)" works [7]. --Andreas Rejbrand (talk) 22:47, 8 November 2009 (UTC)
- Wolfram Alpha has built-in commands and if you do something that it doesn't understand, it will guess at what you mean. So, it is important that you enter things correctly. StatisticsMan (talk) 14:29, 9 November 2009 (UTC)
Integral (in proof about theta function)
I am reading a proof in a book and it gives
and says "substitution followed by a shift of the path of integration". Doing the substitution gives
where . So, I guess the shift of the path of integration takes this path to the real line. But, I do not get why the two integrals would be equal. StatisticsMan (talk) 22:54, 8 November 2009 (UTC)
- I assume t is positive? Consider the paths from -R to R for large R rather than -∞ to ∞. The two paths form the top and bottom of a rectangle. The difference between the integrals along the top and bottom is equal to the difference between the integrals along the two short ends of the rectangle since the function (call it f) is entire. Out there |f(x)| becomes small and so you can show that those integrals along the sides of the rectangle are small, and go to zero in the limit as R goes to infinity. I hope that's clear since I don't think I explained it very well without a diagram. Rckrone (talk) 01:32, 9 November 2009 (UTC)
Equality of the integrals follows from Cauchy's integral theorem (look it up!) applied to the rectangle whose corners are at −R, R, R −iy/t, and −R −iy/t. The integral around the boundary must be zero. The integrals along the short vertical sides approaches 0 as R grows. Hence the integrals along the upper and lower boundaries have the same limit as R grows. Michael Hardy (talk) 05:22, 9 November 2009 (UTC)
- This makes sense. Thanks to both of you. I know Cauchy's integral theorem :). I just didn't think to consider a finite, very large R, though I should have because that is a very common trick. StatisticsMan (talk) 14:31, 9 November 2009 (UTC)
November 9
Random binary digits from random numbers?
What is the best way to get random binary digits from a sequence of random decimal or hexical digits?
For example, if I toss a six-sided die a total of 8 times, how can I convert that sequence of 8 tosses into a sequence of random binary digits (0's and 1's), each having 50% probability. I know I could simply do 1, 2, 3 on the die = 0, and 4, 5, 6 on the die = 1, but that takes 1 roll per binary digit; I want to do it in a way that minimises the number of die rolls required per bit.
--Dacium (talk) 02:58, 9 November 2009 (UTC)
- You could get 2 random bits from each roll 2/3 of the time and 1 1/3 of the time, for an average of 5/3 bits per roll. 1 counts as 00, 2 counts as 01, 3 counts as 10, 4 counts as 11, all with equal probability so it should be uniform. Then, 5 could be just 0 and 6 could be 1. StatisticsMan (talk) 04:30, 9 November 2009 (UTC)
- The most bits you can get out per roll is x = log2(6) ≈ 2.585. With only a finite number of rolls, you can only get a rational number of bits, as arbitrarily close to x as you'd like (with the closer you get to x requiring more rolls total and a more complex algorithm). For example, following StatisticsMan's method, you can get an average of 7/3 bits per roll as follows: every pair of rolls has 36 possibilities, so list these possibilities in some order; the first 32 possibilities yield the bits 00000, 00001, ..., 11111, and the last 4 possibilities yield the bits 00, 01, 10, 11. Eric. 131.215.159.109 (talk) 05:26, 9 November 2009 (UTC)
- Of course, for best results you should use all the rolls at once. I think the following works: Write where . After rolling, write the result as an integer . Find the minimal m such that . Then and is uniformly distributed over this range. This gives you m bits, and happens with probability (only if ). The expectation is which is 2.42478 per roll. -- Meni Rosenfeld (talk) 06:19, 9 November 2009 (UTC)
- Remark on infinite sequences. If you interpret a sequence of digits in p := {0,1,..,p-1} as the expansion in base p of a real number x, you get a measure preserving map from p^N (with the product of countably many copies of the uniform probability measure on p) into the interval [0,1] with the Lebesgue measure. If you then expand x in base q, you get a measure preserving map into the sequences of digits in q = {0,1,..,q-1}. These maps are bijective up to a countable null set (due to the double representation of some rationals). So any set of p-sequences gives rise this way to a set of q-sequences with the same probability. --pma (talk) 07:32, 9 November 2009 (UTC)
- Dacium, what are you actually trying to do? If you're really rolling dice by hand and trying to get a long bit stream, then StatisticsMan's proposal is good and it does use the entropy optionally unless I'm missing something. If you're trying to program a computer to convert some non-binary entropy source to a bit stream, the mathematical theory of randomness extractors is way more complicated than you need. Just make a conservative estimate of the amount of min-entropy in your input source, and collect enough input that the output of a CSPRNG seeded from that input can't be feasibly distinguished from true random bits. If you can be more specific about your requirements you will probably get more useful answers. 69.228.171.150 (talk) 10:55, 9 November 2009 (UTC)
With simple single rolls of a d10 converted by hand, one easy way would be to convert 10 and 1-7 into three digit binary numbers (000 to 111) and code 8s as 0 and 9s as 1; this would be analogous to StatisticsMan's original suggestion with a d6. An average string of 100 decimal digits would give you 260 zeroes and ones. Of course, if you're using a non-hexical die (e.g., an RPG d10), your best option would be to use a d8, to give you a three-digit binary number every time, or a d20 for 16 four-digit binary numbers and four two-digit ones - an average of 3.6 binary digits per roll. Grutness...wha? 11:31, 9 November 2009 (UTC)
- with decimal I would take 3 digits at a time, and if it's more than 500, the binary digit is 1, less than 500 and it's 0, and this should reduce any off-by-one error from my sloppy thinking to at most 2/500 or maybe 3/500 skew depending on how unlucky I am. If this is unacceptably high, I would increase it to 8 digits at a time, so that if the number is more than 50,000,000 it's a 1 and less than 50,000,000 it's a 0, thereby reducing any off-by-one skew to a negligible amount. Why only 8 digits? As that is a safe distance (2 orders of magnitude) from the overflow of the signed integers I'd probably be using by accident to store the result.... can you tell I'm an adherent of Murphy's law? —Preceding unsigned comment added by 92.224.205.24 (talk) 16:06, 9 November 2009 (UTC)
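StatisticsMan's d6-to-bits mapping above is simple to implement; a sketch for illustration:

```python
# Convert d6 rolls into unbiased bits: 1-4 give two bits, 5-6 give one.
# The first output bit is 0 exactly when the roll is in {1, 2, 5},
# which has probability 1/2, and similarly for every later bit,
# so the output stream is uniform.
ROLL_TO_BITS = {1: "00", 2: "01", 3: "10", 4: "11", 5: "0", 6: "1"}

def rolls_to_bits(rolls):
    return "".join(ROLL_TO_BITS[r] for r in rolls)

print(rolls_to_bits([1, 5, 4, 6, 2]))  # 00011101
```

On average this extracts (4/6)*2 + (2/6)*1 = 5/3 bits per roll, as noted above.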
mathematics/{1+x+x*x}{1+5x+x}{5+x+x}=64
{1+x+x*x}{1+5x+x}{5+x+x}=64 —Preceding unsigned comment added by 59.164.3.176 (talk) 08:23, 9 November 2009 (UTC)
- It helps if you also tells us what you are wondering about. Taemyr (talk) 08:35, 9 November 2009 (UTC)
- In the future, we will not answer your question unless it is deemed unambiguous by a majority. Although you are most certainly encouraged to ask questions here (and we are happy to answer them), it is important to note that phrasing your question in a simple manner increases the chance that someone will answer it (after all, this is a free service). Let me assume that the following is your question:
- The solutions to which are x = -2.7825809, x = 0.66558396. --PST 11:35, 9 November 2009 (UTC)
- Coefficient of x should be 37 i.e.
- Gandalf61 (talk) 12:53, 9 November 2009 (UTC)
- and so the two real solutions are −2.82568 and 0.650129 and the two nonreal solutions are −0.74556±1.4562i. Bo Jacoby (talk) 14:47, 9 November 2009 (UTC).
- Both of you are right. Maybe I need a break from the reference desk... --PST 03:35, 10 November 2009 (UTC)
- None of us are perfect and your contributions are appreciated. Bo Jacoby (talk) 19:40, 10 November 2009 (UTC).
Your equation expands to be 12x^4 + 44x^3 + 49x^2 + 37x − 59 = 0. The left hand side of this equation is a polynomial in the variable x. In fact it is a quartic polynomial, i.e. the highest power of x is four. A corollary of the fundamental theorem of algebra tells us that your equation has, counting multiplicities, four solutions over the complex numbers. As is the case with quadratic polynomials, it is possible to find the exact solutions in terms of radicals, i.e. expressions involving nth roots. Although, in practice, this is very difficult to do by hand for anything other than the simplest of quartic polynomials. If you are satisfied with approximate solutions then there are a few things you could do. The first is to use Newton's method to find approximations to the solutions. With Newton's method you start with an educated guess for one of your solutions, say x0, and feed this into a certain function giving a new value, say x1. You then put x1 into the same function to get x2, and so on. This process of iteration will give a sequence of numbers (x0, x1, x2, …) which will eventually (given a few extra conditions) converge to a solution of your equation. You won't need more than a calculator, a little knowledge of complex numbers and the ability to differentiate a polynomial to use this method. Alternatively, you could just cheat and use a computer algebra package like Maple or Matlab to generate some numerical approximations to the solutions; but where's the fun in that? :o) ~~ Dr Dec (Talk) ~~ 17:35, 10 November 2009 (UTC)
- The fun is in mastering a tool that enables you to solve hard problems easily. J (programming language):
   {:p.(((1 1 1&p.)*(1 6&p.)*(5 2&p.))-64&p.)t.i. 5
_2.82568 _0.74556j1.4562 _0.74556j_1.4562 0.650129
Bo Jacoby (talk) 19:40, 10 November 2009 (UTC).
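Newton's method, as Dr Dec describes it above, can be sketched on this quartic (the starting guess x0 = 1 is an arbitrary choice):

```python
# Newton's method on p(x) = 12x^4 + 44x^3 + 49x^2 + 37x - 59,
# the expanded form of the original equation.
def p(x):
    return 12*x**4 + 44*x**3 + 49*x**2 + 37*x - 59

def dp(x):
    return 48*x**3 + 132*x**2 + 98*x + 37

x = 1.0  # educated guess x0
for _ in range(20):
    x = x - p(x) / dp(x)  # x_{k+1} = x_k - p(x_k) / p'(x_k)

print(x)  # converges to ~0.650129, one of the two real roots found above
```

Since p is increasing and convex on [0.65, 1], the iteration from x0 = 1 descends monotonically onto the root.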
Commutative diagram
I have an n x n diagram (n rows and n columns) with the property that each (1 x 1) cell of the diagram is a commutative diagram. Is the whole diagram commutative? What's the name for this result (if it's true)? —Preceding unsigned comment added by 58.161.138.117 (talk) 12:24, 9 November 2009 (UTC)
- You mean you have a rectangular grid with all arrows between neighbouring nodes going (say) left-to-right and top-to-bottom? Yes, the diagram is commutative. You can prove it easily for any m-by-n grid by induction on m, with inner induction on n. I have no idea whether it has a name. — Emil J. 14:21, 9 November 2009 (UTC)
Why don't we have a complete mathematical language?
Mathematical texts still use natural languages. Couldn't mathematicians develop a complete language? This language being complete in the sense that you can express things like "why", "imagine a". Students would learn how to code their natural native languages into this mathematical language.--Quest09 (talk) 12:57, 9 November 2009 (UTC)
- Well you could use the Mizar system or Metamath but mathematicians are human too and these definitely wouldn't help students. I seem to remember a funny story in Mathematical Intelligencer where students did everything this way. Dmcq (talk) 13:12, 9 November 2009 (UTC)
- There is not much use in developing a language unless there are semantics for it, but it is not at all obvious what the semantics for "imagine a" would be. Thus the formal languages that are used in math are restricted to a class in which we can develop a good semantics. For example, we have a good sense of what "A and B" means in terms of "A" and "B". Philosophers and linguists spend a long time studying the meanings of natural-language phrases, but we do not understand them well enough to try to formalize them.
- On the other hand, many students do learn how to express mathematical claims in "mathematical language" which corresponds roughly to first-order logic. But this does not include the entire range of expression that is possible in a natural language. — Carl (CBM · talk) 13:43, 9 November 2009 (UTC)
- Also see Loglan/Lojban. --Stephan Schulz (talk) 14:38, 9 November 2009 (UTC)
Because mathematicians have a right brain as well as a left brain. The right brain does not respond in the least to rigorous symbolic manipulation. Thus expressing everything symbolically is a great way to take the right brain out of the equation, and deduce rigorous truth even when it is grossly unintuitive. But only a few such deductions are unintuitive; the vast majority of the truths mathematicians manipulate make sense to them on an intuitive, right-brain level as well (and this is why they are mathematicians). If you were to remove language, you could no longer easily communicate with the right brain (imagine if you removed DirectX/OpenGL: you could no longer easily program the GPU) and many kinds of math would become hard to fathom.

This is especially true for appeals to visual intuition, as there is no rigor there at all. A visual demonstration of the Pythagorean theorem has no rigor at all; you must 'add in' all the rigor bit by bit. But, to a mathematician, it might immediately make total sense to see that. Now, was the picture, which by definition has no rigor at all (it's just pixels), the right way to communicate it, instead of through coordinate geometry, which would have taken you 3-5 minutes to translate into a picture? Obviously it was. It's simply easier to show the picture. Similarly, it's just easier to use natural language to appeal to your intuition, and the left brain can then proceed to flesh out the formalities.

For example, primes get harder and harder to come by as you go up through larger and larger primes. Couldn't there be a largest prime, couldn't the set of primes be finite, and not unending? The simple (and famous) natural-language appeal that there cannot be a largest prime is this: "there can't be a largest prime, because if there were, you could take it and all the primes smaller than it, multiply them together, and add one. The new number would fail to divide by "every" prime (it would leave a remainder of 1 for each attempt), but if it fails to divide by any prime, then its only factors must be 1 and itself -- i.e. it must be a prime, and since we got it by multiplying "all" the primes together, it must be greater than all of them. So, it is a new "greatest" prime, clearly a contradiction with the possibility that there were a finite number of primes to begin with." This was a natural-language appeal, and it uses your intuition. I could have phrased it in a more complex and rigorous way, that would have been harder to understand, but if I had done so, you simply wouldn't have understood the truth of that statement so spontaneously (assuming you did). By the way, if you DIDN'T follow my logic, imagine how much farther away still you would have been from 'getting' it if I had used all symbols!

Now, once you have understood the intuitive appeal (or thought of it yourself), you might sit down and go through it rigorously. You might even say whoops, with greater rigor I notice that the new "greatest" prime might not really be a prime, but just relatively prime to all the primes -- in fact, it might have a prime factor we left out of "all" the primes. These kinds of ancillary effects often come out only under greater rigor. But natural language is quite important for even doing the kind of intuitive reasoning/communication that it takes to get to the point where you even know what to try symbolically/with great rigor. 92.224.205.24 (talk) 15:41, 9 November 2009 (UTC)
- 92, your post would be much easier to read if you divided it to paragraphs. -- Meni Rosenfeld (talk) 19:09, 9 November 2009 (UTC)
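Incidentally, the "whoops" correction in 92's post is a real phenomenon: the product of the first k primes plus one is not always itself prime. A quick illustration:

```python
from math import prod

# Product of the first k primes, plus one.
primes = [2, 3, 5, 7, 11, 13]
for k in range(1, len(primes) + 1):
    n = prod(primes[:k]) + 1
    # trial division is fine at this size
    is_prime = n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))
    print(k, n, is_prime)

# 2*3*5*7*11*13 + 1 = 30031 = 59 * 509: it leaves remainder 1 on division
# by each of the first six primes, yet it is composite -- its prime
# factors were simply "left out" of the list.
```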
Question regarding order of an element in a group
I have the following question. Suppose G is a finite group and K is a normal subgroup of G. Also suppose G/K has an element of order n. Then prove that G also has an element of order n.
I understand that finiteness is essential because if G = Z, K = 2Z clearly G/K has an element of order 2 but G doesn't. Other than that I can't think of a thing! I'll appreciate any help--Shahab (talk) 16:20, 9 November 2009 (UTC)
- (Finiteness is not quite essential, it suffices if G is a torsion group.) Just do the obvious thing: take an element g/K of G/K or order n, and try to see if this gives any constraints on the order of g in G. Then you should be able to find an element of order n in G with no difficulty. — Emil J. 16:35, 9 November 2009 (UTC)
- All I can think of is that O(g)≥ n as n is the least positive integer for which g^n is in K. Can you please be a bit more explicit. Thanks--Shahab (talk) 17:15, 9 November 2009 (UTC)
- n is the least positive integer for which g^n is in K, yes. And more holds: for any integer a, g^a is in K if and only if n divides a. Now, g^o(g) = 1 is in K, hence o(g) is not just greater than or equal to n, but actually divisible by n. — Emil J. 18:01, 9 November 2009 (UTC)
- Let with and let n be the least positive integer with this property. Then, if m is a positive integer with , since K is a subgroup. Consequently, since 1 is an element of K and thus has order n, as desired. Note that I think my proof overlaps with EmilJ's, but hopefully I was a bit more explicit. --PST 03:34, 10 November 2009 (UTC)
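For reference, the whole argument fits in a few lines; one way to finish it off, using the divisibility fact established above:

```latex
Let $gK$ have order $n$ in $G/K$, and let $m = o(g)$ be the (finite) order of $g$ in $G$.
Since $(gK)^m = g^m K = K$, the order $n$ of $gK$ divides $m$.
Now set $h = g^{m/n}$. Then
\[
  o(h) \;=\; \frac{m}{\gcd(m,\, m/n)} \;=\; \frac{m}{m/n} \;=\; n,
\]
so $h$ is an element of $G$ of order exactly $n$.
```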
November 10
Question About Primes Near n^n
I'm wondering primarily whether anyone has found a solution to n^n + 2 = prime beyond n = 3. Generally, has anyone taken up the question of the nearest prime values to n^n? Has the question appeared in the literature at all? I imagine it must have. Julzes (talk) 17:56, 10 November 2009 (UTC)
- I've just run a little procedure to check. I got n = 737 and n = 1,349. I had to stop the program after that point because the numbers were getting so big that my computer nearly exploded! ~~ Dr Dec (Talk) ~~ 18:05, 10 November 2009 (UTC)
- Can your computer actually tell if such a number is prime? RSA encryption is based on the fact that it's hard to factor a 400-digit number, even with a supercomputer. When n = 1349, you're talking about a number with <s>over 1 million digits</s> around 4000 digits. At best, you can do primality testing to have a good probability that such a number is prime. Isn't this right? I'm not claiming I know exactly. StatisticsMan (talk) 18:59, 10 November 2009 (UTC)
- Factorization is harder than testing for primality. Taemyr (talk) 19:05, 10 November 2009 (UTC)
- (e/c) First, primality testing can be done deterministically in polynomial time (though that does not necessarily mean that's what Dr Dec did, since the AKS algorithm is actually rather inefficient in practice). Factoring and testing primality are in a completely different ballpark. Second, 1349^1349 has only ~4000 or so decimal digits, not a million. — Emil J. 19:08, 10 November 2009 (UTC)
- Okay, good points, especially about the number of digits. I did 3^1000 = 1000000, instead of 3 * 1000 = 3000. So, my bad. By deterministically, do you mean that you can tell 100% for sure that a number is prime, correct? So, it's not a probabilistic test? StatisticsMan (talk) 19:13, 10 November 2009 (UTC)
- Yes, that is correct. But 3^1000 is not 1000000. -- Meni Rosenfeld (talk) 20:38, 10 November 2009 (UTC)
- Okay, good points, especially about the number of digits. I did 3^1000 = 1000000, instead of 3 * 1000 = 3000. So, my bad. By deterministically, do you mean that you can tell 100% for sure that a number is prime, correct? So, it's not a probabilistic test? StatisticsMan (talk) 19:13, 10 November 2009 (UTC)
I used Maple and the command isprime. For example isprime(2); returns true whereas isprime(4); returns false. I ran the very simple program:
- L := [ ]: (making an empty list)
- for i from 1 to 1000 do (looping i over the cases)
- if isprime((2*i+1)^(2*i+1) + 2) then L := [op(L), i]: fi: od: (checking all odd n = 2i+1; if n^n + 2 is prime, add i to the list)
At the end my list L came out as [1,368,674] meaning that n^n + 2 is prime for n = 3, 737 and 1,349. ~~ Dr Dec (Talk) ~~ 19:26, 10 November 2009 (UTC)
- P.S. I've asked Maple to factor 737^737 + 2 and it seems to have either got lost or gone on strike. Either way, it's not very happy. As has been mentioned above: factorisation is much more difficult than primality testing. Hence the security of RSA encryption. ~~ Dr Dec (Talk) ~~ 19:35, 10 November 2009 (UTC)
- According to MapleSoft's web page, isprime returns true if p is very probably prime. It does not say which algorithm it uses but gives references at the bottom. It says there is no known example of it being wrong and such a number would have to be hundreds of digits long. StatisticsMan (talk) 19:45, 10 November 2009 (UTC)
- Mathematica confirms the results of User:Dr Dec. From the implementation notes:
- PrimeQ first tests for divisibility using small primes, then uses the Miller-Rabin strong pseudoprime test base 2 and base 3, and then uses a Lucas test.
- As of 1997, this procedure is known to be correct only for , and it is conceivable that for larger n it could claim a composite number to be prime.
- The Primality Proving Package contains a much slower algorithm which has been proved correct for all n. It can return an explicit certificate of primality.
- I can try to verify this using the last item if it is important to you. -- Meni Rosenfeld (talk) 20:38, 10 November 2009 (UTC)
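For anyone without Maple or Mathematica, the same search can be sketched in Python with a Miller-Rabin test. Only small n are checked here, since the numbers grow quickly; this fixed base set is known to be deterministic for numbers below about 3.3×10^24, which covers the range used:

```python
def miller_rabin(n):
    """Miller-Rabin primality test; deterministic below ~3.3e24 with this base set."""
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# Which odd n in a small range make n^n + 2 prime? (Even n give an even sum.)
hits = [n for n in range(3, 16, 2) if miller_rabin(n**n + 2)]
print(hits)  # [3] -- the next hits reported above, n = 737 and 1349, are far beyond this range
```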
Infinite-time probability
This problem follows from a debate I was having with a friend in one of my classes. "If two people walk randomly around a room, given infinite time, will they always collide?" To simplify, if the chance of hitting in one minute is between 0 and 1 (say 0.01), what is the chance they will collide given an infinite number of minutes? As it happens, I said "1" and my friend said "almost 1".
Now, it would appear from our articles that the answer is that it will "almost surely" occur.
“The difference between an event being almost sure and sure is the same as the subtle difference between something happening with probability 1 and happening always. If an event is sure, then it will always happen, and no other event can possibly occur. If an event is almost sure, then other events are theoretically possible in a given sample space; however, as the cardinality of the sample space increases, the probability of any other event asymptotically converges toward zero. Thus, one can never definitively say for any sample space that other events will never occur, but can in the general case assume this to be true.”
I'm afraid these paragraphs, as they apply to my problem, are a loss to me. Is it a certainty or not? (I realise that I probably technically won the debate here; the answer is 1. More interesting is the interpretation of that figure, it would seem.) - Jarry1250 [Humorous? Discuss.] 17:57, 10 November 2009 (UTC)
- You can't actually do the experiment for an infinite amount of time, so we can't say what would actually happen. What we can say is that, as the length of time you do it for increases, the chance of a collision tends towards one. In the limit as you do it for an infinite amount of time we say the probability is one, but it is still possible that it wouldn't happen (both people could, purely by chance, walk round in small, non-intersecting circles for the whole time, for example). All that being said, you could both be wrong. It depends on what you mean by "walk randomly". Random walks are a much studied mathematical technique and different things happen in different situations. I think the best way to model your question is to consider the room as a grid and each person steps either north, east, south or west into the next square each second. You then need to decide on the distribution - are they more likely to keep walking in the same direction than they are to turn around and go backwards, for example? You can't calculate it based on the collision chance in a minute since each minute will be different depending on where they start that minute, and the minutes will be dependent on each other. If they move around really quickly (or the room is really small) then it is possible each minute would be approximately the same, but if they are going at a normal walking speed around a normal sized room, a minute is too short a time period. You may also be interested in Brownian motion, which is like the random walk I described but with an infinitely small grid (roughly speaking). --Tango (talk) 18:09, 10 November 2009 (UTC)
- Thanks Tango - I understand the problem with modelling the exact scenario, to the point it doesn't really matter so much to me as the meta-issue. So the probability is 1, but they might not collide? Weird. - Jarry1250 [Humorous? Discuss.] 18:16, 10 November 2009 (UTC)
- "Might" is such a strong word. The only thing separating "sure" from "almost sure" is that your sample space includes instantiations in which they don't collide. Tango's example of walking forever in circles matches the problem specification of walking inside the room. However, the probability that any of these instantiations will manifest is 0, and for any sensible notion of probability, this means that they will collide. -- Meni Rosenfeld (talk) 18:25, 10 November 2009 (UTC)
- For a reasonable mathematical model of the problem, the answer is: "The probability that they will collide at some point is exactly 1. They will almost surely collide at some point". The fact that these two statements do not contradict each other is what the paragraph you quote attempts to clarify. IMO, the distinction between "surely" and "almost surely" is too subtle to survive the translation to the physical world. -- Meni Rosenfeld (talk) 18:19, 10 November 2009 (UTC)
- The distinction involves infinity - nothing involving infinity translates nicely to the physical world. --Tango (talk) 19:06, 10 November 2009 (UTC)
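The grid model Tango describes can be tried out in a short simulation. This is a sketch only, with assumed parameters (a 10×10 room, walkers starting in opposite corners, uniform steps in the four compass directions, clamped at the walls):

```python
import random

def simulate(steps, size=10, seed=None):
    """Walk two people on a size x size grid, one step per second each.
    Return the first time they occupy the same square, or None."""
    rng = random.Random(seed)
    a, b = (0, 0), (size - 1, size - 1)  # start in opposite corners
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # the four compass directions

    def step(pos):
        x, y = pos
        dx, dy = rng.choice(moves)
        # clamp to the room so a walker at a wall stays inside
        return (min(max(x + dx, 0), size - 1), min(max(y + dy, 0), size - 1))

    for t in range(1, steps + 1):
        a, b = step(a), step(b)
        if a == b:
            return t
    return None
```

Running this with ever-larger `steps` illustrates the point above: any single finite run might return None, but the fraction of runs that do shrinks towards zero as the time allowed grows.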
Set of integers
Is there a simple name (e.g. weird number, prime number, even prime number, colossally abundant number) for this set of integers: 1, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53... ? (Note that this is essentially the set of odd prime numbers, except that 1 is not a prime number. These numbers all have the property that they cannot be divided by any number except 1 without becoming a non-integer number.) 68.116.36.179 (talk) 18:32, 10 November 2009 (UTC)
- Some people used to include 1 in the primes but it just causes trouble. I'm not sure why you have excluded 2. Dmcq (talk) 18:37, 10 November 2009 (UTC)
- Also a nitpick; 7 divided by 7 equals 1. Taemyr (talk) 18:54, 10 November 2009 (UTC)
Well, 2 is the only even prime number so being an odd prime is anything but special. If P denotes the set of prime numbers then your set is ({1} ∪ P) − {2}. So you seem to have substituted 2 for 1 in the set of primes. Why ought this new set to have a special name? ~~ Dr Dec (Talk) ~~ 19:04, 10 November 2009 (UTC)
- The phrase "odd prime" does come up quite often in number theory, since it is often desirable to exclude 2 (particularly when dealing with quadratics). Why anyone would want to include 1, though, I don't know. You could call that set "odd positive non-composite integers" if you wanted to, one being neither prime nor composite (and being unique among positive integers in having that property). --Tango (talk) 19:11, 10 November 2009 (UTC)
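For what it's worth, the set can be generated straight from its defining property - odd positive integers with no divisor strictly between 1 and themselves - in a few lines (a Python sketch; the function name is made up here):

```python
def non_composite_odds(limit):
    """Odd positive integers up to limit that are not composite,
    i.e. 1 together with the odd primes."""
    def is_composite(n):
        # n is composite iff some d with 2 <= d <= sqrt(n) divides it
        return any(n % d == 0 for d in range(2, int(n ** 0.5) + 1))
    return [n for n in range(1, limit + 1, 2) if not is_composite(n)]
```

Since the loop runs over odd numbers only, 2 never appears, and 1 passes because it has no candidate divisors at all - exactly the "neither prime nor composite" point made above.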
Short division
I am trying to help my niece with her homework but I cannot remember doing short division at school and can't make any sense of any of the guides I have found. Can someone work through an example with me? Take 4 into 67 - I can get as far as getting to 16 and then I am stumped how I get to the .75? --Cameron Scott (talk) 19:38, 10 November 2009 (UTC)
- If I remember rightly, you do: 4 into 67 is 16, then you go on with the remainder 3 = 67 − 64. You put a 0 after the 3 and a dot after the 16, and go on this way: 4 into 30 gets 7, &c. Check short division also. --pma (talk) 19:45, 10 November 2009 (UTC)
What does "&C" mean? I couldn't make any sense of the short division article - it might as well have been in Dutch. --Cameron Scott (talk) 19:52, 10 November 2009 (UTC)
- &c. is short for the borrowed Latin phrase et cetera. 69.228.171.150 (talk) 20:22, 10 November 2009 (UTC)
Let's say that you want to divide 956 by 4. Well you write out
Now how many times does 4 go into 9? Well, 4 goes into 9 twice with 1 left over, so we write a 2 above the 9 and put a little 1 in front of the 5 (like carrying in addition). So we have
Next, how many times does 4 go into 15? Well, 4 goes into 15 thrice with 3 left over, so we write a 3 above the 15 and put a little 3 in front of the 6. So we have
Finally, how many times does 4 go into 36? Well, it goes 9 times with none left over. So we write a 9 above the 36 and stop because the remainder was zero. This gives:
If the remainder weren't zero then we could have carried on by considering
The net result is that 956/4 = 239. ~~ Dr Dec (Talk) ~~ 20:21, 10 November 2009 (UTC)
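The digit-by-digit procedure above can also be written out as a sketch (Python), carrying the remainder into the next digit exactly as in the worked example:

```python
def short_division(dividend, divisor):
    """Divide digit by digit, carrying the remainder, as in short division.
    Returns (quotient, remainder)."""
    quotient_digits = []
    carry = 0
    for ch in str(dividend):
        current = carry * 10 + int(ch)   # e.g. the "little 1" in front of the 5
        quotient_digits.append(current // divisor)
        carry = current % divisor        # carried into the next digit
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, carry
```

For 956 ÷ 4 this gives quotient 239, remainder 0, matching the steps above; for 67 ÷ 4 it gives 16 remainder 3, after which one appends zeros to continue into decimals (3 becomes 30, 4 into 30 goes 7, and so on, giving 16.75).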
Thanks, that's a bit clearer. While I remember doing long division at school, I have no recollection of short division. Between three (!!!!) adults, we finally worked it out. --Cameron Scott (talk) 20:26, 10 November 2009 (UTC)