Wikipedia:Reference desk/Mathematics: Difference between revisions
This is the theorem. Suppose the figure is like [http://img12.imageshack.us/i/300kp.png/ this]: we construct CO and produce it to P. Then, angle OCA = angle OAC (OC = OA and converse of isosceles triangle theorem) and thus angle OCA = 1/2 angle AOP (exterior angle theorem). Similarly, angle OCB = 1/2 angle BOP. Therefore, angle ACB = 1/2 angle AOB. But if the figure is like [http://img20.imageshack.us/i/301r.png/ this], how do we prove it? Srinivas (talk) 14:27, 11 November 2009 (UTC)
- We can do the same thing: extend CO to a point P on the other side of the circle. Using the same arguments, angle OCA = 1/2 angle AOP, and angle OCB = 1/2 angle BOP. This time instead of adding these angles together, we subtract: angle OCA - angle OCB = angle ACB, etc. Rckrone (talk) 15:12, 11 November 2009 (UTC)
November 12
Revision as of 08:51, 12 November 2009
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
November 6
Counterexample
Could anyone suggest a nice example of a continuously differentiable function on R^2 which has only 1 stationary point - a local minimum (or maximum), but which is not a global minimum (or maximum)?
I keep looking for functions with minimum at 0,0 but I can't seem to find anything which increases locally in all 4 directions, i.e. along both axes, and yet doesn't produce any more stationary points when it eventually decreases somewhere. I found one in a paper by Ash and Sexton but it was quite unpleasant and I was hoping to find something nice and neat. Any ideas?
Thanks. 82.6.96.22 (talk) 00:50, 6 November 2009 (UTC)
- I think (x^3 - x)e^y + e^(2y) works. Rckrone (talk) 01:31, 6 November 2009 (UTC)
- As an aside, increasing along both axes doesn't guarantee a local min. For example x^4 + y^4 - (x+y)^4/4 - (x-y)^4/4 has only a saddle point at (0, 0). Rckrone (talk) 02:02, 6 November 2009 (UTC)
In polar coordinates: unless I misunderstand your question or am not thinking straight, try
That should have a local minimum at the origin but oscillate unboundedly as you get further away. 69.228.171.150 (talk) 06:10, 6 November 2009 (UTC)
- A further note. Consider a diffeomorphism h of R^n with an open subset A of R^n itself, for instance a ball, h: R^n → A. If f: R^n → R is any differentiable function, the composition g(x) := f(h(x)) is a differentiable function whose critical set has the same structure (same number, index, nullity, level) as the critical points of f in A. Of course, the latter may be whatever. If the global max and min points of f belong to the boundary of A, then g also has the same sup/inf as f. So for instance there is a function on R^n with critical points of prescribed level, index, and cardinality, and the interval f(R^n) may also be prescribed. --pma (talk) 09:35, 6 November 2009 (UTC)
- With regards to the polar-coordinates suggestion, would that not have more than 1 stationary point though? 82.6.96.22 (talk) 14:23, 6 November 2009 (UTC)
- Oh yes, I did misunderstand. Hmm, I'll think about whether I can fix it by suitably twisting up the shape while keeping the equations reasonably neat. 69.228.171.150 (talk) 22:50, 6 November 2009 (UTC)
Try this section of the maxima and minima article. It says that ƒ : R^2 → R given by ƒ(x,y) = x^2 + y^2(1 − x)^3 has its only critical point at x = y = 0 but that this isn't a global minimum because ƒ(4,1) = −11 < ƒ(0,0) = 0. ~~ Dr Dec (Talk) ~~ 15:44, 6 November 2009 (UTC)
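To double-check this counterexample numerically, here is a small sympy sketch (illustrative only, not part of the thread): it confirms that (0,0) is the only critical point, that it is a local minimum, and that ƒ(4,1) = −11 < ƒ(0,0) = 0.

# sympy check of the counterexample f(x,y) = x^2 + y^2*(1 - x)^3
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2*(1 - x)**3

crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(crit)                       # [{x: 0, y: 0}] -- the only critical point
H = sp.hessian(f, (x, y)).subs({x: 0, y: 0})
print(H.is_positive_definite)     # True -- so (0,0) is a local minimum
print(f.subs({x: 4, y: 1}))       # -11, smaller than f(0,0) = 0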
Polynomials
When you say R[x]/<x^2+1> is isomorphic to the complex numbers, do you mean that because [x]^2+[1]=[0], where the brackets denote the respective equivalence classes, and x is the polynomial f(x)=x? Is the following correct: [x]^2=[x^2] and hence [x]^2+[1] = [x^2+1] = [0], so [x] is like the imaginary unit? Najor Melson (talk) 05:38, 6 November 2009 (UTC)
- If R is a ring and if I is an ideal of R, the map sending r to the coset r + I is a surjective ring homomorphism. In particular, using your notation, [x]^2 = [x^2] is true, as well as your other assertions. Consider the surjective homomorphism defined on R[x] that sends a polynomial f to its value at i (the imaginary unit). Since R[x] is a principal ideal domain, and since x^2 + 1 is irreducible over the real numbers, the kernel of this homomorphism is generated by x^2 + 1. In particular, R[x]/<x^2+1> ≅ C. Furthermore, since the image of x under this isomorphism is i, x does indeed "act like i" in the given quotient ring (isomorphisms preserve the behaviour of elements). Hope this helps. --PST 05:53, 6 November 2009 (UTC)
It means not only that [x]^2+[1]=[0], but more generally there is a one-to-one correspondence between these equivalence classes and the complex numbers, such that multiplication and addition of equivalence classes correspond exactly to multiplication and addition of complex numbers. Michael Hardy (talk) 16:58, 6 November 2009 (UTC)
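As a concrete illustration of that correspondence, here is a short sympy sketch (illustrative only): multiplying polynomials and reducing modulo x^2 + 1 matches multiplying the corresponding complex numbers, with the class of x behaving as i.

# Arithmetic in R[x]/<x^2+1> versus arithmetic in C: the class of a + b*x
# corresponds to the complex number a + b*i.
import sympy as sp

x = sp.symbols('x')

def reduce(p):
    # remainder of p on division by x^2 + 1: a representative of its class
    return sp.rem(sp.expand(p), x**2 + 1, x)

p, q = 2 + 3*x, 5 - x                  # stand for 2 + 3i and 5 - i
print(reduce(p*q))                     # 13*x + 13   <->  (2+3i)(5-i) = 13 + 13i
print(reduce(x*x))                     # -1          <->  [x]^2 = [-1], i.e. i^2 = -1
print(complex(2, 3) * complex(5, -1))  # (13+13j), matching the class above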
ordinal strength
The ordinal strength of PA is ε0. How does one figure it out for PA+Con(PA)? Thanks. 69.228.171.150 (talk) 08:36, 6 November 2009 (UTC)
- It depends on the particular way you choose to measure ordinal strength. There is a trivial answer for measures based on computable functions, such as the smallest ordinal α such that every provably total computable function is β-recursive for some β < α: since the set of p.t.c.f. of a theory is unaffected by addition of true Π^0_1-sentences, PA + Con(PA) has the same ordinal as PA, ε0, in this measure. I'm afraid the answer is more complicated for measures sensitive to the Π^0_1-fragment of the theory. For example, the Π^0_1-ordinal measure by Beklemishev[1][2] gives ε0·2 for PA + Con(PA). — Emil J. 16:03, 7 November 2009 (UTC)
- Thanks, that's interesting. I am naive about this stuff and thought that Gentzen had shown that PA couldn't prove ε0 was well-ordered, but that adding TI(ε0) was enough to prove Con(PA), so the ordinals had to be different. The Beklemishev article looks informative and I'll try to read it. 69.228.171.150 (talk) 21:06, 7 November 2009 (UTC)
- It is true that PA cannot prove ε0 is well-ordered, and that primitive recursive arithmetic plus transfinite induction on ε0 proves PA is consistent. The thing that is not obvious (or perhaps non-intuitive) is that if you add Con(PA) to PA as an axiom, you obtain no new provable ordinals. This is because the two measures of logical strength, namely:
- T is stronger than S if T proves Con(S)
- T is stronger than S if the proof-theoretic ordinal of T is higher than the proof theoretic ordinal of S
- are not actually equivalent. I do not even know if there is a theorem that (2) implies (1) for some broad class of theories. — Carl (CBM · talk) 13:35, 9 November 2009 (UTC)
- TI(ε0) is a second-order statement. While one often encounters it mentioned sloppily in the context of PA, it only makes sense in second-order arithmetic, or at least arithmetic with an extra uninterpreted predicate, so properly speaking it gives an ordinal analysis of ACA0 or PA(X) rather than PA. Having said that, this is basically Π^1_1-ordinal analysis, which is even less sensitive to additional axioms than the computable functions approach (i.e., Π^0_2-analysis) I mentioned above. ACA0 + Con(PA), as well as ACA0 + T for any r.e. set T of arithmetical sentences of bounded complexity consistent with PA, has the same ordinal ε0 as PA in this setting. The reason is that TI(ε0) implies not only Con(PA), but in fact the full uniform reflection principle for PA. — Emil J. 14:08, 9 November 2009 (UTC)
- You can also consider restricted forms of TI as a schema for classes of arithmetical formulas; this works already in the first-order language. If I got the subscripts right, -TI(ε0) is not provable in any consistent extension of PA by a set of -sentences, in particular PA + Con(PA) does not prove -TI(ε0). — Emil J. 17:06, 9 November 2009 (UTC)
- I did get the subscripts wrong, or rather, badly suboptimal. The (hopefully) correct property is that for any n ≥ 0, -TI(ε0) is equivalent to -TI(ε0), which is in turn equivalent to -RFN(PA), and consequently is not provable in any consistent extension of PA by a set of -sentences (whereas it is easy to see to be expressible as a single -sentence). In particular, already -TI(ε0) is unprovable in PA + Con(PA) (or in any consistent extension of PA by -sentences). — Emil J. 13:05, 10 November 2009 (UTC)
- Thanks, I just spotted the after-Nov-7 posts. I didn't realize that PA and PA+X could have the same ordinal if PA doesn't prove X. It is counterintuitive because of the natural picture of these theories being well-ordered as you add or take away axioms. I also hadn't thought much about how to formalize TI(ε0). The book I was reading [3] may have handwaved it, but I'll go back and see if I can figure it out. 69.228.171.150 (talk) 03:51, 12 November 2009 (UTC)
Birth and Death Chain - Markov Processes
Hi all :)
I asked this a while ago but never got an answer, hopefully someone might be able to help me this time. For a fairly simple birth-death chain (1D, with probabilities p_i, q_i of moving in each direction at node i), what's the easiest way to find (*) P(X_n → infinity as n → infinity) - e.g. should I be looking at the expected time of something as n tends to infinity, or P(X_n < k) as n tends to infinity, or the transition matrix P^n etc? Hopefully once I know a sensible way to show (*), the actual question will be fairly simple.
Thanks a lot, Otherlobby17 (talk) 16:28, 6 November 2009 (UTC)
- Could you describe the problem in more words? For example, what is k? 69.228.171.150 (talk) 23:00, 6 November 2009 (UTC)
- I want to find it for values of k in (0, ∞) - is there a general formula of any sort? Initially I want to show that the probability is 1 for k=2, but then I've been asked to find the formula for a general k in the positive reals - why, is that not feasible? Other than the p_i, q_i formulae, I don't really have anything else to explain unfortunately, I mean it's a markov chain along a line (natural numbers, say) with probabilities p_ij of moving from i to j. Sorry but that's pretty much the whole problem! Otherlobby17 (talk) 15:06, 7 November 2009 (UTC)
- It seems you'll have to explain your notation even further. Specifically, your formula for has no mention of , and your is not given any meaning. Is one of the states, or some constant? What exactly do all of these labels mean? Nm420 (talk) 18:44, 7 November 2009 (UTC)
- I'm pretty sure what the OP means is that when the state is currently at i, pi is the probability of it moving to i+1, and qi is the probability of it moving to i-1. No other moves are allowed. Referring to the notation in Birth-death process, pi = λi and qi = μi. k is just a constant as the OP said. The answer to the question presumably depends on k. Sorry I don't have any insight into solving the problem. Rckrone (talk) 19:20, 7 November 2009 (UTC)
- That makes some sense, though I'm afraid there's little aid I can provide in this problem as well. Since the chain is irreducible, all states are either transient or recurrent; that is, the probability that X_n → ∞ is either 0 or 1. If you can show under what conditions state 0 (say) is transient, then you will be done. 97.83.155.87 (talk) 20:54, 7 November 2009 (UTC)
- Apologies, I assumed this was fairly standard notation but was obviously mistaken. Yes, at state i (i=0,1,2,...) we may move left by 1 state with probability q_i (or P_{i,i-1} if you prefer to talk about the transition matrix) and right by 1 state with probability p_i (P_{i,i+1}) - p_i + q_i = 1, so we have only these 2 options at each stage. The relation between p_i and q_i is the one given above involving k, and I want to find a solution based on the value of k, which is somewhere in (0, infinity) (I am first asked to prove the result is P(X_n → infinity as n → infinity) = 1 for k=2, and then asked for general k, but obviously the general case would solve my first problem anyway, unless you need the result at k=2 to prove the general case) - when I mentioned p_{i,j} above, what I meant was the probability of moving from i to j - nonzero iff j=i±1, in this case. Does that make sense?
- Also, I follow what you say about all states being either transient or recurrent, but how does that necessarily imply that the probability that X_n → ∞ is either 0 or 1? Thanks very much all :) Otherlobby17 (talk) 20:59, 7 November 2009 (UTC)
- If a state were recurrent, then the probability of the process returning to it in a finite amount of time is 1; obviously the process cannot return to any particular state if it is growing without bound. That is the intuitive idea behind it anyhow. 97.83.155.87 (talk) 22:06, 7 November 2009 (UTC)
- If a state is recurrent then the probability of the sequence going to infinity is zero, but if all the states are transient, couldn't the probability of the sequence going to infinity still be less than 1? I don't see why it would have to be restricted to 0 or 1 either. Rckrone (talk) 05:57, 8 November 2009 (UTC)
- If all the states are transient, meaning that for any j in N the set {n : X_n = j} is finite almost surely, then (by sigma-additivity) (X_n) is almost surely a sequence of naturals that takes each value finitely many times, and therefore diverges to +∞. Following 97.83's directions, I guess one should start by computing or bounding the probability to go back to 0 after time n; it's the corner coefficient p^(n)_{0,0} in the n-fold power of the associated infinite transition matrix.
- However it seems difficult to do it for the given p_i, q_i. But for instance, if one finds that the sum of the p^(n)_{0,0} over n > 0 is less than 1, then 0 has to be transient, as the latter value is a bound for the probability that X_n = 0 for at least one n > 0 (a more precise bound may be obtained via the inclusion-exclusion formula also). In the lack of better ideas, I'd suggest to make some numerics with various choices of the parameter k (note that p^(n)_{0,0} only depends on the first n/2 values of p_i or so). --pma (talk) 15:28, 8 November 2009 (UTC)
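Along the lines of pma's suggestion to try numerics, here is a rough Monte Carlo sketch in Python. The formula relating p_i, q_i and k did not survive in this archive, so the function p(i, k) below is only a hypothetical placeholder to be replaced with the formula from the original problem; the escape test is likewise a crude finite-horizon proxy for the event X_n → ∞.

import random

def p(i, k):
    # PLACEHOLDER probability of moving i -> i+1 (hypothetical form);
    # replace with the actual p_i from the problem.  State 0 reflects right.
    return 1.0 if i == 0 else (i + 1)**k / ((i + 1)**k + i**k)

def escapes(k, threshold=300, max_steps=30_000):
    """Crude proxy for X_n -> infinity: does the walk reach a large threshold
    within the step budget?"""
    i = 0
    for _ in range(max_steps):
        if i >= threshold:
            return True
        i += 1 if random.random() < p(i, k) else -1
    return False

for k in (0.5, 1.0, 2.0):
    runs = 100
    freq = sum(escapes(k) for _ in range(runs)) / runs
    print(f"k = {k}: reached the threshold in {freq:.0%} of {runs} runs")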
Help with Logic Problem
I think I've made progress, but I'm stuck -- I was hoping for a nudge in the right direction.
I'm only allowed to use the 18 valid argument forms.
Here's what I have so far
1. A≣B (premise)
2. ~(A ⋅ ~R) ⊃ (A ⋅ S) (premise) /∴ ~(B ⋅ S) ⊃ ~(A ⋅ R)
3. (A ⊃ B) ⋅ (B ⊃ A) 1 EQUIV
4. A ⊃ B 3 SIMP
5. (A ⋅ ~R) v (A ⋅ S) 2 IMP
6. A ⋅ (~R v S) 5 DIST
7. A 6 SIMP
8. B 7, 4 MP
9. ~R v S 6 SIMP
but now I'm completely stuck! I have a sneaking suspicion that ADD might help......UGH, any thoughts?
Sorry for the double post, I fixed the formatting. 209.6.48.226 (talk) 18:41, 6 November 2009 (UTC)
- Well, "the 18 valid argument forms" doesn't really ring a bell — you should know that which inference rules are taken as basic is something that varies by presentation. There's a good chance that the specific list of inference rules and their names/acronyms is particular to, perhaps, even just the one professor from whom you're taking the course. It's also possible that this is some more widely recognized list that someone else can help you with, just not me. But it's very far from universal, and you should be aware of that. --Trovatore (talk) 20:24, 6 November 2009 (UTC)
- I hadn't heard of 18 valid argument forms either, but there are a few ghits. 69.228.171.150 (talk) 20:40, 6 November 2009 (UTC)
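Whatever the intended list of argument forms, the argument itself can at least be checked semantically by brute force over all truth assignments. The following small Python sketch (not a derivation in the 18 forms, just a truth-table check) confirms that the two premises really do entail the conclusion.

# Brute-force semantic check that the premises entail the conclusion:
#   1. A <-> B
#   2. ~(A & ~R) -> (A & S)
#   therefore  ~(B & S) -> ~(A & R)
from itertools import product

def implies(p, q):
    return (not p) or q

valid = True
for A, B, R, S in product([False, True], repeat=4):
    prem1 = (A == B)
    prem2 = implies(not (A and not R), A and S)
    concl = implies(not (B and S), not (A and R))
    if prem1 and prem2 and not concl:
        valid = False
        print("counterexample:", A, B, R, S)

print("valid" if valid else "invalid")   # prints: valid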
November 7
Probability question
This is nothing to do with Homework, just an interesting problem that I came across. Assume there are three possible states for a man : He can be healthy, sick, or dead. The probability that he is sick the next day given that he is healthy today is 0.2 (the values don't really matter much here). The probability that he is dead the next day given he is sick today is 0.25. The probability that he is dead the next day given he is healthy today is zero. The probability that he is healthy the next day given he is sick is 0.4. Given he is healthy today, what is the probability that he will never die ? My instinct tells me that the answer is zero, he has to get sick somewhere down the line, and eventually die. Rkr1991 (Wanna chat?) 06:32, 7 November 2009 (UTC)
- Yes, it's 0. Assuming the independence you may observe that the probability to be dead within 2 days is in any case at least p=0.05, so the probability to be still alive after 2n days is not larger than (1-p)n, that goes to 0 exponentially. Yours is an example of a Markov chain. --pma (talk) 07:59, 7 November 2009 (UTC)
- OK. So how do you find the answer to the slightly harder question, what is the probability that he is alive after n days given he is alive today ? Rkr1991 (Wanna chat?) 10:55, 7 November 2009 (UTC)
- (I assume you mean "given he is healthy today" ). Just write the transition matrix relative to the states 1=H, 2=S, 3=D, which is
- P :=
  0.80  0.20  0.00
  0.40  0.35  0.25
  0.00  0.00  1.00
(rows and columns ordered 1=H, 2=S, 3=D)
- where p_ij is the probability to pass from the state i to the state j. Then, in P^n the coefficient p^(n)_{1,3} is the probability to be in 3 at time n starting from 1 at time 0. To write P^n efficiently you should diagonalize P first (you can: it has 3 simple eigenvalues: 1, 0.213, and 0.936), P = LDL^(-1), P^n = LD^nL^(-1). Check Markov chain for details. --pma (talk) 12:13, 7 November 2009 (UTC)
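A small numpy sketch of the computation pma describes (illustrative only, using the matrix written above): raising P to the nth power gives the probability of being dead after n days, and the eigenvalues come out at roughly the values quoted.

import numpy as np

P = np.array([[0.80, 0.20, 0.00],   # from Healthy
              [0.40, 0.35, 0.25],   # from Sick
              [0.00, 0.00, 1.00]])  # from Dead (absorbing)

for n in (1, 2, 10, 100, 1000):
    Pn = np.linalg.matrix_power(P, n)
    print(n, Pn[0, 2])              # P(dead at day n | healthy at day 0) -> 1

print(np.sort(np.linalg.eigvals(P).real))   # approx [0.214, 0.936, 1.0]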
- Great, thanks. Rkr1991 (Wanna chat?) 13:47, 7 November 2009 (UTC)
- Just a quibble here: You say he has to eventually die. Not really. That he will die eventually has probability 1, but is not certain; it is possible for him to remain healthy for all time, but the probability is 0. It's a subtle distinction, but an important one in some contexts. --Trovatore (talk) 01:20, 9 November 2009 (UTC)
- Indeed. He will almost surely die. --Tango (talk) 01:24, 9 November 2009 (UTC)
- And we should then add, that he will almost surely remain dead forever, but not certainly, as there is a probability 0 that he comes back :-/ --pma (talk) 04:12, 12 November 2009 (UTC)
Moment of inertia of a trapezoidal prism
I'm trying to put together a little tool to calculate a first approximation to the centre of gravity and moments of inertia of a conventional airliner. Most of it is pretty simple: I'll model the fuselage as a solid cylinder (though I'm tempted to try to make it more a combination of the structural mass as a cylindrical shell and the payload mass as a cylinder inside it), the engines also as cylinders, and the moments of inertia around the aircraft CG of the tail parts are dominated by the parallel axis theorem so I can neglect their contributions about their own CGs.
The challenge I'm finding is the wing. It is not small and it is close to the aircraft CG so I need its own moment of inertia. I would like to take into account the effects of wing sweep, dihedral, and taper (that is, the difference in chord between the wing tip and the wing root). I don't need anything more complex like elliptical wings or multiple taper ratios. Sweep and dihedral I'll deal with by just rotating the moment of inertia matrix once I have it. So what I need is the moment of inertia of a trapezoidal prism. But I can't find any equations for that anywhere.
From the Moment of inertia article: I_xx = ∫(y^2 + z^2) ρ dV, I_yy = ∫(x^2 + z^2) ρ dV, I_zz = ∫(x^2 + y^2) ρ dV (with the corresponding products of inertia for the off-diagonal terms).
The convention I am using is that x is rearward, y is out the right wing, and z is up. So the wing is a trapezoid when projected on the x-y plane, with a constant (relatively small) thickness in the z direction. I am happy keeping the density (rho) constant. I expect that the answer will be in the form of formulae for Ixx, Iyy, Izz about the principal axes (the diagonal of the matrix), with Ixz, Iyz, and Ixy zero and I will have to rotate the matrix to work in my coordinate system.
I would also like some guidance on the appropriate thickness to choose as the z-dimension. Clearly it is airfoil shaped, but I was going to approximate it as rectangular. Given the maximum airfoil thickness, what would be a rough guide to the right prism thickness? Actually, now that I think about it, in truth the height of the airfoil is proportional to the chord as well, that is it decreases from root to tip. But I am happy to neglect that. An alternative to modelling it as a rectangular section would be a diamond section, i.e. a quadrilateral with two sets of two equal sides, and the user of my tool would have to specify the maximum airfoil thickness and the location of the maximum airfoil thickness.
I know I could have asked this on the Science RD, but I hope I've taken the physics out of the question and it is now actually a math question.
Thanks, moink (talk) 07:13, 7 November 2009 (UTC)
- Have you considered numerical integration? If the parameters don't change too drastically, you can do that for various values and fit a function to the results. -- Meni Rosenfeld (talk) 06:30, 9 November 2009 (UTC)
- I could do that, but I am pretty sure there is a closed-form solution. moink (talk) 08:54, 9 November 2009 (UTC)
- Deciding I had to start deriving it myself, I googled "centroid of a trapezoid" to at least not have to derive the centroid, and found this page: [4] which has two of the three answers I need. The third can't be too terribly different from a rectangular prism. moink (talk) 08:54, 9 November 2009 (UTC)
- Looking more closely at those, they are area moments of inertia and not mass moments of inertia, but I think they are still helpful as partial solutions to the integral. moink (talk) 09:10, 9 November 2009 (UTC)
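Along the lines of Meni's numerical-integration suggestion, here is a rough Monte Carlo sketch in Python for the mass moments of inertia of a constant-thickness trapezoidal panel. The numbers and parameter names (m, c_root, c_tip, b, t) are made-up placeholders; sweep and dihedral would still be applied afterwards by rotating the resulting tensor, as described above.

import numpy as np

def wing_inertia(m, c_root, c_tip, b, t, n=1_000_000, seed=0):
    """Monte Carlo moments of inertia (about the panel's own CG) of a trapezoidal
    plate: chord tapers linearly from c_root at y=0 to c_tip at y=b, constant
    thickness t, uniform density, total mass m.  x rearward, y spanwise, z up."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, c_root, n)
    y = rng.uniform(0, b, n)
    z = rng.uniform(-t / 2, t / 2, n)
    chord = c_root + (c_tip - c_root) * y / b     # local chord at spanwise station y
    inside = x <= chord                           # keep points inside the trapezoid
    x, y, z = x[inside], y[inside], z[inside]
    x, y, z = x - x.mean(), y - y.mean(), z - z.mean()   # measure about the CG
    Ixx = m * np.mean(y**2 + z**2)
    Iyy = m * np.mean(x**2 + z**2)
    Izz = m * np.mean(x**2 + y**2)
    Ixy = m * np.mean(x * y)          # the only non-trivial product of inertia here
    return Ixx, Iyy, Izz, Ixy

# made-up numbers, just to show the call
print(wing_inertia(m=2000.0, c_root=6.0, c_tip=2.0, b=15.0, t=0.5))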
Concatenations with preimages
Given a function f : X → Y, how to prove f(f −1(B)) ⊆ B and f −1(f(A)) ⊇ A for all subsets A of X and all subsets B of Y? --88.78.2.122 (talk) 08:52, 7 November 2009 (UTC)
- Just use the definitions of the preimage f −1(B) ("everything that goes into B") and of the image f(A) ("where A goes"). So f(f −1(B)) ⊆ B reads "everything that goes into B goes into B", and f −1(f(A)) ⊇ A reads "A is contained in the set of everything that goes where A goes".
- Also notice the useful equivalence, for any subset A of X and any subset B of Y:
- f(A) ⊆ B ⇔ A ⊆ f −1(B),
- from which you can deduce both your inclusions starting respectively from f −1(B) ⊆ f −1(B) and f(A) ⊆ f(A). --pma (talk) 09:03, 7 November 2009 (UTC)
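For anyone who wants to see the two inclusions (and the equivalence above) in action on a concrete case, here is a small brute-force Python check over all subsets of a toy function; the particular sets and map are arbitrary, chosen only for illustration.

from itertools import combinations

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}     # an arbitrary map X -> Y

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def image(A):
    return {f[x] for x in A}

def preimage(B):
    return {x for x in X if f[x] in B}

assert all(image(preimage(B)) <= B for B in subsets(Y))       # f(f^-1(B)) ⊆ B
assert all(A <= preimage(image(A)) for A in subsets(X))       # A ⊆ f^-1(f(A))
assert all((image(A) <= B) == (A <= preimage(B))              # f(A) ⊆ B  iff  A ⊆ f^-1(B)
           for A in subsets(X) for B in subsets(Y))
print("all three properties hold for this example")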
- Just for talking, you may consider the power set of X and Y, namely P(X) and P(Y), as small categories, where objects are subsets, and arrows are inclusions. Then the image map f_* : P(X)→P(Y) and the preimage map f^* : P(Y)→P(X) are adjoint functors, and of course the inclusions you wrote are just the counit and unit of the adjunction f_* ⊣ f^*. (Check closure operator also).
- PS: Talking about Html, does anybody know how to get the ℘ a bit higher, and to get a bit lower the star in f* ? (I got it in quite a weird way). And why I can't see ⊣ for ? --pma (talk) 11:25, 7 November 2009 (UTC)
- <span style="vertical-align:20%">℘</span> ℘ (but not abusing Weierstrass' p-function symbol for power set is a better option), ''f''<sub style="vertical-align:-80%">*</sub> f*. Note that fine-tuning like this is very font-dependent, hence you cannot do it in a way which would reliably work for all readers. As for ⊣, there is no such thing among the XHTML 1.0 named character entities[5] (Wikipedia's HTML tidy code will thus remove it even if a particular browser could support it as an extension). The character appears in Unicode on position U+22A3, hence you can get it by ⊣ (or ⊣) in HTML: ⊣⊣ (or simply input the character directly: ⊣). — Emil J. 15:03, 7 November 2009 (UTC)
- Thank you! Nice emoticon :⊣) also --pma (talk) 15:37, 7 November 2009 (UTC)
plausibility of mathematical completeness
It's the mid 1920's and you're a top mathematician being recruited to the Hilbert school. Your mission: prove Hilbert's famous contention (from "On the Infinite"):
- As an example of the way in which fundamental questions can be treated I would like to choose the thesis that every mathematical problem can be solved. We are all convinced of that. After all, one of the things that attracts us most when we apply ourselves to a mathematical problem is precisely that within us we always hear the call: here is the problem, search for the solution; you can find it by pure thought, for in mathematics there is no ignorabimus.
The subtlety of Gödel's 1931 incompleteness proof may have escaped everyone else, but still, those guys weren't slouches. My question is why could anyone ever have been convinced Hilbert's thesis was true?
Consider a simple enumeration of all the sentences φ1, φ2,... over, say, the language of PA. For each sentence φk make a red dot on the number line at the point k, and write the formula denoted by φk next to the dot. At the end of this process you have an infinite row of red dots, each labelled by a formula. Now go through again and enumerate the theorems t1, t2,... and for each ti, find the dot you made earlier for that formula and flip its color from red to blue. (If it's already blue, don't do anything; this will happen a lot because you will generate all the theorems infinitely often). Also similarly flip the color of the dot for ~ti to blue. In other words you have colored the decidable formulas' dots blue and left the undecidable ones (if there are any) red. Hilbert says that at the end of this, you will have no red dots left, just blue ones. This seems like a pretty elementary description, certainly accessible and hopefully natural to any logician of that time.
Of course we know today that there will necessarily be red dots at the end, and of course we can reasonably say that until 1931, Hilbert could reasonably harbor some hope that there wouldn't be red dots. But what could make him (or anyone) so sure that there'd be no red dots? The process is just a big combinatorial mess, some weird contraption jumping around all over the place flipping dots from red to blue, and the exact pattern of flipping depends intimately on the exact make-up of the contraption (i.e. the content of the theory being studied). You could construct axioms that resulted in only flipping the prime-numbered dots, or whatever. Was there any mathematically reasonable plausibility argument that the dots would end up all blue for a theory like PA or ZF? Did they expect too much, based on the completeness of simpler theories like Presburger arithmetic? Or was it just a bogus emotional conviction people had back then, that's alien to us now? 69.228.171.150 (talk) 12:22, 7 November 2009 (UTC)
- While waiting for a more technical explanation, my answer to your question is: evidently, at that time it was not at all trivial. Speaking in general, it seems to me that this question reflects a common ahistorical attitude of us contemporary/modern people towards people of the past/ancient people (your post is still correct, I'm just taking the opportunity). We usually say "how could they be so naive to believe something, and not to see what for us is such a triviality". The conclusion is: "we are then smarter than them" (usually only thought), and a certain condescending air. I think I have enough evidence that our brain is not better (experiment: just ask for a formula for the solution of the third degree equation to a mathematician who doesn't know it). If today so many ideas are so easily understandable for so many people of medium intelligence, whereas once they were so difficult and just for a small elite, this seems to me the proof that the people who created these ideas, and those who prepared the ground for them, were giants, and we should have the greatest gratitude and admiration for them. --pma (talk) 14:02, 7 November 2009 (UTC)
- I'm not at all saying it should have been obvious in the 1920's that PA was incomplete. I'm just wondering why anyone would have expected it to be complete. Certainly it was an important question to investigate, but why did almost everyone expect a particular outcome instead of saying "gee, I don't know"? Also I'm talking about the mid 1920's (maybe I should have said late 1920's, since "On the Infinite" was written in 1925), not the era of Frege. The groundwork was pretty well established by then. (Actually, one exception: Emil Post anticipated the incompleteness theorem in 1921, but wasn't able to put a proof together.) There are plenty of statements today that we think are true but for which we don't have a proof (e.g. Riemann hypothesis, P!=NP, FLT until recently, etc). But we can make reasonable plausibility arguments for each of those, even if we turn out to be wrong afterwards. I'm wondering whether there was a plausibility argument for completeness. 69.228.171.150 (talk) 20:58, 7 November 2009 (UTC)
- My guess is that it was based on the notion that mathematical truth derived entirely from axiomatic proof, combined with the optimistic faith that truth is knowable ("we must have no ignorabimus"). Had Hilbert really been thinking of the question in the combinatorial light you present, I doubt he would have been so sure; it was because he invested the combinatorics with meaning that he thought it had to be this way. I have not extensively read Hilbert and am speculating a bit here, but this makes sense to me. --Trovatore (talk) 21:29, 7 November 2009 (UTC)
Calculus with imaginary constants
Is calculus with imaginary constants, say , just the same as when the constant is real? 131.111.216.150 (talk) 15:43, 7 November 2009 (UTC)
- Broadly. If you want to evaluate an integral like where f is a complex valued function of a real variable and a and b are reals, then you can just write f as the sum of its real and imaginary parts f = u + iv, and then the integral of f is defined to be the integral of u plus i times the integral of v (as long as the integrals exist). However you do have to be a little careful with things like change of variables. Here's an example: consider the real integral . Making the substitution y = ix, so dy = i dx, doesn't change the limits, and x^4 = (y/i)^4 = y^4. So . It follows the integral is zero....but it obviously isn't. Tinfoilcat (talk) 16:38, 7 November 2009 (UTC)
- Making that substitution certainly does change the limits, and the domain integrated over. It changes the integral from one on the real line from -∞ to ∞ to one on the imaginary line from -i∞ to i∞. Algebraist 19:04, 7 November 2009 (UTC)
- It doesn't change the "limits" if you work on the Riemann sphere, where there's only one infinity. But of course it changes the path - that's why there's no paradox here. Tinfoilcat (talk) 19:26, 7 November 2009 (UTC)
- Making that substitution certainly does change the limits, and the domain integrated over. It changes the integral from one on the real line from -∞ to ∞ to one on the imaginary line from -i∞ to i∞. Algebraist 19:04, 7 November 2009 (UTC)
To answer the OP: Yes, you treat complex constants just the same as real constants. For example:
where c1, c2 ∈ C are constants of integration. It's usual to use z as a complex variable instead of the familiar x for a real variable. Calculating complex integrals is a bit different to calculating real ones. Given two real numbers, say x1 and x2, there's only one way to integrate a function between x1 and x2 since the real numbers form a line. When we integrate over the complex numbers we have a lot more freedom. Given two complex numbers, say z1 and z2, we can integrate a function along a path which starts at z1 and ends at z2 since the complex numbers form a plane. In practice we just make some substitutions and calculate a simple integral. Say you want to integrate ƒ(z) = 1/z around the unit circle, well the unit circle is given by the contour γ = {e^(it) : 0 ≤ t < 2π}. So we make the substitution z = e^(it) (and hence dz = ie^(it)dt) and integrate for 0 ≤ t < 2π, i.e. we calculate ∫_γ dz/z = ∫_0^(2π) ie^(it)/e^(it) dt = i ∫_0^(2π) dt = 2πi.
Notice that γ starts and ends at the same point, namely 1, but the integral is not zero (as it would be in the real case). This is where complex analysis starts to get interesting…
~~ Dr Dec (Talk) ~~ 14:21, 8 November 2009 (UTC)
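For a quick numerical sanity check of that contour integral, here is a short numpy sketch (illustrative only): a Riemann sum around the unit circle returns approximately 2πi, not 0.

import numpy as np

N = 100_000
t = np.linspace(0.0, 2*np.pi, N, endpoint=False)
z = np.exp(1j * t)                     # points on the contour gamma
dz = 1j * np.exp(1j * t) * (2*np.pi / N)
print(np.sum(dz / z))                  # approx 0 + 6.283185...j  =  2*pi*i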
Mathematical Sequences and inductive reasoning
Use inductive reasoning to write the 10th term in each sequence below. THEN write a formula for the nth term.
a) 4, 13, 22, 31, 40...
b) 3, 5, 9, 17, 33...
c) 0, 6, 24, 60, 120...
d) 5, 10, 17, 28, 47, 82... —Preceding unsigned comment added by Whitesox1994 (talk • contribs) 18:36, 7 November 2009 (UTC)
- We can't do homework for you. And even if it is not homework, at least show us your current attempt at solving these problems. You will find some inspiration at arithmetic sequence for the first answer. Zunaid 19:23, 7 November 2009 (UTC)
- Such sequences are often best explored by calculating successive differences between the terms, sometimes continuing the process by calculating differences of the differences. A clue for d: it's growing too quickly for that to be useful.→81.153.218.208 (talk) 19:51, 7 November 2009 (UTC)
- It's useful for (d), it just won't give you the answer by itself. Do the process twice and you'll get a sequence you should immediately recognise. You can get the next term in the sequence from that, but to get a formula from it will require some cleverness. --Tango (talk) 20:18, 7 November 2009 (UTC)
Your question is imprecise and unfortunately illustrates the "sort of mathematics" schools are teaching students these days. In many branches of applied mathematics, sequences will not behave in such a simple manner, and thus basic arithmetic is of no use (as pma notes below, one usually needs a large set of data to extrapolate, and even then, other techniques must be used for extrapolation - one can never be sure that any particular extrapolation of a sequence is correct). In any case:
- (a) The successive differences are: 13 - 4 = 22 - 13 = 31 - 22 = 40 - 31 = 9. Since the difference between two terms of the sequence remains constant, one may conclude that "adding nine to the previous term gives the next term". Now, you should determine a formula for the nth term (which will of course contain n as a variable). Using this formula, compute the tenth term of the sequence.
- (b) The successive differences are: 5 - 3 = 2, 9 - 5 = 4, 17 - 9 = 8, 33 - 17 = 16. Writing the differences as a sequence - 2, 4, 8, 16..., we observe that the differences are powers of two. Thus the desired formula for the nth term is 3 + 2 + 4 + ... + 2^(n-1), that is, 3 plus the sum of the first n-1 powers of 2 (which simplifies to 2^n + 1). Now, compute the tenth term of the sequence using this formula.
- (c) Do this by yourself using the other answers I have given you. To learn mathematics is to do it (and think about it). Furthermore, mathematics cannot be done by watching someone else do it - often one must invent new methods to tackle new questions; methods not previously known. I hope that by doing (c), and by comparing it to the solutions to (a), (b) and (d), you will appreciate this.
- (d) The successive differences are: 10 - 5 = 5, 17 - 10 = 7, 28 - 17 = 11, 47 - 28 = 19, 82 - 47 = 35. Writing the differences as a sequence - 5, 7, 11, 19, 35..., we shall now compute the differences of the differences - 2, 4, 8, 16...; a sequence which you should recognize. With (a), the successive differences remained constant - thus the formula for (a) involved only n (and not n^2 or any powers of n). On the other hand, in (b), the differences were powers of 2, and thus we took the summation of all powers of 2 in our formula. Likewise, in this case, the differences of the differences are all powers of 2 - working in a similar manner to (b), the formula for the nth term is 5 + Σ_{i=1}^{n-1} (5 + Σ_{j=1}^{i-1} 2^j), where we adopt the convention that an empty sum is necessarily zero.
Although I have given you some of the answers here, we are specifically advised not to do so. Therefore, instead of merely accepting the answers as true, you can do a couple of simple exercises - for instance, check that for the first few values of n, the nth term agrees with the formula I have given. Also, try to invent new sorts of sequences and apply the above methods to those - this will be good practice. Finally, I have not given you the actual answers for (c) and (a), so computing the nth term for these sequences should be your first task - spend at least 30 minutes reflecting on these exercises (and their solutions), if you want to benefit from them (and be able to do future homework without assistance). In the future, we probably will not give you solutions unless you show us your work. Hope this helps. --PST 02:39, 8 November 2009 (UTC)
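To illustrate the difference-table reasoning above in code, here is a short Python sketch (illustrative only, not part of the original answers) applied to sequences (a) and (d), under the usual assumption that the observed pattern in the differences continues.

def diffs(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

# (a) 4, 13, 22, 31, 40, ...: the first differences are constant (9),
# so keep adding 9.
a = [4, 13, 22, 31, 40]
while len(a) < 10:
    a.append(a[-1] + 9)
print(a[9])          # 85, consistent with a_n = 9n - 5

# (d) 5, 10, 17, 28, 47, 82, ...: the second differences 2, 4, 8, 16 double
# each time; extend them, rebuild the first differences, then the terms.
s = [5, 10, 17, 28, 47, 82]
d1, d2 = diffs(s), diffs(diffs(s))
while len(s) < 10:
    d2.append(2 * d2[-1])
    d1.append(d1[-1] + d2[-1])
    s.append(s[-1] + d1[-1])
print(s[9])          # 1054, consistent with a_n = 2**n + 3*n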
a) 4, 13, 22, 31, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...
b) 3, 5, 9, 17, 33, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,......
c) 0, 6, 24, 60, 120, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,......
d) 5, 10, 17, 28, 47, 82, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...... --pma (talk) 22:44, 7 November 2009 (UTC)
- Unfortunately, we are fighting a losing battle against maths teachers on this one - I think it is best to just give in as far as answering questions on the Ref Desk goes. --Tango (talk) 03:59, 8 November 2009 (UTC)
- I think it's really up to the student to be in control of his learning. Honestly when I was in high school I just can't stand my English classes; I have absolutely no idea what the poems are talking about and it just hurts to be writing essays explaining Keats. Although the education is deteriorating, it does help to separate the more able students from the rest (who can learn by themselves), so I wouldn't say it's totally undesirable. Najor Melson (talk) 06:15, 8 November 2009 (UTC)
It is a good idea to repeatedly compute the negative differences b_n = a_n − a_{n+1}. Starting with
5 10 17 28 47 82
you get row by row
5 10 17 28 47 82
-5 -7 -11 -19 -35
2 4 8 16
-2 -4 -8
2 4
-2
Now extend the first column downwards and then compute the columns one by one in the same way. Extending by zeroes you get the answer 444. (This sequence is a polynomial of degree 5).
5 10 17 28 47 82 133 204 299 444
-5 -7 -11 -19 -35 -51 -71 -95 -145
2 4 8 16 16 20 24 50
-2 -4 -8 -14 -4 -14 -26
2 4 6 8 10 12
-2 -2 -2 -2 -2
0 0 0 0
0 0 0
0 0
0
Extending the first column in a periodic way you get 1054:
5 10 17 28 47 82 149 280 539 1054
-5 -7 -11 -19 -35 -67 -131 -259 -515
2 4 8 16 32 64 128 256
-2 -4 -8 -16 -32 -64 -128
2 4 8 16 32 64
-2 -4 -8 -16 -32
2 4 8 16
-2 -4 -8
2 4
-2
Bo Jacoby (talk) 07:57, 8 November 2009 (UTC).
- I'd simplify this method a bit. For the last case, I'd only go down 2 levels, skip worrying about the signs, and align them like so:
5 10 17 28 47 82 149 280 539 1054
 5  7 11 19 35 67 131 259 515
 2  4  8 16 32 64 128 256
- I also believe you made a mistake in the last term. StuRat (talk) 18:33, 8 November 2009 (UTC)
- Thank you! I now corrected my error. The idea is that the row of function values is transformed into the column of coefficients. T(5,10,17,28,47) = (5,-5,2,-2,2). And vice versa: T(5,-5,2,-2,2) = (5,10,17,28,47). Extending the column of coefficients with zeroes leads to a polynomial extension of the row of function values. T(5,-5,2,-2,2,0,0,0,0) = (5,10,17,28,47,82,149,280,539,1054). The transformation T involves subtraction only, but no multiplication, (and so I was tempted to do it by hand, and made the error). The signs are essential for making T an involution, T = T^(-1). Bo Jacoby (talk) 00:20, 9 November 2009 (UTC).
- Yes, but my assessment is that an involution is a bit beyond a student who needs help finding the 10th term in the sequence 4, 13, 22, 31, 40..., and I always try to tailor my answers to the target audience. StuRat (talk) 00:51, 9 November 2009 (UTC)
November 8
Equation Help
Hi-I need to solve this equation
a(dv/dy)=b+c(dv^2/dy^2) where a b and c are constants
I really don't know where to begin with this-I've tried using integrating factors, bur still can't do it.Cuban Cigar (talk) 03:17, 8 November 2009 (UTC)
- You wish to solve a(dv/dy) = b + c(d^2v/dy^2), or, if we re-write, cv'' − av' + b = 0. Observe that this is a homogeneous linear equation with constant coefficients. This site should help you obtain the solution. Hope this helps. --PST 03:46, 8 November 2009 (UTC)
- Sorry PST, the equation cv'' − av' = 0 is homogeneous, but cv'' − av' + b = 0 is not. Bo Jacoby (talk) 14:33, 8 November 2009 (UTC).
- Well the transformation v=z+by/a should cure that. Dmcq (talk) 18:06, 8 November 2009 (UTC)
I've always liked to use the Laplace transform for problems like this. You start with a differential equation and then turn it, via the Laplace transform, into an algebraic equation. You solve that algebraic equation and then turn the solution, via the inverse Laplace transform, into the solution of the original differential equation. It's a bit hard at first, but once you begin to recognise a few objects and learn a few techniques it'll all get much easier. I got (after a couple of not-quite-right solutions):
where v(0) and v'(0) are the initial conditions for the ODE. If you don't have initial conditions then replace them with arbitrary constants. ~~ Dr Dec (Talk) ~~ 19:15, 8 November 2009 (UTC)
- Sorry Dr Dec, this doesn't take the initial data v(0) and v'(0). --pma (talk) 22:40, 9 November 2009 (UTC)
- Huh? What "doesn't take the initial data v(0) and v'(0)"? ~~ Dr Dec (Talk) ~~ 19:42, 10 November 2009 (UTC)
- I'm just saying that the expression you wrote on the RHS, when computed for y=0, doesn't assume the value that you denoted v(0). Maybe there's a typo somewhere. --pma (talk) 20:03, 10 November 2009 (UTC)
- You're quite right! I missed a factor of c. The solution should have been:
- ~~ Dr Dec (Talk) ~~ 23:35, 11 November 2009 (UTC)
- You're quite right! I missed a factor of c. The solution should have been:
Someone was telling me that I could use Integrating factor for this; but I'm not given a value for the constants, so I'm not sure what the value of the discriminant would be. Does anyone know how to do it using integrating factor method?Cuban Cigar (talk) 06:06, 9 November 2009 (UTC) —Preceding unsigned comment added by Cuban Cigar (talk • contribs) 06:05, 9 November 2009 (UTC)
- Did you try using that substitution I gave above? The answer should be fairly obvious after doing that. You don't need integrating factors or anything like that. Dmcq (talk) 13:05, 9 November 2009 (UTC)
- Btw, ehm, what are z and y in your substitution? --pma (talk) 22:25, 9 November 2009 (UTC)
- I used y instead of x because that's what the original poster used. The w is the new variable instead of v. The solution below has a w that is my w differentiated unfortunately but putting something in instead of w' would be the next step. Dmcq (talk) 00:06, 10 November 2009 (UTC)
- OP: follow Dmcq's hint! Then, if you also want a bit of theory to approach similar problems, remember that the general solution of a non-homogeneous linear equation (here your differential equation cv" - av' + b = 0) is the sum of (I) the general solution of the associated homogeneous equation (here it is cv" - av' = 0) plus (II) a particular solution of the non-homogeneous equation. Now you have two sub-problems, though easier.
- (I) cv" - av' = 0 is really easy. Call v'=w. Then you want w'=(a/c)w. Write the well known general solution w and then get v as (general) antiderivative of w. You will have 2 arbitrary constants (as they call'em) for the general solution, as it has to be.
- (II) a particular solution for cv" - av' + b = 0. Although there is a general method to do that, often a particular solution is found at a glance looking for particular solutions of special smart form. Here the hint is: try a v linear, that is, of the form v(y)=ky for some suitable constant k. Plug it in the equation and choose the constant k that makes ky a solution.--pma (talk) 22:10, 9 November 2009 (UTC)
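As a sanity check of this approach, here is a short sympy sketch (illustrative only): it asks dsolve for the general solution and verifies directly that C1 + C2*exp(a*y/c) + (b/a)*y, the homogeneous-plus-particular combination described above, satisfies cv'' − av' + b = 0.

import sympy as sp

y = sp.symbols('y')
a, b, c = sp.symbols('a b c', positive=True)
v = sp.Function('v')

# general solution from dsolve
print(sp.dsolve(sp.Eq(c*v(y).diff(y, 2) - a*v(y).diff(y) + b, 0), v(y)))

# direct check that C1 + C2*exp(a*y/c) + (b/a)*y solves the equation
C1, C2 = sp.symbols('C1 C2')
g = C1 + C2*sp.exp(a*y/c) + b*y/a
print(sp.simplify(c*g.diff(y, 2) - a*g.diff(y) + b))   # 0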
Ah yes I got it now thanks all for your help =) Cuban Cigar (talk) 10:35, 10 November 2009 (UTC)
An alternative method - Let u = dv/dy; then au = b + c(du/dy) and you have a first order ODE to solve, rather than a second order. Once you have found u, integrate to get v. Readro (talk) 11:03, 10 November 2009 (UTC)
The original equation should have been a(dv/dy) = b + c(d^2v/dy^2). If we're going to criticise posters for lack of precision/rigour (see the question above on the 10th terms of some sequences, for example), then let's use the right notation. Actually, I think that some of the real mathematicians here are far too critical of naive (in their terms) posters, making it unlikely that they'll come back. Ego trip, anyone?→86.132.164.242 (talk) 13:20, 10 November 2009 (UTC)
Hi, 86... Here we are mainly virtual mathematicians. Too critical: you're possibly right, but (my) problem is I can't see any criticism towards the posters... could you explain better? The notation for the derivative is v', why not? And Dr Dec used the variable y just because the OP used y too in his/her first post. --pma (talk) 15:54, 10 November 2009 (UTC)
- Why does the variable have to be x? Surely the labelling of a variable is unimportant. I could replace y with a heart and still have the same ODE: cv''(♥) − av'(♥) + b = 0.
- ~~ Dr Dec (Talk) ~~ 19:47, 10 November 2009 (UTC)
- even nicer! --84.221.208.19 (talk) 20:44, 10 November 2009 (UTC)
- IT MUST BE X!!! , um sorry, I've come across a lot of that recently and it must be rubbing off. The hearts are nice. Very mellow. :-) Dmcq (talk) 22:51, 11 November 2009 (UTC)
Which school are you?
I've been reading sections on mathematical logic on this page, and I wonder what the current consensus is. Be aware that I have no formal background in mathematical logic.
I tend to prefer Poincare's style, where mathematics is about imagining weird spaces and having fun, satisfying our intuition, instead of the "barren landscape of logicism". I'm not exactly an intuitionist. I don't have a preference for any particular school, and find right and wrong in every school. I'm guessing most people are platonists? This is not surprising, since when we are thinking we visualize pictures and pretend the objects in question really exist in some other dimension. Najor Melson (talk) 03:28, 8 November 2009 (UTC)
- I went through a maths degree without really hearing about these different schools. I've only heard of them on Wikipedia. I think most mathematicians don't care about them, they just get on with it. It's only logicians that get involved with this kind of thing and they are just one small group of mathematicians. --Tango (talk) 03:56, 8 November 2009 (UTC)
- Although I agree with the above comment, I feel that its truth is a rather unfortunate fact of life. Firstly, logic and axiomatic set theory are interesting in their own right - mathematicians who claim that "they are unimportant" (these mathematicians are around, unfortunately), are ignorant people who do not appreciate the work of others. Furthermore, I am sure that it is beyond the capacity of some of these mathematicians to understand these branches (logic and intuition are different sorts of thinking in some respects). Secondly, logic is a fundamental aspect of mathematics - it can be used to not only clarify intuition, but also to develop insights which cannot be determined by mere intuition. Succinctly, as mathematicians, we should respect the work of others even if we feel that it is "not worthwhile"; there are some snobs out there who do not know anything about axiomatic set theory/logic, but still claim that it is "not important". That said, Tango is right that there are few logicians around; my point is that it does not make their work less important than that of other mathematicians. --PST 06:48, 8 November 2009 (UTC)
Yes, I agree with PST. Logic is the far opposite of intuition; I used to have a BIG problem with Cantor's concept of cardinality, but now it's like my left hand. I find intuition fails badly especially in topology (or maybe that's because topology is the only field I'm just familiar with, I'm a learner not a professor), and many times reasoning with pure intuition leads to contradictions, so like you said we need logic to develop our theory further. However, I believe to most logicians proofs should be absolutely formal, and to me that's not only 1000+ pages long but also a "barren landscape". I think they'll sacrifice everything to make sure mathematics is contradiction free. Strangely enough I was planning to be a string theorist, and still am, but somehow got into mathematics and it just won't let go! Najor Melson (talk) 12:18, 8 November 2009 (UTC)
- I don't know any logicians who, in practice, think that "proofs should be absolutely formal". It is obvious to everyone that that path is not open to human mathematicians. There are a lot who think in principle that all our informal proofs should be sort of blueprints for formal proofs, but that's a bit different. Then there's the automated theorem proving community, but again that's not really what we're talking about.
- In practice, logicians use intuition as much as anyone. Of course it's an intuition you develop; the intuitions that you have after learning a subject are different from the ones you have at the start. --Trovatore (talk) 07:40, 9 November 2009 (UTC)
May be the original question has more to do with philosophy of mathematics, rather than logic? (Igny (talk) 03:44, 10 November 2009 (UTC))
- I just happened to see a blog post by Philip Wadler about how computers have led to "computational thinking" as an important mode of thought. It links to several articles related to the topic. Certainly, programming and algorithms are endlessly fascinating to some people, and are the subject of plenty of intuition. The Curry-Howard correspondence explains how computer programs and mathematical proofs are in some sense the same thing. Every teenager who stays up all night getting their wikipedia bot to work is experiencing the thrill of proof and logic, even if they don't realize it. 69.228.171.150 (talk) 02:23, 11 November 2009 (UTC)
Generalization of Fermat's little theorem
What is the generalization of Fermat's little theorem for finite fields as referenced here? Thanks--Shahab (talk) 06:27, 8 November 2009 (UTC)
- They mention it here Finite_field#Analog_of_Fermat.27s_little_theorem. Rckrone (talk) 08:08, 8 November 2009 (UTC)
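For readers who want to see that statement in action, here is a small illustrative sketch (my own, not from the thread): it builds GF(4) as F_2[x]/(x^2+x+1) and checks the finite-field analogue a^q = a for every element.
 # GF(4) elements are pairs (c0, c1) meaning c0 + c1*x, with x^2 = x + 1 over F_2
 def mul(a, b):
     c0 = (a[0] * b[0]) ^ (a[1] * b[1])
     c1 = (a[0] * b[1]) ^ (a[1] * b[0]) ^ (a[1] * b[1])
     return (c0, c1)

 def power(a, n):
     r = (1, 0)
     for _ in range(n):
         r = mul(r, a)
     return r

 for a in [(0, 0), (1, 0), (0, 1), (1, 1)]:
     print(a, power(a, 4) == a)   # True for all four elements: a^4 = a in GF(4)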
i dunno, but in my math movie Fermat unveils it with: "say hello to my leetle friend." —Preceding unsigned comment added by 92.230.70.105 (talk) 11:48, 8 November 2009 (UTC)
a trick to make a positive-expectation, but in actuality no risk of having to pay out, lotto?
The reason mathematicians don't like lotteries is that for every dollar you put in (buying tickets), your expected return is as little as 17 cents. Obviously for them, it would be no problem to play a lottery where for every dollar they put in, they would expect to get $100. As for me, the person who would love to provide this lottery, I would do it on the following terms: The price of the lottery is $1, the chances of winning are 1 in a quadrillion, and the payout is $100 quadrillion. Obviously the governments of the world, collectively, could insure this amount, which is 100,000 trillion, although if anyone won they may have to pay it out over a few hundred years. However, they would have no problem taking this risk, as if every single person on Earth transferred all the cash they had into lottery tickets in this scheme, the risk of having to pay out would still be completely negligible. If even such a negligible risk would be too substantial to the governments of the world (if in addition to cash, everyone were to also transfer EVERY asset, of any kind, into tickets, the risk of payout might reach 1:100), then they could increase the odds to 1 in 100 quadrillion, the payout to $10,000 quadrillion, and the time to payout to many hundreds of years or more. That would certainly leave them at peace that the money they would receive is "free". Now, I would be very happy to take a 1 in a quadrillion risk for far less than $1, but the problem is that I am not actually in the same position as all of the world's governments, collectively. I simply can't offer the reward. So, my question, is this scheme, to "rip off" autistic mathematicians, who have no sense of practicality but only mathematical definitions such as "expected return" -- in this case, by making my lotto tickets more attractive than any other investment they can make on those purely mathematical grounds --, is this scheme salvageable in some way? Is there some trick I can use to make my payout credible, while remaining far too unlikely for me to seriously worry about? I can't think of any tricks of this kind. Maybe there would be some way to take a very large payout, along with an astronomically small risk of this, and somehow divide it using time or some other method so that in the end the actual cash I have on hand is sufficient for the payout, while the risk of payout remains astronomically small, and at the same time, mathematically, the expected payout remains strongly positive ($100 for every $1 invested)? I am open to any practical solution, including perhaps signing contracts with all the collective governments of the world, who would take the risk of having to pay out over 10,000 years. Hell, it can be paid out over 100,000 years, there are at least SOME mathematicians who would simply not see a practical objection to that, not having learned about utility and so on. As you can see this whole vein of thought is not completely serious, however I wonder if you have any thoughts on this matter which would make the thing salvageable. Thank you! 92.230.70.105 (talk) 11:29, 8 November 2009 (UTC)
- Actually I think this is a pretty good question to ask a mathematician: "Would you take the risk of having to pay out $100 trillion, with the chances of having to do so being 1 in a trillion, for $20 right now?" For me, that's a free $20 right there. I'm guessing strongly that most mathematicians have a different answer :) —Preceding unsigned comment added by 92.230.70.105 (talk) 11:41, 8 November 2009 (UTC)
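A quick back-of-the-envelope check of the figures quoted in the two posts above (nothing here beyond those numbers):
 p, liability, fee = 1e-12, 1e14, 20.0   # the $20 offer: 1-in-a-trillion chance of owing $100 trillion
 print(fee - p * liability)              # -80.0: the expectation is negative even though the risk is tiny
 p2, payout2, price2 = 1e-15, 1e17, 1.0  # the $1 ticket from the original scheme
 print(p2 * payout2 - price2)            # 99.0: "positive expectation" for the buyer
 print(7e9 * p2)                         # about 7e-6: rough chance of ever paying out if everyone on Earth buys one ticket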
- The logic, in general, holds, except for the main flaw that no-one, in whatever position, would play it. There are lots of metrics in which the system falls flat, most notably timeframe of return, and I doubt anyone would take that risk. They'd be better off trying to fool less intelligent people, for a short while. Seeing other people win is a very important part of why people play lotteries. In any case, it's much more profitable for a government to run a standard one and take the money from that. (Or, run one to cover the costs of a specific project, perhaps an Olympic bid?) - Jarry1250 [Humorous? Discuss.] 11:45, 8 November 2009 (UTC)
- Yes, time-frame is a problem. Do you have any idea for a trick that could overcome this? 92.230.70.105 (talk) 13:04, 8 November 2009 (UTC)
- The huge payout occurs so late that you expect it to be uninteresting, but your eternal soul must pay for ever. See Faust. Bo Jacoby (talk) 14:53, 8 November 2009 (UTC).
- The opening sentence says that mathematicians don't like to play the lottery; that's not true. The reasoning implies that mathematicians don't like to gamble; that's not true either. If these kinds of sweeping statements were true then most doctors (of the medical persuasion) would be teetotal non-smokers. ~~ Dr Dec (Talk) ~~ 15:04, 8 November 2009 (UTC)
- You seem to think I deduced my conclusion about mathematicians from first principles; not at all, it is my first-hand experience. 92.230.64.60 (talk) 18:09, 8 November 2009 (UTC)
- The original poster might or might not know that some mathematicians are aware of subtler concepts than expected value, such as risk (and risk aversion), decision theory, prospect theory and so on, and have been since the time of Daniel Bernoulli at the very least... Goochelaar (talk) 15:30, 8 November 2009 (UTC)
- I believe this strategy was already tried, only on a somewhat smaller scale. Somebody offered lottery tickets with a very small chance that one person might win a billion dollar prize. While they didn't have the billion dollars to cover the risk, they were able to convince an insurance company, Lloyd's of London, if memory serves, to cover the risk. StuRat (talk) 18:15, 8 November 2009 (UTC)
- The diminishing marginal utility of money is significant here. $100 quadrillion isn't really worth significantly more than $100 billion - you can't actually buy anything you couldn't buy with the lesser amount. That means if you work out the expected *utility* rather than the expected money, it is negative. That is why a sensible mathematician would not go for it. --Tango (talk) 22:19, 8 November 2009 (UTC)
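To put a number on the expected-utility point, here is a minimal sketch using logarithmic utility (Bernoulli's classic choice) and an assumed bankroll of $50,000; the ticket figures are the ones from the original post.
 from math import log

 W, price, p, payout = 50_000.0, 1.0, 1e-15, 1e17
 eu_buy = (1 - p) * log(W - price) + p * log(W - price + payout)
 eu_keep = log(W)
 print(p * payout - price)   # +99 dollars of expected money
 print(eu_buy - eu_keep)     # about -2e-5: expected log-utility still goes down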
- See St. Petersburg paradox#Expected utility theory. By increasing the payoff you can counter any utility argument. (Igny (talk) 03:31, 13 November 2009 (UTC))
- There are some pretty good articles about how to use the Kelly criterion to figure out how big a payout it takes to make buying a lottery ticket worthwhile, based on your already-available bankroll. Basically it's never worth your while unless you're already a millionaire. Try this one and it looks like a web search finds more. 69.228.171.150 (talk) 03:21, 9 November 2009 (UTC)
- That article suffers a major flaw - it assumes you won't share the jackpot. When there is a large jackpot a lot of people buy tickets which makes it very likely that the jackpot will be shared. You can't ignore the number of tickets sold in the calculation, even if you are only counting the jackpot and not the smaller prizes. Since the jackpot will probably be shared the expectation is probably still negative (we had this discussion not long ago and worked out you need on average 5 or 6 rollovers to get positive expectation, if memory serves - that wasn't a very precise calculation, though). With a negative expectation, the correct bet size is always zero (unless you can start your own lottery and thus bet negatively). --Tango (talk) 03:37, 9 November 2009 (UTC)
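For reference, a minimal sketch of the Kelly fraction being discussed, f* = (bp - q)/b for a bet with win probability p and net odds b; the lottery-style numbers below are made up for illustration, and jackpot sharing is ignored.
 def kelly_fraction(p, b):
     q = 1 - p
     return (b * p - q) / b

 print(kelly_fraction(1 / 14e6, 8e6))   # negative, so the correct stake is zero
 print(kelly_fraction(0.55, 1.0))       # a favourable even-money bet: stake 10% of bankroll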
Vectors of the same magnitude
Let u and v be two elements in a normed vector space such that ||u|| = ||v||. Is there any nice Latin or Greek word for this condition, as a property of this pair of vectors? Isometric does not feel right (I want it to be a property of linear transformations, surfaces etc.). --Andreas Rejbrand (talk) 14:29, 8 November 2009 (UTC)
- But the term isometry already exists and it is a property of a linear transformation. I don't think that there will be a standard word for two vectors of the same magnitude; it's too simple of a property. The vectors u and v would be two radii of a ball of radius ||u|| = ||v||. ~~ Dr Dec (Talk) ~~ 14:56, 8 November 2009 (UTC)
- Yes, I know that, of course (that was what I meant - to me a "isometry" is a kind of linear transformation or a function between two surfaces). OK. --Andreas Rejbrand (talk) 15:06, 8 November 2009 (UTC)
- What exactly are you looking for: a name for two vectors which have the same length in a given space or the name for two vectors whose lengths are kept equal under a given linear transformation? (I don't think there's a special name for the former. As for the latter, well, u and v belong to the same eigenspace.) ~~ Dr Dec (Talk) ~~ 15:16, 8 November 2009 (UTC)
- Exactly what I wrote, i.e. two vectors with the same norm. --Andreas Rejbrand (talk) 15:24, 8 November 2009 (UTC)
- But you said that you "want it to be a property of linear transformations, ...", by "it" I assumed you meant ||u|| = ||v||. I've un-done your last edit. You shouldn't change previous comments once people have started to reply: everyone will get lost. Either restate your point or strike the mistakes and re-write using <s>strike</s> to yield
<s>strike</s> and then leave a post to say that you've changed the post after people have replied. If someone has replied while you were writing a reply then leave a (ec) at the start of your post to mean edit conflict. It helps other people follow the thread days, weeks, or even months later. - ~~ Dr Dec (Talk) ~~ 15:38, 8 November 2009 (UTC)
- I said "Isometric does not feel right (I want it to be a property of linear transformations, surfaces etc.)." and by that I mean that that the it would not feel right to call the vectors "isometric", because "isometric" already has a very fundamental meaning in linear algebra, as a isometric linear transformation is a distance-preserving linear transformation (with a determinant of absolute value 1). I am sorry that the text was misinterpreted. I actually am a university-level math teacher, and right now I am teaching linear algebra, so I do know the topic. I just wondered if anyone knew a nice word meaning "of the same length" when it comes to vectors in a normed vector space. --Andreas Rejbrand (talk) 15:49, 8 November 2009 (UTC)
- ...and two surfaces are said to be isometric if there is an isometry between them, which is the case iff they have the same first fundamental form... --Andreas Rejbrand (talk) 15:58, 8 November 2009 (UTC)
- The parenthesis "(I want it..." is before the period of the sentence "Isometric does not...", and this supports my intended interpretation. Also, my intended interpretation makes sense. (Indeed, the word "isometric" is already defined for linear transformations and surfaces.) The other interpretation assumes that I do not know anything about math, and you should not assume that... --Andreas Rejbrand (talk) 16:00, 8 November 2009 (UTC)
- Also I am a sysop at the Swedish Wikipedia, so I know how to edit discussion pages. --Andreas Rejbrand (talk) 15:50, 8 November 2009 (UTC)
- OK, it seems like I overreacted a bit here. I am quite stressed today. --Andreas Rejbrand (talk) 16:19, 8 November 2009 (UTC)
- I never heard of a term for that. Possibly because saying "u and v have the same norm" is usually accepted as reasonably short. If for some reason you need to use it many times, and prefer not to express it by the formula ||u|| = ||v||, there should be an expression borrowed from elementary geometry that should work. (I'm not sure: how does one say when two line-segments have the same length? Congruent, equivalent, isometric? In principle isometric is not wrong, but it may be confusing as you and Dr Dec remarked). --pma (talk) 15:51, 8 November 2009 (UTC)
- Congruent seems to be correct for line segments of the same length. Use IE or Firefox and take a look at this page. ~~ Dr Dec (Talk) ~~ 17:25, 8 November 2009 (UTC)
- I've never heard of a word, but if I were to invent one I'd go with something like "cometric". "iso-" usually means invariant (which isn't what we want), "co-" (or "con-") means "same" ("concentric" means "same centre", "coincident" means going through the same point, "congruent" means "same shape", etc.). So, "cometric" would mean "same length". --Tango (talk) 18:57, 8 November 2009 (UTC)
- Well indeed the Greek ἴσος means equal, and in this exact meaning is used in all compounds including scientific ones. The prefix co- is from the Latin (cum = with) and refers more to being associated with / joined together with, rather than having the same quality (think of cosine, cofactor, complement but also the ones you quoted). "Cometric" sounds like a mixed Latin-Greek variant of symmetric (σύν=cum). --pma (talk) 19:29, 8 November 2009 (UTC). Why not then the all-Latin version commensurable... oops, it already exists ;-) --pma (talk) 19:36, 8 November 2009 (UTC)
Don't know of a standard term either; "equal length" vectors is the shortest description I can think of. Abecedare (talk) 19:44, 8 November 2009 (UTC)
- The term I have seen is "equipollent", but it is not in common use as, of course, "equal length" is a more readily understandable term for the concept. 71.182.248.124 (talk) 18:50, 9 November 2009 (UTC)
- Yes! Should I have to choose a one-word term for ||u||=||v||, this is definitely the best of all, as it refers to magnitude or intensity in general and not only to length.--pma (talk) 08:34, 10 November 2009 (UTC)
Pythagorean Theorem
Why are there so many proofs of the Pythagorean Theorem? Most other theorems only have one or two distinct proofs, yet there are several hundred proofs of the Pythagorean Theorem. --75.39.192.162 (talk) 18:40, 8 November 2009 (UTC)
- It is a very old, very simple theorem that everyone knows. That's probably the reason. It could also be something inherent about the theorem - it is so fundamental to mathematics that it crops up all over the place and can thus be proven using the tools of various different fields. One additional reason - it has so many proofs already that people think it is fun to come up with more. --Tango (talk) 18:49, 8 November 2009 (UTC)
Proof theory
As another question related to the last. I remember hearing about proof theory or something like that (I think it was at the 2007 BMC in Swansea). It's basically the metamathematics of proof. I've got a few questions to do with that:
- Is there a way to formally say that one proof is different to another?
- If so, then is there a way to assign a cardinality to the number of proofs of any given theorem?
- Finally, pushing the boat right out: is there a way of assigning a metric to the space of proofs of a given theorem?
~~ Dr Dec (Talk) ~~ 19:49, 8 November 2009 (UTC)
- If we interpret the second question as talking about known proofs, then the answers to all three questions are "yes". The real question is whether there is some natural/useful way to do it. I could say that two proofs are different if the MD5 hashes of their TeX code are different and use the Hamming distance on the MD5 hashes to define a metric. It would be a completely useless way to do it, though. If we could prove the total number of possible proofs using some useful equivalence relation that would be a fascinating result, but I'm sceptical that it can be done. --Tango (talk) 20:19, 8 November 2009 (UTC)
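Just to make the throwaway example concrete (and to show why it is useless as a notion of proof identity), here is a sketch of the MD5-plus-Hamming-distance "metric"; the two strings are stand-ins for TeX sources.
 import hashlib

 def md5_bits(tex_source):
     return format(int(hashlib.md5(tex_source.encode()).hexdigest(), 16), '0128b')

 def distance(a, b):
     return sum(x != y for x, y in zip(md5_bits(a), md5_bits(b)))

 proof_a = r"$2+2=4$ by direct computation."
 proof_b = r"Two plus two equals four, by direct computation."
 print(distance(proof_a, proof_b))   # some number near 64, which tells us nothing about the proofs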
- No, my question was deeper than this. A proof should be language invariant, so writing it in Spanish or in English should give the same proof (from a proof theory point of view). Clearly the LaTeX code would be quite different. It's more about the methodology, the mathematical machinery, and the like. Not just the prima facie appearance of the proof. I think that 69.228.171.150's post is closer to what I had in mind. ~~ Dr Dec (Talk) ~~ 22:15, 8 November 2009 (UTC)
- I understood that. I was giving an extreme example to demonstrate how your question was flawed. What you want to know is not whether it is possible (it certainly is, and I gave an example) but whether it can be done in some natural and interesting way. --Tango (talk) 02:17, 9 November 2009 (UTC)
The question of proof identity is a vigorous topic in proof theory and there is ongoing effort to clarify the concept, but there is no satisfactory definition right now. This paper describes some of the issues. You might look at this site for some references and info. We have a related stub article, deep inference. 69.228.171.150 (talk) 22:04, 8 November 2009 (UTC)
- Great, thanks! I'll try to make some headway reading your links. Thanks again. ~~ Dr Dec (Talk) ~~ 22:23, 8 November 2009 (UTC)
- Can someone explain this to me: Tango said "If we could prove the total number of possible proofs using some useful equivalence relation that would be a fascinating result". Is this the same as saying we're trying to prove certain properties of proofs themselves, like as in a meta-theory? If so will there ever be an end to the succession of meta-theories? I find the apparent circularity very confusing, seeing as proof theory is supposed to prove things about metric spaces and we're now trying to construct a metric space out of proofs. Money is tight (talk) 01:51, 9 November 2009 (UTC)
- Yes, we're talking about metamathematics. I guess you could study metametamathematics if you wanted to, but I expect it would be classified as just part of metamathematics. For the most part, it would be indistinguishable from metamathematics because metamathematics is all about using mathematical techniques to study maths itself. --Tango (talk) 02:17, 9 November 2009 (UTC)
- Proof theory normally doesn't prove things about metric spaces. It proves things about proofs. Gödel's incompleteness theorems are probably the most famous theorems of proof theory. Of course, since (according to proof theory) proofs are mathematical objects, it's reasonable to want to assign a metric topology to them, as Declan was asking, so you can tell if two proofs are the same/almost the same/etc. As the link I gave shows, proof identity is an active research area, but complicated. 69.228.171.150 (talk) 04:25, 9 November 2009 (UTC)
Declan, you might also be interested in Hilbert's 24th problem (also mentioned in the Straßburger link above). 69.228.171.150 (talk) 04:32, 9 November 2009 (UTC)
- Excellent. I'll take a look at this in the morning. Thanks for all of your pointers; I've found them most enlightening. ~~ Dr Dec (Talk) ~~ 22:35, 9 November 2009 (UTC)
- A notion of distance between two proofs A and B could be the minimal length of a proof of "A ⇒ B" plus the minimal length of a proof of "B ⇒ A". This is more or less what we often mean in common speech, when we say that two proofs are "close" to each other. --pma (talk) 04:32, 12 November 2009 (UTC)
Bad assumption ?
Given the assumption that an event has a 1% probability, but the measured results show that this event occurred in 2 out of 3 trials, what is the probability that this assumption was correct? StuRat (talk) 20:07, 8 November 2009 (UTC)
- This problem is underdetermined, as we are not given the prior probability of the assumption being correct.
<s>If we have that</s> If we have a prior distribution on the probability then it's a simple application of Bayes' formula. - The likelihood of the assumption being correct - which is completely different, but may be of interest nonetheless - is P(2 hits in 3 trials | p = 0.01) = 3 × 0.01^2 × 0.99 ≈ 0.0003.
- The beta distribution is also relevant for this kind of problem. -- Meni Rosenfeld (talk) 20:35, 8 November 2009 (UTC)
My advice is to ignore prior probability. Here is an explanation of it anyway: if you put a billion normal, fair coins and one double-headed (heads/heads, i.e. comes up heads 100% of the time) coin into a giant bag and then draw a coin at random and test to see if it is fair, getting the result: "heads, heads, heads, heads, heads" (five in a row), then what is the probability you just tested a fair coin? Obviously the normal, reasonable response is to think that the odds are overwhelming that you picked one of the billion normal ones, even though the result you got with the (very probably) normal coin only appears 1/32 of the time with a fair coin. You have this reasonable response because 1/32 is nothing compared with 1 in a billion! In this case, the prior probability makes you quite sure that the coin is fair, even though without it, you would begin to suspect it is unfair. But still, this kind of devilry is irrelevant, and you should do what I do and ignore prior probability. 92.230.64.60 (talk) 21:03, 8 November 2009 (UTC)
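The coin-bag example is easy to check with Bayes' rule; a short sketch with the numbers above:
 fair_coins, double_headed = 1e9, 1.0
 prior_fair = fair_coins / (fair_coins + double_headed)
 like_fair, like_double = 0.5 ** 5, 1.0   # P(five heads | type of coin)
 post_fair = prior_fair * like_fair / (prior_fair * like_fair + (1 - prior_fair) * like_double)
 print(post_fair)                         # about 0.99999997: the prior swamps the 1/32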
- Why should one ignore prior probability? Because William Feller was ignorant concerning it? Michael Hardy (talk) 05:12, 9 November 2009 (UTC)
- See Prosecutor's fallacy for the sort of problems that this reasoning leads to. Taemyr (talk) 21:08, 8 November 2009 (UTC)
- (ec) Ignoring prior probability results in miscarriages of justice (really, courts have made that mistake and thrown innocent people in jail due to giving too much weight to DNA results, etc.). If you ignore the prior probability, you get the wrong answer. It is as simple as that. If you have spent years getting data that support the assumption that the probability is 1% and then you do three more tests that seem to contradict that assumption then you shouldn't reject the assumption. However, if you just guessed the probability based on no evidence then you should reject it as soon as there is evidence that contradicts it. --Tango (talk) 21:14, 8 November 2009 (UTC)
- 92 has given some great reasons why the prior distribution must not be ignored. But I believe his point was that with this kind of low-probability events, "side-channel attacks" become significant. If I pick up a coin, toss it 20 times, and obtain heads every time, I'll begin to suspect that whoever told me that the pile has a billion fair coins lied. Of course, that too can be encoded in the prior belief and updated in a Bayesian way. -- Meni Rosenfeld (talk) 08:18, 10 November 2009 (UTC)
Let me add some detail to this problem. We have a large number of items (say a million) and know that precisely 1% of the objects are of the desired type. We then draw 3 items, and 2 of the 3 turn out to be of the desired type. We don't know if the draw was truly random or had some bias. How can we determine that? It certainly seems likely that there is bias in the draw, but how do we quantify the chances? StuRat (talk) 21:20, 8 November 2009 (UTC)
- We use hypothesis testing. We choose a null hypothesis: "The draw was fair." We then choose a desired confidence level, say 95%. We then work out the probability of the event (or a more surprising one - it's important to include that, although it doesn't make much difference in this case) happening assuming that hypothesis: 3*0.01*0.01*0.99+0.01*0.01*0.01=0.000298. We then observe that 0.000298<1-0.95=0.05. That means we should reject the hypothesis and assume (until we get more evidence) that the draw was biased. In fact, it's worth noting that a single one of the desired type being drawn would be enough to reject the hypothesis in this case - the probability of at least one of them being drawn (assuming the hypothesis) is about 3%. --Tango (talk) 21:34, 8 November 2009 (UTC)
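Tango's figure is easy to reproduce; a short check of P(2 or more desired items in 3 draws) under the null hypothesis p = 0.01:
 from math import comb

 p0 = 0.01
 p_value = sum(comb(3, k) * p0 ** k * (1 - p0) ** (3 - k) for k in (2, 3))
 print(p_value)   # 0.000298, far below 0.05, so the "fair draw" hypothesis is rejected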
- Thanks. Can we then express that as a probability ? Something like "the probability that this was a fair draw is _____" ? StuRat (talk) 22:01, 8 November 2009 (UTC)
- If we can, it is beyond my limited statistical training. --Tango (talk) 22:02, 8 November 2009 (UTC)
- Following on from Tango's analysis, our article on Type 1 and type 2 error might help, but I'm too tired to think straight at present. Dbfirs 22:24, 8 November 2009 (UTC)
- What about this? Surely that very much does assign probabilities that hypotheses are true, starting with prior probabilities?--Leon (talk) 08:54, 9 November 2009 (UTC)
- Excellent point. I did know that, I wasn't thinking straight! I've moved your comment down to keep things in chronological order. --Tango (talk) 22:41, 9 November 2009 (UTC)
- No, I'm afraid you were thinking completely straight (except perhaps for the part where you mentioned hypothesis testing, which is highly controversial). As I tried to explain, Bayesian inference relies on the availability of a prior distribution for the variables of interest. The choice of prior becomes less significant as more data is collected - and it can be sufficient to just choose a maximum entropy distribution - but in this case we only have 3 data points.
- Since StuRat avoided specifying his prior belief that the draw was fair - and the distribution of the bias if it exists - Bayes is inapplicable. He may be considering something simple like "the draw is fair with probability 0.5, and if it's not, it has a probability of p to come up with a special item, where p is distributed uniformly", in which case my calculations show that given the data, the probability of the draw being fair is 0.00118659. -- Meni Rosenfeld (talk) 08:10, 10 November 2009 (UTC)
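That figure can be reproduced directly under the stated prior (fair with probability 0.5, otherwise a uniformly distributed bias p); the integral of 3p^2(1-p) over [0,1] is 1/4.
 from math import comb

 like_fair = comb(3, 2) * 0.01 ** 2 * 0.99   # P(2 hits in 3 | p = 0.01) = 0.000297
 like_biased = 0.25                          # integral over [0,1] of 3*p^2*(1-p) dp
 post_fair = 0.5 * like_fair / (0.5 * like_fair + 0.5 * like_biased)
 print(post_fair)                            # 0.00118659...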
- Hypothesis testing is controversial? So much for the stats modules I did at A-level... But I wasn't thinking straight - the answer I should have given was "No, we can't calculate that probability without knowing the prior probability. If you give us that, then we can do it." --Tango (talk) 17:59, 10 November 2009 (UTC)
wolfram alpha
This doesn't look right to me, an extra factor of n' on the first term. And when "show steps" is clicked it just gets worse. http://www.wolframalpha.com/input/?i=d%2Fdx[(n%27)*exp(-x^2%2F2) . (link is nowikied as wikicode gets confused by address). Is Wolfram Alpha often mistaken? —Preceding unsigned comment added by 92.230.70.105 (talk) 11:48, 8 November 2009 (UTC) —Preceding unsigned comment added by 129.67.39.44 (talk) 22:10, 8 November 2009 (UTC)
- It seems like Wolfram|Alpha does not like the prime. This question works: "differentiate (dn/dx)*exp(-x^2/2)" [6]. --Andreas Rejbrand (talk) 22:46, 8 November 2009 (UTC)
- Now I see that it actually works with prime as well, but you need to be explicit with the independent variable: "differentiate (n'(x))*exp(-x^2/2)" works [7]. --Andreas Rejbrand (talk) 22:47, 8 November 2009 (UTC)
- Wolfram Alpha has built-in commands and if you do something that it doesn't understand, it will guess at what you mean. So, it is important that you enter things correctly. StatisticsMan (talk) 14:29, 9 November 2009 (UTC)
Integral (in proof about theta function)
I am reading a proof in a book and it gives
and says "substitution followed by a shift of the path of integration". Doing the substitution gives
where . So, I guess the shift of the path of integration takes this path to the real line. But, I do not get why the two integrals would be equal. StatisticsMan (talk) 22:54, 8 November 2009 (UTC)
- I assume t is positive? Consider the paths from -R to R for large R rather than -∞ to ∞. The two paths form the top and bottom of a rectangle. The difference between the integrals along the top and bottom is equal to the difference between the integrals along the two short ends of the rectangle since the function (call it f) is entire. Out there |f(x)| becomes small and so you can show that those integrals along the sides of the rectangle are small, and go to zero in the limit as R goes to infinity. I hope that's clear since I don't think I explained it very well without a diagram. Rckrone (talk) 01:32, 9 November 2009 (UTC)
Equality of the integrals follows from Cauchy's integral theorem (look it up!) applied to the rectangle whose corners are at −R, R, R −iy/t, and −R −iy/t. The integral around the boundary must be zero. The integrals along the short vertical sides approach 0 as R grows. Hence the integrals along the upper and lower boundaries have the same limit as R grows. Michael Hardy (talk) 05:22, 9 November 2009 (UTC)
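The book's exact integrand is not reproduced above, but the rectangle argument is easy to sanity-check numerically on a generic Gaussian (my own example, with an assumed integrand): for any real c, the integral of exp(-(x+ic)^2) over the real line equals the unshifted value sqrt(pi).
 import cmath, math

 def shifted_integral(c, R=20.0, n=200000):
     h = 2 * R / n
     return sum(cmath.exp(-(-R + k * h + 1j * c) ** 2) for k in range(n)) * h

 print(shifted_integral(0.0), math.sqrt(math.pi))   # both about 1.7724539
 print(shifted_integral(1.5))                       # same value up to numerical error, as the shift argument predicts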
- This makes sense. Thanks to both of you. I know Cauchy's integral theorem :). I just didn't think to consider a finite, very large R, though I should have because that is a very common trick. StatisticsMan (talk) 14:31, 9 November 2009 (UTC)
November 9
Random binary digits from random numbers?
What is the best way to get random binary digits from a sequence of random decimal or hexical digits?
For example, if I toss a six-sided die a total of 8 times, how can I convert that sequence of 8 tosses into a sequence of random binary digits (0's and 1's), each having 50% probability? I know I could simply do 1, 2, 3 on the die = 0, and 4, 5, 6 on the die = 1, but that takes 1 roll per binary digit; I want to do it in a way that minimises the number of die rolls required per bit.
--Dacium (talk) 02:58, 9 November 2009 (UTC)
- You could get 2 random bits from each roll 2/3 of the time and 1 bit 1/3 of the time, for an average of 5/3 bits per roll. 1 counts as 00, 2 counts as 01, 3 counts as 10, 4 counts as 11, all with equal probability so it should be uniform. Then, 5 could be just 0 and 6 could be 1. StatisticsMan (talk) 04:30, 9 November 2009 (UTC)
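A quick simulation of that mapping (rolls 1-4 give two bits, 5-6 give one bit), checking both the rate and the bias:
 import random

 TABLE = {1: "00", 2: "01", 3: "10", 4: "11", 5: "0", 6: "1"}

 rolls = [random.randint(1, 6) for _ in range(100000)]
 stream = "".join(TABLE[r] for r in rolls)
 print(len(stream) / len(rolls))          # about 5/3 bits per roll
 print(stream.count("1") / len(stream))   # about 0.5, i.e. unbiased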
- The most bits you can get out per roll is x = log2(6) ≈ 2.585. With only a finite number of rolls, you can only get a rational number of bits, as arbitrarily close to x as you'd like (with the closer you get to x requiring more rolls total and a more complex algorithm). For example, following StatisticsMan's method, you can get 7/3 bits per roll on average as follows: every pair of rolls has 36 possibilities, so list these possibilities in some order; the first 32 possibilities yield the bits 00000, 00001, ..., 11111, and the last 4 possibilities yield the bits 00, 01, 10, 11. Eric. 131.215.159.109 (talk) 05:26, 9 November 2009 (UTC)
- Of course, for best results you should use all the rolls at once. I think the following works: Write where . After rolling, write the result as an integer . Find the minimal m such that . Then and is uniformly distributed over this range. This gives you m bits, and happens with probability (only if ). The expectation is which is 2.42478 per roll. -- Meni Rosenfeld (talk) 06:19, 9 November 2009 (UTC)
- Remark on infinite sequences. If you interpret a sequence of digits in p:={0,1,..,p-1} as the expansion in base p of a real number x, you get a measure preserving map from pN (with the product of countably many copies of the uniform probability measure on p) into the interval [0,1] with the Lebesgue measure. If you then expand x in base q, you get a measure preserving map into the sequences of digits in q={0,1,..,q-1}. These maps are bijective up to a countable null set (due to the double representation of some rationals). So any set of p-sequences gives rise this way to a set of q-sequences with the same probability. --pma (talk) 07:32, 9 November 2009 (UTC)
- Dacium, what are you actually trying to do? If you're really rolling dice by hand and trying to get a long bit stream, then StatisticsMan's proposal is good and it does use the entropy optimally, unless I'm missing something. If you're trying to program a computer to convert some non-binary entropy source to a bit stream, the mathematical theory of randomness extractors is way more complicated than you need. Just make a conservative estimate of the amount of min-entropy in your input source, and collect enough input that the output of a CSPRNG seeded from that input can't be feasibly distinguished from true random bits. If you can be more specific about your requirements you will probably get more useful answers. 69.228.171.150 (talk) 10:55, 9 November 2009 (UTC)
With simple single rolls of a d10 converted by hand, one easy way would be to convert 10 and 1-7 into three digit binary numbers (000 to 111) and code 8s as 0 and 9s as 1; this would be analogous to Statisticsman's original suggestion with a d6. An average string of 100 decimal digits would give you 260 zeroes and ones. Of course, if you're using a non-hexical die (e.g., an RPG d10) , your best option would be to use a d8, to give you a three-digit binary number every time, or a d20 for 16 four-digit binary numbers and four two-digit ones - an average of 3.6 binary digits per roll. Grutness...wha? 11:31, 9 November 2009 (UTC)
- With decimal I would take 3 digits at a time, and if it's more than 500, the binary digit is 1, less than 500 and it's 0, and this should reduce any off-by-one error from my sloppy thinking to at most 2/500 or maybe 3/500 skew depending on how unlucky I am. If this is unacceptably high, I would increase it to 8 digits at a time, so that if the number is more than 50,000,000 it's a 1 and less than 50,000,000 it's a 0, thereby reducing any off-by-one skew to a negligible amount. Why only 8 digits? As that is a safe distance (2 orders of magnitude) from the overflow of the signed integers I'd probably be using by accident to store the result.... can you tell I'm an adherent of Murphy's law? —Preceding unsigned comment added by 92.224.205.24 (talk) 16:06, 9 November 2009 (UTC)
mathematics/{1+x+x*x}{1+5x+x}{5+x+x}=64
{1+x+x*x}{1+5x+x}{5+x+x}=64 —Preceding unsigned comment added by 59.164.3.176 (talk) 08:23, 9 November 2009 (UTC)
- It helps if you also tell us what you are wondering about. Taemyr (talk) 08:35, 9 November 2009 (UTC)
- In the future, we will not answer your question unless it is deemed unambiguous by a majority. Although you are most certainly encouraged to ask questions here (and we are happy to answer them), it is important to note that phrasing your question in a simple manner increases the chance that someone will answer it (after all, this is a free service). Let me assume that the following is your question:
- The solutions to which are x = -2.7825809, x = 0.66558396. --PST 11:35, 9 November 2009 (UTC)
- Coefficient of x should be 37, i.e. 12x^4 + 44x^3 + 49x^2 + 37x − 59 = 0
- Gandalf61 (talk) 12:53, 9 November 2009 (UTC)
- and so the two real solutions are −2.82568 and 0.650129 and the two nonreal solutions are −0.74556±1.4562i. Bo Jacoby (talk) 14:47, 9 November 2009 (UTC).
- Both of you are right. Maybe I need a break from the reference desk... --PST 03:35, 10 November 2009 (UTC)
- None of us are perfect and your contributions are appreciated. Bo Jacoby (talk) 19:40, 10 November 2009 (UTC).
Your equation expands to be 12x^4 + 44x^3 + 49x^2 + 37x − 59 = 0. The left hand side of this equation is a polynomial in the variable x. In fact it is a quartic polynomial, i.e. the highest power of x is four. A corollary of the fundamental theorem of algebra tells us that your equation has, counting multiplicities, four solutions over the complex numbers. As is the case with quadratic polynomials, it is possible to find the exact solutions in terms of radicals, i.e. expressions involving nth roots. Although, in practice, this is very difficult to do by hand for anything other than the simplest of quartic polynomials. If you are satisfied with approximate solutions then there are a few things you could do. The first is to use the Newton method to find approximations to the solutions. With the Newton method you start with an educated guess for one of your solutions, say x0, and feed this into a certain function giving a new value, say x1. You then put x1 into the same function to get x2, and so on. This process of iteration will give a sequence of numbers (x0, x1, x2, …) which will eventually (given a few extra conditions) converge to a solution of your equation. You won't need more than a calculator, a little knowledge of complex numbers and the ability to differentiate a polynomial to use this method. Alternatively, you could just cheat and use a computer algebra package like Maple or Matlab to generate some numerical approximations to the solutions; but where's the fun in that? :o) ~~ Dr Dec (Talk) ~~ 17:35, 10 November 2009 (UTC)
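For the curious, a bare-bones sketch of the Newton iteration described above, applied to this quartic (the starting guesses are mine):
 def f(x):  return 12 * x**4 + 44 * x**3 + 49 * x**2 + 37 * x - 59
 def df(x): return 48 * x**3 + 132 * x**2 + 98 * x + 37

 for x0 in (1.0, -3.0):
     x = x0
     for _ in range(20):
         x = x - f(x) / df(x)   # the Newton step
     print(x)                   # 0.650129... and -2.82568..., matching the values given earlier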
- The fun is in mastering a tool that enables you to solve hard problems easily. J (programming language):
>{:p.(((1 1 1&p.)*(1 6&p.)*(5 2&p.))-64&p.)t.i. 5 _2.82568 _0.74556j1.4562 _0.74556j_1.4562 0.650129
Bo Jacoby (talk) 19:40, 10 November 2009 (UTC).
Commutative diagram
I have an n x n diagram (n rows and n columns) with the property that each (1 x 1) cell of the diagram is a commutative diagram. Is the whole diagram commutative? What's the name for this result (if it's true)? —Preceding unsigned comment added by 58.161.138.117 (talk) 12:24, 9 November 2009 (UTC)
- You mean you have a rectangular grid with all arrows between neighbouring nodes going (say) left-to-right and top-to-bottom? Yes, the diagram is commutative. You can prove it easily for any m-by-n grid by induction on m, with inner induction on n. I have no idea whether it has a name. — Emil J. 14:21, 9 November 2009 (UTC)
Why don't we have a complete mathematical language?
Mathematical texts still use natural languages. Couldn't mathematicians develop a complete language, one that is complete in the sense that you can express things like "why" or "imagine a"? Students would learn how to code their natural native languages into this mathematical language.--Quest09 (talk) 12:57, 9 November 2009 (UTC)
- Well you could use the Mizar system or Metamath but mathematicians are human too and these definitely wouldn't help students. I seem to remember a funny story in Mathematical Intelligencer where students did everything this way. Dmcq (talk) 13:12, 9 November 2009 (UTC)
- There is not much use in developing a language unless there are semantics for it, but it is not at all obvious what the semantics for "imagine a" would be. Thus the formal languages that are used in math are restricted to a class in which we can develop a good semantics. For example, we have a good sense of what "A and B" means in terms of "A" and "B". Philosophers and linguists spend a long time studying the meanings of natural-language phrases, but we do not understand them well enough to try to formalize them.
- On the other hand, many students do learn how to express mathematical claims in "mathematical language" which corresponds roughly to first-order logic. But this does not include the entire range of expression that is possible in a natural language. — Carl (CBM · talk) 13:43, 9 November 2009 (UTC)
- Also see Loglan/Lojban. --Stephan Schulz (talk) 14:38, 9 November 2009 (UTC)
Because mathematicians have a right brain as well as a left brain. The right brain does not respond in the least to rigorous symbolic manipulation. Thus expressing everything symbolically is a great way to take the right brain out of the equation, and deduce rigorous truth even when it is grossly unintuitive. But only a few such deductions are unintuitive; the vast majority of truth mathematicians manipulate makes sense to them on an intuitive, right-brain level as well (and this is why they are mathematicians). If you were to remove language, you could no longer easily communicate with the right brain (imagine if you remove directx/opengl: you no longer can easily program the GPU) and many kinds of math become hard to fathom. This is especially true for appeals to visual intuition, as there is no rigor there at all. A visual demonstration of the Pythagorean theorem has no rigor at all, you must 'add in' all the rigor bit by bit. But, to a mathematician, it might immediately make total sense to see that. Now, was the picture, which by definition has no rigor at all, it's just pixels, the right way to communicate the picture, instead of through coordinate geometry, which would have taken you 3-5 minutes to translate into a picture? Obviously it was. It's simply easier to show the picture. Similarly, it's just easier to use natural language to appeal to your intuition, and the left brain can then proceed to flesh out the formalities. For example, primes get harder and harder to come by as you go up through larger and larger primes. Couldn't there be a largest prime, couldn't the set of primes be finite, and not unending? The simple (and famous) natural language appeal that there cannot be a largest prime is simple: "there can't be a largest prime, because if there were, you could take it and all the primes smaller than it, multiply them together, and add one. The new number would just fail to divide by "every" prime (it would leave a remainder of 1 for each attempt), but if it fails to divide by any prime, then its only factors must be 1 and itself -- i.e. it must be a prime, and since we got it by multiplying "all" the primes together, it must be greater than all of them. So, it is a new "greatest" prime, clearly a contradiction with the possibility that there were a finite number of primes to begin with." This was a natural-language appeal, and it uses your intuition. I could have phrased it in a more complex and rigorous way, that would have been harder to understand, but if I had done so, you simply wouldn't have understood the truth of that statement so spontaneously (assuming you did). By the way, if you DIDN'T follow my logic, imagine how much farther away still you would have been from 'getting' it if I had used all symbols! Now, once you have understood the intuitive appeal (or thought of it yourself), you might sit down and go through it rigorously. You might even say whooops, with greater rigor I notice that the new "greatest" prime might not really be a prime, but just relatively prime to all the primes -- in fact, it might have another prime factor we left out of "all" primes. These kinds of ancillary effects often come out only under greater rigor. But natural language is quite important for even doing the kind of intuitive reasoning/communication that it takes to get to the point where you even know what to try symbolically/with great rigor. 92.224.205.24 (talk) 15:41, 9 November 2009 (UTC)
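Since the "whooops" near the end is exactly the subtle point, here is a tiny sketch (my own illustration, not part of the post) of the multiply-and-add-one step: the new number need not itself be prime, but its smallest prime factor is guaranteed to lie outside the original list.
 def smallest_prime_factor(n):
     d = 2
     while d * d <= n:
         if n % d == 0:
             return d
         d += 1
     return n

 primes = [2, 3, 5, 7, 11, 13]
 N = 1
 for p in primes:
     N *= p
 N += 1
 print(N, smallest_prime_factor(N))   # 30031 59: 30031 = 59 * 509 is composite, yet 59 is a prime not in the list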
- 92, your post would be much easier to read if you divided it to paragraphs. -- Meni Rosenfeld (talk) 19:09, 9 November 2009 (UTC)
Question regarding order of an element in a group
I have the following question. Suppose G is a finite group and K is a normal subgroup of G. Also suppose G/K has an element of order n. Then prove that G also has an element of order n.
I understand that finiteness is essential because if G = Z, K = 2Z then clearly G/K has an element of order 2 but G doesn't. Other than that I can't think of a thing! I'll appreciate any help.--Shahab (talk) 16:20, 9 November 2009 (UTC)
- (Finiteness is not quite essential, it suffices if G is a torsion group.) Just do the obvious thing: take an element g/K of G/K of order n, and try to see if this gives any constraints on the order of g in G. Then you should be able to find an element of order n in G with no difficulty. — Emil J. 16:35, 9 November 2009 (UTC)
- All I can think of is that O(g)≥ n as n is the least positive integer for which g^n is in K. Can you please be a bit more explicit. Thanks--Shahab (talk) 17:15, 9 November 2009 (UTC)
- n is the least positive integer for which g^n is in K, yes. And more holds: for any integer a, g^a is in K if and only if n divides a. Now, g^o(g) = 1 is in K, hence o(g) is not just greater than or equal to n, but actually divisible by n. — Emil J. 18:01, 9 November 2009 (UTC)
- Let g ∈ G with g^n ∈ K, and let n be the least positive integer with this property. Then, if m is a positive integer with g^m ∈ K, n divides m since K is a subgroup. Consequently, n divides o(g) since 1 = g^o(g) is an element of K, and thus g^(o(g)/n) has order n, as desired. Note that I think my proof overlaps with EmilJ's, but hopefully I was a bit more explicit. --PST 03:34, 10 November 2009 (UTC)
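A concrete sanity check of the two arguments above, using the (hypothetical, additive) example G = Z/12 with K = {0, 4, 8}: the coset of 1 has order 4 in G/K, and (o(g)/n)*g then has order 4 in G.
 from math import gcd

 m, K = 12, {0, 4, 8}          # G = Z/12 under addition, K a subgroup of order 3

 def order_in_G(g):            # order of g in Z/m
     return m // gcd(g, m)

 def order_in_quotient(g):     # least n > 0 with n*g in K, i.e. the order of g + K in G/K
     n = 1
     while (n * g) % m not in K:
         n += 1
     return n

 g = 1
 n = order_in_quotient(g)      # 4
 h = (order_in_G(g) // n) * g  # the element of order n promised by the proof
 print(n, order_in_G(h))       # 4 4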
November 10
Question About Primes Near nn
I'm wondering primarily whether anyone has found a solution to n^n+2 = prime beyond n=3. Generally, has anyone taken up the question of the nearest prime values to n^n? Has the question appeared in the literature at all? I imagine it must have. Julzes (talk) 17:56, 10 November 2009 (UTC)
- I've just run a little procedure to check. I got n = 737 and n = 1,349. I had to stop the program after that point because the numbers were getting so big that my computer nearly exploded! ~~ Dr Dec (Talk) ~~ 18:05, 10 November 2009 (UTC)
- Can your computer actually tell if such a number is prime? RSA encryption is based on the fact that it's hard to factor a 400 digit number, even with a supercomputer. When n = 1349, you're talking about a number with
<s>over 1 million digits</s> around 4000 digits. At best, you can do primality testing to have a good probability that such a number is prime. Isn't this right? I'm not claiming I know exactly. StatisticsMan (talk) 18:59, 10 November 2009 (UTC)
- Factorization is harder than testing for primality. Taemyr (talk) 19:05, 10 November 2009 (UTC)
- (e/c) First, primality testing can be done deterministically in polynomial time (though that does not necessarily mean that's what Dr Dec did, since the AKS algorithm is actually rather inefficient in practice). Factoring and testing primality are in a completely different ballpark. Second, 1349^1349 has only ~4000 or so decimal digits, not a million. — Emil J. 19:08, 10 November 2009 (UTC)
- Okay, good points, especially about the number of digits. I did 3^1000 = 1000000, instead of 3 * 1000 = 3000. So, my bad. By deterministically, do you mean that you can tell 100% for sure that a number is prime, correct? So, it's not a probabilistic test? StatisticsMan (talk) 19:13, 10 November 2009 (UTC)
- Yes, that is correct. But 3^1000 is not 1000000. -- Meni Rosenfeld (talk) 20:38, 10 November 2009 (UTC)
- Yea, I agree. I realized after I logged off that I said it wrong. I'm not sure what I did honestly. But, I agree with around 4000 digits. StatisticsMan (talk) 20:55, 10 November 2009 (UTC)
I used Maple and the command isprime. For example isprime(2); returns true whereas isprime(4); returns false. I ran the very simple program:
- L := [ ]: (making an empty list)
- for i from 1 to 1000 do (setting the variable and number of cases)
- if isprime((2*i+1)^(2*i+1) + 2) = true then L := [op(L),i]: fi: od: (checking all odd n = 2i+1; if n^n+2 is prime, add i to the list)
At the end my list L came out as [1, 368, 674] meaning that n^n + 2 is prime for n = 3, 737 and 1,349. ~~ Dr Dec (Talk) ~~ 19:26, 10 November 2009 (UTC)
- P.S. I've asked Maple to factor 737^737 + 2 and it seems to have either got lost or gone on strike. Either way, it's not very happy. As has been mentioned above: factorisation is much more difficult than primality testing. Hence the security of RSA encryption. ~~ Dr Dec (Talk) ~~ 19:35, 10 November 2009 (UTC)
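As a cross-check in another system, here is a sketch using sympy's isprime (which, like Maple's at this size, is a strong probable-prime test); only small n are scanned because n^n+2 grows so fast.
 from sympy import isprime

 hits = [n for n in range(1, 60) if isprime(n ** n + 2)]
 print(hits)   # [1, 3]; the next cases, n = 737 and n = 1349, take far longer to test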
- According to MapleSoft's web page, isprime returns true if p is very probably prime. It does not say which algorithm it uses but gives references at the bottom. It says there is no known example of it being wrong and such a number would have to be hundreds of digits long. StatisticsMan (talk) 19:45, 10 November 2009 (UTC)
- Mathematica confirms the results of User:Dr Dec. From the implementation notes:
- PrimeQ first tests for divisibility using small primes, then uses the Miller-Rabin strong pseudoprime test base 2 and base 3, and then uses a Lucas test.
- As of 1997, this procedure is known to be correct only for n < 10^16, and it is conceivable that for larger n it could claim a composite number to be prime.
- The Primality Proving Package contains a much slower algorithm which has been proved correct for all n. It can return an explicit certificate of primality.
- I can try to verify this using the last item if it is important to you. -- Meni Rosenfeld (talk) 20:38, 10 November 2009 (UTC)
This is interesting, but not terribly important. Thanks. I'm more interested in the general question of how to do a literature search. Is there a symbolic-string search engine that will pop out what's desired, or would one have to go through Mathematical Reviews' Number Theory subsection for all recent years?Julzes (talk) 23:17, 10 November 2009 (UTC)
- oeis:A100407: "Numbers n such that n^n+2 is prime. 1, 3, 737, 1349." There are no others up to 3500 which corresponds to 12405 digits. It took me 12 minutes at 2.4 GHz to double check this limit with PrimeForm/GW 3.2.2. A search of n^n+2 [8] at http://www.primenumbers.net/prptop/searchform.php finds nothing. If any other terms had been found then they would probably have been submitted to that database of gigantic probable primes and turned up in the search. 1349^1349+2 is a 4223-digit probable prime. There is no known primality test which can quickly determine whether it is really prime, because the currently known quick tests require a large part of the factorization of p-1 or p+1 to be known. See more at http://primes.utm.edu/prove/index.html. The fastest algorithm to prove n^n+2 for large n is ECPP (Elliptic curve primality proving). The fastest publicly available ECPP implementation is Marcel Martin's Primo at http://www.ellipsa.eu/public/primo/primo.html. It might take around 1 GHz month to prove primality of 1349^1349+2 with Primo. Don't attempt any other public program like Mathematica's ProvablePrimeQ at this size. If you try a program and it claims prime in less than a GHz week then the program did not make a primality proof although it may have claimed to do so (some programs hide that they only make probable prime tests). oeis:A074966: NextPrime(n^n) - n^n. I easily found these OEIS sequences by computing the first few terms and search them in OEIS. PrimeHunter (talk) 00:42, 11 November 2009 (UTC)
- I doubt Wolfram would outright lie about this - especially since it produces an Atkin-Goldwasser-Kilian-Morain certificate which I suppose can be easily checked. Anyway, I did run ProvablePrimeQ for 737 for several minutes and it did not terminate. -- Meni Rosenfeld (talk) 06:24, 11 November 2009 (UTC)
- ProvablePrimeQ makes a primality proof but nobody uses it at sizes like 1349^1349+2 because it is too slow if it ever completes. Maybe it could handle 737^737+2 within a GHz week but I don't know. It is other programs which may give misleading messages about showing a number being prime. It was perhaps confusing that I mentioned this right after talking about ProvablePrimeQ. There are also many Mathematica users who have falsely thought that Mathematica's PrimeQ makes primality proofs. I don't have Mathematica but apparently some parts of the documentation can give the impression that PrimeQ shows primality. PrimeHunter (talk) 14:03, 11 November 2009 (UTC)
- Indeed, the documentation for PrimeQ does not mention that it is probabilistic, but neither does it explicitly say otherwise. Also, that information is not exactly hidden for those who are aware of the issue and willing to look. -- Meni Rosenfeld (talk) 14:34, 11 November 2009 (UTC)
- By the way - do you happen to have any data about how academic this discussion is? That is, was there ever a report that PrimeQ, or an equivalently accurate function, claimed a composite number was prime? -- Meni Rosenfeld (talk) 15:23, 11 November 2009 (UTC)
Thanks, PrimeHunter. I guess this is a question I might have tried a little harder to answer on my own, with my prior knowledge of OEIS. oeis:A074967: n^n-PrevPrime(n^n) also. There's still the question of literature, but a starting point for that is at OEIS anyway, for this specific problem at least.Julzes (talk) 01:18, 11 November 2009 (UTC)
One thing I noted is that oeis:A074966 has a strange quote in its comments. Asking whether there are any solutions of n^n+1 = prime beyond the three solutions n=1, 2 and 4, it is stated that any such prime is greater than 10^30000, according to Sierpinski. Now, this doesn't make a whole lot of sense. Any n that gives a prime for n^n+1 must be of the form 2^2^k, with the result being a (2^k+k)th Fermat number. Even if it wasn't known at the time of the quote that the 20th Fermat number is composite, this number has 315653 digits. At this date, not only the 20th but also the 37th Fermat number is known to be composite. The 70th appears to still be open (?), so the smallest possible prime would be this. Julzes (talk) 15:04, 11 November 2009 (UTC)
Infinite-time probability
This problem follows from a debate I was having with a friend in one of my classes. "If two people walk randomly around a room, given infinite time, will they always collide?" To simplify, if the chance of hitting in one minute is between 0 and 1 (say 0.01), what is the chance they will collide given an infinite number of minutes? As it happens, I said "1" and my friend said "almost 1".
Now, it would appear from our articles that the answer is that it will "almost surely" occur.
“The difference between an event being almost sure and sure is the same as the subtle difference between something happening with probability 1 and happening always.
If an event is sure, then it will always happen, and no other event can possibly occur. If an event is almost sure, then other events are theoretically possible in a given sample space; however, as the cardinality of the sample space increases, the probability of any other event asymptotically converges toward zero. Thus, one can never definitively say for any sample space that other events will never occur, but can in the general case assume this to be true.”
I'm afraid these paragraphs, as they apply to my problem, are a loss to me. Is it a certainty or not? (I realise that I probably technically won the debate here; the answer is 1. More interesting is the interpretation of that figure, it would seem.) - Jarry1250 [Humorous? Discuss.] 17:57, 10 November 2009 (UTC)
- You can't actually do the experiment for an infinite amount of time, so we can't say what would actually happen. What we can say is that, as the length of time you do it for increases, the chance of a collision tends towards one. In the limit, as you do it for an infinite amount of time, we say the probability is one, but it is still possible that it wouldn't happen (both people could, purely by chance, walk round in small, non-intersecting circles for the whole time, for example). All that being said, you could both be wrong. It depends on what you mean by "walk randomly". Random walks are a much studied mathematical technique and different things happen in different situations. I think the best way to model your question is to consider the room as a grid and each person steps either north, east, south or west into the next square each second. You then need to decide on the distribution - are they more likely to keep walking in the same direction than they are to turn around and go backwards, for example? You can't calculate it based on the collision chance in a minute since each minute will be different depending on where they start that minute, and the minutes will be dependent on each other. If they move around really quickly (or the room is really small) then it is possible each minute would be approximately the same, but if they are going at a normal walking speed around a normal sized room, a minute is too short a time period. You may also be interested in Brownian motion, which is like the random walk I described but with an infinitely small grid (roughly speaking). --Tango (talk) 18:09, 10 November 2009 (UTC)
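To make the grid model above concrete, here is a rough Monte Carlo sketch in Python. The room size, the starting corners and the rule of re-rolling any step that would leave the room are all arbitrary modelling assumptions, and of course no finite simulation can distinguish "probability 1" from "surely".

<syntaxhighlight lang="python">
import random

def collision_time(width=10, height=10, max_steps=10**6):
    """Two walkers start in opposite corners of a width x height grid and each
    second take one random step N/E/S/W (steps off the grid are re-rolled).
    Returns the first step at which they occupy the same square, or None."""
    a, b = (0, 0), (width - 1, height - 1)
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def step(p):
        while True:
            dx, dy = random.choice(moves)
            q = (p[0] + dx, p[1] + dy)
            if 0 <= q[0] < width and 0 <= q[1] < height:
                return q

    for t in range(1, max_steps + 1):
        a, b = step(a), step(b)
        if a == b:
            return t
    return None  # no collision within max_steps

# An even-sized grid is used so the two starting corners have the same "colour";
# on some odd-sized grids two synchronised walkers could never share a square.
print(collision_time())
</syntaxhighlight>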
- Thanks Tango - I understand the problem with modelling the exact scenario, to the point it doesn't really matter so much to me as the meta-issue. So the probability is 1, but they might not collide? Weird. - Jarry1250 [Humorous? Discuss.] 18:16, 10 November 2009 (UTC)
- "Might" is such a strong word. The only thing separating "sure" from "almost sure" is that your sample space includes instantiations in which they don't collide. Tango's example of walking forever in circles matches the problem specification of walking inside the room. However, the probability that any of these instantiations will manifest is 0, and for any sensible notion of probability, this means that they will collide. -- Meni Rosenfeld (talk) 18:25, 10 November 2009 (UTC)
- The notion of probability here is completely well-specified. What you seem to want is a different notion of possibility. If you declare probability-zero events to be impossible, you have to explain just how they get to be impossible, given that all initial segments of them are possible. I think that's contrary to the usual intuitive understanding of the modal operators. --Trovatore (talk) 00:15, 11 November 2009 (UTC)
- "Might" is such a strong word. The only thing separating "sure" from "almost sure" is that your sample space includes instantiations in which they don't collide. Tango's example of walking forever in circles matches the problem specification of walking inside the room. However, the probability that any of these instantiations will manifest is 0, and for any sensible notion of probability, this means that they will collide. -- Meni Rosenfeld (talk) 18:25, 10 November 2009 (UTC)
- Thanks Tango - I understand the problem with modelling the exact scenario, to the point it doesn't really matter so much to me as the meta-issue. So the probability is 1, but they might not collide? Weird. - Jarry1250 [Humorous? Discuss.] 18:16, 10 November 2009 (UTC)
- For a reasonable mathematical model of the problem, the answer is: "The probability that they will collide at some point is exactly 1. They will almost surely collide at some point". The fact that these two statements do not contradict each other is what the paragraph you quote attempts to clarify. IMO, the distinction between "surely" and "almost surely" is too subtle to survive the translation to the physical world. -- Meni Rosenfeld (talk) 18:19, 10 November 2009 (UTC)
- The distinction involves infinity - nothing involving infinity translates nicely to the physical world. --Tango (talk) 19:06, 10 November 2009 (UTC)
- Think of "almost surely" as meaning something like this: pick a random real number in the interval [0,10] with uniform probability. If the random number is exactly π, they don't collide; otherwise, they do. Observations: 1) the probability of picking any specific number from an infinite collection is zero; 2) despite this, you can't exactly say the random number won't be π. The problem is "pick a random real number" is inherently imprecise. It's the same way with "run the experiment for an infinite amount of time". 69.228.171.150 (talk) 23:35, 10 November 2009 (UTC)
Why is nobody answering the question?
- Random walks in 2 dimensions are recurrent.
- Random walks in more than 2 dimensions are transient.
That means the probability of eventually coming within a distance ε of each other, no matter how small ε is, is 1, if they're in a plane. In 3-space, the probability is less than 1. (If it's a discrete random walk on integer points in 3 dimensions, with the usual assumptions (independence of steps, uniform distribution), then I think the probability they eventually meet again is something like 0.3.) I seem to recall that George Polya figured this out in about 1910 or something like that. Michael Hardy (talk) 04:18, 11 November 2009 (UTC)
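A quick way to get a feel for the three-dimensional figure quoted above is to estimate by simulation the closely related return probability of a single simple random walk on Z^3 (Pólya's constant, about 0.34). The trial and step counts below are arbitrary, and capping the walk length biases the estimate slightly downwards.

<syntaxhighlight lang="python">
import random

def return_probability_3d(trials=2000, max_steps=5000):
    """Monte Carlo estimate of the probability that a simple random walk on Z^3
    ever returns to its starting point. Walks longer than max_steps are counted
    as never returning, so this slightly underestimates the true value (~0.34)."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    returned = 0
    for _ in range(trials):
        x = y = z = 0
        for _ in range(max_steps):
            dx, dy, dz = random.choice(steps)
            x, y, z = x + dx, y + dy, z + dz
            if (x, y, z) == (0, 0, 0):
                returned += 1
                break
    return returned / trials

print(return_probability_3d())  # typically prints something a little below 0.34
</syntaxhighlight>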
- If ε is a fixed positive number, then a ball of radius ε (volume on the order of ε^n) represents a nonzero proportion of the total space in the room, no matter how many dimensions are involved, and thus the probability is 1. And if you make ε tend to 0 at the same time as the time tends to infinity, then the answer will depend on how the two are related. Ehrenkater (talk) 18:33, 11 November 2009 (UTC)
- Since the OP had already assumed recurrence, I don't see how this is what the question was. Rather, I read the OP's question as dealing with the difference between "with probability 1" and "with surety". As such, the answers given above are the proper answers to the question. Taemyr (talk) 05:25, 11 November 2009 (UTC)
- Besides, the OP said they walked in a room, which I take to mean a bounded set. I think this guarantees recurrence no matter the dimension. -- Meni Rosenfeld (talk) 06:09, 11 November 2009 (UTC)
- Thanks Michael for the answer, but yeah, I was kinda after the difference between "with probability 1" vs. "with surety" as it applied to this real life scenario. Sorry for not being more explicit. But anyhow, all useful information I will keep with me. - Jarry1250 [Humorous? Discuss.] 17:52, 11 November 2009 (UTC)
Set of integers
Is there a simple name (e.g. weird number, prime number, even prime number, colossally abundant number) for this set of integers: 1, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53... ? (Note that this is essentially the set of odd prime numbers, except that 1 is not a prime number. These numbers all have the property that they cannot be divided by any number except 1 without becoming a non-integer number.) 68.116.36.179 (talk) 18:32, 10 November 2009 (UTC)
- Some people used to include 1 in the primes but it just causes trouble. I'm not sure why you have excluded 2. Dmcq (talk) 18:37, 10 November 2009 (UTC)
- Also, a nitpick: 7 divided by 7 equals 1. Taemyr (talk) 18:54, 10 November 2009 (UTC)
Well, 2 is the only even prime number, so being an odd prime is anything but special. If P denotes the set of prime numbers then your set is ({1} ∪ P) − {2}. So you seem to have swapped 2 out of the set of primes and put 1 in. Why should this new set have a special name? ~~ Dr Dec (Talk) ~~ 19:04, 10 November 2009 (UTC)
- The phrase "odd prime" does come up quite often in number theory, since it is often desirable to exclude 2 (particularly when dealing with quadratics). Why anyone would want to include 1, though, I don't know. You could call that set "odd positive non-composite integers" if you wanted to, one being neither prime nor composite (and being unique among positive integers in having that property). --Tango (talk) 19:11, 10 November 2009 (UTC)
- Thanks everyone, I think that that is indeed the closest name to what I wanted. I also saw it here: [9] (useful site, BTW). This site says they are the "odd noncomposite numbers". Re: the nitpick: OK, so I should have said "cannot be divided by any number except 1 or themselves without becoming a non-integer number." The reason I'm interested in having 1 in with the odd primes is because the number 1 shares some characteristics, but of course not all, with the other odd primes. I'm sure there are some applications for this - the reason I'm interested in this set is as a way to verify that a chemical equation cannot be reduced further. 68.116.36.179 (talk) 22:40, 10 November 2009 (UTC)
- Then why are you excluding 2? I think you need to look at greatest common divisor - if the gcd is greater than 1, then you can cancel the common factor. --Tango (talk) 22:50, 10 November 2009 (UTC)
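In code, the suggestion above amounts to nothing more than dividing every coefficient by the gcd of all of them. A minimal Python sketch (the function name and example are made up for illustration):

<syntaxhighlight lang="python">
from functools import reduce
from math import gcd

def reduce_coefficients(coeffs):
    """Divide a list of positive integer coefficients by their greatest common
    divisor, e.g. to check that a chemical equation is already in lowest terms."""
    g = reduce(gcd, coeffs)
    return [c // g for c in coeffs]

# 4 H2 + 2 O2 -> 4 H2O reduces to 2 H2 + O2 -> 2 H2O
print(reduce_coefficients([4, 2, 4]))  # [2, 1, 2]
</syntaxhighlight>

The equation is fully reduced exactly when the gcd of all the coefficients is 1; whether the individual numbers are prime, odd, or equal to 2 is irrelevant.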
- Oh yes, you're right. I failed to think about this very hard. How do you put the resolved checkmark on this thing? 66.178.144.142 (talk) 02:42, 11 November 2009 (UTC)
- I have checked "resolved" for you. --PST 03:20, 11 November 2009 (UTC)
- Thanks PST. 66.178.144.142 (talk) 18:06, 11 November 2009 (UTC)
Short division
I am trying to help my niece with her homework but I cannot remember doing short division at school and can't make any sense of any of the guides I have found. Can someone work through an example with me? Take 4 into 67 - I can get as far as getting to 16 and then I am stumped as to how I get to the .75. --Cameron Scott (talk) 19:38, 10 November 2009 (UTC)
- If I remember correctly, you do: 4 into 67 is 16, then you go on with the remainder 3 = 67 − 64. You put a 0 after the 3 and a dot after the 16, and go on this way: 4 into 30 gets 7, &c. Check short division also. --pma (talk) 19:45, 10 November 2009 (UTC)
What does "&C" mean? I couldn't make any sense of the short division article - it might as well have been in dutch. --Cameron Scott (talk) 19:52, 10 November 2009 (UTC)
- &C is short for the borrowed latin phrase et cetera. 69.228.171.150 (talk) 20:22, 10 November 2009 (UTC)
Let's say that you want to divide 956 by 4. Well, you write out the 956 with the divisor 4 to its left, 4 ) 9 5 6, leaving room for an answer line above.
Now how many times does 4 go into 9? Well, 4 goes into 9 twice with 1 left over, so we write a 2 above the 9 and put a little 1 in front of the 5 (like carrying in addition). So we have 2 on the answer line, and the next digit now reads ¹5, i.e. 15.
Next, how many times does 4 go into 15? Well, 4 goes into 15 thrice with 3 left over, so we write a 3 above the ¹5 and put a little 3 in front of the 6. So we have 2 3 on the answer line, and the last digit now reads ³6, i.e. 36.
Finally, how many times does 4 go into 36? Well, it goes 9 times with none left over. So we write a 9 above the ³6 and stop because the remainder was zero. This gives 2 3 9 on the answer line.
If the remainder weren't zero then we could have carried on by writing a decimal point and carrying the remainder into a string of zeros after the 6.
The net result is that 956/4 = 239. ~~ Dr Dec (Talk) ~~ 20:21, 10 November 2009 (UTC)
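For anyone who prefers to see the procedure as an algorithm, here is a short Python sketch of the same digit-by-digit process; the function name and the two-decimal-place cut-off are arbitrary choices.

<syntaxhighlight lang="python">
def short_division(dividend, divisor, decimal_places=2):
    """Short division: work left to right through the digits, carrying the
    remainder into the next digit, exactly as in the worked example above."""
    quotient, carry = "", 0
    for d in str(dividend):
        current = carry * 10 + int(d)
        quotient += str(current // divisor)
        carry = current % divisor
    if carry and decimal_places:  # continue past the decimal point
        quotient += "."
        for _ in range(decimal_places):
            current = carry * 10
            quotient += str(current // divisor)
            carry = current % divisor
    return quotient

print(short_division(956, 4))  # 239
print(short_division(67, 4))   # 16.75
</syntaxhighlight>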
Thanks, that's a bit clearer. While I remember doing long division at school, I have no recollection of short division. Between three (!!!!) adults, we finally worked it out. --Cameron Scott (talk) 20:26, 10 November 2009 (UTC)
I don't understand why you are trying to do division by hand. The only reason I can think of for doing division by hand is to check that the calculator is giving me the right answer. And to check that the calculator is giving me the right answer, I have to learn to do multiplication by hand (for example: A divided by B = C; check by multiplying B by C). So I cannot think of any reason to do division by hand. 203.41.81.1 (talk) 21:08, 10 November 2009 (UTC)
- Unfortunately at school they have an annoying habit of telling you to find the answer a certain way, even if it is ultimately pointless. - Jarry1250 [Humorous? Discuss.] 21:19, 10 November 2009 (UTC)
- Indeed, she has to show her workings to indicate how she's done it. --Cameron Scott (talk) 21:33, 10 November 2009 (UTC)
Why's it "annoying" or "pointless"? There are things you can prove by manual division that a calculator can't. For example if you ask your calculator to calculate m/n and it returns the answer 0.123456789 then all you know is that m/n is 0.123456789 to nine decimal places (it might be 0.1234567891…, or 0.1234567886…, or infinitely many other numbers). If you do the division by hand you might notice that the remainders start to repeat after the 9 and so the actual number is 0.123456789. After practising the method for a while you get a feel for it and you can start to formulate some nice theorems. For example, what is the maximum possible period length of a repeating decimal (in the case of 13717421/111111111 = 0.123456789, the period length is nine). Well, if we divide m by n then there are n possible remainders and so m/n as a repeating decimal can have a period of at most n digits. Prove that with a calculator! ~~ Dr Dec (Talk) ~~ 22:04, 10 November 2009 (UTC)
- A good apology for division by hand. But really? Blink? Staecker (talk) 00:50, 11 November 2009 (UTC)
Short division is not only a terrible excuse to name an obscure concept, but also meaningless and un-mathematical. As I have said, and shall continue to say, schools fail to give a proper mathematics education to their students these days (mathematicians do not find new ways of dividing one number into another - ultimately, this is non-mathematical). Proper mathematics occurs when students are asked to think deeply and creatively rather than memorize set methods to solve problems (as you will know, there is no shortage of open problems in mathematics, and new methods must be discovered to solve them - one cannot "copy" old methods). Additionally, many of these open problems are well beyond the comprehension of adults (who do not have university-level mathematics education) - even formal mathematicians spend years on some of these problems (this is not to say that one needs a mathematics education to do research - but it certainly helps). There is rarely a "formula" that can simply be applied (mathematics is not about formulaic approaches). The point is that although I would certainly discourage the teaching of "short division", I would encourage your niece to consider problems which require abstract thinking in relatively simple contexts. Although having an enormous amount of depth and breadth of knowledge is required before one can begin research in mathematics (nowadays), mathematical thinking can start at any point, and I would encourage your niece to consider this should she have the spare time. For instance, suppose I have a pair of numbers a and b. The GCD (Greatest Common Divisor) of a and b is the largest number g that divides both a and b. Examples include:
- GCD(2, 5) = 1
- GCD(3, 6) = 3
- GCD(18, 45) = 9
Use the naive technique of long division to establish an algorithm by which one can determine the GCD of two numbers (the answer lies in Euclidean algorithm, but try to solve it without looking at the article). As a simpler problem, what is the relation between the least common multiple and the greatest common divisor? Lastly, invent your own questions during your investigation of the GCD - a mathematician is also someone who is able to pose good questions, and solve them. Nevertheless, these problems should not suggest that mathematics is about basic arithmetic only - mathematics has hardly anything to do with basic arithmetic (it is the schools which give the wrong impression). The article mathematics should give a reasonable overview as to what mathematicians research. Hope this helps (and remember to ask your niece if she is interested in solving the problems I posed!). --PST 03:17, 11 November 2009 (UTC)
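(Spoiler warning for the exercise just posed.) For reference, a minimal Python sketch of the Euclidean algorithm, together with the gcd–lcm relation asked about; the function names are illustrative.

<syntaxhighlight lang="python">
def gcd_euclid(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)
    until the second number is zero; the first is then the GCD."""
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    """Uses the relation gcd(a, b) * lcm(a, b) = a * b."""
    return a * b // gcd_euclid(a, b)

print(gcd_euclid(18, 45))  # 9
print(lcm(18, 45))         # 90
</syntaxhighlight>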
- I'm fairly certain that the point of teaching methods of doing arithmetic is not to train students to be mathematicians. It's to teach them the basic skills that will presumably help them in everyday life. The vast majority will never need to think about actual problems in mathematics, but they may very well need to do arithmetic without having access to a calculator. I don't know how much justification there is for long division these days since if you can break out some paper you probably also have access to a computer, but that doesn't change the basic idea, which is that these are general life skills. It's silly to say that there's no place for that, even if it's not preparing anyone for "real math". Rckrone (talk) 06:36, 11 November 2009 (UTC)
- That is the problem. The curriculum should be focused on training future mathematicians (as is done for other subjects. I am sure that many aspects of science or history have no purpose in "daily life". Why are these subjects taught at (primary) school?). Mathematics is the center of intellectual ability - it encapsulates all forms of thinking from abstract to practical. --PST 07:31, 11 November 2009 (UTC)
- I agree that it's also important to expose students to real math, since clearly some of them need to end up becoming mathematicians or physicists or whatever. And the same can be said for most of the more specialized subjects taught in school. But there are some general skills that are important to everyone which include arithmetic, reading/writing, and maybe some basic history and civics. Encouraging future mathematicians doesn't need to come at the expense of establishing those skills for everyone else, and shouldn't. Rckrone (talk) 07:58, 11 November 2009 (UTC)
- I completely agree with you. However, basic arithmetic appears to be taught for approximately 7 years at school. Certainly, it could be taught within the span of 5 years at the very most (but I feel that 3 years is enough). If children are too young in the earlier years of schooling to learn quickly, then introducing them to games like chess would be appropriate (where they could have fun and improve intellectually). I know that what I am proposing is being implemented at some special programs these days, but the point is that the curriculum is not only too slow for many intelligent students, but also very ineffective (arithmetic need not be taught for over half of a student's schooling). Furthermore, weight should not be put solely on establishing "life skills" - there should be some balance between actual mathematical thinking and arithmetic. Training students to follow set methods to solve problems should be done first, but not at the cost of "dumbing them down" by discouraging creativity. --PST 09:37, 11 November 2009 (UTC)
November 11
mathematics
can we differentiate log log sin x? —Preceding unsigned comment added by Jituinfo007 (talk • contribs) 03:58, 11 November 2009 (UTC)
- See chain rule. You will need to apply it 3 times. --Tango (talk) 04:07, 11 November 2009 (UTC)
- No. Since |sin(x)| <= 1, log(sin x) <= 0 wherever it is defined, so we can't differentiate it over the reals, because the log of a negative number is not real and log(0) is undefined. However, we can differentiate it by treating it as a complex function. —Preceding unsigned comment added by Money is tight (talk • contribs) 04:11, 11 November 2009 (UTC)
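For the record, if one does treat the expression formally as a complex function (or simply pushes the symbols through the chain rule), the result is

<math>\frac{d}{dx}\,\log\log\sin x=\frac{1}{\log\sin x}\cdot\frac{1}{\sin x}\cdot\cos x=\frac{\cot x}{\log\sin x},</math>

valid wherever the right-hand side makes sense in whatever domain one has chosen.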
- Even in the reals, log log sin x denotes the function with empty domain by your argument, and we can certainly differentiate that: it equals its derivative. — Emil J. 12:36, 11 November 2009 (UTC)
- Why does a function having an empty domain have a derivative equal to itself?--Shahab (talk) 15:20, 11 November 2009 (UTC)
- The idea being that the derivative also has an empty domain and the same codomain. There's really only one such function, so they're equal. Rckrone (talk) 15:32, 11 November 2009 (UTC)
- Why should the derivative have the same domain? For example the domain of log x and 1/x are different.-Shahab (talk) 15:37, 11 November 2009 (UTC)
- The derivative of log x is not what you call 1/x, but the restriction of 1/x to (0,+∞). In order for a function to have a derivative at a point a, it has to be defined in a neighbourhood of a by definition, so the derivative of f cannot have a larger domain than (the interior of) the domain of f. f being differentiable means that the derivative exists at each point of its domain, hence the domain of a differentiable function equals the domain of its derivative. — Emil J. 15:56, 11 November 2009 (UTC)
- (Edit Conflict) If f : X → ℝ where X ⊆ ℝ, for instance, the domain of the derivative of f is precisely the set of all points within the domain of f at which f is differentiable, and thus contained within the domain of f. Hence, if the domain of f is empty, the same must be true of the domain of the derivative of f. Your counterexample is false because you are presuming that there is some universal set which contains both the domain of the logarithm function and the domain of x ↦ 1/x - the domain of a function is simply an abstract set with no particular reference to any universal set (more precisely, one cannot infer a specific universal set from the domain alone). --PST 15:58, 11 November 2009 (UTC)
Obviously a function and its derivative don't generally have the same domain: sometimes the domain of the derivative is smaller than that of the original function (e.g. look at the square-root function, defined at 0, and its derivative, undefined at 0). And Shahab's example is a silly mistake, as has already been explained above. But this one particular function—the empty function—has the same domain as its derivative. Michael Hardy (talk) 19:28, 11 November 2009 (UTC)
- Personally, I don't see any silly mistake in Shahab's post, but rather a misunderstanding of the vocabulary of the other posts. Or maybe I'm myself misunderstanding the meaning of "silly mistake". In any case it is quite common, and not only in elementary calculus, to call the "domain" of certain analytic functions what is elsewhere called the "natural domain" or "maximal domain", to avoid confusion. --pma (talk) 20:56, 11 November 2009 (UTC)
- I think "silly mistake" just means a mistake that is obviously a mistake once it is pointed out to you and that the person making could reasonably have not made. For former criterion is certainly satisfied, but the latter might not be. It is a very common mistake to forget that the definition of a function requires defining the domain and that two functions which are equal on the intersection of their domains are not necessarily the same function - it is a mistake worth avoiding. --Tango (talk) 22:45, 11 November 2009 (UTC)
- I will probably be in the bad books of Michael Hardy with this post, but I think that technicalities do not lie at the real heart of mathematics. It is certainly important that one appreciates subtle errors in proofs (which happen quite frequently), and that he/she understands the importance of precision, but one can be a mathematician without meeting these criteria. Shahab has made a mistake which should be noted, but I do not see any advantages in the fact that an empty function equals its derivative, except for conventional conveniences. --PST 03:34, 12 November 2009 (UTC)
- but the empty function also solves other equations as well... Will you acknowledge this at least..? ;-) --pma (talk) 04:40, 12 November 2009 (UTC)
- You are right (as usual...I was hoping that I would not be caught :)). However, I would still think that classifying equations with an empty solution set is merely a conventional convenience - the empty function solves every equation on the empty domain (whether differential, algebraic or functional). Perhaps I shall acknowledge that the empty function is a trivial solution and nothing more. Luckily we are not living in finite group theory, in which case we would have a hard time in formulating a precise definition of what a "trivial solution/counterexample" is... :( --PST 07:12, 12 November 2009 (UTC)
Unrestricted Grammars
What kind of sequence would a(n) need to be so that there is no Turing machine with symbols {0, 1} that accepts only the strings 0^a(n) for all n? 66.202.66.78 (talk) 11:03, 11 November 2009 (UTC)
- It depends on what you mean by "accepts only". If you mean that the machine always returns 0 when a string is in the sequence, and always returns 1 when the string is not in the sequence, then the sequences that have that property are the ones that enumerate decidable sets. If you mean that the machine always returns 0 for a string in the sequence and never halts for any string not in the sequence, the sequences with that property are the r.e. sequences. So the answer is either that a(n) does not enumerate a decidable set, or that a(n) is not an r.e. sequence, depending on what you meant. — Carl (CBM · talk) 12:35, 11 November 2009 (UTC)
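Spelled out, with A = { 0^a(n) : n ∈ N } the set of strings in question: the first reading asks for a Turing machine that halts on every input and answers membership in A correctly, so no such machine exists exactly when A is undecidable; the second reading only asks for a machine that halts on precisely the members of A, so no such machine exists exactly when A is not recursively enumerable. For a concrete example separating the two readings, if a(n) is a computable enumeration of the halting problem, then A is recursively enumerable but undecidable.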
Circle Theorem question
The angle subtended by an arc at the centre is double the angle subtended by it at any point on the remaining part of the circle.
This is the theorem. Suppose the figure is like this; we construct CO and produce it to P. Then, angle OCA = angle OAC (since OC = OA, by the isosceles triangle theorem) and thus angle OCA = 1/2 angle AOP (exterior angle theorem). Similarly, angle OCB = 1/2 angle BOP. Therefore, angle ACB = 1/2 angle AOB. But if the figure is like this, how do we prove it? Srinivas 14:27, 11 November 2009 (UTC)
- We can do the same thing: extend CO to a point P on the other side of the circle. Using the same arguments angle OCA = 1/2 angle AOP, and angle OCB = 1/2 angle BOP. This time instead of adding these angles together, we subtract: angle OCA - angle OCB = angle ACB, etc. Rckrone (talk) 15:12, 11 November 2009 (UTC)
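Written out, the subtraction case is

<math>\angle ACB=\angle OCA-\angle OCB=\tfrac{1}{2}\angle AOP-\tfrac{1}{2}\angle BOP=\tfrac{1}{2}\left(\angle AOP-\angle BOP\right)=\tfrac{1}{2}\angle AOB,</math>

using the same isosceles-triangle and exterior-angle steps as in the first case.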