
Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Meni Rosenfeld (talk | contribs) at 05:41, 16 November 2009 (How to solve). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the mathematics section
of the Wikipedia reference desk.
Select a section:
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.



November 9

Random binary digits from random numbers?

What is the best way to get random binary digits from a sequence of random decimal or hexical digits?

For example, if I toss a six-sided die a total of 8 times, how can I convert that sequence of 8 tosses into a sequence of random binary digits (0's and 1's), each having 50% probability? I know I could simply do 1, 2, 3 on the die = 0, and 4, 5, 6 on the die = 1, but that takes 1 roll per binary digit; I want to do it in a way that minimises the number of die rolls required per bit.

--Dacium (talk) 02:58, 9 November 2009 (UTC)[reply]

You could get 2 random bits from each roll 2/3 of the time and 1 bit 1/3 of the time, for an average of 5/3 bits per roll. 1 counts as 00, 2 counts as 01, 3 counts as 10, 4 counts as 11, all with equal probability so it should be uniform. Then, 5 could be just 0 and 6 could be 1. StatisticsMan (talk) 04:30, 9 November 2009 (UTC)[reply]
The most bits you can get out per roll is x = log_2(6) ≈ 2.585. With only a finite number of rolls, you can only get a rational number of bits, as arbitrarily close to x as you'd like (with the closer you get to x requiring more rolls total and a more complex algorithm). For example, following StatisticsMan's method, you can get 7/3 ≈ 2.33 bits per roll as follows: every pair of rolls has 36 possibilities, so list these possibilities in some order; the first 32 possibilities yield the bits 00000, 00001, ..., 11111, and the last 4 possibilities yield the bits 00, 01, 10, 11. Eric. 131.215.159.109 (talk) 05:26, 9 November 2009 (UTC)[reply]
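For readers who want to experiment, here is a rough Python sketch of the pair-of-rolls scheme just described (the function name and the self-check at the end are my own additions, not part of the thread):

  def bits_from_roll_pair(r1, r2):
      # Map a pair of d6 rolls (each 1-6) to fair bits: the 36 equally likely
      # pairs are indexed 0..35; the first 32 yield 5 bits and the remaining
      # 4 yield 2 bits, as in the scheme above.
      index = (r1 - 1) * 6 + (r2 - 1)                      # uniform on 0..35
      if index < 32:
          return [int(b) for b in format(index, "05b")]    # 5 bits
      return [int(b) for b in format(index - 32, "02b")]   # 2 bits

  # Expected yield: (32*5 + 4*2)/36 = 14/3 bits per pair, i.e. 7/3 per roll.
  total = sum(len(bits_from_roll_pair(a, b)) for a in range(1, 7) for b in range(1, 7))
  print(total / 36 / 2)   # 2.333...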
Of course, for best results you should use all the rolls at once. I think the following works: Write 6^n = Σ_m a_m·2^m where each a_m is 0 or 1 (the binary expansion of 6^n). After rolling, write the result as an integer 0 ≤ x < 6^n. Find the minimal m such that x < Σ_{k≥m} a_k·2^k. Then Σ_{k>m} a_k·2^k ≤ x < Σ_{k≥m} a_k·2^k, and x − Σ_{k>m} a_k·2^k is uniformly distributed over this range. This gives you m bits, and happens with probability 2^m/6^n (only if a_m = 1). The expectation is Σ_m m·a_m·2^m/6^n, which for n = 8 is 2.42478 per roll. -- Meni Rosenfeld (talk) 06:19, 9 November 2009 (UTC)[reply]
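The "use all the rolls at once" idea can be sketched in Python as follows (an illustrative reconstruction under the notation above, with variable names of my own choosing); the last lines reproduce the roughly 2.42 bits-per-roll figure for n = 8:

  def extract_bits(rolls):
      # Interpret the d6 rolls (values 1-6) as a uniform integer x in [0, 6**n),
      # split [0, 6**n) into sub-intervals whose lengths are the powers of two
      # in the binary expansion of 6**n, and return the bits of x's offset
      # within whichever sub-interval it falls into (they are unbiased there).
      n = len(rolls)
      x = 0
      for r in rolls:
          x = 6 * x + (r - 1)
      total = 6 ** n
      lower = 0
      for m in reversed(range(total.bit_length())):
          if (total >> m) & 1:                      # a power 2**m present in 6**n
              if x < lower + (1 << m):
                  offset = x - lower                # uniform on [0, 2**m)
                  return [(offset >> k) & 1 for k in reversed(range(m))]
              lower += 1 << m
      return []

  # Expected number of bits for n = 8 rolls: about 19.4, i.e. roughly 2.42 per roll.
  n = 8
  t = 6 ** n
  expected = sum(m * (1 << m) for m in range(t.bit_length()) if (t >> m) & 1) / t
  print(expected / n)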
Remark on infinite sequences. If you interpret a sequence of digits in p:={0,1,..,p-1} as the expansion in base p of a real number x, you get a measure preserving map from p^N (with the product of countably many copies of the uniform probability measure on p) into the interval [0,1] with the Lebesgue measure. If you then expand x in base q, you get a measure preserving map into the sequences of digits in q={0,1,..,q-1}. These maps are bijective up to a countable null set (due to the double representation of some rationals). So any set of p-sequences gives rise this way to a set of q-sequences with the same probability. --pma (talk) 07:32, 9 November 2009 (UTC)[reply]
Dacium, what are you actually trying to do? If you're really rolling dice by hand and trying to get a long bit stream, then StatisticsMan's proposal is good and it does use the entropy optimally, unless I'm missing something. If you're trying to program a computer to convert some non-binary entropy source to a bit stream, the mathematical theory of randomness extractors is way more complicated than you need. Just make a conservative estimate of the amount of min-entropy in your input source, and collect enough input that the output of a CSPRNG seeded from that input can't be feasibly distinguished from true random bits. If you can be more specific about your requirements you will probably get more useful answers. 69.228.171.150 (talk) 10:55, 9 November 2009 (UTC)[reply]

With simple single rolls of a d10 converted by hand, one easy way would be to convert 10 and 1-7 into three digit binary numbers (000 to 111) and code 8s as 0 and 9s as 1; this would be analogous to Statisticsman's original suggestion with a d6. An average string of 100 decimal digits would give you 260 zeroes and ones. Of course, if you're using a non-hexical die (e.g., an RPG d10) , your best option would be to use a d8, to give you a three-digit binary number every time, or a d20 for 16 four-digit binary numbers and four two-digit ones - an average of 3.6 binary digits per roll. Grutness...wha? 11:31, 9 November 2009 (UTC)[reply]

with decimal I would take 3 digits at a time, and if it's more than 500, the binary digit is 1, less than 500 and it's 0, and this should reduce any off-by-one error from my sloppy thinking to at most 2/500 or maybe 3/500 skew depending on how unlucky I am. If this is unacceptably high, I would increase it to 8 digits at a time, so that if the number is more than 50,000,000 it's a 1 and less than 50,000,000 it's a 0, thereby reducing any off-by-one skew to a negligible amount. Why only 8 digits? As that is a safe distance (2 orders of magnitude) from the overflow of the signed integers I'd probably be using by accident to store the result.... can you tell I'm an adherent of Murphy's law? —Preceding unsigned comment added by 92.224.205.24 (talk) 16:06, 9 November 2009 (UTC)

mathematics/{1+x+x*x}{1+5x+x}{5+x+x}=64

{1+x+x*x}{1+5x+x}{5+x+x}=64 —Preceding unsigned comment added by 59.164.3.176 (talk) 08:23, 9 November 2009 (UTC)[reply]

It helps if you also tell us what you are wondering about. Taemyr (talk) 08:35, 9 November 2009 (UTC)[reply]
In the future, we will not answer your question unless it is deemed unambiguous by a majority. Although you are most certainly encouraged to ask questions here (and we are happy to answer them), it is important to note that phrasing your question in a simple manner increases the chance that someone will answer it (after all, this is a free service). Let me assume that the following is your question:
(1 + x + x^2)(1 + 6x)(5 + 2x) = 64, the solutions to which are x = -2.7825809, x = 0.66558396. --PST 11:35, 9 November 2009 (UTC)[reply]
Coefficient of x should be 37, i.e. 12x^4 + 44x^3 + 49x^2 + 37x − 59 = 0.
Gandalf61 (talk) 12:53, 9 November 2009 (UTC)[reply]
and so the two real solutions are −2.82568 and 0.650129 and the two nonreal solutions are −0.74556±1.4562i. Bo Jacoby (talk) 14:47, 9 November 2009 (UTC).[reply]
Both of you are right. Maybe I need a break from the reference desk... --PST 03:35, 10 November 2009 (UTC)[reply]
None of us are perfect and your contributions are appreciated. Bo Jacoby (talk) 19:40, 10 November 2009 (UTC).[reply]

Your equation expands to be 12x^4 + 44x^3 + 49x^2 + 37x − 59 = 0. The left hand side of this equation is a polynomial in the variable x. In fact it is a quartic polynomial, i.e. the highest power of x is four. A corollary of the fundamental theorem of algebra tells us that your equation has, counting multiplicities, four solutions over the complex numbers. As is the case with quadratic polynomials, it is possible to find the exact solutions in terms of radicals, i.e. expressions involving nth roots, although in practice this is very difficult to do by hand for anything other than the simplest of quartic polynomials. If you are satisfied with approximate solutions then there are a few things you could do. The first is to use the Newton method to find approximations to the solutions. With the Newton method you start with an educated guess for one of your solutions, say x_0, and feed this into a certain function giving a new value, say x_1. You then put x_1 into the same function to get x_2, and so on. This process of iteration will give a sequence of numbers (x_0, x_1, x_2, …) which will eventually (given a few extra conditions) converge to a solution of your equation. You won't need more than a calculator, a little knowledge of complex numbers and the ability to differentiate a polynomial to use this method. Alternatively, you could just cheat and use a computer algebra package like Maple or Matlab to generate some numerical approximations to the solutions; but where's the fun in that? :o) ~~ Dr Dec (Talk) ~~ 17:35, 10 November 2009 (UTC)[reply]
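To make that description concrete, here is a minimal Python sketch of the Newton iteration applied to the quartic above (the starting guesses and tolerance are arbitrary choices of mine):

  def f(x):
      return 12*x**4 + 44*x**3 + 49*x**2 + 37*x - 59

  def df(x):
      return 48*x**3 + 132*x**2 + 98*x + 37

  def newton(x0, steps=50, tol=1e-12):
      # Newton's method: repeatedly replace x by x - f(x)/f'(x).
      x = x0
      for _ in range(steps):
          step = f(x) / df(x)
          x -= step
          if abs(step) < tol:
              break
      return x

  print(newton(1.0))    # converges to about 0.650129
  print(newton(-3.0))   # converges to about -2.82568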

The fun is in mastering a tool that enables you to solve hard problems easily. J (programming language):
   >{:p.(((1 1 1&p.)*(1 6&p.)*(5 2&p.))-64&p.)t.i. 5
_2.82568 _0.74556j1.4562 _0.74556j_1.4562 0.650129

Bo Jacoby (talk) 19:40, 10 November 2009 (UTC).[reply]

Commutative diagram

I have an n x n diagram (n rows and n columns) with the property that each (1 x 1) cell of the diagram is a commutative diagram. Is the whole diagram commutative? What's the name for this result (if it's true)? —Preceding unsigned comment added by 58.161.138.117 (talk) 12:24, 9 November 2009 (UTC)[reply]

You mean you have a rectangular grid with all arrows between neighbouring nodes going (say) left-to-right and top-to-bottom? Yes, the diagram is commutative. You can prove it easily for any m-by-n grid by induction on m, with inner induction on n. I have no idea whether it has a name. — Emil J. 14:21, 9 November 2009 (UTC)[reply]

Why don't we have a complete mathematical language?

Mathematical texts still use natural languages. Couldn't mathematicians develop a complete language? This language being complete in the sense that you can express things like "why", "imagine a". Students would learn how to code their natural native languages into this mathematical language.--Quest09 (talk) 12:57, 9 November 2009 (UTC)[reply]

Well you could use the Mizar system or Metamath but mathematicians are human too and these definitely wouldn't help students. I seem to remember a funny story in Mathematical Intelligencer where students did everything this way. Dmcq (talk) 13:12, 9 November 2009 (UTC)[reply]
There is not much use in developing a language unless there are semantics for it, but it is not at all obvious what the semantics for "imagine a" would be. Thus the formal languages that are used in math are restricted to a class in which we can develop a good semantics. For example, we have a good sense of what "A and B" means in terms of "A" and "B". Philosophers and linguists spend a long time studying the meanings of natural-language phrases, but we do not understand them well enough to try to formalize them.
On the other hand, many students do learn how to express mathematical claims in "mathematical language" which corresponds roughly to first-order logic. But this does not include the entire range of expression that is possible in a natural language. — Carl (CBM · talk) 13:43, 9 November 2009 (UTC)[reply]
Also see Loglan/Lojban. --Stephan Schulz (talk) 14:38, 9 November 2009 (UTC)[reply]

Because mathematicians have a right brain as well as a left brain. The right brain does not respond in the least to rigorous symbolic manipulation. Thus expressing everything symbolically is a great way to take the right brain out of the equation, and deduce rigorous truth even when it is grossly unintuitive. But only a few such deductions are unintuitive; the vast majority of truth mathematicians manipulate makes sense to them on an intuitive, right-brain level as well (and this is why they are mathematicians). If you were to remove language, you could no longer easily communicate with the right brain (imagine if you remove DirectX/OpenGL: you no longer can easily program the GPU) and many kinds of math become hard to fathom. This is especially true for appeals to visual intuition, as there is no rigor there at all. A visual demonstration of the Pythagorean theorem has no rigor at all, you must 'add in' all the rigor bit by bit. But, to a mathematician, it might immediately make total sense to see that. Now, was the picture, which by definition has no rigor at all, it's just pixels, the right way to communicate the picture, instead of through coordinate geometry, which would have taken you 3-5 minutes to translate into a picture? Obviously it was. It's simply easier to show the picture. Similarly, it's just easier to use natural language to appeal to your intuition, and the left brain can then proceed to flesh out the formalities. For example, primes get harder and harder to come by as you go up through larger and larger primes. Couldn't there be a largest prime, couldn't the set of primes be finite, and not unending? The simple (and famous) natural language appeal that there cannot be a largest prime is simple: "there can't be a largest prime, because if there were, you could take it and all the primes smaller than it, multiply them together, and add one. The new number would just fail to divide by "every" prime (it would leave remainder of 1 for each attempt), but if it fails to divide by any prime, then its only factors must be 1 and itself -- i.e. it must be a prime, and since we got it by multiplying "all" the primes together, it must be greater than all of them. So, it is a new "greatest" prime, clearly a contradiction with the possibility that there were a finite number of primes to begin with." This was a natural-language appeal, and it uses your intuition. I could have phrased it in a more complex and rigorous way, that would have been harder to understand, but if I had done so, you simply wouldn't have understood the truth of that statement so spontaneously (assuming you did). By the way, if you DIDN'T follow my logic, imagine how much farther away still you would have been from 'getting' it if I had used all symbols! Now, once you have understood the intuitive appeal (or thought of it yourself), you might sit down and go through it rigorously. You might even say whooops, with greater rigor I notice that the new "greatest" prime might not really be a prime, but just relatively prime to all the primes -- in fact, it might have another prime factor we left out of "all" primes. These kinds of ancillary effects often come out only under greater rigor. But natural language is quite important for even doing the kind of intuitive reasoning/communication that it takes to get to the point where you even know what to try symbolically/with great rigor. 92.224.205.24 (talk) 15:41, 9 November 2009 (UTC)[reply]

92, your post would be much easier to read if you divided it to paragraphs. -- Meni Rosenfeld (talk) 19:09, 9 November 2009 (UTC)[reply]

Question regarding order of an element in a group

I have the following question. Suppose G is a finite group and K is a normal subgroup of G. Also suppose G/K has an element of order n. Then prove that G also has an element of order n.

I understand that finiteness is essential because if G = Z, K = 2Z clearly G/K has an element of order 2 but G doesn't. Other than that I can't think of a thing! I'll appreciate any help--Shahab (talk) 16:20, 9 November 2009 (UTC)[reply]

(Finiteness is not quite essential, it suffices if G is a torsion group.) Just do the obvious thing: take an element g/K of G/K of order n, and try to see if this gives any constraints on the order of g in G. Then you should be able to find an element of order n in G with no difficulty. — Emil J. 16:35, 9 November 2009 (UTC)[reply]
All I can think of is that O(g)≥ n as n is the least positive integer for which g^n is in K. Can you please be a bit more explicit. Thanks--Shahab (talk) 17:15, 9 November 2009 (UTC)[reply]
n is the least positive integer for which g^n is in K, yes. And more holds: for any integer a, g^a is in K if and only if n divides a. Now, g^o(g) = 1 is in K, hence o(g) is not just greater than or equal to n, but actually divisible by n. — Emil J. 18:01, 9 November 2009 (UTC)[reply]
Let g ∈ G with g^n ∈ K and let n be the least positive integer with this property. Then, if m is a positive integer with g^m ∈ K, n divides m since K is a subgroup. Consequently, n divides o(g) since g^o(g) = 1 is an element of K, and thus g^(o(g)/n) has order n, as desired. Note that I think my proof overlaps with EmilJ's, but hopefully I was a bit more explicit. --PST 03:34, 10 November 2009 (UTC)[reply]


November 10

Question About Primes Near n^n

I'm wondering primarily whether anyone has found a solution to n^n+2=Prime beyond n=3. Generally, has anyone taken up the question of the nearest prime values to n^n? Has the question appeared in the literature at all? I imagine it must have. Julzes (talk) 17:56, 10 November 2009 (UTC)[reply]

I've just run a little procedure to check. I got n = 737 and n = 1,349. I had to stop the program after that point because the numbers were getting so big that my computer nearly exploded! ~~ Dr Dec (Talk) ~~ 18:05, 10 November 2009 (UTC)[reply]
Can your computer actually tell if such a number is prime? RSA encryption is based on the fact that it's hard to factor a 400 digit number, even with a supercomputer. When n = 1349, you're talking about a number with over 1 million digits (correction: around 4000 digits). At best, you can do primality testing to have a good probability that such a number is prime. Isn't this right? I'm not claiming I know exactly. StatisticsMan (talk) 18:59, 10 November 2009 (UTC)[reply]
Factorization is harder than testing for primality. Taemyr (talk) 19:05, 10 November 2009 (UTC)[reply]
(e/c) First, primality testing can be done deterministically in polynomial time (though that does not necessarily mean that's what Dr Dec did, since the AKS algorithm is actually rather inefficient in practice). Factoring and testing primality are in a completely different ballpark. Second, 1349^1349 has only ~4000 or so decimal digits, not a million. — Emil J. 19:08, 10 November 2009 (UTC)[reply]
Okay, good points, especially about the number of digits. I did 3^1000 = 1000000, instead of 3 * 1000 = 3000. So, my bad. By deterministically, do you mean that you can tell 100% for sure that a number is prime, correct? So, it's not a probabilistic test? StatisticsMan (talk) 19:13, 10 November 2009 (UTC)[reply]
Yes, that is correct. But 3^1000 is not 1000000. -- Meni Rosenfeld (talk) 20:38, 10 November 2009 (UTC)[reply]
Yea, I agree. I realized after I logged off that I said it wrong. I'm not sure what I did honestly. But, I agree with around 4000 digits. StatisticsMan (talk) 20:55, 10 November 2009 (UTC)[reply]

I used Maple and the command isprime. For example isprime(2); returns true whereas isprime(4); returns false. I ran the very simple program:

  • L := [ ]: (making an empty list)
  • for i from 1 to 1000 do (setting the variable and number of cases)
  • if isprime((2*i+1)^(2*i+1) + 2) = true then L := [op(L),i]: fi: od: (checking all odd numbers 2i+1; if (2i+1)^(2i+1) + 2 is prime, add i to the list)

At the end my list L came out as [1,368,674] meaning that n^n + 2 is prime for n = 3, 737 and 1,349. ~~ Dr Dec (Talk) ~~ 19:26, 10 November 2009 (UTC)[reply]
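For comparison, a roughly equivalent search in Python using sympy (a sketch of mine, not the code used in the thread; sympy's isprime is likewise a probable-prime test for numbers of this size):

  from sympy import isprime

  hits = []
  for i in range(1, 100):        # a much smaller range than the Maple run above
      n = 2 * i + 1              # odd n only, as in the Maple loop
      if isprime(n ** n + 2):
          hits.append(n)

  print(hits)                    # [3]; 737 and 1349 lie beyond this small range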

P.S. I've asked Maple to factor 737^737 + 2 and it seems to have either got lost or gone on strike. Either way, it's not very happy. As has been mentioned above: factorisation is much more difficult than primality testing. Hence the security of RSA encryption. ~~ Dr Dec (Talk) ~~ 19:35, 10 November 2009 (UTC)[reply]
According to MapleSoft's web page, isprime returns true if p is very probably prime. It does not say which algorithm it uses but gives references at the bottom. It says there is no known example of it being wrong and such a number would have to be hundreds of digits long. StatisticsMan (talk) 19:45, 10 November 2009 (UTC)[reply]
Mathematica confirms the results of User:Dr Dec. From the implementation notes:
  • PrimeQ first tests for divisibility using small primes, then uses the Miller-Rabin strong pseudoprime test base 2 and base 3, and then uses a Lucas test.
  • As of 1997, this procedure is known to be correct only for n < 10^16, and it is conceivable that for larger n it could claim a composite number to be prime.
  • The Primality Proving Package contains a much slower algorithm which has been proved correct for all n. It can return an explicit certificate of primality.
I can try to verify this using the last item if it is important to you. -- Meni Rosenfeld (talk) 20:38, 10 November 2009 (UTC)[reply]

This is interesting, but not terribly important. Thanks. I'm more interested in the general question of how to do a literature search. Is there a symbolic-string search engine that will pop out what's desired, or would one have to go through Mathematical Reviews' Number Theory subsection for all recent years? Julzes (talk) 23:17, 10 November 2009 (UTC)[reply]

oeis:A100407: "Numbers n such that n^n+2 is prime. 1, 3, 737, 1349." There are no others up to 3500 which corresponds to 12405 digits. It took me 12 minutes at 2.4 GHz to double check this limit with PrimeForm/GW 3.2.2. A search of n^n+2 [1] at http://www.primenumbers.net/prptop/searchform.php finds nothing. If any other terms had been found then they would probably have been submitted to that database of gigantic probable primes and turned up in the search. 1349^1349+2 is a 4223-digit probable prime. There is no known primality test which can quickly determine whether it is really prime, because the currently known quick tests require a large part of the factorization of p-1 or p+1 to be known. See more at http://primes.utm.edu/prove/index.html. The fastest algorithm to prove n^n+2 prime for large n is ECPP (Elliptic curve primality proving). The fastest publicly available ECPP implementation is Marcel Martin's Primo at http://www.ellipsa.eu/public/primo/primo.html. It might take around 1 GHz month to prove primality of 1349^1349+2 with Primo. Don't attempt any other public program like Mathematica's ProvablePrimeQ at this size. If you try a program and it claims prime in less than a GHz week then the program did not make a primality proof although it may have claimed to do so (some programs hide that they only make probable prime tests). oeis:A074966: NextPrime(n^n) - n^n. I easily found these OEIS sequences by computing the first few terms and searching for them in OEIS. PrimeHunter (talk) 00:42, 11 November 2009 (UTC)[reply]
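(As an aside, the "compute a few terms and search OEIS" step PrimeHunter mentions is easy to reproduce; a small sympy sketch, with the range chosen arbitrarily by me:)

  from sympy import nextprime, prevprime

  # First terms of nextprime(n^n) - n^n and n^n - prevprime(n^n),
  # the kind of short list one can paste into the OEIS search box.
  print([nextprime(n ** n) - n ** n for n in range(2, 9)])
  print([n ** n - prevprime(n ** n) for n in range(2, 9)])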
I doubt Wolfram would outright lie about this - especially since it produces an Atkin-Goldwasser-Kilian-Morain certificate which I suppose can be easily checked. Anyway, I did run ProvablePrimeQ for 737 for several minutes and it did not terminate. -- Meni Rosenfeld (talk) 06:24, 11 November 2009 (UTC)[reply]
ProvablePrimeQ makes a primality proof but nobody uses it at sizes like 1349^1349+2 because it is too slow if it ever completes. Maybe it could handle 737^737+2 within a GHz week but I don't know. It is other programs which may give misleading messages about showing a number being prime. It was perhaps confusing that I mentioned this right after talking about ProvablePrimeQ. There are also many Mathematica users who have falsely thought that Mathematica's PrimeQ makes primality proofs. I don't have Mathematica but apparently some parts of the documentation can give the impression that PrimeQ shows primality. PrimeHunter (talk) 14:03, 11 November 2009 (UTC)[reply]
Indeed, the documentation for PrimeQ does not mention that it is probabilistic, but neither does it explicitly say otherwise. Also, that information is not exactly hidden for those who are aware of the issue and willing to look. -- Meni Rosenfeld (talk) 14:34, 11 November 2009 (UTC)[reply]
By the way - do you happen to have any data about how academic this discussion is? That is, was there ever a report that PrimeQ, or an equivalently accurate function, claimed a composite number was prime? -- Meni Rosenfeld (talk) 15:23, 11 November 2009 (UTC)[reply]

Thanks, PrimeHunter. I guess this is a question I might have tried a little harder to answer on my own, with my prior knowledge of OEIS. oeis:A074967: n^n-PrevPrime(n^n) also. There's still the question of literature, but a starting point for that is at OEIS anyway, for this specific problem at least. Julzes (talk) 01:18, 11 November 2009 (UTC)[reply]

One thing I noted is that oeis:A074966 has a strange quote in its comments. Asking whether there are any solutions of n^n+1=Prime beyond the three solutions n=1, 2 and 4, it is stated that any such prime is greater than 10^30000, according to Sierpinski. Now, this doesn't make a whole lot of sense. Any n that gives a prime for n^n+1 must be of the form 2^2^k, with the result being a (2^k+k)th Fermat number. Even if it wasn't known at the time of the quote that the 20th Fermat number is composite, this number is 315653 digits. At this date, not only the 20th but also the 37th Fermat number is known to be composite. The 70th appears to still be open (?), so the smallest possible prime would be this. Julzes (talk) 15:04, 11 November 2009 (UTC)[reply]

Infinite-time probability

This problem follows from a debate I was having with a friend in one of my classes. "If two people walk randomly around a room, given infinite time, will they always collide?" To simplify, if the chance of hitting in one minute is between 0 and 1 (say 0.01), what is the chance they will collide given an infinite number of minutes? As it happens, I said "1" and my friend said "almost 1".

Now, it would appear from our articles that the answer is that it will "almost surely" occur.


I'm afraid these paragraphs, as they apply to my problem, are a loss to me. Is it a certainty or not? (I realise that I probably technically won the debate here; the answer is 1. More interesting is the interpretation of that figure, it would seem.) - Jarry1250 [Humorous? Discuss.] 17:57, 10 November 2009 (UTC)[reply]

You can't actually do the experiment for an infinite amount of time, so we can't say what would actually happen. What we can say is that, as the length of time you do it for increases, the chance of a collision tends towards one. In the limit as you do it for an infinite amount of time we say the probability is one, but it is still possible that it wouldn't happen (both people could, purely by chance, walk round in small, non-intersecting circles for the whole time, for example). All that being said, you could both be wrong. It depends on what you mean by "walk randomly". Random walks are a much studied mathematical technique and different things happen in different situations. I think the best way to model your question is to consider the room as a grid and each person steps either north, east, south or west into the next square each second. You then need to decide on the distribution - are they more likely to keep walking in the same direction than they are to turn around and go backwards, for example? You can't calculate it based on the collision chance in a minute since each minute will be different depending on where they start that minute, and the minutes will be dependent on each other. If they move around really quickly (or the room is really small) then it is possible each minute would be approximately the same, but if they are going at a normal walking speed around a normal sized room, a minute is too short a time period. You may also be interested in Brownian motion, which is like the random walk I described but with an infinitely small grid (roughly speaking). --Tango (talk) 18:09, 10 November 2009 (UTC)[reply]
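A quick Monte Carlo sketch of the bounded-grid model just described (illustrative only; the room size, starting corners and step rule are arbitrary choices of mine):

  import random

  MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

  def step(pos, size):
      # One random step north/east/south/west, clamped to stay inside the room.
      dx, dy = random.choice(MOVES)
      return (min(max(pos[0] + dx, 0), size - 1),
              min(max(pos[1] + dy, 0), size - 1))

  def collide_within(steps, size=10):
      # True if two walkers starting in opposite corners meet within `steps` steps.
      a, b = (0, 0), (size - 1, size - 1)
      for _ in range(steps):
          a, b = step(a, size), step(b, size)
          if a == b:
              return True
      return False

  # The empirical probability of a meeting climbs toward 1 as the horizon grows.
  for horizon in (10, 100, 1000):
      trials = 400
      print(horizon, sum(collide_within(horizon) for _ in range(trials)) / trials)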
Thanks Tango - I understand the problem with modelling the exact scenario, to the point it doesn't really matter so much to me as the meta-issue. So the probability is 1, but they might not collide? Weird. - Jarry1250 [Humorous? Discuss.] 18:16, 10 November 2009 (UTC)[reply]
"Might" is such a strong word. The only thing separating "sure" from "almost sure" is that your sample space includes instantiations in which they don't collide. Tango's example of walking forever in circles matches the problem specification of walking inside the room. However, the probability that any of these instantiations will manifest is 0, and for any sensible notion of probability, this means that they will collide. -- Meni Rosenfeld (talk) 18:25, 10 November 2009 (UTC)[reply]
The notion of probability here is completely well-specified. What you seem to want is a different notion of possibility. If you declare probability-zero events to be impossible, you have to explain just how they get to be impossible, given that all initial segments of them are possible. I think that's contrary to the usual intuitive understanding of the modal operators. --Trovatore (talk) 00:15, 11 November 2009 (UTC)[reply]


For a reasonable mathematical model of the problem, the answer is : "The probability that they will collide at some point is exactly 1. They will almost surely collide at some point". The fact that these two statements do not contradict each other is what the paragraph you quote attempts to clarify. IMO, the distinction between "surely" and "almost surely" is too subtle to survive the translation to the physical world. -- Meni Rosenfeld (talk) 18:19, 10 November 2009 (UTC)[reply]
The distinction involves infinity - nothing involving infinity translates nicely to the physical world. --Tango (talk) 19:06, 10 November 2009 (UTC)[reply]
Think of "almost surely" as meaning something like this: pick a random real number in the interval [0,10] with uniform probability. If the random number is exactly π, they don't collide; otherwise, they do. Observations: 1) the probability of picking any specific number from an infinite collection is zero; 2) despite this, you can't exactly say the random number won't be π. The problem is "pick a random real number" is inherently imprecise. It's the same way with "run the experiment for an infinite amount of time". 69.228.171.150 (talk) 23:35, 10 November 2009 (UTC)[reply]

Why is nobody answering the question?

  • Random walks in 2 dimensions are recurrent.
  • Random walks in more than 2 dimensions are transient.

That means the probability of eventually coming within a distance ε of each other, no matter how small ε is, is 1, if they're in a plane. In 3-space, the probability is less than 1. (If it's a discrete random walk on integer points in 3 dimensions, with the usual assumptions (independence of steps, uniform distribution), then I think the probability they eventually meet again is something like 0.3.) I seem to recall that George Polya figured this out in about 1910 or something like that. Michael Hardy (talk) 04:18, 11 November 2009 (UTC)[reply]

If ε is finite, then ε^n represents a finite proportion of the total space in the room, no matter how many dimensions are involved, and thus the probability is 1. And if you make ε tend to 0 at the same time as the time tends to infinity, then the answer will depend on how the two are related.Ehrenkater (talk) 18:33, 11 November 2009 (UTC)[reply]
Since the OP had already assumed recurrence, I don't see how this is what the question was. Rather, I read the OP's question as dealing with the difference between "with probability 1" and "with surety". As such, the answers given above are the proper answers to the question. Taemyr (talk) 05:25, 11 November 2009 (UTC)[reply]
Besides, the OP said they walked in a room, which I take to mean a bounded set. I think this guarantees recurrence no matter the dimension. -- Meni Rosenfeld (talk) 06:09, 11 November 2009 (UTC)[reply]
Thanks Michael for the answer, but yeah, I was kinda after the difference between "with probability 1" vs. "with surety" as it applied to this real life scenario. Sorry for not being more explicit. But anyhow, all useful information I will keep with me. - Jarry1250 [Humorous? Discuss.] 17:52, 11 November 2009 (UTC)[reply]

Set of integers

Resolved

Is there a simple name (e.g. weird number, prime number, even prime number, colossally abundant number) for this set of integers: 1, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53... ? (Note that this is essentially the set of odd prime numbers, except that 1 is not a prime number. These numbers all have the property that they cannot be divided by any number except 1 without becoming a non-integer number.) 68.116.36.179 (talk) 18:32, 10 November 2009 (UTC)[reply]

Some people used to include 1 in the primes but it just causes trouble. I'm not sure why you have excluded 2. Dmcq (talk) 18:37, 10 November 2009 (UTC)[reply]
Also a nitpick; 7 divided by 7 equals 1. Taemyr (talk) 18:54, 10 November 2009 (UTC)[reply]

Well, 2 is the only even prime number so being an odd prime is anything but special. If P denotes the set of prime numbers then your set is ({ 1 } ∪ P) − { 2 }. So you seem to have substituted 1 for 2 in the set of primes. Why ought this new set have a special name? ~~ Dr Dec (Talk) ~~ 19:04, 10 November 2009 (UTC)[reply]

The phrase "odd prime" does come up quite often in number theory, since it is often desirable to exclude 2 (particularly when dealing with quadratics). Why anyone would want to include 1, though, I don't know. You could call that set "odd positive non-composite integers" if you wanted to, one being neither prime nor composite (and being unique among positive integers in having that property). --Tango (talk) 19:11, 10 November 2009 (UTC)[reply]
Thanks everyone, I think that that is indeed the closest name to what I wanted. I also saw it here: [2] (useful site, BTW). This site says they are the "odd noncomposite numbers". Re: the nitpick: OK, so I should have said "cannot be divided by any number except 1 or themselves without becoming a non-integer number." The reason I'm interested in having 1 in with the odd primes is because the number 1 shares some characteristics, but of course not all, with the other odd primes. I'm sure there are some applications for this - the reason I'm interested in this set is as a way to verify that a chemical equation cannot be reduced further. 68.116.36.179 (talk) 22:40, 10 November 2009 (UTC)[reply]
Then why are you excluding 2? I think you need to look at greatest common divisor - if the gcd is greater than 1, then you can cancel the common factor. --Tango (talk) 22:50, 10 November 2009 (UTC)[reply]
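For the chemical-equation check, Tango's suggestion amounts to taking the gcd of the whole coefficient list; a one-liner sketch (the example coefficients are made up):

  from math import gcd
  from functools import reduce

  coefficients = [4, 6, 10]                     # hypothetical balanced-equation coefficients
  g = reduce(gcd, coefficients)
  print(g)                                      # 2, so the equation can still be reduced
  print([c // g for c in coefficients])         # [2, 3, 5]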
Oh yes, you're right. I failed to think about this very hard. How do you put the resolved checkmark on this thing? 66.178.144.142 (talk) 02:42, 11 November 2009 (UTC)[reply]
I have checked "resolved" for you. --PST 03:20, 11 November 2009 (UTC)[reply]
Thanks PST. 66.178.144.142 (talk) 18:06, 11 November 2009 (UTC)[reply]

Short division

I am trying to help my niece with her homework but I cannot remember doing short division at school and can't make any sense of any of the guides I have found. Can someone work through an example with me? Take 4 into 67 - I can get as far as getting to 16 and then I am stumped how I get to the .75? --Cameron Scott (talk) 19:38, 10 November 2009 (UTC)[reply]

If I remember well you do: 4 into 67 is 16, then you go on with the remainder 3 = 67 − 64. You put a 0 after the 3 and a dot after the 16, and go on this way: 4 into 30 gets 7 &c. Check short division also. --pma (talk) 19:45, 10 November 2009 (UTC)[reply]

What does "&C" mean? I couldn't make any sense of the short division article - it might as well have been in Dutch. --Cameron Scott (talk) 19:52, 10 November 2009 (UTC)[reply]

&C is short for the borrowed Latin phrase et cetera. 69.228.171.150 (talk) 20:22, 10 November 2009 (UTC)[reply]

Let's say that you want to divide 956 by 4. Well you write out

   4 ) 9 5 6
Now how many times does 4 go into 9? Well, 4 goes into 9 twice with 1 left over, so we write a 2 above the 9 and put a little 1 in front of the 5 (like carrying in addition). So we have

       2
   4 ) 9 ¹5 6
Next, how many times does 4 go into 15? Well, 4 goes into 15 thrice with 3 left over, so we write a 3 above the 15 and put a little 3 in front of the 6. So we have

       2  3
   4 ) 9 ¹5 ³6
Finally, how many times does 4 go into 36? Well, it goes 9 times with none left over. So we write a 9 above the 36 and stop because the remainder was zero. This gives:

       2  3  9
   4 ) 9 ¹5 ³6
If the remainder weren't zero then we could have carried on by considering

   4 ) 9 ¹5 ³6 . 0 0 0 …
The net result is that 956/4 = 239. ~~ Dr Dec (Talk) ~~ 20:21, 10 November 2009 (UTC)[reply]
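The same procedure is easy to mechanise; here is a small Python sketch of short division, carrying the remainder from digit to digit (my own illustration, mirroring the worked example above):

  def short_division(dividend, divisor, decimal_places=2):
      # Divide digit by digit, carrying the remainder: 956/4 -> '239', 67/4 -> '16.75'.
      digits, remainder = [], 0
      for d in str(dividend):
          remainder = remainder * 10 + int(d)
          digits.append(str(remainder // divisor))
          remainder %= divisor
      result = "".join(digits).lstrip("0") or "0"
      if remainder:
          result += "."
          for _ in range(decimal_places):
              remainder *= 10
              result += str(remainder // divisor)
              remainder %= divisor
      return result

  print(short_division(956, 4))   # 239
  print(short_division(67, 4))    # 16.75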

Thanks, that's a bit clearer. While I remember doing long division at school, I have no recollection of short division. Between three (!!!!) adults, we finally worked it out. --Cameron Scott (talk) 20:26, 10 November 2009 (UTC)[reply]

I don't understand why you are trying to do division by hand. The only reason I can think of for doing division by hand is to check that the calculator is giving me the right answer. And to check that the calculator is giving me the right answer, I have to learn to do multiplication by hand (for example: A divide B = C. Check by multiplying B by C). So I cannot think of any reason to do division by hand. 203.41.81.1 (talk) 21:08, 10 November 2009 (UTC)[reply]

Unfortunately at school they have an annoying habit of telling you to find the answer a certain way, even if it is ultimately pointless. - Jarry1250 [Humorous? Discuss.] 21:19, 10 November 2009 (UTC)[reply]
Indeed, she has to show her workings to indicate how she's done it. --Cameron Scott (talk) 21:33, 10 November 2009 (UTC)[reply]

Why's it "annoying" or "pointless"? There are things you can prove by manual division that a calculator can't. For example, if you ask your calculator to calculate m/n and it returns the answer 0.123456789 then all you know is that m/n is 0.123456789 to nine decimal places (it might be 0.1234567891…, or 0.1234567886…, or infinitely many other numbers). If you do the division by hand you might notice that the remainders start to repeat after the 9 and so the actual number is 0.123456789 recurring. After practising the method for a while you get a feel for it and you can start to formulate some nice theorems. For example, what is the maximum possible period length of a repeating decimal (in the case of 13717421/111111111 = 0.123456789 recurring, the period length is nine)? Well, if we divide m by n then there are n possible remainders and so m/n as a repeating decimal can have a period of at most n digits. Prove that with a calculator! ~~ Dr Dec (Talk) ~~ 22:04, 10 November 2009 (UTC)[reply]
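That remainder argument translates directly into a period-finding routine; a short sketch of mine:

  def decimal_period(m, n):
      # Track the remainders of the division of m by n: once a remainder
      # repeats, the digits repeat.  Returns 0 if m/n terminates.
      r = m % n
      seen, position = {}, 0
      while r != 0 and r not in seen:
          seen[r] = position
          r = (r * 10) % n
          position += 1
      return 0 if r == 0 else position - seen[r]

  print(decimal_period(13717421, 111111111))   # 9
  print(decimal_period(1, 7))                  # 6
  print(decimal_period(1, 4))                  # 0 (terminates)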

A good apology for division by hand. But really? Blink? Staecker (talk) 00:50, 11 November 2009 (UTC)[reply]

Short division is not only a terrible excuse to name an obscure concept, but also meaningless and un-mathematical. As I have said, and shall continue to say, schools fail to give a proper mathematics education to their students these days (mathematicians do not find new ways for dividing one number into another - ultimately, this is non-mathematical). Proper mathematics occurs when students are asked to think deeply and creatively rather than memorize set methods to solve problems (as you will know, there is no shortage of open problems in mathematics, and new methods must be discovered to solve them - one cannot "copy" old methods). Additionally, many of these open problems are well beyond the comprehension of adults (who do not have university level mathematics education) - even formal mathematicians spend years on some of these problems (this is not to say that one needs mathematics education to do research - but it certainly helps). There is rarely a "formula" that can simply be applied (mathematics is not about formulaic approaches). The point is that although I would certainly discourage the teaching of "short division", I would encourage your niece to consider problems which require abstract thinking in relatively simple contexts. Although having an enormous amount of depth and breadth of knowledge is required before one can begin research in mathematics (nowadays), mathematical thinking can start at any point, and I would encourage your niece to consider this should she have the spare time. For instance, suppose I have a pair of numbers a and b. The GCD (Greatest Common Divisor) of a and b is the largest number g that divides both a and b. Examples include:

GCD(2, 5) = 1
GCD(3, 6) = 3
GCD(18, 45) = 9

Use the naive technique of long division to establish an algorithm by which one can determine the GCD of two numbers (the answer lies in the Euclidean algorithm, but try to solve it without looking at the article). As a simpler problem, what is the relation between the least common multiple and the greatest common divisor? Lastly, invent your own questions during your investigation of the GCD - a mathematician is also someone who is able to pose good questions, and solve them. Nevertheless, these problems do not illustrate that mathematics is about basic arithmetic only - mathematics has hardly anything to do with basic arithmetic (it is the schools which give the wrong impression). The article mathematics should give a reasonable overview as to what mathematicians research. Hope this helps (and remember to ask your niece if she is interested in solving the problems I posed!). --PST 03:17, 11 November 2009 (UTC)[reply]
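For anyone who wants to check their answer to that exercise afterwards, here is a minimal Python sketch of where the investigation leads (essentially the Euclidean algorithm, together with the gcd/lcm relation asked about):

  def gcd(a, b):
      # Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b).
      while b:
          a, b = b, a % b
      return a

  def lcm(a, b):
      # The relation in question: gcd(a, b) * lcm(a, b) == a * b.
      return a * b // gcd(a, b)

  print(gcd(18, 45), lcm(18, 45))   # 9 90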

I'm fairly certain that the point of teaching methods of doing arithmetic is not to train students to be mathematicians. It's to teach them the basic skills that will presumably help them in everyday life. The vast majority will never need to think about actual problems in mathematics, but they may very well need to do arithmetic without having access to a calculator. I don't know how much justification there is for long division these days since if you can break out some paper you probably also have access to a computer, but that doesn't change the basic idea, which is that these are general life skills. It's silly to say that there's no place for that, even if it's not preparing anyone for "real math". Rckrone (talk) 06:36, 11 November 2009 (UTC)[reply]
That is the problem. The curriculum should be focused on training future mathematicians (as is done for other subjects. I am sure that many aspects of science or history have no purpose in "daily life". Why are these subjects taught at (primary) school?). Mathematics is the center of intellectual ability - it encapsulates all forms of thinking from abstract to practical. --PST 07:31, 11 November 2009 (UTC)[reply]
I agree that it's also important to expose students to real math, since clearly some of them need to end up becoming mathematicians or physicists or whatever. And the same can be said for most of the more specialized subjects taught in school. But there are some general skills that are important to everyone which include arithmetic, reading/writing, and maybe some basic history and civics. Encouraging future mathematicians doesn't need to come at the expense of establishing those skills for everyone else, and shouldn't. Rckrone (talk) 07:58, 11 November 2009 (UTC)[reply]
I completely agree with you. However, basic arithmetic appears to be taught for approximately 7 years at school. Certainly, it could be taught within the span of 5 years at the very most (but I feel that 3 years is enough). If children are too young in the earlier years of schooling to learn quickly, then introducing them to games like chess would be appropriate (whence they could have fun and improve intellectually). I know that what I am proposing is being implemented at some special programs these days, but the point is that the curriculum is not only too slow for many intelligent students, but also very ineffective (arithmetic need not be taught for over half of a student's schooling). Furthermore, weight should not be put on establishing "life skills" - there should be some balance between actual mathematical thinking and arithmetic. Training students to follow set methods to solve problems should be done first, but not at the cost of "dumbing them down" by discouraging creativity. --PST 09:37, 11 November 2009 (UTC)[reply]
mathematicians do not find new ways for dividing one number into another - ultimately, this is non-mathematical Certainly most mathematicians don't, but I don't think it could be called non-mathematical. The question of the best way to multiply two numbers (which is closely related to dividing) interested the likes of Andrey Kolmogorov, who conjectured that the "schoolchild" method cannot be improved upon, only to be proven wrong by Anatolii Karatsuba. --196.210.152.43 (talk) 20:20, 15 November 2009 (UTC)[reply]


What's funny is a lot of people still don't know arithmetic after learning it for 7 years. StatisticsMan (talk) 18:44, 12 November 2009 (UTC)[reply]

When I was 7, I was very fond of multiplications indeed, and I must say that my ability was very highly estimated by the class and by the teacher. Unfortunately, one day she started the divisions, and my favourite topic became suddenly kind of completely out. I felt that somehow unfair, and as a consequence, never managed to like divisions. --pma (talk) 20:58, 14 November 2009 (UTC)[reply]


November 11

mathematics

can we differentiate log log sin x? —Preceding unsigned comment added by Jituinfo007 (talkcontribs) 03:58, 11 November 2009 (UTC)[reply]

See chain rule. You will need to apply it 3 times. --Tango (talk) 04:07, 11 November 2009 (UTC)[reply]
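For what it's worth, a computer-algebra check of that chain-rule computation (a sympy sketch of mine; the domain questions raised below are a separate matter):

  import sympy as sp

  x = sp.symbols('x')
  expr = sp.log(sp.log(sp.sin(x)))
  print(sp.simplify(sp.diff(expr, x)))   # cos(x)/(sin(x)*log(sin(x)))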
No. Since |sin(x)| <= 1, log(sin x) <= 0, so we can't differentiate it because the log of a negative number is not real and log(0) is undefined. However, we can differentiate it by treating it as a complex function. —Preceding unsigned comment added by Money is tight (talkcontribs) 04:11, 11 November 2009 (UTC)[reply]
Even in the reals, log log sin x denotes the function with empty domain by your argument, and we can certainly differentiate that: it equals its derivative. — Emil J. 12:36, 11 November 2009 (UTC)[reply]
Why does a function having an empty domain have a derivative equal to itself?--Shahab (talk) 15:20, 11 November 2009 (UTC)[reply]
The idea being that the derivative also has an empty domain and the same codomain. There's really only one such function, so they're equal. Rckrone (talk) 15:32, 11 November 2009 (UTC)[reply]
Why should the derivative have the same domain? For example the domain of log x and 1/x are different.-Shahab (talk) 15:37, 11 November 2009 (UTC)[reply]
The derivative of log x is not what you call 1/x, but the restriction of 1/x to (0,+∞). In order for a function to have a derivative at a point a, it has to be defined in a neighbourhood of a by definition, so the derivative of f cannot have a larger domain than (the interior of) the domain of f. f being differentiable means that the derivative exists at each point of its domain, hence the domain of a differentiable function equals the domain of its derivative. — Emil J. 15:56, 11 November 2009 (UTC)[reply]
(Edit Conflict) If f : X → R where X ⊆ R, for instance, the domain of the derivative of f is precisely the set of all points within the domain of f at which f is differentiable, and thus contained within the domain of f. Hence, if the domain of f is empty, the same must be true of the domain of the derivative of f. Your counterexample is false because you are presuming that there is some universal set which contains both the domain of the logarithm function and the domain of x ↦ 1/x - the domain of a function is simply an abstract set with no particular reference to any universal set (more precisely, one cannot infer a specific universal set from the domain alone). --PST 15:58, 11 November 2009 (UTC)[reply]

Obviously a function and its derivative don't generally have the same domain: sometimes the domain of the derivative is smaller than that of the original function (e.g. look at the square-root function, defined at 0, and its derivative, undefined at 0). And Shahab's example is a silly mistake, as has already been explained above. But this one particular function—the empty function—has the same domain as its derivative. Michael Hardy (talk) 19:28, 11 November 2009 (UTC)[reply]

Personally, I don't see any silly mistake in Shahab's post, but rather a misunderstanding about the vocabulary of the other posts. Or maybe I'm myself misunderstanding the meaning of "silly mistake". In any case it is quite common, and not only in elementary calculus, to call "domain" of certain analytic functions what is elsewhere called "natural domain" or "maximal domain" to avoid confusion. --pma (talk) 20:56, 11 November 2009 (UTC)[reply]
I think "silly mistake" just means a mistake that is obviously a mistake once it is pointed out to you and that the person making it could reasonably not have made. The former criterion is certainly satisfied, but the latter might not be. It is a very common mistake to forget that the definition of a function requires defining the domain and that two functions which are equal on the intersection of their domains are not necessarily the same function - it is a mistake worth avoiding. --Tango (talk) 22:45, 11 November 2009 (UTC)[reply]
I will probably be in the bad books of Michael Hardy with this post but I think that technicalities do not lie at the real heart of mathematics. It is certainly important that one appreciates subtle errors in proofs (which happen quite frequently), and that he/she understands the importance of precision, but one can be a mathematician without passing these criteria. Shahab has made a mistake which should be noted, but I do not see any advantages in the fact that an empty function equals its derivative, except for conventional conveniences. --PST 03:34, 12 November 2009 (UTC)[reply]
but the empty function also solves plenty of other equations as well... Will you acknowledge this at least..? ;-) --pma (talk) 04:40, 12 November 2009 (UTC)[reply]

You are right (as usual...I was hoping that I would not be caught :)). However, I would still think that classifying equations with an empty solution set is merely a conventional convenience - the empty function solves every equation on the empty domain (whether differential, algebraic or functional). Perhaps I shall acknowledge that the empty function is a trivial solution and nothing more. Luckily we are not living in finite group theory, in which case we would have a hard time in formulating a precise definition of what a "trivial solution/counterexample" is... :( --PST 07:12, 12 November 2009 (UTC)[reply]

The OP doesn't explicitly say that the function's domain is the real numbers; although it is implied by the use of the variable x. Consider the function ƒ : C → ℛ, where ℛ is the Riemann sphere, given by ƒ(z) = log(log(sin(z))). Then the differential is given by

ƒ′(z) = cos(z)/(sin(z)·log(sin(z)))

This has essential singularities for z ∈ {πn | n ∈ N}. Besides that I don't really see a problem. ~~ Dr Dec (Talk) ~~ 16:26, 12 November 2009 (UTC)[reply]

There are quite a few additional problems: this is a multivalued function (as is the original f), you forgot about poles in points π(n + 1/2) for integer n, and it does not have an essential singularity in the points πn you mention, it's much worse (there does not even exist a continuous branch defined on a punctured neighbourhood of any of these points). — Emil J. 17:02, 12 November 2009 (UTC)[reply]
You're right: I forgot about the poles you mention; thanks for pointing that out. Multivaluedness isn't a problem; if it were then complex analysis would be one big problem. See principal values, branch cuts, etc. Are the points z ∈ {πn | n ∈ N} branch points? Even if it is a mix of poles, essential singularities, and branch points; that has to be better than the anti-derivative being a function of empty this and empty that. ~~ Dr Dec (Talk) ~~ 17:29, 12 November 2009 (UTC)[reply]
(You mean Z rather than N. Yes, they are; the function behaves like 1/(z log z) for z → 0, and similarly for the other points, hence it inherits all branching problems of log z there.) You seem to be mixing things up. Money is tight wrote in his post that one can differentiate the function as a complex function, but not as a real function. I corrected him by pointing out that the real function, being empty, is also differentiable, and a rather off-topic discussion on what is the domain of a derivative ensued. Nobody ever claimed here that the complex version of the function or its derivative is empty, only the real version is. — Emil J. 18:07, 12 November 2009 (UTC)[reply]
You're right again: I did mean Z instead of N. I was thinking about the log function over the reals for too long and kind of got stuck on the log function working on the positive real numbers (hence positive integers). Thanks for pointing that out. I'm not getting mixed up though. I'm trying to answer the OP's question. The posts above seem to have shifted from the original question, and there seems to be an argumentative feel to it all. That's why I didn't indent my post: it's a fresh take. Let's just consider the function ƒ : CC ∪ {∞} and there are less problems. Remember that our posts are not intended for ourselves; they ought to be directed at the OP. S/he was no doubt lost in this argumentative soup. ~~ Dr Dec (Talk) ~~ 19:33, 12 November 2009 (UTC)[reply]

Unrestricted Grammars

What kind of sequence would a(n) need to be so that there is no Turing machine with symbols {0, 1} that accepts only strings 0^a(n) for all n? 66.202.66.78 (talk) 11:03, 11 November 2009 (UTC)[reply]

It depends on what you mean by "accepts only". If you mean that the machine always returns 0 when a string is in the sequence, and always returns 1 when the string is not in the sequence, then the sequences that have that property are the ones that enumerate decidable sets. If you mean that the machine always returns 0 for a string in the sequence and never halts for any string not in the sequence, the sequences with that property are the r.e. sequences. So the answer is either that a(n) does not enumerate a decidable set, or that a(n) is not an r.e. sequence, depending on what you meant. — Carl (CBM · talk) 12:35, 11 November 2009 (UTC)[reply]

Circle Theorem question

The angle subtended by an arc at the centre is double the angle subtended by it at any point on the remaining part of the circle.

This is the theorem. Suppose the figure is like this: we construct CO and produce it to P. Then, angle OCA = angle OAC (OC = OA and converse of isosceles triangle theorem) and thus angle OCA = 1/2 angle AOP (exterior angle theorem). Similarly, angle OCB = 1/2 angle BOP. Therefore, angle ACB = 1/2 angle AOB. But if the figure is like this, how do we prove it? Srinivas 14:27, 11 November 2009 (UTC)

We can do the same thing: extend CO to a point P on the other side of the circle. Using the same arguments angle OCA = 1/2 angle AOP, and angle OCB = 1/2 angle BOP. This time instead of adding these angles together, we subtract: angle OCA - angle OCB = angle ACB, etc. Rckrone (talk) 15:12, 11 November 2009 (UTC)[reply]

You can prove it using complex numbers. Start with a unit circle on the complex plane. The origin is the centre of the circle. The point C is at the location 1+0i; choose points A and B randomly on the unit circle. Then calculate the angles. 203.41.81.1 (talk) 21:41, 12 November 2009 (UTC)[reply]
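A numerical sanity check of that complex-number setup (an illustrative Python sketch, not a proof; the random angle ranges are my own choice, picked so that C does not lie on the arc AB):

  import cmath
  import random

  C = 1 + 0j                       # C on the unit circle; the centre O is the origin
  for _ in range(3):
      alpha = random.uniform(0.2, 1.5)
      beta = alpha + random.uniform(0.2, 1.5)
      A, B = cmath.exp(1j * alpha), cmath.exp(1j * beta)
      angle_AOB = abs(cmath.phase(A / B))                  # angle subtended at the centre
      angle_ACB = abs(cmath.phase((A - C) / (B - C)))      # angle subtended at C
      print(round(angle_AOB, 9), round(2 * angle_ACB, 9))  # the two printed values agree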

I am not that good to understand all that, 203.41.81.1, so what Rckrone said is much easier for me. Thanks Rckrone. Srinivas 06:18, 14 November 2009 (UTC)
You put forward a problem in Geometry. There are several methods of solving the problem. One method is to use Complex Number/Euclidean vector. It will solve your problem. But if what you want is a Geometry only method for solving your problem, then you need to state so in your description of your problem.122.107.207.98 (talk) 00:28, 15 November 2009 (UTC)[reply]

November 12

November 13

Cartwright's Theorem

Hi Math Deskers. I got a question this afternoon from my mom asking about the importance and significance of Cartwright's theorem. While we have an article about Mary Cartwright, the information is pretty sparse on what the theorem is and why it is relevant; and the main article is currently redlinked. Is this theorem notable enough for its own article? Can an expert help create that article? Nimur (talk) 00:34, 13 November 2009 (UTC)[reply]

See this link, perhaps. --PST 00:55, 13 November 2009 (UTC)[reply]
Not being a mathematician, this function theory stuff is a bit dense... what is an analytic and p-valent function and why would it be decomposed in what appears to be a z transform? I think that this is actually defined in the chart linked above, but I'm trying to parse out some meaning from this and need some help understanding the terms. How can the function be multi-valued if it is defined by the series decomposition? Nimur (talk) 02:19, 13 November 2009 (UTC)[reply]
An analytic function is (essentially) one that can be written as a power series in z as described. The functions in question are not multi-valued; I'm not sure where you got that idea from. The term "p-valent" is defined in that pdf, immediately below the pictures: it means (essentially) that f takes each value no more than p times on the disc |z|<1. So for example, if f was 2-valent, then we could have f(0)=0 and f(1/2)=0 but not f(i/2)=0 as well. Algebraist 02:30, 13 November 2009 (UTC)[reply]
Ah. A crucial point I was missing is that the p-valent definition means f yields the same value, but for a different input, z. I think I got it. Nimur (talk) 14:23, 13 November 2009 (UTC)[reply]

finding homomorpisms

My first question is whether there exists an onto homomorphism between S3 and Z/3? I believe there is none. My second question is how to find all homomorphisms between these two groups. Thanks-Shahab (talk) 06:08, 13 November 2009 (UTC)[reply]

Never mind the first question. I am only interested in the second.-Shahab (talk) 06:39, 13 November 2009 (UTC)[reply]
Suppose f : S3 → Z/3 is a non-trivial homomorphism. The image of f must be a subgroup of Z/3. Lagrange's theorem and the fact that f is non-trivial imply that f is necessarily surjective.
The following (hidden) question may yield some information about f. If you wish to use the information that I have given you to think about f, you may keep the following question hidden (only click the "show" button if you feel that no further attempts at your question will come to any amount).
(Click the "show" button at the right to see the question or the "hide" button to hide it.)
Does there exist a normal subgroup of S3 with index three? The kernel of f necessarily satisfies this property.
The answer to the above question is hidden below, in case you want to think about the question first.
(Click the "show" button at the right to see the answer to the above (initially hidden) question or the "hide" button to hide it.)
There are three Sylow 2-subgroups of S3; the answer is therefore no. In particular, we arrive at a contradiction in assuming that f was non-trivial.
Hope this helps. --PST 07:16, 13 November 2009 (UTC)[reply]
By the way, we should consider using the "hide and show feature" for answering questions. It allows us to not only hide the answer, but also parts of the method which the OP might like to work out on his/her own. --PST 07:43, 13 November 2009 (UTC)[reply]
Thanks for explaining in this unique way. I had arrived at the conclusion on my own as well, by figuring that there are no order-2 normal subgroups and that the kernel would necessarily have to be of order 2 by the 1st isomorphism theorem.-Shahab (talk) 08:44, 13 November 2009 (UTC)[reply]
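For anyone who wants to double-check this conclusion computationally, a brute-force enumeration confirms it (a sketch in Python; representing S3 as the permutations of {0, 1, 2} is just one convenient encoding):

```python
from itertools import permutations, product

# S3 as the six permutations of (0, 1, 2); Z/3 as {0, 1, 2} with addition mod 3.
S3 = list(permutations(range(3)))

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

# Test every one of the 3**6 maps f : S3 -> Z/3 for the homomorphism property.
homs = []
for values in product(range(3), repeat=len(S3)):
    f = dict(zip(S3, values))
    if all(f[compose(p, q)] == (f[p] + f[q]) % 3 for p in S3 for q in S3):
        homs.append(f)

print(len(homs))                                  # 1: only the trivial homomorphism
print(all(v == 0 for v in homs[0].values()))      # True: it sends everything to 0
```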

Which concept has the most definitions?

Gian-Carlo Rota wrote on a number of occasions that of all concepts in mathematics, the one with the largest number of characterizations is that of a matroid. But I wonder if the concept of algorithm may be a rival for that distinction? Michael Hardy (talk) 06:30, 13 November 2009 (UTC)[reply]

I may not understand your question, but how would you formally define an equivalence between two particular characterizations? --PST 07:46, 13 November 2009 (UTC)[reply]

Just that they characterize the same thing. For example:

  • a parabola is the set of points in a plane equidistant from the focus and the directrix, or
  • a parabola is the graph of a quadratic polynomial, or
  • a parabola is the intersection of a right circular cone with a plane parallel to one of the generators of the cone.

Those are equivalent. Any of the three could be taken to be the definition and one would get the same theorems on parabolas.

Rota was saying the list of bullet points one could list for matroid would be larger than for any other mathematical concept. My suspicion is that for algorithm one might be able to make a longer list. Thus an algorithm is

  • That which is done by Turing machines, or
  • What one does when one evaluates partial recursive functions, or
  • any of a long list of other proposed characterizations.

Michael Hardy (talk) 18:18, 13 November 2009 (UTC)[reply]

Try duality (mathematics) for a non trivial something that may be more what you want Dmcq (talk) 10:54, 13 November 2009 (UTC)[reply]

I wasn't looking for the concept with the most characterizations as much as I was wondering how algorithm would do in that competition. Michael Hardy (talk) 18:22, 13 November 2009 (UTC)[reply]

PS: Lately I keep seeing people use "non" as if it were a stand-alone word rather than a prefix. And no longer only within Wikipedia. Have they stopped teaching that there's such a thing as a prefix? Michael Hardy (talk) 18:23, 13 November 2009 (UTC)[reply]
At least be glad it's not spelt as "none". Or usually not: a fairly smart hotel near where I live used to have a sign saying that its restaurant was "open to none residents". AndrewWTaylor (talk) 18:53, 13 November 2009 (UTC)[reply]
It's not unreasonable to claim that every programming language gives a different characterization of algorithm, which would allow it to beat matroid hands down. Those probably aren't as interestingly different as some of the definitions of matroid are, though. Algebraist 20:30, 13 November 2009 (UTC)[reply]
Actually I think algorithm is missing even a single precise definition. Programming languages don't define algorithms; they define programs. An algorithm, whatever it is, is a higher level of abstraction than that. Yiannis Moschovakis has a candidate definition, involving something called recursors, but I was never fully convinced that it succeeds in capturing the informal notion, though this could be because I don't fully understand what informal notion it's trying to capture. --Trovatore (talk) 20:38, 13 November 2009 (UTC)[reply]
I doubt there's any clear single answer to the original question, but one concept that has a lot of equivalent-but-not-obviously-equivalent characterizations is that of an amenable group. Another is zero sharp. --Trovatore (talk) 20:55, 13 November 2009 (UTC)[reply]
Also, the complex field has a nice carnet of characterizations, including Gelfand-Mazur theorem. --pma (talk) 21:15, 13 November 2009 (UTC)[reply]
How does that work as a characterization? It says that the only complex Banach division algebra is the complex numbers, so it uses the complex numbers to characterize themselves. Is it the case, say, that C is the only field K such that K is the only Banach division algebra over K (whatever a Banach algebra over a general field may be)? Algebraist 22:11, 13 November 2009 (UTC)[reply]
I had the same objection for a moment, and maybe still now, but in fact I thought that a characterization needn't be a definition, if we agree that a characterization of X is just a property satisfied by X and only by X. If that property uses the definition of X, it seems to me that it's a characterization as well - but I admit I've never met a precise definition of what a characterization should be. --pma (talk) 11:06, 14 November 2009 (UTC)[reply]
That seems a rather perverse notion of characterization. It allows me to use "is C" as a characterization of C. Algebraist 13:36, 14 November 2009 (UTC)[reply]
The trivial characterization, why. --pma (talk) 17:49, 14 November 2009 (UTC)[reply]
Another example. I suppose you agree that the completion of a metric space is characterized by a certain universal property. But, in the case of the metric completion of Q, there is the small detail that we already need R to define what a metric is, so again we are naturally using an object to characterize itself. You may of course save your definition of "characterization" by saying that R as the codomain of a distance function is not the same as R as the metric completion of Q (for the former is an ordered group, and the latter a metric space). But I think it is simpler to agree that a characterization of X is just a property satisfied uniquely by X (with the remark that there are interesting and less interesting characterizations), even because the restricted sense of "characterization" makes it synonymous with "definition". --pma (talk) 09:45, 15 November 2009 (UTC)[reply]


Sorry, I misunderstood the question. Yes, there are certainly a lot of ways of characterizing an algorithm. It's pretty amazing how simple one can make things and still end up with a universal Turing machine, or whatever other equivalent is one's main choice. It's often more interesting just to find something that isn't universal, like regular expressions. Dmcq (talk) 22:05, 13 November 2009 (UTC)[reply]

I am not sure what a "concept" is, but the axiom of choice is the thing for which I know the most equivalent statements. After this, results in Reverse mathematics give large numbers of equivalent ways of axiomatizing various subsystems of second-order arithmetic. — Carl (CBM · talk) 12:59, 14 November 2009 (UTC)[reply]

Regarding algorithms, you want to use the word "computable function" instead, since recursion theorists maintain a strange distinction between these. There are dozens of equivalent definitions of the class of computable functions. — Carl (CBM · talk) 13:04, 14 November 2009 (UTC)[reply]

There's a clear distinction between algorithms and computable functions: a computable function is just a function that happens to be computable, while an algorithm is an actual way to compute it. For example, natural multiplication is just one function, but there are a great many algorithms for it. Algebraist 13:36, 14 November 2009 (UTC)[reply]
Indeed, but there is no accepted definition of what an algorithm is or when two algorithms are "the same", so the literal answer to "how many characterizations of 'algorithm' exist?" is "zero". — Carl (CBM · talk) 15:37, 14 November 2009 (UTC)[reply]
Yes, Trovatore pointed that out already. But if you're going to replace "algorithm" with some more precise notion, then "program" seems closer than "computable function" (though closer in the opposite direction: there are many programs for each algorithm, and many algorithms for each function). Algebraist 17:22, 14 November 2009 (UTC)[reply]
Then it would not be clear there are multiple definitions for the same thing. I can define what I mean by a C program, and what I mean by a PERL program, but I don't think these count as different characterizations of "programs" because I am talking about a different set of programs in each case. The original question asked for a single class with multiple characterizations; the class of computable partial functions from the natural numbers to themselves is an example of that. — Carl (CBM · talk) 17:46, 14 November 2009 (UTC)[reply]
There was a discussion a bit further up about proof identity, the question of when two mathematical proofs are the same. The Curry-Howard correspondence explains that in a sense, proofs (at least in constructive logic) and algorithms are the same thing. So proof identity and algorithm identity are basically the same question. 69.228.171.150 (talk) 01:00, 15 November 2009 (UTC)[reply]
Won't that be just the equivalence for algorithms in λ calculus (which is what intuitionistic proofs correspond to under the Curry–Howard isomorphism)? How would it help with algorithms in Prolog or in x86 assembly language? — Carl (CBM · talk) 02:08, 15 November 2009 (UTC)[reply]
Hmm, good point. Though, (some meaningful subset of) x86 asm programs correspond to proofs in Hoare logic and I think there are some theorems showing how to interpret Hoare logic in type theory. I guess though that this does not exhaust the entire set of possible x86 programs. For example, there are functions expressible in x86 asm that are provably total, but only by stronger axioms than those of type theory. So type theory could not tell you whether two such algorithms were the same. 69.228.171.150 (talk) 03:24, 15 November 2009 (UTC)[reply]
Actually I don't know what to think of the premise. The paper on proof identity that I linked to was mostly about proofs in linear logic rather than intuitionistic logic, but I get the impression that the Curry-Howard isomorphism is still supposed to apply to it somehow. 69.228.171.150 (talk) 03:41, 15 November 2009 (UTC)[reply]
  • Looking at the very simplest areas of mathematics reveals concepts with numerous characterisations. For example, take multiplication: it can be thought of as a function, an operator, the operation of a magma, the second hyper-operator, or the operation of a group, among many other things. I'm only an amateur, but I'd have thought simple objects like functions would have the most characterisations. (The mess of the function (mathematics) article is testament to that.) —Anonymous DissidentTalk 13:57, 14 November 2009 (UTC)[reply]

Largest parliament buildings

This may seem too architecture-related for this page, but I am looking for statistics; after much searching on-line, I haven't been able to find a single list of the largest parliament buildings in the world, either by floor space or by volume. I know that the biggest (and heaviest) is the Palace of the Parliament in Bucharest, but I'd like information on at least the top-five to top-ten. If the source is reliable, that would be a bonus. :-) Waltham, The Duke of 07:13, 13 November 2009 (UTC)[reply]

You want WP:RD/M, for miscellaneous questions, or maybe WP:RD/H, for humanities. This is WP:RD/MA, for questions about mathematics. 69.228.171.150 (talk) 09:18, 13 November 2009 (UTC)[reply]
I would guess that they posted here in case some calculations are needed to determine the volumes. StuRat (talk) 12:15, 13 November 2009 (UTC)[reply]
Nobody can say I didn't try... :-) I mentioned that I posted here because I seek information of a statistical nature, which has nothing to do with artistic criteria, and the Reference desk main page says that the Mathematics section is for "Mathematics, geometry, probability, and statistics". Therefore, I think I am in the right section. If, however, there are further objections to my asking this question here, I'll go to Humanities.
StuRat, asking people to calculate the volumes themselves, even if it weren't an unreasonable request in terms of the time and labour that would have to go into the task, would break our policies on original research. The results might satisfy my own curiosity, but I was hoping to use the information in an article. Waltham, The Duke of 01:55, 15 November 2009 (UTC)[reply]

Question about alternating sum of the product of a binomial coefficient and a polynomial function

On the article "Binomial coefficient" here on Wikipedia, what are the proofs for equations 13a, 13b, and 13c? --199.191.74.20 (talk) 16:35, 13 November 2009 (UTC)[reply]

There are proof sketches for all of these directly in the article. Did you try to work them out? — Emil J. 16:46, 13 November 2009 (UTC)[reply]
BTW, here's the link for convenience: binomial coefficient#Series involving binomial coefficients. — Emil J. 16:48, 13 November 2009 (UTC)[reply]
A simple way to see (13a) is to observe that, for a polynomial P(x) of degree r, the polynomial P(x) - P(x+1) has degree less than or equal to r-1. If you do this operation twice, you get P(x) - 2P(x+1) + P(x+2), of degree less than or equal to r-2, and so on: the n-th iteration gives (check) the sum of (-1)^j C(n,j) P(x+j) for j = 0, ..., n, of degree less than or equal to r-n. So it's necessarily 0 if n>r; in particular for x=0 it gives you (13a).--pma (talk) 21:45, 13 November 2009 (UTC)[reply]
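For the OP: pma's argument is easy to test numerically. A small Python sketch (the degree-3 polynomial is an arbitrary choice made only for this illustration):

```python
from math import comb

def P(x):
    # An arbitrary degree-3 polynomial, chosen only for this illustration.
    return 5*x**3 - 2*x**2 + 7*x - 1

# Each difference step lowers the degree by at least one, so the n-th iterated
# difference vanishes once n exceeds deg P.  Evaluated at x = 0 it is the
# alternating sum  sum_j (-1)**j * C(n, j) * P(j).
for n in range(1, 7):
    print(n, sum((-1)**j * comb(n, j) * P(j) for j in range(n + 1)))
    # expect 0 exactly for n = 4, 5, 6 (i.e. n > 3)
```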
One way of proving it is to do this sum on each term of degree m of the polynomial and factor the coefficient out to get:
Notice that that is (n-j)^m. To prove the formulae in question, you can try proving that where the integers m and n , where is the Pochhammer symbol representing a falling factorial product, and where ,
One way I've seen is to define so that ; to use the rule that (notice the exponent in the first one which didn't have a Pochhammer symbol and the Pochhammer symbol in the second one); to use the rule that , where is a Stirling number of the first kind; and to use the rule that (because the mth term of the sum is ).
Now, using the rules and definitions above, use strong induction to prove (1). Note that this will be a form of strong induction where there is an upper bound as well as the lower bound of 0; essentially you will use:
1.) Prove that (1) is true for .
2.) Next, assuming that (1) is true for where (but u is not equal to n), show that (1) is also true for u+1, which can be less than or equal to n.
For both steps, you will start with the function , and use the rules above to get the unexpanded binomial form; you can then substitute 1 for x to prove that case for F(m). Using the second step, you know that (1) will result in 0 for all , so you should start with and use the rules regarding the Stirling numbers to split into 2 sums, changing the one that can be simplified into the unexpanded binomial form (left) and manipulating the other one (right) so it will contain as a sub-sum something very close to , which will encompass all previous cases from 0 to u. Substitute 1 for x, and watch the one on the right become exactly and go to 0 for each case, due to our inductive hypothesis and the fact that 0 to any power greater than 0 is 0, and watch the left one become .
If you want me to write it out, then I will. Other than that, I hope I at least gave you a good starting point to prove it on your own using a form of strong induction. --Cornince (talk) 04:22, 14 November 2009 (UTC)[reply]
Also, it seems like F(m) would do better in the binomial coefficient article, because it is not specific to polynomials, but showing a general alternating binomial sum. —Preceding unsigned comment added by Cornince (talkcontribs) 04:33, 14 November 2009 (UTC)[reply]
And, as I see it, the relevant information that the OP should retain is: (1) the expansion (I-T)^n = sum over k of (-1)^k C(n,k) T^k, of course, also holds in any commutative ring, for instance when T is a matrix or a linear endomorphism on a vector space. (2) If T is the translation operator taking the function f(x) to the function f(x+1), then T^k takes f(x) to f(x+k), and the expansion of (I-T)^n on a function f evaluated at the point x is the sum over k of (-1)^k C(n,k) f(x+k).
Moreover, I-T is a "finite difference operator" (unfortunately often denoted Δ), and (I-T)^n kills polynomials of degree less than n in the same way that the n-th derivative does (see my post above). --pma (talk) 07:53, 14 November 2009 (UTC)[reply]
That is true, and makes it a somewhat simpler method than doing it algebraically with induction every time. Still, I understand that some of us like that good hard method. :) We're showing it from different angles. --Cornince (talk) 13:49, 14 November 2009 (UTC)[reply]
I'm too among the ones who like hard methods! But in this case, I had the impression that the OP would have liked to start with the simpler things ;) --pma (talk) 18:04, 14 November 2009 (UTC)[reply]
edited my post for error correction. --Cornince (talk) 13:49, 14 November 2009 (UTC)[reply]

November 14

what is maths —Preceding unsigned comment added by 124.43.90.158 (talk) 08:29, 14 November 2009 (UTC)[reply]

2=2= —Preceding unsigned comment added by 124.43.90.158 (talk) 08:30, 14 November 2009 (UTC)[reply]

See the article mathematics and in particular, Mathematics#Common_misconceptions (this particular section of the article illustrates that mathematicians do not care about basics like 2 = 2). For particular "tastes" as to what mathematics is, see the high quality articles vector space, group (mathematics), ring (mathematics) and set theory (set theory being the most important of all). The reason people think mathematicians care about 2 = 2 (and such trivial equations, formulas, or basic arithmetic) is that they think mathematics has not advanced since the time of the Greeks (whereas they know that science has advanced since that time and discoveries are made every day). These people are terribly incorrect and obviously do not understand the significance of time. Please consider that mathematicians are intelligent (nothing to do with "human calculators") people who research abstract concepts - that is, the purpose of mathematics is not only for daily life. Abstraction subsumes objects which do not have any direct connection with our universe, as well as objects embedded within higher-dimensional analogues of our universe. Have you ever considered the idea of how higher dimensional spheres wrap around lower dimensional spheres? This is an example of mathematics (see homotopy groups of spheres). --PST 09:24, 14 November 2009 (UTC)[reply]

Mathematics is the application of logical thinking in the study of games which have non-contradictory rules. The word game is used in the most general sense. 122.107.207.98 (talk) 00:35, 15 November 2009 (UTC)[reply]

Maybe the article titled abstract structure is also relevant here. Michael Hardy (talk) 16:52, 15 November 2009 (UTC)[reply]

tessalation

tesselation —Preceding unsigned comment added by 124.43.90.158 (talk) 08:35, 14 November 2009 (UTC)[reply]

There will be no responses to this "word" due to its lack of clarity as a "question". Unless you illustrate that this question was asked in good faith, it may also be considered vandalism (and consequently, be removed). --PST 09:12, 14 November 2009 (UTC)[reply]
Point-set topologist: The burden of proof does not lie with the person making the post; it lies with those assuming a lack of good faith. See WP:AGF. Also, the original post does not constitute vandalism. See WP:VAN, and its subsections WP:VAN#Types_of_vandalism and WP:VAN#What_is_not_vandalism. Finally, I recommend you read WP:BITE. ~~ Dr Dec (Talk) ~~ 23:31, 14 November 2009 (UTC)[reply]
There are two reasons behind my assertion that the OP's question was not asked in good faith. Firstly, the OP posted a question above ("what is maths?") in which he apparently sought to ridicule mathematics (he asked a question and answered his own question with "2=2="). It is quite clear that this was the case (nevertheless I still answered the question in a civil manner). Secondly, according to the guidelines at the top, the OP should in general give some importance to being specific in his/her question. The response given below by Bo Jacoby is the best response, but I feel that my judgement of the nature of the question was correct. Although I try 95% of the time to assume good faith, this particular instance quite clearly suggests the OP is either trying to waste our time or trying to ridicule mathematics. I do not see the point in continuing this thread since I remember clearly the reasons for the most recent dispute between us (you appear to have forgotten them). Consequently, I recommend that you do not reply to this post (you may reply, but I shall not respond). Although I agree that my post was on the borderline of the policies above, I should repeat that, in my opinion, my judgement was perfectly correct - we should not give importance to people who do not wish to cooperate. As you will notice, my answer to the above question ("what is maths?") (by the same OP) was perfectly appropriate - the only reason I responded differently in this instance was because it was clearly not asked in good faith. Please interpret policies appropriately or do not mention them - WP:AGF asks us to assume good faith when unsure. However, I was sure in this particular instance. --PST 03:28, 15 November 2009 (UTC)[reply]

See the article Tessellation. Bo Jacoby (talk) 09:49, 14 November 2009 (UTC).[reply]

When written in capitals as 'TESSELATION' every letter has a symmetry and most can be tesselated when represented using unit squares. You can't fill the gap in 'A' though. Perhaps I'm wittering. No signal in tends to lead to noise out. Dmcq (talk) 12:28, 14 November 2009 (UTC)[reply]

Symmetric difference

How to prove A∊P(M)∧B∊P(M)⇒(A∆B)∊P(M), where A, B, and M are sets? --84.61.179.99 (talk) 15:12, 14 November 2009 (UTC)[reply]

A∪B∊P(M), and A∆B⊆A∪B...
Damnit, I've said too much. I was supposed to like, ask what you have done with this problem before starting to give you some, like, vague direction, without full-out giving you all the answers, as I've done above.
I'm a failure. :( --COVIZAPIBETEFOKY (talk) 16:03, 14 November 2009 (UTC)[reply]
Is there anything to prove? What's your definition of A∆B? (I wonder if there is a definition of A∆B for which that implication is not obvious). --pma (talk) 20:36, 14 November 2009 (UTC)[reply]
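If it helps to see the hint in action, here is a toy example in Python (the particular sets A, B and M are arbitrary choices for the illustration):

```python
M = set(range(10))          # the ambient set
A = {1, 2, 3, 4}            # A ∈ P(M)
B = {3, 4, 5}               # B ∈ P(M)

sym_diff = A ^ B            # A∆B = (A ∪ B) \ (A ∩ B)
print(sym_diff)             # {1, 2, 5}
print(sym_diff <= A | B)    # True: A∆B ⊆ A∪B
print(sym_diff <= M)        # True, i.e. A∆B ∈ P(M)
```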

Fractal complexity

Whence come the complex, feature-rich properties of 2- and 3-dimensional fractals? A good example might be the screenshots on this page, but let's restrict ourselves to the 2d Mandelbrot set, governed by the rule z → z^2 + c.

Is it the properties of the complex numbers that cause the complex structure, or the rule itself? A multiplication and an addition don't seem to be a particularly powerful transformation, but I've not seen what the Mandelbrot set looks like under a small number of iterations; while it starts off simple, a modification is added each time, and the visual representation of the set becomes increasingly complex.

It just seems so arbitrary. There is obviously reflectional symmetry about the x-axis, but that's really the only thing. Could we perhaps see an example of a function such as graphed over increasing iterations if treated like a mapping? —Preceding unsigned comment added by 94.171.225.236 (talk) 19:55, 14 November 2009 (UTC)[reply]

You might like the article chaos theory. 69.228.171.150 (talk) 00:57, 15 November 2009 (UTC)[reply]
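Regarding the last request: one way to watch the set take shape is to colour each point of the plane by whether the iteration z → z^2 + c has escaped after a given number of steps. A rough text-mode sketch in Python (the grid, the escape radius 2 and the particular iteration counts are arbitrary choices made for this illustration):

```python
def in_set_after(c, iterations):
    """Crude membership test: has |z| stayed <= 2 after the given number of
    iterations of z -> z**2 + c (starting from z = 0)?"""
    z = 0
    for _ in range(iterations):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Text-mode picture of the region [-2, 1] x [-1.2, 1.2]; the surviving region
# shrinks toward the familiar cardioid-with-bulbs as the iteration cap rises,
# and the boundary grows more filigree.
for iterations in (1, 3, 10, 50):
    print(f"--- {iterations} iteration(s) ---")
    for row in range(24):
        y = 1.2 - row * 0.1
        line = ""
        for col in range(60):
            x = -2 + col * 0.05
            line += "#" if in_set_after(complex(x, y), iterations) else "."
        print(line)
```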

November 15

Equivalence of Vector Norms

Studying for numerics, I came across the problem of showing that the 1-, 2- and ∞-norms are all equivalent as vector norms on an n-dimensional space. I have shown all the other inequalities except ‖x‖1 ≤ √n ‖x‖2 for a vector x ∈ R^n. I have tried all the algebraic "tricks" I know but to no avail. A push in the right direction will be appreciated. Thanks!75.171.178.10 (talk) 00:57, 15 November 2009 (UTC)[reply]

What do you mean by "equivalent"? 69.228.171.150 (talk) 03:44, 15 November 2009 (UTC)[reply]
Well, if you show that ‖·‖1 and ‖·‖∞ are equivalent and that ‖·‖∞ and ‖·‖2 are equivalent, then it follows that ‖·‖1 and ‖·‖2 are equivalent. Your definition is that two norms r and s are equivalent if there exist constants a and b such that a·s(x) ≤ r(x) ≤ b·s(x) for all x, right? So, can't you just use the constants from the other two equivalences to get the constants for the third? StatisticsMan (talk) 04:37, 15 November 2009 (UTC)[reply]
Here is what I mean. You do not need to show ‖x‖1 ≤ √n ‖x‖2. You just need to show there is some constant such that this type of inequality is true. But, you said you have already shown the rest. So, you have probably shown ‖x‖1 ≤ n ‖x‖∞ and ‖x‖∞ ≤ ‖x‖2. Thus, ‖x‖1 ≤ n ‖x‖2. —Preceding unsigned comment added by StatisticsMan (talkcontribs) 04:45, 15 November 2009 (UTC)[reply]

You are right, and I should have worded the question carefully. The question is not just to show that they are equivalent (where any constant is okay) but rather to show the inequalities with specific constants. So here is my question. How do I show that ‖x‖1 ≤ √n ‖x‖2 for a vector x ∈ R^n? Thanks!75.171.178.10 (talk) 04:53, 15 November 2009 (UTC)[reply]

It's just the Cauchy-Schwarz inequality, which also tells you that √n is the best (= smallest) constant. Apply it to the scalar product of (|x1|,..,|xn|) and (1,..,1). More generally, notice that ‖x‖p ≤ ‖x‖q always holds true for all x and all 1 ≤ q ≤ p ≤ ∞. To get the other inequality, with the best constant n^(1/q - 1/p), use analogously the Hölder inequality. Notice also that all norms in a finite dimensional space are equivalent, so as SM remarks, things are simpler if you do not care about optimality of constants. --pma (talk) 07:11, 15 November 2009 (UTC)[reply]
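A quick numerical sanity check of the inequality and of the sharpness of √n (a Python sketch; the random test vectors are, of course, arbitrary):

```python
import math
import random

random.seed(1)
for _ in range(5):
    n = random.randint(2, 8)
    x = [random.uniform(-10, 10) for _ in range(n)]
    norm1 = sum(abs(t) for t in x)
    norm2 = math.sqrt(sum(t * t for t in x))
    # Cauchy-Schwarz applied to (|x1|, ..., |xn|) and (1, ..., 1):
    print(norm1 <= math.sqrt(n) * norm2 + 1e-12)   # True every time

# The constant sqrt(n) is attained at the all-ones vector, so it is best possible:
n, x = 5, [1.0] * 5
print(sum(abs(t) for t in x), math.sqrt(n) * math.sqrt(sum(t * t for t in x)))  # both 5, up to rounding
```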

what do you call this matrix?

Resolved

I have a matrix N = M + M^T. Is there a name for that matrix so I can search for articles on it? I am interested in one that is even more special because Mij = 0 for all i <= j. I am really interested in properties of N^(-1), but would love even just the name of the most general form of N. BTW, I realize N is symmetric; I am looking for information on this particular type of symmetric matrix (which the WP page has nothing on?). PDBailey (talk) 04:13, 15 November 2009 (UTC)[reply]

Any symmetric matrix N can be decomposed into M + M^T for some lower triangular matrix M. Let Mij = Nij for i > j, Mij = Nij/2 for i = j, and Mij = 0 for i < j. So saying that N is symmetric pretty much sums it up. The fact that you have M with 0 along the diagonal also tells you that the diagonal of N is 0, but I don't know of a special name for that. Rckrone (talk) 04:37, 15 November 2009 (UTC)[reply]
Good point, thanks. PDBailey (talk) 11:06, 15 November 2009 (UTC)[reply]
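A small numpy illustration of the decomposition described above (the random strictly lower-triangular M mirrors the special case Mij = 0 for i <= j from the question; the particular entries are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# A strictly lower-triangular M (zero on and above the diagonal), matching the
# special case M_ij = 0 for i <= j from the question.
M = np.tril(rng.integers(-5, 6, size=(4, 4)), k=-1).astype(float)

N = M + M.T
print(np.array_equal(N, N.T))       # True: N is symmetric
print(np.allclose(np.diag(N), 0))   # True: and its diagonal is zero

# Conversely, Rckrone's recipe recovers a triangular M from any symmetric N:
M_back = np.tril(N, k=-1) + np.diag(np.diag(N)) / 2
print(np.allclose(M_back + M_back.T, N))   # True
```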

Cauchy sequences and continuity

Resolved

How can I show that functions which preserve Cauchy sequences between metric spaces are continuous? Is the converse true as well? Thanks.-Shahab (talk) 07:17, 15 November 2009 (UTC)[reply]

Assume f : X → Y preserves Cauchy sequences. Let x be any point in X and (xj) any sequence converging to x. Consider the sequence (uj) obtained by merging (xj) with the constant sequence (x), that is, u2j=xj and u2j+1=x for all j in N. The sequence (uj) still converges to x, so it is a Cauchy sequence, so by assumption (f(uj)) is a Cauchy sequence too, and since it is obtained by merging (f(xj)) with the constant sequence (f(x)), the sequence (f(uj)) being Cauchy means that (f(xj)) converges to f(x). Therefore f is continuous.
The converse is not quite true without additional assumptions. But for instance if f is uniformly continuous, it certainly takes Cauchy sequences into Cauchy sequences. Also, it's of course true if you assume X complete, for then a Cauchy sequence in X is convergent, its image under a continuous f is convergent, and hence a Cauchy sequence. On the other hand, if X is not complete, there is a continuous function f : X → R that fails to preserve Cauchy sequences: consider a point ξ in the completion of X but not in X; define a real-valued function on X by letting f(x)=1/d(x,ξ) for all x in X, where d is the extension of the distance of X to its completion. Then f is continuous on X; there is a Cauchy sequence (xj) in X converging to ξ; clearly f(xj) diverges so it's not Cauchy in R. Is all that ok? --pma (talk) 08:26, 15 November 2009 (UTC)[reply]
Thanks for the quick and clear response.-Shahab (talk) 08:47, 15 November 2009 (UTC)[reply]
An edit conflict arose with pma in my first explanation, and afterwards, when trying to reinsert my explanation, an edit conflict arose with Shahab! I will include my post here anyway (although I see that Shahab is content with pma's explanation). Here is an explanation that functions which preserve limits of convergent sequences must be continuous (note that, below I am proving another point which is used in the first paragraph of pma's proof above):
Suppose f has the above property. To show that f is continuous, we must show that f is continuous at every point in its domain (refer to this metric space as X). Suppose x is in X. Let Bn be an open ball containing x of radius 1/n, for each n in N. Note that this is a descending chain of open sets B1 ⊇ B2 ⊇ .... Suppose V is an open set containing f(x) - then x belongs to f^(-1)(V). For each n, let xn be a point of Bn not in f^(-1)(V), if such a point exists. If x is not an interior point of f^(-1)(V), xn exists for all n. Prove that (xn) converges to x, but that the image of this sequence under f does not converge to f(x). We arrive at a contradiction in assuming that x was not an interior point of f^(-1)(V); therefore, f is indeed continuous as desired.
See also the article Cauchy-continuous function (but try to explore this notion in greater depth before doing so).
A counterexample to the converse is the function defined by . However, do you think that the extra hypothesis that f be uniformly continuous ensures the truth of the converse? Hope this helps. --PST 08:51, 15 November 2009 (UTC)[reply]
Thank you PST. As you said, I was content with pma's explanation but your post was insightful.-Shahab (talk) 10:03, 15 November 2009 (UTC)[reply]
Yes, that is what I was also thinking: "f uniformly continuous" also makes the converse true. I'm not sure if you meant to suggest it as a further exercise for Shahab, so I'll hide the answer below as I learned from you, not to spoil the exercise ;-)
(Click the "show" button to see the answer or the "hide" button to hide it.)
If f is uniformly continuous with modulus of continuity ω, and if (xj) is a Cauchy sequence in X, then f(xj) is a Cauchy sequence in Y, because dY(f(xp), f(xq))  ≤ ω(dX(xp, xq)) for all p and q.
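To put concrete numbers on pma's counterexample, here it is specialised to X = (0, 1) with missing limit point ξ = 0, so that f(x) = 1/d(x, ξ) = 1/x (a Python sketch; the choice of X and of the comparison function x² is an addition for this illustration, not part of the posts above):

```python
# f(x) = 1/x is continuous on X = (0, 1) but does not preserve Cauchy sequences.
xs = [1.0 / n for n in range(1, 11)]     # Cauchy in (0, 1); its limit 0 is not in X
print([round(x, 4) for x in xs])
print([1.0 / x for x in xs])             # ~1, 2, 3, ...: consecutive gaps stay >= 1, so not Cauchy

# A uniformly continuous g, e.g. g(x) = x**2, sends the same sequence to a Cauchy one.
print([round(x * x, 4) for x in xs])
```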

Proving formula for Riemann-Zeta function, k even

Resolved

Hey, I'm trying to work through the proof of the formula for the Riemann-Zeta function for even k, which involves the Bernoulli numbers, in Koblitz's book on elliptic curves and modular forms and I am stuck. I believe I am fine up to

.

The book then says to let so I assume I get rid of all a's by using . This gives

I then substitute the formula and subtract the to the other side. I also thought it might be simpler to multiply through by in the sum. So, I do this and get

.

Now, I'm not sure what to do. Koblitz basically says we're done here. I thought about doing a power series for both terms. So, we have

Putting everything together gives

.

Now, the problem is, I am supposed to compare powers to get the formula. I compare second powers of x and I get

.

This is not a sum of the form of the Riemann-Zeta function. Do any of you see what I am doing wrong? Perhaps using the power series was not the right idea? Thanks. StatisticsMan (talk) 17:44, 15 November 2009 (UTC)[reply]

if you kick your last sum a bit you get . Isn't this the formula you wanted? It's not clear to me what you're looking for here... Tinfoilcat (talk) 18:31, 15 November 2009 (UTC)[reply]
Yea. I was just not thinking straight. Thanks. StatisticsMan (talk) 21:25, 15 November 2009 (UTC)[reply]
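For reference, the identity being derived here is presumably the standard one relating ζ at even integers to Bernoulli numbers, ζ(2k) = (-1)^(k+1) B_2k (2π)^(2k) / (2·(2k)!). A quick numerical check in Python (the cut-off in the partial sums is arbitrary):

```python
import math

# First few Bernoulli numbers (B_2, B_4, B_6).
B = {2: 1/6, 4: -1/30, 6: 1/42}

for k in (1, 2, 3):
    closed_form = ((-1) ** (k + 1) * B[2 * k] * (2 * math.pi) ** (2 * k)
                   / (2 * math.factorial(2 * k)))
    partial_sum = sum(1 / n ** (2 * k) for n in range(1, 200000))
    print(2 * k, round(closed_form, 6), round(partial_sum, 6))
# 2 1.644934 1.644929   (= pi**2/6)
# 4 1.082323 1.082323   (= pi**4/90)
# 6 1.017343 1.017343   (= pi**6/945)
```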

(combinatorial) description of 126 points and 56 symplecta of E7 polytope (gosset polytope)

Hello,

as hinted at in the column on the right of Gosset_1_32_polytope, that polytope has 126 6-cells of one type (corresponding with the leftmost node in the Coxeter-Dynkin diagram given there) and 56 6-cells of another type (corresponding with the rightmost node in the Coxeter-Dynkin diagram given there). I am almost exclusively interested in a combinatorial model (so the angles and distances don't really matter to me).

I would like to understand both types. I already found this description of the 126 on the left: they are the 56 vectors of length 8 with 6 zero entries, one +1 and one -1, and the 70 vectors of length 8 with 4 entries equal to 1/2, and 4 equal to -1/2.

(Note that the inner products between two of them can be -2,-1,0,1,2)

Is a similar thing possible for the 56 other 6-cells (allowing me to see which are incident)?


In order to clarify my question, F4_polytope#Constructions already answers this question for the 24-cell:

24 vertices:

8 vertices obtained by permuting

(±1, 0, 0, 0)

and 16 vertices of the form

(±½, ±½, ±½, ±½).

The vertices of the dual 24-cell are given by all permutations of

(±1, ±1, 0, 0).

Many thanks,81.82.86.83 (talk) 20:24, 15 November 2009 (UTC)[reply]
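In case it helps to experiment with the combinatorial description quoted above, here is a sketch (Python) that enumerates those 126 vectors and confirms the counts and the possible inner products:

```python
from itertools import combinations, permutations

vectors = set()

# 56 vectors: six zero entries, one +1 and one -1.
for i, j in permutations(range(8), 2):
    v = [0.0] * 8
    v[i], v[j] = 1.0, -1.0
    vectors.add(tuple(v))

# 70 vectors: four entries equal to +1/2 and four equal to -1/2.
for plus in combinations(range(8), 4):
    vectors.add(tuple(0.5 if i in plus else -0.5 for i in range(8)))

print(len(vectors))   # 126 = 56 + 70

# Inner products (including a vector with itself, which gives 2):
dots = {sum(a * b for a, b in zip(u, w)) for u in vectors for w in vectors}
print(sorted(dots))   # [-2.0, -1.0, 0.0, 1.0, 2.0], matching the note above
```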

Free Burnside group B(2,4)

I want a permutation representation of the generators of the free Burnside group B(2,4). --84.61.166.115 (talk) 21:04, 15 November 2009 (UTC)[reply]

The article Burnside group should answer your question (particularly a section towards the bottom of the article). I think that this is merely a matter of looking at the appropriate definition. Hope this helps. --PST 04:12, 16 November 2009 (UTC)[reply]

Which one of these is wrong?

I have little formal training in computer science, so forgive me for my ignorance. I was looking through some articles on complexity classes, and I noticed this on the TFNP page:

TFNP = FP would imply P = NP ∩ coNP, and therefore that factoring and simplex pivoting are in P.
TFNP ≠ FP would imply NP = coNP.

So either factoring is in P, or NP = co-NP. Unless all the computer scientists in the world are wrong, one or both of the statements in the article are incorrect. This was also mentioned by an IP on the talk page. I'd like to know what reality is so that the article can be fixed. « Aaron Rotenberg « Talk « 22:49, 15 November 2009 (UTC)[reply]

I think the article may be wrong in stating that is known. It might be that implies P=coNP, but FP may be bigger than TFNP. It's well known though that factoring is in NP ∩ coNP. The complexity zoo is a good reference for these obscure classes. 69.228.171.150 (talk) 03:15, 16 November 2009 (UTC)[reply]

November 16

Exponential Sum

Is there a general solution to the equation a^x + b^x = c^x? If not, I ask why. —Preceding unsigned comment added by 189.24.43.141 (talk) 01:55, 16 November 2009 (UTC)[reply]

For x an integer greater than two, there are no (non-trivial) solutions. That is Fermat's Last Theorem. For x=2 there are infinitely many solutions, they are called Pythagorean triples. I don't know about non-integer x's. --Tango (talk) 02:34, 16 November 2009 (UTC)[reply]
But what is given (as parameters) and what is unknown in this equation? (Igny (talk) 02:45, 16 November 2009 (UTC))[reply]
It's unknown for x>2 and coprime a,b,c even if a different x is allowed on each term. See Beal's conjecture. 69.228.171.150 (talk) 03:20, 16 November 2009 (UTC)[reply]
Following Igny's lead, it could be that a, b, c are constants and x is unknown. For a, b, c positive and x real: in the case where a < c and b < c, and in the case where a > c and b > c, there will be one real solution for x, but I'm not sure there's a closed form for it. Otherwise there will be no real solutions, since a^x + b^x > c^x for all x. Rckrone (talk) 04:22, 16 November 2009 (UTC)[reply]
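Here is what the one-real-solution case described above looks like numerically (a Python sketch; the bisection tolerance and the test triples are arbitrary choices):

```python
def solve_exponent(a, b, c, tol=1e-12):
    """Solve a**x + b**x == c**x for real x, assuming 0 < a < c and 0 < b < c
    (the case described above, where exactly one real solution exists)."""
    g = lambda x: (a / c) ** x + (b / c) ** x - 1   # same sign as a**x + b**x - c**x
    lo, hi = -1.0, 1.0
    while g(lo) < 0:        # g is strictly decreasing, so widen until 0 is bracketed
        lo *= 2
    while g(hi) > 0:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(solve_exponent(3, 4, 5))   # ~2.0, the 3-4-5 Pythagorean triple
print(solve_exponent(2, 3, 4))   # ~1.507
```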

How to solve

Hello all.

I'm looking for a way to isolate x in this formula. My own mathematical skills are overwhelmed. I searched the web for some time, but those kinds of things are hard to find (search engines go nuts). I thought one of you could have the answer at hand.

I don't want to give anybody a headache, just to see if someone has the answer or a good pointer ready. Thanks. --Iluvalar (talk) 02:48, 16 November 2009 (UTC)[reply]

First I went to Wikipedia talk:WikiProject Mathematics. They told me the question was more appropriate here. However, Dmcq noted that it might have some link with the Lambert W function. The graph there looks pretty much like a rotation of my function. Thanks for your help. --Iluvalar (talk) 04:15, 16 November 2009 (UTC)[reply]
This equation probably cannot be solved in the way you want. If it was a little simpler - for example, - then Lambert's W function would indeed come to the rescue.
What you can do is find solutions numerically for specific values of n and y. For example, if then the solution is . -- Meni Rosenfeld (talk) 05:41, 16 November 2009 (UTC)[reply]
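Since the original formula did not survive in this revision, the following is only a generic sketch of the two routes described above, in Python with SciPy. The stand-in equations y = x·e^x and x^n·e^x = y (with hypothetical n = 2, y = 3) are placeholders for the illustration, not the OP's actual formula:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import lambertw

# Route 1: a Lambert-W-friendly equation.  If the formula were y = x * exp(x),
# then x = W(y) directly.
y = 3.0
x = lambertw(y).real
print(x, x * np.exp(x))          # the second value reproduces y

# Route 2: in the general case, fix the parameters and find the root numerically.
def f(x, n=2.0, y=3.0):          # hypothetical n and y, purely for illustration
    return x ** n * np.exp(x) - y

print(brentq(f, 1e-9, 10.0))     # root of the stand-in equation on (0, 10]
```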