Wikipedia:Reference desk/Mathematics


Welcome to the mathematics section
of the Wikipedia reference desk.
How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

August 16

Bounded linear operator on sigma-finite measure space

Let $(X, \mathcal{M}, \mu)$ be a $\sigma$-finite measure space. Let $g \in L^\infty(\mu)$ and $1 \le p \le \infty$. Show that the operator $T : L^p(\mu) \to L^p(\mu)$, $Tf = gf$, is bounded, and $\|T\| = \|g\|_\infty$.

Proof starts:

T is bounded: If $p < \infty$, then

$\|Tf\|_p^p = \int_X |gf|^p \, d\mu \le \|g\|_\infty^p \int_X |f|^p \, d\mu = \|g\|_\infty^p \|f\|_p^p$.

If $p = \infty$ then it's easier, $\|Tf\|_\infty = \|gf\|_\infty \le \|g\|_\infty \|f\|_\infty$. So, for every $f$, we have $\|Tf\|_p \le \|g\|_\infty \|f\|_p$. So, T is bounded and this is half the inequality.

I'm not exactly sure where to go from here. I have a friend's solution but they do something which is obviously wrong at this point. Any ideas? Thanks! StatisticsMan (talk) 00:56, 16 August 2009 (UTC)[reply]

For simplicity, make $\|g\|_\infty = 1$. By definition of the norm, there is a set $E$, with $0 < \mu(E) < \infty$, such that $|g| > 1 - \epsilon$ on $E$. Let us put $f = 1$ on $E$, and 0 otherwise. Then $f \in L^p$, and $\|Tf\|_p \ge (1-\epsilon)\,\|f\|_p$. Phils 02:45, 16 August 2009 (UTC)[reply]
I was thinking about a similar thing, but if , that just means it's essentially bounded (we can just say bounded). So, g = 1 is such a g, even satisfying . But then, no matter what is. StatisticsMan (talk) 03:25, 16 August 2009 (UTC)[reply]
Oh, I get it. You're not defining E to be the set where $|g| > \|g\|_\infty - \epsilon$. If that set is real big, just take a part that has finite measure. And, since $X = \bigcup_n X_n$ with $\mu(X_n) < \infty$, we can look at the set where $|g| > \|g\|_\infty - \epsilon$ is true and intersect it with some $X_n$ to get a set where it is true such that the set has finite measure, just as you said. Thanks! StatisticsMan (talk) 03:28, 16 August 2009 (UTC)[reply]
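To make the second half explicit, here is a worked version of the lower-bound computation sketched above (a sketch only, taking T to be the multiplication operator Tf = gf with p finite, as in the reading of the problem given above):

  % Reverse inequality, assuming Tf = gf on L^p(mu), 1 <= p < infinity.
  \text{Fix } 0 < \varepsilon < \|g\|_\infty. \text{ By } \sigma\text{-finiteness, pick } E \subseteq \{|g| > \|g\|_\infty - \varepsilon\} \text{ with } 0 < \mu(E) < \infty.
  \text{With } f = \chi_E:\qquad
  \|Tf\|_p^p = \int_E |g|^p \, d\mu \;\ge\; (\|g\|_\infty - \varepsilon)^p \, \mu(E) \;=\; (\|g\|_\infty - \varepsilon)^p \, \|f\|_p^p .
  \text{Hence } \|T\| \ge \|g\|_\infty - \varepsilon \text{ for every } \varepsilon > 0, \text{ so } \|T\| \ge \|g\|_\infty .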

Entropy

What is a simple derivation for the formula for calculating entropy? Mo-Al (talk) 08:14, 16 August 2009 (UTC)[reply]

In what context? Do you mean in statistical mechanics? In which case, entropy is $S = -k_B \sum_i p_i \ln p_i$, which sums over the different microstates that correspond to a given macrostate. But I wouldn't call this "derivable", rather, that it's a definition of entropy.--Leon (talk) 14:38, 16 August 2009 (UTC)[reply]
Well I suppose my question is really what motivates the definition -- why is this formula a natural thing to use? Mo-Al (talk) 19:03, 16 August 2009 (UTC)[reply]
Ah! I was taught that the definition allows one to "correctly" derive ALL the thermodynamics of any particular system; that is, a non-trivially different definition would lead to different, incorrect (as judged by experiment) thermodynamic predictions. This definition, however, allows one to correctly predict thermodynamic behaviour. However, there is an intuitive "logic" to it, in that the more microstates corresponding to a given macrostate, the lower the information communicated by the macrostate variables. For instance, in a lowest entropy configuration, with one microstate corresponding to the macrostate in question, the macrostate tells us EVERYTHING about the system. For a high entropy configuration, the system contains much more information than the macrostate variables (temperature, pressure etc.) can communicate.

Does that make any sense?--Leon (talk) 19:20, 16 August 2009 (UTC)[reply]
And see this.--Leon (talk) 19:25, 16 August 2009 (UTC)[reply]
You may also want to take a look at Entropy (information theory). -- Meni Rosenfeld (talk) 20:02, 17 August 2009 (UTC)[reply]
In particular Claude Shannon's 1948 paper "A Mathematical Theory of Communication" gives a derivation/plausibility argument for the entropy formula. He lists some properties such a formula should have and it naturally leads to the familiar result. —Preceding unsigned comment added by 87.102.1.156 (talk) 18:36, 19 August 2009 (UTC)[reply]
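Since the discussion above concerns the entropy formula and its information-theoretic analogue, here is a minimal Python sketch; the probability distribution is made up purely for illustration:

  import math

  def gibbs_entropy(probs, k_B=1.380649e-23):
      """Statistical-mechanics entropy S = -k_B * sum(p * ln p) over microstate probabilities."""
      return -k_B * sum(p * math.log(p) for p in probs if p > 0)

  def shannon_entropy_bits(probs):
      """Shannon entropy H = -sum(p * log2 p), in bits."""
      return -sum(p * math.log2(p) for p in probs if p > 0)

  p = [0.5, 0.25, 0.125, 0.125]      # hypothetical microstate probabilities
  print(gibbs_entropy(p))            # entropy in J/K for this distribution
  print(shannon_entropy_bits(p))     # 1.75 bits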

Please fill the gaps in a table

Please help me to fill the gaps in this table: Derivatives and integrals of elementary functions in alternative calculi--MathFacts (talk) 08:25, 16 August 2009 (UTC)[reply]

This sort of thing should be filled from looking up a book or journal - and I doubt you'll find much in that way for those systems, they're pretty obscure! Anyway they can mostly be filled in fairly automatically by formulae in the systems once you can do the normal version so overall I'm not certain about the point. Dmcq (talk) 12:50, 16 August 2009 (UTC)[reply]
My computer algebra systems do not give expressions for the empty cells.--MathFacts (talk) 13:39, 16 August 2009 (UTC)[reply]
I don't think an algebra system producing results from things you feed in is counted as a notable source. And I had to cope with such a result stuck in an article just a day or so ago where the results were not quite right and had ambiguities and besides weren't in simplest terms. Dmcq (talk) 13:45, 16 August 2009 (UTC)[reply]
A source cannot be notable or unnotable; it's reliable or unreliable. It is the topic that may be notable or unnotable.--MathFacts (talk) 13:56, 16 August 2009 (UTC)[reply]
Running expressions you think of through a program and sticking them in a table is counted as original research. The stuff in wikipedia really does have to have some bit of notability and if you can't find some tables giving the expressions or something very similar then the subject really doesn't satisfy notability criteria. Wikipedia isn't for publishing facts you dreamt might be useful and worked out, they have to be notable. Dmcq (talk) 14:25, 16 August 2009 (UTC)[reply]
Nobody will publish something that can be derived with a machine - it is simply ridiculous. Only scientific discoveries are published. Using a machine or calculator is not research, of course.--MathFacts (talk) 15:16, 16 August 2009 (UTC)[reply]
What you put in is original research as far as wikipedia is concerned. Please read the lead of that article. It is quite specific and is a core wikipedia policy. I know maths doesn't always follow that to the letter and it shouldn't either for straightforward things. However you have set up an article that reflects nothing in the published literature, full of things you thought of yourself, and generated the results using a program without any results in sources to check them against. That really is going way beyond the bounds. Interesting articles that I would have preferred to keep, where the person had cited the facts but the synthesis was not something that had been written about, have been removed because of that rule. Dmcq (talk) 16:11, 16 August 2009 (UTC)[reply]
There are published rules on how to compute such things and anyone can prove them either by himself or using some mathematical software. Regarding integrals anyone can take a derivative to verify.--MathFacts (talk) 16:19, 16 August 2009 (UTC)[reply]
I would like to point out that there is a link at the top of each column in this article (except for 1) which takes you to the article on that specific subject. And, those likely have most of the formulas in the table. So, it's not original research, at least mostly. StatisticsMan (talk) 16:29, 16 August 2009 (UTC)[reply]
Check them yourself and you'll see they don't. Dmcq (talk) 16:31, 16 August 2009 (UTC)[reply]
And the ones which have seem to have been filled in by MathFacts, presumably the same way as he did this list: generating content using his computer without looking up things. Dmcq (talk) 16:41, 16 August 2009 (UTC)[reply]
(ec) One of the gaps to be filled in your table asks for the "discrete integral" of . Checking your link, I see you want a solution of the functional equation $F(x+1) - F(x) = f(x)$. By the way, the definition there is a bit misleading: you should be aware of the fact that the solution is unique up to a 1-periodic function, not just up to an additive constant C (and the analogous remark holds for your "multiplicative discrete integral"). That said, a particular solution is
not particularly relevant information as far as I know.--pma (talk) 16:39, 16 August 2009 (UTC)[reply]
Yes. There is an inconsistency in that article. It needs clarification. I know about it. I still haven't had enough time to clarify. The abovementioned equation is not enough to define the sum. But it is usually defined through Faulhaber's formula or equivalent.--MathFacts (talk) 17:33, 16 August 2009 (UTC)[reply]
So, I don't see a real case of original research, but I do not see any reason for the name "alternative calculi", either. --pma (talk) 18:46, 16 August 2009 (UTC) [reply]
Any suggestions?--MathFacts (talk) 18:47, 16 August 2009 (UTC)[reply]
"A table of formulas"? Maybe it's too generic... ;) --pma (talk) 07:51, 17 August 2009 (UTC)[reply]

Learning about multiple regression online

It is many years since I was last conversant with multiple regression, and I need a refresher. And I have never used any recent MR software. Could anyone recommend any easy online learning materials to use please? I want to do multiple regression on a number of economic time series with the aim of forecasting the independent variable. Forecasting, not modelling - I think this means that correlations between the variables are not important, as they would be if I was modelling; but I'm not sure. I'm also aware of the different types of MR and unsure which would be best to use. Thanks. 78.144.207.41 (talk) 17:12, 16 August 2009 (UTC)[reply]
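Not a recommendation of any particular course, but as a concrete starting point, an ordinary least-squares multiple regression (the usual first tool for this) takes only a few lines. The sketch below uses NumPy, and the data and series names are invented:

  import numpy as np

  # Hypothetical data: each row is a time period, each column a predictor series;
  # y is the series to be forecast.
  X = np.array([[1.0, 2.1, 0.5],
                [1.2, 2.0, 0.7],
                [1.1, 2.3, 0.6],
                [0.9, 2.5, 0.8],
                [1.3, 2.4, 0.9]])
  y = np.array([3.2, 3.5, 3.4, 3.6, 3.9])

  # Add an intercept column and solve the least-squares problem y ~ X_aug @ beta.
  X_aug = np.column_stack([np.ones(len(X)), X])
  beta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

  # Forecast from a new row of predictor values (the leading 1.0 is the intercept term).
  x_new = np.array([1.0, 1.2, 2.2, 0.7])
  print(beta, x_new @ beta)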

Uniformly convergent sequence of polynomials

Characterize those sequences of polynomials $(p_n)$ such that the sequence converges uniformly on the real line.

Here's another qual question for which I have no solution. If you happen to know this is not true or the solution is very complicated, you can just say that. If you know of a somewhat elementary solution, any help would be great. Thanks. StatisticsMan (talk) 19:27, 16 August 2009 (UTC)[reply]

Any such sequence must be a sequence of either constant polynomials, or polynomials with identical leading coefficient after finitely many terms, no? Otherwise, for infinitely many pairs m, n the difference $p_n - p_m$ has a nonzero leading term, which for large $|x|$ is arbitrarily large as $|x| \to \infty$. Fredrik Johansson 19:45, 16 August 2009 (UTC)[reply]
By the same reasoning, wouldn't the next term need the same coefficient after finitely many terms? If the first terms of the polynomials had the same coefficient, they cancel out and you're left with a polynomial of degree 1 less. So, would the answer be that the sequence of polynomials must differ only in the constant term after finitely many terms of the sequence, and those constant terms must converge to some real number? StatisticsMan (talk) 20:16, 16 August 2009 (UTC)[reply]
Yes --pma (talk) 20:21, 16 August 2009 (UTC)[reply]
Of course; my blunder. Fredrik Johansson 22:09, 16 August 2009 (UTC)[reply]
I don't think the above is correct. The polynomials don't even have to be bounded in degree (see Taylor series example below) as long as the high order terms decrease in size fast enough. 67.117.147.249 (talk) 04:57, 17 August 2009 (UTC)[reply]
What pma said (both above and below) is correct. Eric. 216.27.191.178 (talk) 03:05, 18 August 2009 (UTC)[reply]

Since you were asking complex analysis questions earlier, this current question may be getting at the idea that the Taylor series of an analytic function is uniformly convergent. Those are sequences of polynomials whose terms look like (x-a)^k/k! so the higher order coefficients if you treat them as polynomials in x don't stay the same as you change a slightly, but those terms get squashed down by the k! divisor as the order gets higher. 67.117.147.249 (talk) 00:18, 17 August 2009 (UTC)[reply]

Is a Taylor series of an analytic function uniformly convergent everywhere, rather than just on compact sets? --Tango (talk) 00:29, 17 August 2009 (UTC)[reply]
It's only convergent inside the circle of convergence, and (maybe) at some points on the boundary. (Or another possibility is that I've gotten confused and am thinking of something else--I don't remember this stuff very well any more). 67.117.147.249 (talk) 04:51, 17 August 2009 (UTC)[reply]
Actually I probably do have it wrong and am mixing several ideas together incorrectly. The article uniform convergence gives exp z as an example of a function whose Taylor series is uniformly convergent, but it doesn't say that's the case for all analytic functions (within the radius of convergence around a given point). Analytic functions are uniformly continuous but I guess that's not enough. Maybe some expert here can straighten this out. 67.117.147.249 (talk) 05:46, 17 August 2009 (UTC)[reply]
What they say is correct; the thing is very simple. The only polynomials that are uniformly bounded on R are the constants. Hence two polynomials have a finite uniform distance ||p-q|| if and only if they differ at most by the constant term. So a sequence of polynomials uniformly convergent on R is, up to finitely many polynomials, a sequence of polynomials that differ at most by the constant term. (The exponential series is not uniformly convergent on C, but only uniformly convergent on compact sets of C) --pma (talk) 06:21, 17 August 2009 (UTC)[reply]
Most analytic functions are not uniformly continuous either. f(z) = z^2 is an example. Algebraist 12:40, 17 August 2009 (UTC)[reply]
Here's an intuitive way to think about this, which may or may not be correct. If we're talking nonconstant entire functions, then the derivative is also an entire function. Only if the original function was a linear polynomial is the derivative a constant. Otherwise, the derivative is unbounded. So, it would seem to me that any nonconstant entire function, other than a linear polynomial, is obviously not uniformly convergent on the entire complex plane. Of course, a linear polynomial isn't either, as has already been discussed. It is true for sure that if a holomorphic function has a power series which converges inside some open disk that the power series is uniformly convergent on any closed subset of that disk. You can't really say anything more than that without a specific function. StatisticsMan (talk) 12:43, 17 August 2009 (UTC)[reply]
Linear polynomials are uniformly continuous, in fact Lipschitz continuous. Conversely, if f is an entire uniformly continuous function, we can find a δ > 0 such that |f(z) − f(w)| ≤ 1 whenever |z − w| ≤ δ, which implies |f(z)| ≤ |f(0)| + |z|/δ + 1, i.e., f is bounded by a linear function, and therefore it is itself linear by (one of the variants of) Liouville's theorem. — Emil J. 12:53, 17 August 2009 (UTC)[reply]
Good point. Let $\epsilon > 0$ be given. If f(x) = ax + b with $a \neq 0$ and $|x - y| < \epsilon/|a|$, then $|f(x) - f(y)| = |a|\,|x - y| < \epsilon$. I take back what I said. StatisticsMan (talk)


August 17

Tuk-tuks

How many Tuk-Tuk's (three wheeled vehicles) are in India? —Preceding unsigned comment added by Lanceboe (talkcontribs) 01:01, 17 August 2009 (UTC)[reply]

Since I doubt that there are accurate figures on the number of tuk-tuks in India, this sounds like a Fermi problem. In which case, how would you set up the problem, and which factors do you have trouble estimating? Confusing Manifestation(Say hi!) 05:12, 18 August 2009 (UTC)[reply]

QQ vs. YoY

I’m pulling my hair out trying to calculate quarter-to-quarter annualized real economic growth such that my results match those published by statistical authorities. Singapore, for example, contracted 3.5% in the second quarter (year-on-year), or it grew 20.7% on a quarter-to-quarter annualized basis. In Excel, my formula for year-on-year growth is =sum((Q2year2 – Q2year1)/Q2year1)*100, which gives me, say, 4.3 or a 4.3% rise.

What formula should I use for quarter-to-quarter calculations? DOR (HK) (talk) 03:49, 17 August 2009 (UTC)[reply]

I don't understand the word "sum". The expression ((Q2year2 − Q2year1)/Q2year1)*100 is just one term. What terms are you adding? Michael Hardy (talk) 04:42, 17 August 2009 (UTC)[reply]
=sum is commonly used in Excel equations, although the "sum" may be optional. DOR (HK) (talk) 07:39, 17 August 2009 (UTC)[reply]
Similar terms are "=count" and "=average," if that helps. DOR (HK) (talk) 08:24, 17 August 2009 (UTC)[reply]
I am not certain what you are trying to do here - it might be clearer if you can show us your underlying data. Cheap and cheerful way to annualise quarterly returns is to multiply by 4, so 4.3% quarterly growth would be 17.2% annual growth. More accurate method is to compound quarterly growth using the formula (1 + r)^4 - 1. So 4.3% quarterly growth annualises to (1.043)^4 - 1 = 0.183 = 18.3% annual growth. Gandalf61 (talk) 08:45, 17 August 2009 (UTC)[reply]

OK, some real-world data:

US GDP, in chained 2005 $ billion

Qtr/Yr    $ billion   YoY Growth   Qtr-Qtr Growth
Q1 2008   13,367      NA           NA
Q2 2008   13,415      NA           +1.5%
Q3 2008   13,325      NA           -2.7%
Q4 2008   13,142      NA           -5.4%
Q1 2009   12,925      -3.3%        -6.4%
Q2 2009   12,892      -3.9%        -1.0%


In the last line, year-on-year is calculated thus:

=((12,892 – 13,415) / 13,415) = -0.03898 (i.e., -3.9%)

The quarter-to-quarter annualized growth rate is reported as -1.0%. Question: what is the formula for arriving at -1.0% (or, any of the right-hand column numbers) using this data? Thanks. DOR (HK) (talk) 03:09, 18 August 2009 (UTC)[reply]

I think the Qtr-Qtr growth figures are calculated by finding the percentage growth since the previous quarter, then annualising this with compounding. For example, in Q2 2009 we have:
  • % growth since previous quarter = (12,892 - 12,925) / 12,925 = -0.00255 = -0.255 %
  • Annualised % growth = (1 - 0.00255)^4 - 1 = -0.01017 = -1.02 %
Using this method, I get values of +1.44%, -2.66%, -5.38%, -6.44%, -1.02% as compared to the published figures of +1.5%, -2.7%, -5.4%, -6.4%, -1.0%. Gandalf61 (talk) 11:20, 18 August 2009 (UTC)[reply]

I agree. Michael Hardy (talk) 16:15, 18 August 2009 (UTC)[reply]

Many thanks for that. What I have not been able to do is to construct an Excel formula that duplicates the results. Any thoughts? DOR (HK) (talk) 01:33, 19 August 2009 (UTC)[reply]

Could it be that you're rounding too early? A small rounding error in an intermediate step can in some cases result in a large error in the bottom line. Unless you've got a really good handle on how big the effect of rounding at some intermediate step will be on the bottom line, you should not round beyond what the machine forces you to do, until the last step. Michael Hardy (talk) 01:57, 19 August 2009 (UTC)[reply]
...or it could be that the published figures are based on rounding too early. Michael Hardy (talk) 01:58, 19 August 2009 (UTC)[reply]

Got it! =(((Q2-Q1)/Q1)*4)*100 Been driving me crazy for the longest time. Many thanks! DOR (HK) (talk) 03:45, 19 August 2009 (UTC)[reply]
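For anyone reproducing the table, here is a small Python sketch of the two annualisation methods Gandalf61 describes (simple multiply-by-4 versus compounding), using the Q1/Q2 2009 figures above:

  # Quarter-on-quarter growth, annualised two ways, from the US GDP table above.
  q1_2009 = 12925.0
  q2_2009 = 12892.0

  qoq = (q2_2009 - q1_2009) / q1_2009        # about -0.00255, i.e. -0.255% for the quarter

  simple_annualised = qoq * 4                # about -1.02%  (multiply by 4)
  compound_annualised = (1 + qoq)**4 - 1     # about -1.02%  (compound over four quarters)

  print(f"{qoq:.5f}  {simple_annualised:.4%}  {compound_annualised:.4%}")

For quarterly changes this small the two methods agree to the rounding shown in the published table, but they diverge noticeably for large swings such as the 20.7% Singapore figure mentioned at the top of the thread.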

quick maths question

if ab > 0 and b > 0, can you take out b from the inequality, leaving a > 0? —Preceding unsigned comment added by 59.189.62.104 (talk) 05:46, 17 August 2009 (UTC)[reply]

Yes. If ab > 0 then either {a > 0 and b > 0} or {a < 0 and b < 0}. If you know that b > 0 then you also have a > 0. Gandalf61 (talk) 08:34, 17 August 2009 (UTC)[reply]
Alternative method: you can divide both sides by b. You can't do that without knowing it is positive, since when you divide by a negative number you have to flip the inequality, but you do know it is positive, so that's ok. --Tango (talk) 13:23, 17 August 2009 (UTC)[reply]

How many eighth degree monic polynomials are such that...

I don't know if any programmer out there wants to tackle this, but my question is: How many monic polynomials of eighth degree have remaining coefficients in the set {0, 1, 2, ..., 41} and are composite when the variable is between 1 and 41 (inclusive), but prime when it is 42? I have one example, and I would like to know how many if any others there are. My example is

.Julzes (talk) 15:17, 17 August 2009 (UTC)[reply]

How accurate do you want this figure? A Monte Carlo estimate shows that about 0.2% of the polynomials matching the coefficients criterion also match the primality criterion. Since there are $42^8 \approx 9.7 \times 10^{12}$ of those, this turns out around $2 \times 10^{10}$. One pleasant-looking example I've found is . -- Meni Rosenfeld (talk) 20:29, 17 August 2009 (UTC)[reply]
I guess I'd like a slightly better estimate with the range expanded to between 0 and 68 with the exception at 42, as 69 is where the next prime comes in in my particular example. Ideally, I'm curious about just how far up the most extreme case goes (what the polynomial which presents itself as a prime the second time at the highest value is), but I don't suppose that is a practical inquiry. I'm surprised that my initial question got any kind of answer and so quickly, and that the answer is on the order of one in every five hundred.Julzes (talk) 21:15, 17 August 2009 (UTC)[reply]
The high proportion should be no surprise at all - most numbers are composite, so it's not very demanding to require that many values of the polynomial will be. Neither should getting an answer so quickly - this is WP:RD/math. :)
About 0.022% of polynomials are prime for 42 and composite elsewhere between 0 and 68, which is about $2 \times 10^9$.
If I understood correctly that your "ideal inquiry" is to find a polynomial P such that:
  • The coefficients criterion is met.
  • $P(42)$ is prime.
  • $P(x)$ is composite for $0 \le x \le N$, $x \ne 42$.
  • N is as high as possible.
Then satisfies this with . Of course, there may be polynomials with even higher N; only an exhaustive search (which would take quite a while) or some intelligent reasoning can find the absolute best. -- Meni Rosenfeld (talk) 16:14, 18 August 2009 (UTC)[reply]
PS: "Quite a while" = 5 years with my current program and hardware. So far I've improved N to 1697. -- Meni Rosenfeld (talk) 16:29, 18 August 2009 (UTC)[reply]
Ok. Thanks for all that. How about prime at n=-41 and n=42, but composite in between, if you're interested in continuing? Of course, you'll need either the broad definition of primality or to take absolute values. I'm not at all surprised it would take 5 years to answer the bigger question.Julzes (talk) 22:05, 18 August 2009 (UTC)[reply]
I just noticed that the constant term has to be 25 (if P(0) is to be composite but P(42) prime). Also, the condition on negatives to but not including -41 should be more stringent than the condition involving some number greater than or equal to 69. Julzes (talk) 03:17, 19 August 2009 (UTC)[reply]
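For reference, a rough sketch of the kind of Monte Carlo estimate Meni describes (random lower coefficients in {0, ..., 41}, primality tests from sympy); the sample size and the exact acceptance test are my own choices, so the numbers are only indicative:

  import random
  from sympy import isprime

  def qualifies(coeffs):
      """Monic degree-8 polynomial with the given coefficients for x^0..x^7:
      prime at x = 42, composite (greater than 1 and not prime) at x = 1..41."""
      def p(x):
          return x**8 + sum(c * x**k for k, c in enumerate(coeffs))
      if not isprime(p(42)):
          return False
      return all(p(x) > 1 and not isprime(p(x)) for x in range(1, 42))

  random.seed(0)
  trials = 20000
  hits = sum(qualifies([random.randrange(42) for _ in range(8)]) for _ in range(trials))
  print(hits / trials)   # estimated proportion; multiply by 42**8 for an estimated count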

question about probability

The time on a digital clock is 10:38, where 1 is in column p, 0 is in column q, 3 is in column r and 8 is in column s. After x hours the time is noted. Find the probability that the number in

1:- column q is "9"
2:- column q is less than 5.

—Preceding unsigned comment added by True path finder (talkcontribs) 16:58, 17 August 2009 (UTC)[reply]

As you described it, the situation is perfectly deterministic, there is no probability involved. The answer is thus either 0 or 1 depending on the value of x. — Emil J. 17:15, 17 August 2009 (UTC)[reply]
(ec) Unless there is some constraint on x then it doesn't matter what the time now is. Is the clock a 12 hour or a 24 hour clock? 1:- 1/12 (09) or 2/24 (the same value) (09, 19) 2:- 7/12 (01, 02, 03, 04, 10, 11, 12) or 14/24 (01, 02, 03, 04, 10, 11, 12, 13, 14, 20, 21, 22, 23, 24) -- SGBailey (talk) 17:18, 17 August 2009 (UTC)[reply]
SGBailey, you seem to be assuming x is a uniformly distributed random variable. Why are you assuming that? This is a very vaguely stated question where we have to guess what was meant, so it doesn't seem safe to just assume things. And you didn't even state that assumption. Michael Hardy (talk) 17:37, 17 August 2009 (UTC)[reply]
AFAIK, time is uniformly distributed. I take that as a given unless something else is stated. I did say "unless there is some constraint on x" which was intended to specify the conditions of my answer. With no x constraint then the time after x is equally likely to be anywhere in a 24 hour cycle. The OP has now constrained x to be from T+1 to T+25 - this happens to be a 24 hour period, so I think my answer still applies. -- SGBailey (talk) 20:24, 19 August 2009 (UTC)[reply]
As a minor aside (obviated now that OP has clarified) it is not even possible for x to be uniformly distributed. As I said minor, but relevant to the solution of some apparent paradoxes.--SPhilbrickT 20:11, 18 August 2009 (UTC)[reply]

Sorry, the condition on x is 1 < x < 25, and the answer in the book is 0.1 for the first and 0.5 for the second. —Preceding unsigned comment added by True path finder (talkcontribs) 19:55, 17 August 2009 (UTC)[reply]

Okay, so the time at which we next look at the clock is later than 11:38 today and earlier than 11:38 tomorrow. Let's assume, for the sake of argument, that the time is uniformly distributed between those limits. Then I don't think I can see how the answer you gave can be reached. Are you sure you have told us the right column - column q is the least significant digit of the hours number, yes? Gandalf61 (talk) 12:13, 19 August 2009 (UTC)[reply]
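To see where numbers like 1/12 and 7/12 come from (and why the book's 0.1 and 0.5 are hard to reproduce), here is a quick enumeration sketch; the assumptions (integer x covering a full 24-hour cycle, and "column q" being the units digit of the displayed hour) are mine and may not match the book's:

  from fractions import Fraction

  def q_digit(hours_after, start_hour=10, twelve_hour=False):
      """Units digit ('column q') of the hour shown x hours after start_hour."""
      h = (start_hour + hours_after) % 24
      if twelve_hour:
          h = (h - 1) % 12 + 1      # a 12-hour display shows 1..12
      return h % 10

  def probs(offsets, twelve_hour=False):
      digits = [q_digit(x, twelve_hour=twelve_hour) for x in offsets]
      n = len(digits)
      return (Fraction(sum(d == 9 for d in digits), n),
              Fraction(sum(d < 5 for d in digits), n))

  offsets = range(1, 25)                    # one full day of integer offsets
  print(probs(offsets))                     # 24-hour display: (1/12, 7/12)
  print(probs(offsets, twelve_hour=True))   # 12-hour display: (1/12, 7/12)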

round robin tournament

I'm wondering about ways of proving that "a round robin tournament is always possible to construct for an even number of players"

The algorithm in section Round-robin_tournament#Scheduling_algorithm shows that it is always possible. However it's not clear how I could obtain that algorithm without having a 'flash of inspiration', i.e. to me it seems that the algorithm is non-obvious... Is there a more workmanlike proof of do-ability? 83.100.250.79 (talk) 19:43, 17 August 2009 (UTC)[reply]

Number the players p1,p2,...p(2n). Draw a point for each player and draw lines (edges) between every pair of points (complete graph on 2n nodes). There will be (2n)(2n-1)/2 = n(2n-1) such edges where each edge represents a game between the two players it connects. Then just enumerate the edges in any way you like, and take n of them in every round, giving a 2n-1 round tournament. 70.90.174.101 (talk) 03:14, 19 August 2009 (UTC)[reply]
Yet taking the edges off in certain orders can result in having unplayable rounds - i.e. certain combinations don't work.
Is there a way to prove that there will always be a valid combination of games for all rounds for any n? 83.100.250.79 (talk) 12:36, 19 August 2009 (UTC)[reply]
I've designed such tournaments from time to time, using the following method. Whether it is equivalent to the algorithm you cite I can't say, but it works for any number of players (for an odd number, create an extra dummy player to denote an idle round). To avoid confusing (to me, anyway) generality, I'm taking the case of 8 players from A to H, but I can see that by simple extension it will work for any number. A table will be constructed showing the number of the round in which each pairing will occur, with rows from A to G and columns from B to H - ultimately, only the "northeast" part will apply. From the top, put 1 and 2 in col B, 2, 3 and 4 in col C, ..., 6, 7, 1, 2, 3, 4 and 5 in col G and 7 in col H. Col H is then completed by transferring the "below diagonal" entry in each row, i.e. 2 in row B, 4 in row C, etc, giving col H the successive values 7, 2, 4, 6, 1, 3 and 5 from the top. You'll see that this process guarantees the 28 matches will be played in 7 rounds.
For any number of players, the columns for all but the last one are filled immediately, that for the last one is completed by transferring the numbers below the stepped diagonal. The quick way to fill the last column, once you've seen the pattern, is to put the highest round number (odd) at the top, then to follow it with the even rounds from 2 upwards then the other odd ones from 1 upwards.→217.43.210.186 (talk) 19:19, 19 August 2009 (UTC)[reply]
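For comparison, here is a sketch of the standard "circle method" construction (fix one player, rotate the rest), which produces a valid schedule for any even number of players; the function and variable names are mine, and like the table method it shows existence rather than explaining why it works:

  def round_robin(n):
      """Rounds for n players (n even); each round is a list of (i, j) pairings.
      Circle method: player 0 stays fixed, the others rotate one place per round."""
      assert n % 2 == 0
      others = list(range(1, n))
      rounds = []
      for _ in range(n - 1):
          lineup = [0] + others
          # Pair the k-th player from the front with the k-th from the back.
          rounds.append([(lineup[k], lineup[n - 1 - k]) for k in range(n // 2)])
          others = others[-1:] + others[:-1]          # rotate
      return rounds

  schedule = round_robin(8)
  for r, games in enumerate(schedule, 1):
      print(r, games)

  # Sanity check: every pair meets exactly once across the n-1 rounds.
  pairs = {frozenset(g) for rnd in schedule for g in rnd}
  assert len(pairs) == 8 * 7 // 2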
The table method sounds basically the same in that it gives one solution, but apart from working it doesn't explain how...

I was wondering if anyone had found how to calculate the number of different sets of complete rounds constructable for every n (clearly each round has (n/2)! degenerate permutations, and the sets of rounds, assuming order of play is irrelevant, are also (n-1) degenerate).

But ignoring the degenerate cases, is there ever more than one way to construct an entire set of rounds of games? 83.100.250.79 (talk) 22:30, 19 August 2009 (UTC)[reply]

Yes, there are additional ways. For example, with players 1, 2, 3, 4, 5, 6, we could have a set of rounds containing (1, 2)(3, 4)(5, 6) and another set of rounds containing (1, 2)(3, 5)(4, 6), and these two sets are not related by the degeneracies you identified. But that raises the question of whether there are additional ways after accounting for degeneracies from permuting players. Eric. 216.27.191.178 (talk) 02:17, 20 August 2009 (UTC)[reply]
Yes thanks, that's obvious though I didn't think of it.83.100.250.79 (talk) 11:36, 20 August 2009 (UTC)[reply]
There are 6 distinct ways to create a tournament with 8 players even after identifying tournaments related by permutation of round order, permutation of games within a round, and permuting the identity of the players. Eric. 216.27.191.178 (talk) 02:27, 20 August 2009 (UTC)[reply]
Thanks, I was wondering about that, but haven't succeeded in finding a way to calculate the number.
In all honesty I imagine any solution (if known) might be too complex for me to understand. 83.100.250.79 (talk) 19:19, 21 August 2009 (UTC)[reply]


August 18

Combinatorics question

I have a list of N categories with N_k members each (i.e. the kth category has N_k members, and these member counts may or may not be distinct). I'd like to take a random member from each of only 3 of the categories. How many ways can I do this? The brute force way is to select 3 groups (and there are obviously N choose 3 ways to do this) and then multiply the cardinality of each of those 3 groups to get the number of ways to choose an item from each, iterating over all distinct groups of 3 categories. Is there an analytical way to do this? Or a combinatorial formula? Any relevant literature?

Here's an example if my above explanation was confusing.

|A| = 4, |B| = 3, |C| = 5, |D| = 1.
A, B, C = 4*3*5 ways = 60
A, B, D = 4*3*1 ways = 12
A, C, D = 4*5*1 ways = 20
B, C, D = 3*5*1 ways = 15
                     
Sum                   107

Also, if anyone knows how to do this in R (I'm sure it's really simple, that would be much appreciated.)

Thanks, --Rajah (talk) 03:32, 18 August 2009 (UTC)[reply]

Notice that, for instance, (a+b+c+d)*(a+b+c+d) = aa+bb+cc+dd+2(ab+ac+ad+bc+bd+cd). So, if the sum of a through d is s1, and the sum of the squares of a through d is s2, ((s1^2)-s2)/2 is the sum of all pairs of different letters. If S_n is the sum of all the nth powers of the letters, I think what you want is (1/6)S1^3 - (1/2)S1*S2 + (1/3)S3. That would be, in this case, (1/6)(a+b+c+d)^3 - (1/2)(a+b+c+d)(a^2+b^2+c^2+d^2) + (1/3)(a^3+b^3+c^3+d^3) = (1/6)(13)^3 - (1/2)(13)(51) + (1/3)(217) = 107. It doesn't look too helpful for a small example, but imagine how much easier it would be for a list of 20 numbers. Black Carrot (talk) 05:25, 18 August 2009 (UTC)[reply]
Some related ideas: multiplicative functions (such as the divisor function) and symmetric polynomials. Black Carrot (talk) 05:34, 18 August 2009 (UTC)[reply]
Yes, you want the 3rd elementary symmetric polynomial, and as BC shows you can express it in other bases of symmetric polynomials. If you need to deal more heavily with the combinatorics of symmetric polynomials you may find useful R.P.Stanley's book "Enumerative Combinatorics", (chapt 7, vol 2). [1]. PS: what do you mean by how to do this in R? --pma (talk) 06:01, 18 August 2009 (UTC)[reply]
Yeah, I meant R, the programming language, as Bo Jacoby pointed out. I was just being lazy, sorry about that. Elementary symmetric polynomials are exactly what I wanted, nice work. --Rajah (talk) 15:12, 18 August 2009 (UTC)[reply]

R (programming language) probably. Note that f(x) = (x+a)(x+b)(x+c)(x+d) = x^4 + (a+b+c+d)x^3 + (ab+ac+ad+bc+bd+cd)x^2 + (abc+abd+acd+bcd)x + abcd, so the number you want is the coefficient of x in the Taylor expansion of f(x). A solution coded in J (programming language) looks like this

  (<-4 3 5 1)&p. t. 1
107

Bo Jacoby (talk) 14:11, 18 August 2009 (UTC).[reply]

Awesome, thanks. All of these replies are very helpful! --Rajah (talk) 15:00, 18 August 2009 (UTC)[reply]
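A rough Python equivalent of the approaches above (the direct definition of the third elementary symmetric polynomial, and the power-sum identity Black Carrot used), run on the example sizes 4, 3, 5, 1:

  from itertools import combinations
  from math import prod

  sizes = [4, 3, 5, 1]

  # Direct definition: sum of products over all 3-element subsets.
  e3_direct = sum(prod(c) for c in combinations(sizes, 3))

  # Via power sums: e3 = (S1^3 - 3*S1*S2 + 2*S3) / 6.
  S1 = sum(sizes)
  S2 = sum(x**2 for x in sizes)
  S3 = sum(x**3 for x in sizes)
  e3_power_sums = (S1**3 - 3 * S1 * S2 + 2 * S3) // 6

  print(e3_direct, e3_power_sums)   # both print 107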

Limit of integral

Show that if

,

then there is some constant c such that $f(x) = c$ a.e.

Here's the thing. I have a proof, but I just read it through again and I have a statement in the middle that I am not sure is true. The first thing to do is, for any , given

for (where comes from since the limit is 0). What this proves is

for any such . From here, what I have down is set and that since f is integrable on [a, b] for , then a.e. But, I never showed f is integrable and I am not sure if it is. Any ideas?

Just so you know, the next step I have, assuming that is right:

Assuming f is integrable, F is absolutely continuous so that we end up getting the limit to be for those where and thus it is equal to the same constant almost everywhere. StatisticsMan (talk) 21:37, 18 August 2009 (UTC)[reply]

Ok, so we only have with no assumptions of integrability nor measurability; for all in that limit is 0 and the thesis is f(x)=0 a.e. (Alternatively, one can assume the limit to be 0 of a fixed pair a<b, and then the conclusion is that f(x)=0 a.e. in the interval [a,b]. One can easily prove each of the two statements from the other).
Maybe there are simpler ways, but one possibility is doing your program for the function for any fixed and draw the consequences. By hypothesis, each , and since we have
you can do your complete argument and conclude that the function is a constant (depending on y). Going back to the hypothesis on f(x), this tells us that the integral in is an integral of a constant function, thus just multiplication by (b-a), that is, f is a function with in the classical sense for all x, and the conclusion follows. --pma (talk) 07:07, 19 August 2009 (UTC)[reply]
But in fact in the above argument I assumed to be locally integrable. Assuming also w.l.o.g. that f is compactly supported, one can write e.g. for N large enough, so f is too. If we assume the minimal requirement only, that is, that is (loc) integrable for all h, I am not sure about whether f is measurable. For sure the original statement is already of interest in the assumption of f measurable or integrable; a result under a more general hypothesis wouldn't necessarily be more useful.--pma (talk) 12:06, 19 August 2009 (UTC)[reply]
Well, I have seen a "solution" from a professor where he provides almost no detail. It took me a long time to fill them in, perhaps wrongly. But, he seems to be assuming f is measurable, though it is not in the problem. I actually did not notice that I was assuming that without being told, so good point. His entire solution is: "If the limit holds at the end points, then for every a < x1 < x2 < b it is surely true that as h goes to 0 and hence the limit without the absolute value signs inside also tends to zero with h. However
and hence the result." Perhaps he is also assuming that it is integrable. It seems so, right? StatisticsMan (talk) 12:25, 19 August 2009 (UTC)[reply]
Nice; I suppose you mean the last expression to be written without the big parentheses. For the last equality, f should be assumed to be integrable, yes. After all it seems that the point of the exercise is not that. Note that your proof with F(x) is almost identical to your professor's one (in both the key point is using that (F(x+h)-F(x))/h converges to f(x) as h tends to 0; you are not using anything else about absolute continuity, and you may even avoid mentioning it) --pma (talk) 14:23, 19 August 2009 (UTC)[reply]
What you said would make it what I meant, but what I really meant to do was the big parentheses and not have the 1/h on the inside. So, I just took that out. Thanks for the info. I guess if he is assuming those things as well, I am fine with assuming them. I just didn't know for sure. My test starts tomorrow morning! So, I must go study some more. StatisticsMan (talk) 15:19, 19 August 2009 (UTC)[reply]
very good; as your coach, I recommend you to stop studying in the afternoon and relax till tomorrow. You have all it takes to do it. and everybody in the RD/M roots for you!  ;) --pma (talk) 16:33, 19 August 2009 (UTC)[reply]
Hehe, thanks. Unfortunately, I am not going to take the afternoon off because I am trying to read through every solution I have written up to qual problems. This will end up taking about 2 full days and I still have 30 problems to read. Then, I have some other problems with solutions I want to look through as well. Thanks for the support. StatisticsMan (talk) 18:58, 19 August 2009 (UTC)[reply]
Well, I did enough to pass if everything I did was correct, or so I think. And, I think it was but I could be wrong. They aren't real clear about what it takes to pass. But, the first problem on the real analysis side was to show if the integral of a function is 0 on every interval, then the function is 0 a.e. So, thanks for explaining to me how to do that problem, pma. Without it, I definitely would not have passed. I will find out in a week or so if I passed. StatisticsMan (talk) 20:34, 20 August 2009 (UTC)[reply]

August 19

Is this polynomial most extreme?

The polynomial

takes on prime or almost-prime (product of two primes) values for x=1*,2*,7*,16,29,30,31,32,33,36,37*,..., where it is prime for the numbers marked (*). Is it likely that there is another polynomial with coefficients in {0,1} such that over half of the values of x for which it is an almost-prime or prime up to some point at least as high as x=33 occur in a string at least five long ending at that point? Julzes (talk) 06:57, 19 August 2009 (UTC)[reply]

I took the liberty of formatting your formula with LaTeX --pma (talk) 07:09, 19 August 2009 (UTC)[reply]

Note that I suspect that "five" could probably be replaced with "two" and the value 33 reduced also. Julzes (talk) 09:33, 19 August 2009 (UTC)[reply]

Vector perpendicular to plane

So given the equation of a plane, say... 3x+y-2z=10, how would you find the unit vector orthogonal to it? I ask because I'm not very familiar with vector algebra and this would help me finish a few proofs.--Yanwen (talk) 20:44, 19 August 2009 (UTC)[reply]

See Surface normal#Calculating a surface normal. —JAOTC 21:47, 19 August 2009 (UTC)[reply]
(ec) The vector v = (3, 1, -2) is a normal vector for the plane P: 3x + y - 2z = 10. This is why: suppose $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ are two points on the plane P; thus we know $3x_1 + y_1 - 2z_1 = 10$ and $3x_2 + y_2 - 2z_2 = 10$. Then the vector connecting them is $w = (x_2 - x_1,\ y_2 - y_1,\ z_2 - z_1)$, and the dot product of $w$ with v is
$3(x_2 - x_1) + (y_2 - y_1) - 2(z_2 - z_1) = 10 - 10 = 0$,
so v and $w$ are orthogonal. Therefore v is a normal vector of the plane P. To get a unit normal vector, just divide v by its length. Eric. 216.27.191.178 (talk) 21:52, 19 August 2009 (UTC)[reply]

Let's say the plane has equation $ax + by + cz = d$ for some, not all zero, real numbers a, b, c, and d. A normal vector would be (a,b,c), and so the two unit normal vectors are

$\pm \frac{(a, b, c)}{\sqrt{a^2 + b^2 + c^2}}.$

In fact, if you have some surface S given by an equation f = 0, (where f is a smooth function). So

$S = \{ (x, y, z) : f(x, y, z) = 0 \},$

then a normal vector to S is given by (f_x, f_y, f_z) where f_x, f_y and f_z are the partial derivatives of f with respect to x, y and z respectively. The surface S will be singular at a point when

$f_x = f_y = f_z = 0.$

Assuming that not all three partial derivatives are zero we have two unit normals given by

$\pm \frac{(f_x, f_y, f_z)}{\sqrt{f_x^2 + f_y^2 + f_z^2}}.$

In the plane example f(x,y,z) = ax + by + cz - d, and so f_x = a, f_y = b and f_z = c. In your example a = 3, b = 1 and c = -2 so

$\pm \frac{(3, 1, -2)}{\sqrt{14}}.$
~~ Dr Dec (Talk) ~~ 15:56, 20 August 2009 (UTC)[reply]
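As a small numerical check of the recipe above (normalising the gradient), here is a NumPy sketch for the original plane 3x + y - 2z = 10; the helper function name is mine:

  import numpy as np

  def unit_normals(gradient):
      """Return the two unit normals, plus and minus grad/|grad|, for a surface f = 0."""
      g = np.asarray(gradient, dtype=float)
      n = g / np.linalg.norm(g)
      return n, -n

  # Plane 3x + y - 2z = 10, i.e. f(x, y, z) = 3x + y - 2z - 10, gradient (3, 1, -2).
  n_plus, n_minus = unit_normals([3, 1, -2])
  print(n_plus)                   # (3, 1, -2)/sqrt(14), approximately [ 0.802,  0.267, -0.535]
  print(np.dot(n_plus, n_plus))   # 1.0, so it is indeed unit length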

So for a surface $x^2 + y^2 + z^2 = 50$,
would the normal vector at (3,4,5) be $\frac{(6, 8, 10)}{\sqrt{200}}$?--Yanwen (talk) 21:57, 20 August 2009 (UTC)[reply]
That looks right to me. Eric. 98.207.86.2 (talk) 06:31, 21 August 2009 (UTC)[reply]

Thanks, All!--Yanwen (talk) 12:46, 21 August 2009 (UTC)[reply]

No, that's not right! First of all are two vectors: there's a choice of sign. Both and are unit normal vectors. They are both unit length and they are both perpendicular to the plane. Notice that . The expression

gives two choices of unit normal vector at each point of the plane. Think about the plane z = 0; both (0,0,1) and (0,0,−1) are unit normal vectors. There is always a choice of two. Notice that is independent of x, y and z. So if a vector is a unit normal at one point of the plane then it will be normal vector at every point of the plane. So, the two choices of unit normal vector would be

I'm not sure why you've re-written the vectors with in the denominator. It looks more complicated. The simpler form would, IMHO, be the one already given. I hope this helps... ~~ Dr Dec (Talk) ~~ 22:30, 21 August 2009 (UTC)[reply]

I'm not sure exactly what you think is not right here, but you do realize that Yanwen's vectors were explicitly supposed to be normals of the surface , not of the plane in the original question, right? —JAOTC 00:02, 22 August 2009 (UTC)[reply]
What isn't right is that he said the unit normal vector is . Once you make a choice of sign you get a unit normal vector. The expression gives two unit normal vectors. He seems to have misunderstood my notation and has taken to be a single vector, which it is not: it is a pair of vectors differing by sign. I thought I made that quite clear in my last post. ~~ Dr Dec (Talk) ~~ 10:32, 22 August 2009 (UTC)[reply]
Ah, I see now that Yanwen wrote "vector" in place of "vectors". I think it's a fair bet that he already understood that there are two unit normal vectors, but I see your point. —JAOTC 11:41, 22 August 2009 (UTC)[reply]

August 20

Quadratic turd

My friend tell me there is a number called "quadratic turd? Why is mathematicians so vulgar to call a number turd? 67.101.25.201 (talk) 01:28, 21 August 2009 (UTC)[reply]

Quadratic surd Black Carrot (talk) 04:34, 21 August 2009 (UTC)[reply]
"Surd" from Latin surdus meaning deaf or mute - see Jeff Miller's "Earliest Known Uses of Some of the Words of Mathematics". Gandalf61 (talk) 12:12, 21 August 2009 (UTC)[reply]

Significant digits

Does the fact that 304.52 has 5 significant figures while 4.52 has 3 mean that the former is any more accurate? DRosenbach (Talk | Contribs) 03:59, 21 August 2009 (UTC)[reply]

If they've been rounded, they're equally precise relative to a given unit of measure, and the first is more precise relative to its size. Significant digits is a loose measurement of precision relative to size. Black Carrot (talk) 04:35, 21 August 2009 (UTC)[reply]
Quite right, but precision and accuracy are not the same. Accuracy is closeness of a measurement to the "actual" value. Precision is how specific the measurement is i.e. how many digits after the decimal point are displayed. For instance, my computed value of pi at 9.7564648389271381932 isn't very accurate, but it's pretty precise by most standards!--Leon (talk) 11:23, 21 August 2009 (UTC)[reply]
But since they share the same number of post-decimal digits, I figure the former measurement expresses no greater precision. If I am measuring the lengths of two pieces of wire, it's merely a reflection of our base-10 scale that makes a length of 9 one significant figure but 10 two significant figures. I mean, let's say we had a base-11 system with the symbol ¿ serving to signify the number "one greater than 9" and "one less than 10." Does measuring with our new number system make my measurement any less accurate? At first blush, I reflect that the smaller one's scale, the more precise a measurement should be, because it's easier to think about one unit, half a unit, a third of a unit, a sixth of a unit, etc., and each of these successive fractions is a smaller measurement for a smaller scale unit. I'm going off on a tangent here with my philosophical queries (I suppose there are about 4 different topics I've touched on above) but my main issue in asking this question is why adding pre-decimal digits adds precision when there doesn't seem to be any greater specificity in my measurement... It's just that one runs out of equivalent number of digits in our counting system and one must therefore carry it over to the next space. I hope this makes sense. DRosenbach (Talk | Contribs) 13:28, 21 August 2009 (UTC)[reply]
The reasons you mention are part of why significant figures aren't used in serious work. They are only really used in schools. Actual scientists will specify the precision in ways that don't depend on what base you are writing the numbers in. For example, you could say "10 +/- 2". You can measure precision either absolutely ("+/- 5") or relatively ("+/- 5%"), which is more useful depends on what you are doing with the numbers. --Tango (talk) 13:41, 21 August 2009 (UTC)[reply]
Are you talking absolute accuracy or relative accuracy? As written, both have the same absolute errors: ±0.005 units. They differ in their relative precision, though. 4.52±0.005 has a 1100 parts-per-million (ppm) error, while 304.52±0.005 has a 16 ppm error. Another complication is if the two numbers are in different units: 4.52±0.005 light years has much larger errors in both absolute and relative terms than 304.52±0.005 km. -- 128.104.112.102 (talk) 16:00, 21 August 2009 (UTC)[reply]
Yes -- that distinction definitely answers my query. Wow...that was spectacular. Thank you! DRosenbach (Talk | Contribs) 02:12, 23 August 2009 (UTC)[reply]
Wait...could you explain the 1100 vs. 16 ppm portion, I didn't get that. DRosenbach (Talk | Contribs) 02:15, 23 August 2009 (UTC)[reply]
"ppm" essentially means . A relative error of 1 ppm would mean that the error is a millionth of the actual value. In the 4.52±0.005 case, the relative error is
-- Meni Rosenfeld (talk) 10:57, 23 August 2009 (UTC)[reply]
That much I understood -- my followup was more so directed at how the other calculation possesses a denominator of greater magnitude yet results in ppm of much lesser magnitude. Was it a typo? Forgive my temporary idiocy. Got it, thanx! DRosenbach (Talk | Contribs) 13:06, 23 August 2009 (UTC)[reply]
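For concreteness, the relative-error arithmetic from the exchange above as a tiny sketch:

  def ppm_error(value, absolute_error):
      """Relative error expressed in parts per million."""
      return absolute_error / value * 1e6

  print(ppm_error(4.52, 0.005))     # about 1106 ppm, i.e. roughly 1100 ppm
  print(ppm_error(304.52, 0.005))   # about 16 ppm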

August 21

August 22

Pi

In the same vein as my question above on significant figures, how can we say that pi extends indefinitely if it is the answer to a division problem, namely 22/7, in which the LCD of significant figures is 1? Is it because we are not measuring anything with the 22 and the 7, and we really therefore mean 22.000000000000000000000000000000000000000000000000000/7.0000000000000000000000000000000000000000000, but of course with the zeros continuing forever? DRosenbach (Talk | Contribs) 02:18, 23 August 2009 (UTC)[reply]

π ≠ 22/7 Staecker (talk) 02:21, 23 August 2009 (UTC)[reply]
So how do we know what pi is other than dividing a circumference by a diameter, neither of which we are able to measure to an infinite number of decimal places? DRosenbach (Talk | Contribs) 02:56, 23 August 2009 (UTC)[reply]
Pi also can be derived with a variety of other techniques. The most common ones that come to mind are some easy integrals, which are part of elementary calculus. You can read pi, and Proof that π is irrational. I think you might be stuck on the "repeating decimals" thing - you have to recognize that irrational number is a more subtle mathematical concept than just the number of decimal-places. The fact that the digits never repeat is an artifact of the way we represent numbers with decimal place-values, but in reality, the concept of irrational numbers is much more precisely defined than the non-repeating decimal representation. There is no number system (not binary, not octal, not rational numbers, nothing)... which can express the exact value of an irrational number; it can not be exactly bound by two other numbers greater and less than it (because for any arbitrarily sized distance, there are smaller numbers which make a better bound). There's a lot of subtle advanced mathematics here - the best place to start would be to really read and understand our irrational number and real number articles. That being said, we can both prove that pi is irrational (meaning we can not find an exact value for it); and at the same time, we have lots of techniques and algorithms to approximate its value to any desired level of precision (if we are willing to work out the math to that accuracy). In any case, 22/7 is a terribly inaccurate approximation - they differ by more than 0.001 (which is a lot!) Nimur (talk) 03:02, 23 August 2009 (UTC)[reply]
A small nitpick: there are number systems in which pi can be expressed exactly. Base pi is the obvious one. Not that I would ever want to use base pi for anything. Rckrone (talk) 05:53, 23 August 2009 (UTC)[reply]
Rckrone, the nitpick wasn't very constructive. I think Nimur gave a very nice reply to the earlier post. I have only ever come across number systems with integer bases. Talking about base π seems a little wishy-washy to me. What's the expansion of 7 in base π? It would be irrational! In fact every number except multiples of π would be irrational. Would it even be well defined? ~~ Dr Dec (Talk) ~~ 17:06, 23 August 2009 (UTC)[reply]
Some irrational bases are perfectly sound—Golden ratio base comes to mind. But π is a quite different matter, of course. —JAOTC 17:28, 23 August 2009 (UTC)[reply]
Looking at the Golden ratio base article, it seems to support what I was saying. The article says that 11φ = 100φ. So in this base 11 = 100. So there's no uniqueness without taking a standard form. ~~ Dr Dec (Talk) ~~ 18:47, 23 August 2009 (UTC)[reply]
A standard convention for eliminating these duplicates is to insist that there are no consecutive 1's. -- Meni Rosenfeld (talk) 19:29, 23 August 2009 (UTC)[reply]
That is no worse than base 10 where, for example, 0.999...=1. This kind of notation is usually non-unique, regardless of the base. --Tango (talk) 19:39, 23 August 2009 (UTC)[reply]
But the article also says that golden ratio base has this problem too, i.e. equalities along the lines of 0.999... = 1. It seems that every choice of base gives this same problem with infinite decimal expansions. But what base 10 doesn't say is that 11 = 100. ~~ Dr Dec (Talk) ~~ 12:30, 24 August 2009 (UTC)[reply]
Dr. Dec, I'm sure you know this but "irrational" means not a ratio of integers, which is a property of the number rather than its representation. 7 is rational whether you express it in base 10, 2 or π - "having no repeating expansion in base π" is the correct term.
Now, whether Rckrone's comment is central to the discussion is irrelevant - Nimur has made a factually incorrect statement, and correcting it is appropriate. -- Meni Rosenfeld (talk) 19:29, 23 August 2009 (UTC)[reply]
Meni Rosenfeld, number theory isn't really my thing so you're going to have to help me out. I'm getting a bit confused. What is π in base-π? Well one representation would be that ππ = 10. Now π10 has a non-repeating decimal expansion whereas ππ has a repeating decimal expansion. Now π is irrational base 10, but π is rational base π. ~~ Dr Dec (Talk) ~~ 12:39, 24 August 2009 (UTC)[reply]
There is no such thing as "rational in base 10" or "rational in base π". A real number is either rational or not. From Rational number:
"a rational number is any number that can be expressed as the quotient a/b of two integers, with the denominator b not equal to zero".
The number 7 can be expressed as the quotient 7/1, where 7 and 1 are integers. Therefore, 7 is a rational. It doesn't matter at all how you choose to represent the number. You can represent it as "7", "3+4" or "" - it is the same number, and it is rational.
As it happens, there is a theorem that says that a rational number has a repeating expansion in any fixed integer base numeral system. However, the word "rational" does not mean "has a repeating expansion", and the distinction becomes important when we discuss non-integer bases.
So the number 7 has an expansion in base π which starts with "20.202112..." and happens to be non-repeating. But the number 7 is still rational.
Similarly π has a base-π expansion "10", which is repeating (terminating, even). But that does not make it any more rational, since it is still not a ratio of integers.
Anyway, you have also mixed up your notation - digitsbase means "the number whose expansion in the given base consists of the given digits". So 10π = π rather than ππ = 10. -- Meni Rosenfeld (talk) 13:22, 24 August 2009 (UTC)[reply]
Meni Rosenfeld, I've already confessed that number theory isn't my thing, so I would ask you to kindly tone down your rhetoric a couple of notches, and to show patience and good humour. We have that π10 = 10π. Now 10 is an integer and so a rational number. Now 10 is a rational number, but π10 = 10π, so why isn't π10 written to the base π a rational number? ~~ Dr Dec (Talk) ~~ 13:46, 24 August 2009 (UTC)[reply]
I guess you need to say that a number is rational when it can be written as the quotient of two integers, base-10. Like we have seen: the rational number 710 written in base-π gives a number that would be irrational if it were to base-10. Similarly, the irrational number π10 written in base-π gives a number that would be rational if it were to base-10. ~~ Dr Dec (Talk) ~~ 13:53, 24 August 2009 (UTC)[reply]


I apologize; I should not have said "there is no number system..." which can represent pi. What I meant to say is, there is no number system which can represent all irrational numbers exactly. If you switch to a number system based on an irrational number, you will be mapping a different subset of real numbers into rationals and irrationals; but there will still not be an exact representation for all real numbers. These are moot points though - I shouldn't have made such a strong statement about pi. In any case, the OP is clearly unaware of the concept of rational and irrational numbers - so let's try to phrase our responses to help him/her understand that, before diving into such subtleties. Nimur (talk) 21:55, 23 August 2009 (UTC)[reply]
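To make the "22/7 is only an approximation" point quantitative, here is a quick sketch; Machin's formula is just one standard way of getting at pi without measuring any circle:

  import math

  approx = 22 / 7
  print(approx - math.pi)    # about 0.00126, so 22/7 overshoots pi by more than 0.001

  # pi can be computed independently of any measurement, e.g. via Machin's formula
  # pi/4 = 4*arctan(1/5) - arctan(1/239).
  machin = 4 * (4 * math.atan(1 / 5) - math.atan(1 / 239))
  print(machin, abs(machin - math.pi) < 1e-12)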

Lagrangian is invariant

What does that mean? Let's for simplicity take a system where the Lagrangian L is invariant under U(1) (which I understand contains the complex numbers with absolute value 1). But invariance can not mean -L = L. 93.132.179.71 (talk) 07:58, 23 August 2009 (UTC)[reply]

Just to fix the vocabulary: if $f\colon X \to Y$ is any function (usually with $Y = \mathbb{R}$) and if G is a group with an action on $X$, one says that $f$ is invariant for the action of G, or G-invariant, iff for all $g \in G$ and $x \in X$ there holds $f(g \cdot x) = f(x)$. One also says that G is a group of symmetries of f. So in any case there's no "-L=L". Maybe you have in mind a situation where there is a G action on Y too, (usually, but not necessarily, the same action, with X=Y), and $f$ satisfies $f(g \cdot x) = g \cdot f(x)$ for all $g \in G$ and $x \in X$; in this case one says that $f$ is equivariant w.r.t. the two G actions, or G-equivariant. --pma (talk) 10:19, 23 August 2009 (UTC)[reply]
Thank you, that excludes some of the possibility for misinterpretation. So f is constant on the equivalence classes of X induced by G. Now with the Lagrangian itself operating on functions I'm still not clear if in this case G acts on these functions (as the domain of L) or on the domain of each of these functions. 93.132.179.71 (talk) 13:26, 23 August 2009 (UTC)[reply]
(By the way, the equivalence classes induced by a group action have a special term, "orbits", for those days when you get tired of writing "equivalence class induced by the group action". Eric. 98.207.86.2 (talk) 21:01, 23 August 2009 (UTC))[reply]
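As a concrete illustration of the definition in the physics setting (a standard textbook example, not tied to the particular system the OP has in mind): for a free complex scalar field, U(1) acts on the field itself, and the Lagrangian density is invariant because every term pairs the field with its conjugate:

  % U(1) invariance of the free complex scalar field Lagrangian (standard example).
  \mathcal{L}(\phi, \partial_\mu \phi) = \partial_\mu \phi^{*}\, \partial^{\mu} \phi - m^{2} \phi^{*} \phi ,
  \qquad \phi \mapsto e^{i\alpha} \phi , \quad \phi^{*} \mapsto e^{-i\alpha} \phi^{*} \quad (\alpha \in \mathbb{R} \text{ constant}),
  \Longrightarrow \quad \mathcal{L}(e^{i\alpha}\phi,\, e^{i\alpha}\partial_\mu \phi) = \mathcal{L}(\phi, \partial_\mu \phi),
  \text{ since } e^{-i\alpha} e^{i\alpha} = 1 .

So in this example the group acts on the fields (the functions) rather than on the domain of each field, and L is invariant in exactly the sense defined above.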

Proof?

Hi, indeed I mostly reply to questions at Arabic Wikipedia, and one question stopped me, as I couldn't think where to classify it (regardless of whether or not it is homework). I'll try to translate it into English symbols as follows: For any natural numbers n, k, prove that

$3n^2 + 3n + 7 \neq k^3$
always holds.

I guessed it can be done by mathematical induction but failed. This is the original question in Arabic. Thanks in advance, --Email4mobile (talk) 11:11, 23 August 2009 (UTC)[reply]

These problems are usually best done with modular arithmetic. Take the equation mod 6 ($n(n+1)$ is even), what does this tell you about $k^3$ mod 6? If $k^3$ is $1$ mod 6 then you have $k = 6m + 1$ for some integer $m$... 129.67.186.200 (talk) 11:40, 23 August 2009 (UTC)[reply]
Taking the equation mod 6 and probing for the possible values of x^3 mod 6 shows k = 1 mod 6. But I can't see any use of this fact. 93.132.179.71 (talk) 13:31, 23 August 2009 (UTC)[reply]
Me neither. Maybe there is an elementary reason, maybe not so elementary; for instance the classical diophantine equation y^2 + 2 = x^3 is studied by factorizing the LHS in Z[√-2] (so here one would work in a suitable ring of algebraic integers instead). --pma (talk) 13:51, 23 August 2009 (UTC)[reply]
Once you know that k = 6j + 1, you get k^3 = 216j^3 + 108j^2 + 18j + 1. This gives n^2 + n + 2 ≡ 0 (mod 6), but n^2 + n + 2 has no roots mod 6. 129.67.186.200 (talk) 15:38, 23 August 2009 (UTC)[reply]
I'm not very good with number theory: it's not my thing. I've given this a once-over and here are my thoughts. The LHS is always congruent to 1 modulo 6. That means that k^3 must always be of the form 6m + 1 for some integer m. This has reduced the problem from a quadratic to a linear problem. Why not try more modular arithmetic on this linear expression? ~~ Dr Dec (Talk) ~~ 17:19, 23 August 2009 (UTC)[reply]
p.s. I've just tested it for and it holds. ~~ Dr Dec (Talk) ~~ 17:30, 23 August 2009 (UTC)[reply]
Yes, seeing that it's 6m + 1 is certainly the first step. Seeing why m can't be 57 (or any other number making 6m + 1 a cube) is the second. (This is what 129.67.186.200 did two hours ago.) —JAOTC 17:49, 23 August 2009 (UTC)[reply]
Really? If he did then it's very convoluted. I didn't see that he said that. If I didn't see that then I'm sure the person making the original post didn't see it either. Jao, please do me a favour and calm down. From this post, and earlier ones, you seem to be quite confrontational. So what if someone said something earlier? Does it really matter? Is there any need to make a sly comment? No! ~~ Dr Dec (Talk) ~~ 18:09, 23 August 2009 (UTC)[reply]
Hi, 129 here, I don't think my solution is convoluted. All I did was to observe 1) the LHS is 1 mod 6, so k must be 1 mod 6 (I left this as a hint, 93 spelled it out), and 2) knowing this, you are led to the equation n(n+1)+2 = 0 mod 6, which is easily seen to have no solutions (in my 15:38 post). Jao is only saying (I think!) that you should read the thread carefully before you reply, it gets confusing otherwise. Anyway, the problem is done. 87.194.213.98 (talk) 19:33, 23 August 2009 (UTC)[reply]
I did read the thread carefully; it wasn't very long. You gave very little explanation and just wrote a lot of maths. I found that convoluted. Clearly, you wouldn't think that your solution is convoluted: you wrote it! People ask questions because they don't understand the topic. There's no point writing a reply that assumes they have the required knowledge to solve the problem. If they did then they would have solved it, and if they had solved it then they wouldn't be asking the question. And whatever Jao was trying to say, he didn't say it in a very nice way. Scroll up and read more of his posts; he's not the most civil or polite. ~~ Dr Dec (Talk) ~~ 20:54, 23 August 2009 (UTC)[reply]
Injecting myself into this conversation, I'd like to say that Jao has always been civil in the lengthy period of time I've seen him contributing to the RD. I can sort of see how Jao's comment in the unit normal discussion appears sharp, but that can't possibly be a big deal, can it? Eric. 98.207.86.2 (talk) 21:14, 23 August 2009 (UTC)[reply]
I found Jao's remark a gentleman post. Very opportune, too. --pma (talk) 06:57, 24 August 2009 (UTC)[reply]

I'd like to thank you all for your wonderful cooperation and interaction. To tell you the truth, such proofs are totally new to me because I haven't studied "modular arithmetic"; although I think the one who asked this at the Arabic Wikipedia must have studied it, unless there's another way to treat such a problem.--Email4mobile (talk) 18:27, 23 August 2009 (UTC)[reply]
Well, 129's proof is clever and clear. It reduces to an elementary fact (i.e., n(n+1)+2 is never divisible by 6) that one can easily check. Most likely you are able to understand it perfectly, but of course, if any point is still not clear to you, I guess you are welcome to ask for further details. --pma (talk) 06:57, 24 August 2009 (UTC)[reply]
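For anyone who wants to see the two steps numerically (a sanity check only, not a substitute for the mod-6 argument; it takes the equation in the question to be 3n^2 + 3n + 7 = k^3), here is a short Python sketch:

    # Step 1: n(n+1) + 2 mod 6 depends only on n mod 6, so six cases suffice.
    print(sorted({(n * (n + 1) + 2) % 6 for n in range(6)}))   # [2, 4] -- never 0

    # Step 2: exhaustive check that 3n^2 + 3n + 7 is not a cube for small n.
    cubes = {k ** 3 for k in range(1, 1000)}
    assert all(3 * n * n + 3 * n + 7 not in cubes for n in range(1, 10000))
    print("no solutions with n < 10000")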


I have started reading about modular arithmetic in order to learn more, and I hope you can give me some more recommendations for studying this subject. --Email4mobile (talk) 10:14, 24 August 2009 (UTC)[reply]

Average

In a philosophically mathematical manner, is it ridiculous to ask if "the average person is average"? I suppose there could be some term switching in that both averages would not refer to the same method of obtaining an average (e.g. mean vs. median), but that would sort of be a fallacy of four terms. My sister-in-law asked me this question, and at first I thought it was insightful, but shortly thereafter figured that it's probably not that insightful at all. Any comments? DRosenbach (Talk | Contribs) 13:38, 23 August 2009 (UTC)[reply]

I'd say not; I guess it can be translated into more mathematical language as asking whether the variance is small. An average gives only a first description of a distribution; then you can ask how close the individuals are to the average value. For instance, in a society of clones, everybody is very close to the average person (and I'm trying to recall a sentence in a novel by Oscar Wilde, asserting that the typical Englishman is not typical...)--pma (talk) 14:16, 23 August 2009 (UTC)[reply]
If "the average person" refers to someone chosen at random, there is a 1 in 2 chance that he is not average because only 50 of the population placed on a bell curve would be considered average. So perhaps it's not such a bad statement after all. DRosenbach (Talk | Contribs) 14:36, 23 August 2009 (UTC)[reply]
I've never heard of "average" meaning "random" or "in the middle 50%". --Tango (talk) 19:43, 23 August 2009 (UTC)[reply]
Another interpretation would be to relate this to ergodicity. (My understanding of ergodic theory isn't great, so please correct me if this makes no sense.) For instance, if you were interested in which parks in a city were well-liked by people, you can do two things. You can look at a lot of people in a single instant and see which parks are busy and which are empty. You can also follow a single person for a year, and see which parks the person frequents.
These methods will likely give very different results, whatever person you choose. This means that even if you choose the "most average" person in the city (by whatever definition), his path (the parks he visits) will still diverge from the average. In that sense human beings have a tendency to be non-average (which, I think, is more subtle than just having a high variance, although that may be an effect). risk (talk)

A game with positions expressible as vectors

I'm looking for a reasonably complex game of perfect information, where each position (i.e. the state of the game after a player has made her move) can be fully expressed as an n-dimensional vector in a straightforward manner. I'm sure that the board positions of Chess and Go can be encoded in this way, but any method I can think of is rather convoluted, 'hides' the relevant information, or leads to a massive, very empty space.

Can anybody think of a game that fits this description? I'm sure such a game would be very useful in Game Theory (relating game theory to geometry), so I figured someone might have invented one just for that purpose. risk (talk) 16:02, 23 August 2009 (UTC)[reply]

A very simple example would be noughts and crosses. There are nine positions (represent this by Z_9), and each place can have a nought, a cross, or be empty (represent this by Z_3). So any game state would be a vector in Z_9 × Z_3. Alternatively, a game state could be given by a mapping Z_9 → Z_3. Of course not all of these vectors and maps would represent a valid game: for example, you couldn't have a cross in all nine positions. As to your problem about big empty spaces: the layout of a game has myriad possibilities, and so the space of all game states would be huge – if not infinite. So you would only ever have a single vector for a given game state sat in a massive space of game states. I've thought about something similar with snooker. The pocket is on a two-dimensional table and the trajectories of the cue ball which lead to a pot are limited. But when you add side, top, bottom, power and other such factors you can think of your pockets as being multi-dimensional pockets on a multi-dimensional table. ~~ Dr Dec (Talk) ~~ 18:34, 23 August 2009 (UTC)[reply]
I was thinking along similar lines for Chess or Go, but I figure that even if the space of all games is near infinite, the space between 0 and 1 in even one dimension is actually infinite, so it could easily accommodate all those positions. Consider, for instance a representation of a chess position as a binary string. You can interpret each binary string as a number in (0, 1), which gives you the required densely packed representation (informally; it wouldn't actually be a dense set, I guess).
The problem with this is that the transformation removes too much information and the geometric interpretation of this set is fairly meaningless. I was trying to think of a game where there is a more natural transformation from positions to euclidean points, so that things like the euclidean distance are more meaningful. risk (talk) 19:37, 23 August 2009 (UTC)[reply]
Wouldn't Z_3^9 be a more obvious vector space for describing game states in noughts and crosses? Each position is represented by a dimension, and what is filling that position is given by the coordinate in that dimension. I'm not sure how your suggestion would work – you would need 9 points in that space to determine a game state, and the first coordinate is completely redundant as long as those points are ordered (and the coordinates in a vector are always ordered). --Tango (talk) 20:03, 23 August 2009 (UTC)[reply]
Yeah, I'm not sure why I wrote Z_9 × Z_3; I was thinking one thing and writing another. Z_3^9 would indeed be the vector space to use. An empty board would be represented by the zero vector. A board filled with noughts (although not a valid game state) would be (1,…,1) and a board filled with crosses (again not a valid game state) would be (2,…,2). Tango, thanks for the correction. ~~ Dr Dec (Talk) ~~ 20:26, 23 August 2009 (UTC)[reply]
Risk, I see what you're saying about the interval. Is there really a difference? Is a point any more or less isolated in the interval [0,1] than a vector is in Z_3^9? I would say that a point is more isolated in the interval than a vector in Z_3^9. The vector (a_1,…,a_9), where each a_i is 0, 1 or 2, could be set to correspond to the number 0.a_1a_2…a_9. What about irrational numbers like π/4 ≈ 0.7854? They don't correspond to any element of Z_3^9. In fact the numbers in [0,1] which don't correspond to elements of Z_3^9 form an open and dense subset of [0,1]. ~~ Dr Dec (Talk) ~~ 20:42, 23 August 2009 (UTC)[reply]
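A small Python sketch of this correspondence (the encoding 0 = empty, 1 = nought, 2 = cross and the helper names are arbitrary choices made for the illustration; the number 0.a_1a_2…a_9 is read here as a base-3 expansion, though reading it as an ordinary decimal works just as well):

    from fractions import Fraction

    def board_to_vector(board):
        """board: a string of 9 characters from {'.', 'O', 'X'}, read row by row."""
        code = {'.': 0, 'O': 1, 'X': 2}
        return tuple(code[c] for c in board)

    def vector_to_unit_interval(v):
        """Send (a_1, ..., a_9) in Z_3^9 to the base-3 number 0.a_1a_2...a_9 in [0, 1]."""
        return sum(Fraction(a, 3 ** (i + 1)) for i, a in enumerate(v))

    b = "XO.OX...."              # an example board, read row by row
    v = board_to_vector(b)
    print(v)                     # (2, 1, 0, 1, 2, 0, 0, 0, 0)
    print(vector_to_unit_interval(v), float(vector_to_unit_interval(v)))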
That's a good point. In Z_3^9 every point is a board position. In that sense, you're absolutely right, the space is fully packed with positions (though not all of them legal). But the structure that those points make when you view the set as a subset of R^9 isn't very interesting (essentially a 3×3×3×… grid), and in that sense it does create more 'empty space'. Of course, a carefully designed game could get around this by making all the real or rational vectors possible/legal moves, which is more or less what I'm looking for.
I should perhaps elaborate a little on what I'm trying to do. One of the things I'm toying with is the idea that in such a representation you can talk about the (fractal) dimension of the total set of board positions, or the dimension of the sets of winning/losing positions, or the dimension of the boundary between these points. (You could do that anyway by just defining a metric on the positions, but that would introduce the possibility that the choice of metric is more important to the dimension than the game). Of course any game with a finite number of positions has dimension zero, but there are ways around that.
Another option would be to treat a move as a map. This would open up the possibility of treating the game like a discrete dynamical system. risk (talk) 21:40, 23 August 2009 (UTC)[reply]
Are you restricting yourself to vector spaces or is any manifold acceptable? There is an additional requirement that the subsets you are interested in actually be subspaces/submanifolds, otherwise the dimensions you talk about aren't well defined. I'm sure we can invent a game that works, but finding an existing one (other than the kind of strange games mathematicians invent for various purposes) seems unlikely. Most games have a finite number of possible game states since there is usually a finite number of possible moves at any given point and a finite amount of time to play the game (although I suppose it is possible that time could be unbounded, leading to countably infinitely many game states, but that isn't much help anyway). If we want to have interesting geometry over the real numbers we obviously need uncountably many game states, that means uncountably many decisions at a given point. The only such games I can think of are sports where you have things moving around in space, but they aren't usefully described in the way you want. For football (soccer) you can have the position of the ball represented by a 2D vector, the possible game states are a rectangle and the winning game positions are a line segment at each end of the pitch. Not very interesting, but it fits your requirements technically. You could try snooker with 2 dimensions per ball, the possible game states would be those where the balls are in a rectangle and there is a minimum distance between them (the diameter of the balls). The end game states would be a discrete set where all the balls are in the pockets (the minimum distance rule doesn't apply there), and you need 2 extra discrete dimensions for the scores. So, you can do the kind of things you are talking about, but they aren't useful. I think it is only going to work nicely if you invent a game for that purpose. --Tango (talk) 22:29, 23 August 2009 (UTC)[reply]
I guess you're right. Specifically, the difference between a countable and an uncountable number of positions may not be so easy to gloss over as I had thought. I guess I have some more thinking to do. Thank you both for the insights. risk (talk) 11:11, 24 August 2009 (UTC)[reply]

BN-pairs: conjugate generators of the Weyl group give double cosets of the same size

Hello,

I read (B, N) pair, but I still have a question about the correspondence described there between elements of the Weyl group and double cosets.

Suppose w_1 and w_2 are both generators of the Weyl group, and suppose that they are conjugate (within the Weyl group). Is it correct, and trivial to see(?), that the double cosets B w_1 B and B w_2 B have the same cardinality? (I'm rather sure it should be, because in geometric language this is the number of chambers through one panel in a thick building.)

However, I'm afraid it's not like we can prove that by proving that both double cosets are conjugate as well (because I think they aren't). Do you have any ideas? Thanks!

Evilbu (talk) 21:25, 23 August 2009 (UTC)[reply]

So by w_1 and w_2 being generators, I'm assuming you mean they are both part of a single (Coxeter) generating set. The Weyl group of type An is a little weird because individually all of its generators are conjugate (they are just two cycles in the symmetric group), but within the particular Coxeter system they are quite different. For instance, take the group to be SL(4,p^f) of type A3. B is a Borel subgroup, the normalizer of a Sylow p-subgroup. B is a semidirect product of T and the Sylow p-subgroup, and N is the normalizer of T in G. N/(BnN) is isomorphic to the symmetric group on 4 points, a Coxeter group of type A3 with Coxeter generating set w1=(1,2), w2=(2,3), and w3=(3,4). Letting n1, n2, n3 be preimages of w1, w2, and w3, one has that the double cosets Bn1B and Bn2B have different sizes. For instance, when p^f=3, Bn1B has size 157464 and Bn2B has size 17496. Now, if w1 and w2 are conjugate under a graph automorphism of the Weyl group, then I think the group itself should have a graph automorphism so that Bn1B and Bn2B are conjugate. The graph automorphism is unlikely to be an inner automorphism, so probably they are not conjugate within the group itself, but they would be in the holomorph. JackSchmidt (talk) 00:34, 24 August 2009 (UTC)[reply]
Hello, thanks for your reply. Indeed, I mean that they are both part of a single Coxeter generating set (sorry if that was unclear). I must admit that I do not know what a "graph automorphism of a group" is. I do know "automorphisms of a graph" (and there is plenty on that on the internet) but that's something else (it seems). I was looking at the numbers you gave me so I could try it myself. Do you also happen to know the size of B? If p^f = 3, I find that it should be 2^3 3^6 = 5832. In general, if I let q denote p^f, I find that each double coset B n_i B should have size (q-1)^3 q^7 (which is 17496 if q = 3). So I don't understand what is going on. Note that the cardinality of each double coset must be a multiple of the cardinality of a coset of B. (And to be honest: it is in fact the quotient |B n_i B| / |B| that I am interested in.) Many thanks.
Oops, sorry, my explicit calculation was wrong (it took a B and an N, but not from the same (B,N) pair, so my n1 was actually of length 3 in the true Coxeter generators). The general calculation made it clear:
For SL(n+1,q) of type An, B has order (q-1)^n*q^(n(n+1)/2) and it has a very important subgroup U of order q^(n(n+1)/2). U is made up of subgroups X_i, each of order q, called root subgroups, that are in one-to-one correspondence with the positive roots. For a given element w of the Weyl group, there is a subgroup U_w of U which is generated by all the X_i whose positive root is sent to a negative root by w. So U_1 = 1, and U = U_{w0} where w0 is the longest element of the Weyl group. The double coset BwB is also equal to BwU_w and has cardinality |B|*|U_w|. In particular, for w=w1, U_w1 = X_1 has order q, and for w=w2, U_w2 = X_2 also has order q, so the quotient |BwB|/|B| = |U_w| is equal to q whenever w has length 1 (that is, whenever it is one of the generators from the Coxeter generating set). Indeed, if w has length l (it is a product of l generators, but not a product of fewer than l generators), then |U_w| = q^l.
For general (B,N) pairs I don't know a formula for the size, but for so called "split (B,N) pairs of characteristic p satisfying the commutator relations" this same thing occurs, BwB = BwU_w and |BwB|/|B| = |U_w|. However, even for algebraic groups, the size of U_{w1} can depend on which root you take, though this only happens in the twisted groups. Most of this can be found in Carter's Simple Groups of Lie Type. Hopefully I'll have time later today to write out the SU(n,q) case. JackSchmidt (talk) 13:06, 24 August 2009 (UTC)[reply]
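The counting |BwB| = |B|*q^l(w) can also be checked by brute force in the smallest interesting untwisted case. The Python sketch below takes G = SL(3,2) = GL(3,2) (so q = 2, type A2), B the invertible upper triangular matrices, and the permutation matrices of (1 2) and (2 3) as representatives of the two simple reflections; these choices and the helper names are illustrative assumptions, not anything from the discussion. Both simple double cosets come out with size 16 = q*|B|, as expected:

    from itertools import product

    q = 2

    def det(m):
        (a, b, c), (d, e, f), (g, h, i) = m
        return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % q

    def mul(m, n):
        return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(3)) % q
                           for j in range(3))
                     for i in range(3))

    # Over GF(2) an invertible upper triangular matrix has all-1 diagonal,
    # so B is just the 8 upper unitriangular matrices (the torus is trivial).
    B = [((1, a, b), (0, 1, c), (0, 0, 1)) for a, b, c in product(range(q), repeat=3)]
    assert all(det(b) == 1 for b in B)
    print("|B| =", len(B))                    # 8 = q^3

    n1 = ((0, 1, 0), (1, 0, 0), (0, 0, 1))    # permutation matrix of (1 2)
    n2 = ((1, 0, 0), (0, 0, 1), (0, 1, 0))    # permutation matrix of (2 3)

    for name, w in [("B n1 B", n1), ("B n2 B", n2)]:
        coset = {mul(mul(b1, w), b2) for b1 in B for b2 in B}
        print(name, "has size", len(coset), "=", len(coset) // len(B), "* |B|")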
Hello, I certainly enjoy "debating" with you (and I think we have also crossed paths elsewhere). If it suits you better, this discussion can also be continued on my talk page. I somewhat recognise the stuff you wrote about U_w. I read that in general there is a q_i associated with each generator w_i of the Coxeter generating set. The correct formula for the size of U_w would then be q_1^{a_1} … q_n^{a_n}, where a_i is the number of positive roots, conjugate to the root of w_i, that are turned negative by w (so we use weighted lengths instead). So for SU(6,q) there would be 6 positive roots associated with q^2, and 3 positive roots associated with q. So the size of U = U_{w_0} would be (q^2)^6 (q)^3 = q^15.
It is exactly those q_i that I am after. They should also be the cardinalities |B w_i B| divided by |B| itself. And they should be equal if the corresponding generators are conjugate (so that is why there is only one q involved in the projective case, the A_n case). A lot of people have told me that Carter's book is a good reference, but I couldn't find an answer in there. That is in general my problem with theory like this: too much stuff I picked up "from the street". I was very curious if there is a really quick way to see that those q_i must be equal for conjugate generators. Evilbu (talk)
It's nice to have the question asked this way. I'll try to check the SU case today (I was thinking that this would be an example where the w_i could be conjugate but have different q_i, but perhaps they happen to be equal). Unfortunately the Weyl group is type Bn in the SU case, and so there does seem to be a good chance the q_i will respect conjugacy. The size of the q_i, though, is supposed to depend on the size of the orbit of the root under the graph automorphism that defined the twisted group, and I don't see why that should be particularly related to conjugacy in the Weyl group. If it is true (in some generality), then Carter's Finite Groups of Lie Type (the thick brown one, not the thin black/gray one) might have the appropriate setup in its early chapters. I prefer the thin "Simple" book for most things, but it treats twisted and untwisted separately. JackSchmidt (talk) 15:52, 24 August 2009 (UTC)[reply]

August 23

August 24

Probability

I've got a question on probability which I found in one of my reference books and couldn't figure out. Guys, this is not homework, just an interesting question which I ran into. Say a police officer is investigating a case. He is 60% convinced that X is the culprit. Suddenly, he finds a new piece of evidence: the culprit has a mark on his face, which 90% of the population don't have. If X has the mark, what is the probability that X is the prisoner? Actually, though this question looks simple enough, I just can't crack it! Excuse me if I've asked a very obvious question, but I need some help. Rkr1991 (Wanna chat?) 10:54, 24 August 2009 (UTC)[reply]

Let A be the event "X is the culprit" and B the event "X has the mark". We have P(A) = 0.6, P(B|A) = 1 and P(B|¬A) = 0.1. By Bayes' theorem, the posterior probability of A given that we know B is

P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|¬A)P(¬A)) = 0.6 / (0.6 + 0.1×0.4) = 15/16 ≈ 0.94.
-- Meni Rosenfeld (talk) 12:16, 24 August 2009 (UTC)[reply]
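Numerically, under the same reading of the puzzle (prior 0.6, the culprit certainly has the mark, 10% of everyone else does), the posterior is 15/16 = 0.9375. A Python sketch with exact fractions:

    from fractions import Fraction

    p_A = Fraction(6, 10)              # prior: the officer is 60% convinced X is the culprit
    p_B_given_A = Fraction(1)          # the culprit certainly has the mark
    p_B_given_notA = Fraction(1, 10)   # 90% of the population don't have it

    posterior = (p_B_given_A * p_A) / (p_B_given_A * p_A + p_B_given_notA * (1 - p_A))
    print(posterior, float(posterior))   # 15/16 0.9375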

Either there's a typo in the question or it's a trick question. The probability that X is the prisoner cannot be calculated from the police officer's certainty that X is the culprit and the ratio of people carrying a certain facial mark. We're not told anything about any trial turning a suspected culprit into a prisoner. ~~ Dr Dec (Talk) ~~ 12:24, 24 August 2009 (UTC)[reply]

Great, that's the right answer. Thanks a lot!!!! Rkr1991 (Wanna chat?) 13:25, 24 August 2009 (UTC)[reply]

General formula for power sum

Today I am the one who asks about a general formula for the following sum of integer powers:

1^k + 2^k + 3^k + … + n^k

For example, the simplest case, in which k = 1, is given by the arithmetic progression sum:

1 + 2 + 3 + … + n = n(n+1)/2

If there exists an article about this, I'd be glad to be guided to it.--Email4mobile (talk) 16:27, 24 August 2009 (UTC)[reply]

Have a look at Bernoulli number which answers your question. 129.67.186.200 (talk) 16:39, 24 August 2009 (UTC)[reply]
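For general k, the closed form given at Bernoulli number is Faulhaber's formula: 1^k + 2^k + … + n^k = (1/(k+1)) * sum over j from 0 to k of (-1)^j C(k+1, j) B_j n^(k+1-j), with the convention B_1 = -1/2. The Python sketch below (the helper names are choices made for the illustration) computes it with exact rational arithmetic and checks it against direct summation:

    from fractions import Fraction
    from math import comb

    def bernoulli_numbers(m):
        """B_0, ..., B_m from the recurrence sum_{j=0}^{i} C(i+1, j) B_j = 0, so B_1 = -1/2."""
        B = [Fraction(1)]
        for i in range(1, m + 1):
            s = sum(comb(i + 1, j) * B[j] for j in range(i))
            B.append(-s / (i + 1))
        return B

    def power_sum(n, k):
        """1^k + 2^k + ... + n^k via Faulhaber's formula."""
        B = bernoulli_numbers(k)
        total = sum((-1) ** j * comb(k + 1, j) * B[j] * Fraction(n) ** (k + 1 - j)
                    for j in range(k + 1))
        return total / (k + 1)

    # Check against brute force for small n and k, then the k = 1 case of the question:
    for k in range(6):
        for n in range(1, 20):
            assert power_sum(n, k) == sum(i ** k for i in range(1, n + 1))
    print(power_sum(100, 1))   # 5050 = 100*101/2, the arithmetic-progression case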