Wikipedia:Reference desk/Mathematics
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
September 17
L'Hopital's rule
How do I apply L'Hopital's rule to a limit function with two or more variables? Plasmic Physics (talk) 10:45, 17 September 2012 (UTC)
- With two or more independent variables, I think you need to define the path along which you approach the limit point (and so reduce the problem to a one-variable problem) before applying L'Hopital's rule. This is because the limit can depend on the path taken. To take a simple example (which doesn't really need L'Hopital's rule, although you can use it), consider the limit of the function f(x,y) = (x + 2y)/(2x + y)
- at (0,0). If we approach (0,0) along the line x = 0 then the limit is 2; along the line y = 0 then the limit is 1/2; and along the line y = x then the limit is 1. Gandalf61 (talk) 11:11, 17 September 2012 (UTC)
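Gandalf61's point can be checked numerically. A minimal sketch, using f(x,y) = (x + 2y)/(2x + y) as a function consistent with the three path limits quoted above (the exact function displayed in the original thread may have differed):

```python
# A function whose limit at (0,0) depends on the approach path, so
# single-variable L'Hopital only applies after a path is chosen.
def f(x, y):
    return (x + 2*y) / (2*x + y)

t = 1e-9
along_x0 = f(0.0, t)   # approach along the line x = 0
along_y0 = f(t, 0.0)   # approach along the line y = 0
along_yx = f(t, t)     # approach along the line y = x

assert abs(along_x0 - 2.0) < 1e-6
assert abs(along_y0 - 0.5) < 1e-6
assert abs(along_yx - 1.0) < 1e-6
```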
- What if I use mixed double partial derivatives of both the numerator and denominator? If I do that for the limit of f(x,y) as (x,y)->(0,0), then this approach would give 2/2, which simplifies to 1. Is this just a coincidence? Plasmic Physics (talk) 13:17, 17 September 2012 (UTC)
- Not sure what you mean by "mixed double partial derivatives". If you mean (∂f/∂x + ∂f/∂y) / (∂g/∂x + ∂g/∂y),
- then you are implicitly choosing the path y = x (or at least, a path that is tangent to y = x at (0,0)). Gandalf61 (talk) 13:51, 17 September 2012 (UTC)
- No, I mean the product not the sum, as in (∂f/∂x)(∂f/∂y) over (∂g/∂x)(∂g/∂y). Plasmic Physics (talk) 21:16, 17 September 2012 (UTC)
- Roughly, expand f and g in their multivariate Taylor series at (0,0). If the Taylor series have a common factor, then you can cancel this common factor and evaluate the limit. Of course, this works much less often than in one variable, since two functions can be zero at a point yet have no common factor (e.g., f(x,y)=x and g(x,y)=y are both zero at the origin, with no common factor). Sławomir Biały (talk) 00:03, 18 September 2012 (UTC)
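Sławomir Biały's common-factor idea can be illustrated with a small numeric sketch (the functions f and g below are my own illustrative choices, not from the thread): f = x²y + xy³ and g = xy both vanish at (0,0) and share the factor xy, and after cancellation the quotient x + y² tends to 0 along every path:

```python
# Both f and g vanish at (0,0), but they share the factor x*y;
# cancelling it leaves f/g = x + y**2, which -> 0 at the origin.
def f(x, y): return x**2 * y + x * y**3
def g(x, y): return x * y

for t in (1e-3, 1e-6, 1e-9):
    for (x, y) in ((t, t), (t, 2*t), (t*t, t)):
        q = f(x, y) / g(x, y)
        assert abs(q - (x + y**2)) < 1e-12   # equals the cancelled form
        assert abs(q) < 1e-2                 # and is heading to 0
```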
- To show that a limit exists in the multivariate context, though, you need to show that the limit is the same along all paths. I'm not sure l'Hopital's rule is going to be much help (it's really just a short-cut anyway - if you can find a limit using the rule then you can find it using elementary techniques almost as quickly). --Tango (talk) 23:12, 17 September 2012 (UTC)
Definite integral from -1 to 1 of 1/x
Since 1/x is discontinuous and undefined at x = 0, my understanding is that the definite integral therefore has no limit and is undefined (and claiming that
∫ from −1 to 1 of (1/x) dx = 0
is sketchy, dubious, and unrigorous). However, I have heard that it is possible to rigorously evaluate the integral so that the result is not undefined. Can someone explain how this is done? —SeekingAnswers (reply) 15:34, 17 September 2012 (UTC)
- See Cauchy principal value. Your integral is discussed in the Examples section. --Wrongfilter (talk) 16:18, 17 September 2012 (UTC)
- See also Principal value for the complex number case and list of integrals#Integrals with a singularity which has a note about both cases. Dmcq (talk) 16:42, 17 September 2012 (UTC)
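The Cauchy principal value truncates symmetrically around the singularity; truncating asymmetrically changes the answer, which is exactly why the unqualified claim is "sketchy". A quick sketch using the antiderivative ln|x|:

```python
import math

# Symmetric truncation: ∫_{-1}^{-eps} dx/x + ∫_{eps}^{1} dx/x.
# The antiderivative is ln|x|, so the two halves cancel exactly.
def pv(eps):
    left = math.log(eps) - math.log(1.0)     # integral over [-1, -eps]
    right = math.log(1.0) - math.log(eps)    # integral over [eps, 1]
    return left + right

# Asymmetric truncation [-1, -eps] and [2*eps, 1] gives -ln 2 instead.
def asym(eps):
    return (math.log(eps) - math.log(1.0)) + (math.log(1.0) - math.log(2 * eps))

for eps in (1e-2, 1e-6, 1e-12):
    assert abs(pv(eps)) < 1e-12
    assert abs(asym(eps) + math.log(2)) < 1e-12
```

The second function shows the limit depends on how the hole is shrunk, so only the symmetric (principal value) prescription gives a well-defined answer of 0.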
simple Differential equation
hi,
I would like to know the solution for:
f(x)*f^2(x)=1 15:50, 17 September 2012 (UTC) — Preceding unsigned comment added by Exx8 (talk • contribs)
- What have you tried so far on solving the problem? —SeekingAnswers (reply) 16:28, 17 September 2012 (UTC)
- Is this supposed to be f'(x)*f(x)^2 = 1 ? Looie496 (talk) 21:40, 17 September 2012 (UTC)
no, it is f''(x)*f(x)^2 = 1 Exx8 (talk) —Preceding undated comment added 23:16, 17 September 2012 (UTC)
- Really the second derivative? This would be easy to deal with if it was the first derivative, but with the second derivative, it is essentially the equation for the motion of a charged particle in a repulsive inverse square field, which doesn't have a solution that can be written in closed form (as far as I know). Looie496 (talk) 19:23, 18 September 2012 (UTC)
- I agree that it seems implausible that this is the correct equation. It does have a solution though: f(x) = −(9/2)^(1/3) x^(2/3). Dragons flight (talk) 19:41, 18 September 2012 (UTC)
Correct you are!! Do you have some software or you did it alone? Thank you! Exx8 (talk) 21:26, 19 September 2012 (UTC)
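Assuming the intended equation is f''(x)·f(x)² = 1 (the inverse-square reading discussed above), a power-law ansatz f = c·x^(2/3) gives f''·f² = −(2/9)c³, so c³ = −9/2 works. A finite-difference spot check of that candidate solution:

```python
import math

# Candidate solution of f''(x) * f(x)**2 = 1:
# f(x) = c * x**(2/3) with c = -(9/2)**(1/3),
# since then f'' * f**2 = -(2/9) * c**3 = 1 for every x > 0.
c = -(4.5) ** (1 / 3)
def f(x):
    return c * x ** (2 / 3)

x, h = 1.7, 1e-4
f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2   # central-difference f''
assert abs(f2 * f(x) ** 2 - 1.0) < 1e-4
```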
Skewed distribution with given median, mean and SD
For what skewed generalizations of the normal distribution can parameters be estimated given the mean, median and standard deviation of a sample but no other information about the sample? NeonMerlin 16:37, 17 September 2012 (UTC)
- Are you asking for named types of distributions? Otherwise the answer is banal: you could do it for almost any family of distributions that has three parameters. Looie496 (talk) 18:36, 17 September 2012 (UTC)
- These are not skewed, but the elliptical distributions are a generalization of the normal distribution in which one parameter is the median (which equals the mean if the mean exists) and the other parameter is the standard deviation. Duoduoduo (talk) 18:53, 17 September 2012 (UTC)
- If you type three parameter distribution into the Wikipedia search box, a number of promising hits come up. Duoduoduo (talk) 19:03, 17 September 2012 (UTC)
When does additivity imply homogeneity?
There are usually two requirements for R-linearity of a function f : U → V, where U and V are modules over a ring R (here x, y ∈ U, α ∈ R):
- f(x + y) = f(x) + f(y) (additivity)
- f(αx) = αf(x) (homogeneity).
The two requirements are clearly closely related (partially redundant conditions). When does the first imply the second? For example, if R = ℤ, it clearly does. Wherever R is not commutative, it clearly doesn't. Does the implication extend to when R = ℚ? To other commutative rings? The reverse implication clearly holds whenever the module U can be generated by a single element (i.e. when it is one-dimensional over R). — Quondum 21:15, 17 September 2012 (UTC)
- Yes, it holds for ℚ. As you noted, f(nx) = n f(x) for n an integer. So f(x) = f(n · (x/n)) = n f(x/n), and thus f(x/n) = (1/n) f(x). So f((m/n) x) = m f((1/n) x) = (m/n) f(x).
- It doesn't hold for ℝ, however. Let B be a basis for ℝ as a vector space over ℚ, and let f be the projection onto the first basis element.--121.73.35.181 (talk) 22:38, 17 September 2012 (UTC)
- Thanks. Nice counterexample, which by analogy severely restricts the rings for which the implication holds. — Quondum 03:44, 18 September 2012 (UTC)
Resolved
September 18
The continuum and cardinality
Hi. Not homework, but this is something I'm curious about. It is easy to show that R and (0,1) have the same cardinality, by taking the function f: (0,1) → R, f(x) = tan[π(x − 1/2)], which clearly is a bijection. How would one show that (0,1) and [0,1] have the same cardinality, by a similar argument (constructing a function)? Thanks. 24.92.74.238 (talk) 00:33, 18 September 2012 (UTC)
- Well, it can't be quite that simple, because (0,1) and [0,1] are not homeomorphic (for example, [0,1] is compact, and (0,1) is not). So the function can't be bicontinuous. But you can find a general method for doing that sort of thing at Schroeder-Bernstein theorem. Someone may be able to give a cleverer answer. --Trovatore (talk) 01:12, 18 September 2012 (UTC)
- I don't know about clever, but here's another: map 0 to 1/2 and 1/n to 1/(n+2) for n a positive integer, and leave every other point fixed. This is a bijection from [0,1] to (0,1).--2404:2000:2000:5:0:0:0:C2 (talk) 03:05, 18 September 2012 (UTC)
- That's not quite a map from one of the sets to the other, but the general idea works. Essentially if you have an infinite set M and a finite or countable set S then to find a bijection between M and M ∪ S you take a countable set L ⊆ M and construct a bijection from L to L ∪ S using the idea of Hilbert's Hotel, a construction that uses the enumeration of L like the 2404 IP above has suggested. —Kusma (t·c) 07:41, 18 September 2012 (UTC)
- Whoops, for some reason I was thinking [-1,1] instead of [0,1].--121.73.35.181 (talk) 08:06, 18 September 2012 (UTC)
- Just define a sequence in (0, 1), e.g. a_n = 1/(n+1), and shift it by two positions (a_n → a_{n+2}). The two endpoints of [0, 1] then land in the two just-released points: 0 → a_1 = 1/2 and 1 → a_2 = 1/3.
CiaPan (talk) 09:04, 18 September 2012 (UTC)
- Some people think I am homeomorphic, but I am actually bicontinuous. μηδείς (talk) 18:50, 18 September 2012 (UTC)
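The shift construction above can be sanity-checked in code. A sketch of the map 0 ↦ 1/2, 1/n ↦ 1/(n+2), everything else fixed, using exact rationals:

```python
from fractions import Fraction as F

# Hilbert's-Hotel-style bijection [0,1] -> (0,1): shift the sequence
# 1/n by two places to free up images for the two endpoints.
def g(x):
    if x == 0:
        return F(1, 2)                  # 0 -> 1/2
    if x.numerator == 1:
        return F(1, x.denominator + 2)  # 1/n -> 1/(n+2), so 1 -> 1/3
    return x                            # every other point is fixed

pts = [F(0), F(1), F(1, 2), F(1, 3), F(7, 10)]
imgs = [g(p) for p in pts]
assert imgs == [F(1, 2), F(1, 3), F(1, 4), F(1, 5), F(7, 10)]
assert len(set(imgs)) == len(imgs)      # injective on the sample
assert all(0 < v < 1 for v in imgs)     # image avoids both endpoints
```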
Complexity class of minimalist pangram construction
Some time ago, I solved a word puzzle which asked the puzzler to find the minimal number of names necessary out of a given list of 100 names to construct a pangram. I was able to solve the problem without too much difficulty by inspection alone: I looked for the least common letters, which appeared in only a few names, and then worked backwards from the names containing the least common letters to construct the minimal pangram.
I was thinking some more about the problem, and while the 100-name problem was easy to solve manually, I suspect that the same problem with large inputs might be quite difficult, even for a computer.
Consider the following generalized version of the problem:
- You have an alphabet consisting of c characters and a list of s strings of variable lengths which altogether contain at least 1 of every character of the alphabet. Your task is to construct a minimalist (defined as requiring the fewest possible strings) pangram out of those strings.
The difficulty is the "minimalist" part. It is fairly easy to construct a pangram; indeed, you can easily see that there is an upper bound of c on the number of strings required, as, at most, you need one string for each character of the alphabet. The lower bound, though, seems very hard. The algorithm I outlined above of working backwards from the least common letters seems extremely inefficient when c and s are large numbers, as the branching factor of possibilities is so large even after pruning the tree by starting from the least common letters. It can heuristically produce a close-to-minimal pangram very quickly, but finding the absolute minimal pangram with that approach would require going through that entire huge tree.
So my question is, does anyone know the complexity class of this problem? Is this pangram-construction problem a well-known problem in the literature which already has many papers written on it? If so, does Wikipedia have an article on it? (Perhaps linked somewhere from list of NP-complete problems, if my intuition that the problem is computationally difficult is correct; there are so many problems in that list, and I don't really know what to look for.) Or perhaps I am overestimating the difficulty of the problem, and some efficient algorithm is known; if so, how does that efficient algorithm work?
—SeekingAnswers (reply) 02:43, 18 September 2012 (UTC)
- Assuming that there are no grammatical constraints on the sentence, it's a set cover problem. 130.188.8.27 (talk) 10:14, 18 September 2012 (UTC)
- That's it, exactly! Thanks. —SeekingAnswers (reply) 03:37, 22 September 2012 (UTC)
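As a concrete illustration of the set-cover formulation, here is a toy exact search (the strings and alphabet below are my own illustrative choices, not from the puzzle); trying all subsets of growing size is exponential in general, as expected for set cover, which is NP-hard:

```python
from itertools import combinations

# Toy instance: cover the alphabet {a..g} with as few strings as possible.
alphabet = set("abcdefg")
strings = ["abc", "cde", "efg", "adg", "bf"]

def minimal_pangram(strings, alphabet):
    # Exact search: try all subsets of size k = 1, 2, ... and return
    # the first (hence smallest) subset whose letters cover the alphabet.
    for k in range(1, len(strings) + 1):
        for combo in combinations(strings, k):
            if set("".join(combo)) >= alphabet:
                return combo
    return None

best = minimal_pangram(strings, alphabet)
assert best == ("abc", "cde", "efg")   # no 1- or 2-string cover exists
```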
see humanities reference desk
i'm asking purely in terms of elegance and what is a good definition. i understand the current definitions. 80.98.245.172 (talk) 19:02, 18 September 2012 (UTC)
- Summarizing the question stated there: If the count of hours turns over at 12:00 and the count of months turns over at 1/1, one of them must be wrong, so which is it? —Tamfang (talk) 20:11, 18 September 2012 (UTC)
- Inconsistent, yes. Wrong, no. Mathematicians cannot agree on whether to start the natural numbers at 0 or 1. Dbfirs 20:21, 18 September 2012 (UTC)
- You misspelled "some people, even some mathematicians, erroneously start the natural numbers at 1". Hope this helps. --Trovatore (talk) 22:34, 18 September 2012 (UTC)
- Naturally! — but only in the last 200 years. I recall a lecturer who insisted on using "zeroth" when most people said "first". Dbfirs 07:16, 19 September 2012 (UTC)
- I happen to know what question you folks are talking about, because I participated in the responses. But in general, please provide a link to whatever it is you're referring to. The above would completely bamboozle many people. -- ♬ Jack of Oz ♬ [your turn] 22:14, 18 September 2012 (UTC)
- Agreed, but I wish you would have provided the link, rather than just complain about the lack of one. Here it is Wikipedia:Reference_desk/Humanities#Either_the_Calendar_is_wrong_or_the_Clock_is._It.27s_that_simple.. StuRat (talk) 22:18, 18 September 2012 (UTC)
- Have to agree, an RD star for that one, as actually providing the needed link. μηδείς (talk) 22:22, 18 September 2012 (UTC)
- I wanted to explain why it's important for the instigator of the thread, or the first respondent, to provide a link in these circumstances. I wasn't complaining per se. I identified the issue and suggested a solution. Now, I'm in trouble. Interesting. And someone who did actually complain gets a star. Interesting. -- ♬ Jack of Oz ♬ [your turn] 22:30, 18 September 2012 (UTC)
- This reminds me of the debate over where computer arrays should start counting. In Fortran, they traditionally start with one, so the first object in the array is the first element: ARRAY(1). In C, they start counting at zero, so the first object in the array is the zeroth element: ARRAY(0). (The C convention does make sense in terms of the first element having a zero offset, however.) StuRat (talk) 22:25, 18 September 2012 (UTC)
- Anything that involves modular arithmetic is a nuisance when arrays start at 1, and, as you say, in languages where actual memory locations and offsets are visible, it makes sense for the first element to be at offset zero. Other than that, I cannot think of any situation where arrays starting at 0 are helpful from a programmers' perspective. 86.176.210.77 (talk) 00:08, 19 September 2012 (UTC)
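A small illustration of the modular-arithmetic point (my own example): with 0-based arrays, circular access is just `i % n`, while 1-based positions need an extra shift:

```python
# Circular (wrap-around) access is natural with 0-based arrays:
buf = ["a", "b", "c", "d"]
n = len(buf)
wrapped = [buf[i % n] for i in range(6)]
assert wrapped == ["a", "b", "c", "d", "a", "b"]

# With 1-based positions the same access needs a -1/+1 shuffle:
def at_one_based(i):
    return buf[(i - 1) % n]   # position i = 1, 2, ..., wrapping past n
assert [at_one_based(i) for i in (1, 4, 5)] == ["a", "d", "a"]
```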
- I'm taking an online computer class at UC Berkeley, and they have 4 quizzes, called Quiz 0 through Quiz 3. I'm serious. :-) StuRat (talk) 04:36, 22 September 2012 (UTC)
September 19
Cohen and the axiom of choice
The article on Paul Cohen mentions that:
Cohen is noted for developing a mathematical technique called forcing, which he used to prove that neither the continuum hypothesis (CH), nor the axiom of choice, can be proved from the standard Zermelo–Fraenkel axioms (ZF) of set theory.
I am trying to find out the exact year in which Cohen first proved the result concerning the axiom of choice (not the continuum hypothesis) and the year and the paper in which it was first published, if ever. Thanks--Shahab (talk) 11:21, 19 September 2012 (UTC)
- In 1963 according to this:
- In 1942 Gödel attempted to prove that the axiom of choice and continuum hypothesis are independent of (not implied by) the axioms of set theory. He did not succeed, and the problem remained open until 1963. (In that year, Paul Cohen proved that the axiom of choice is independent of the axioms of set theory, and that the continuum hypothesis is independent of both.)
- It was published in two parts, but the result had previously appeared in April 1963 and was presented May 3, 1963 at Princeton, according to the booknote at the end of the second paper: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC300611/?page=6
- First paper: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC221287/?page=1
- To work out where exactly the independence of the axiom of choice from ZF is proven, I would first have to figure out what all this means...
- Other info: http://www-history.mcs.st-andrews.ac.uk/Biographies/Cohen.html Ssscienccce (talk) 17:26, 19 September 2012 (UTC)
Solve this please
(Moved by User:JackofOz from Miscellaneous Ref Desk)
Questions 1 - 5
There are six soccer teams - J, K, L, M, N and O - in the Regional Soccer League. All six teams play each Saturday at 10 a.m. during the season. Each team must play each of the other teams once and only once during the season.
Team J plays team M first and team O second. Team K plays team N first and team L third. Team L plays team O first.
On the first Saturday, which of the following pairs of teams play each other ?
(A) J and K ; L and O ; M and N (B) J and K ; L and N ; M and O (C) J and L ; K and N ; M and O (D) J and M ; K and L ; N and O (E) J and M ; K and N ; L and O
Which of the following teams must K play second ?
(A) J (B) L (C) M (D) N (E) O
What is the total number of games that each team must play during the season ?
(A) 3 (B) 4 (C) 5 (D) 6 (E) 7
— Preceding unsigned comment added by 175.110.112.185 (talk) 19:43, 19 September 2012 (UTC)
- You are essentially told the answer to the first question directly. I'm tempted to say it's obviously (E) (can you see why that's the case?)
- Question two is done by process of elimination. K does not play N or L second because it plays them first and third respectively. J and O compete against each other in the second match so neither of them can play K. So the answer is M.
- The answer to the third question is 5, obviously. For each other team, there are 5 other teams, and each team must compete against each other team exactly once. Widener (talk) 20:22, 19 September 2012 (UTC)
- Creating a chart typically helps on this type of problem:
             O P P O N E N T S
 TEAM   1st   2nd   3rd   4th   5th
 ----   ---   ---   ---   ---   ---
   J
   K
   L
   M
   N
   O
- Try filling that in with what you know already. (Although, this doesn't look like so much of a logic problem as a reading comprehension problem.) StuRat (talk) 00:37, 20 September 2012 (UTC)
Show that a series is not Absolutely convergent.
Show that this series is not absolutely convergent: the sum over n ≥ 1 of [exp(-i n b) - exp(-i n a)]/n.
This is my approach:
This is almost the harmonic series. The desired result follows if you can show that |exp(-i n b) - exp(-i n a)| ≥ ε for some ε > 0 for every n in an arithmetic progression, for example. Equivalently,
|sin(n (a - b)/2)| ≥ ε
might be easier to show.
Then it suffices that n (a - b)/2 is not a multiple of π for all n in an arithmetic progression, which is why I asked a similar question a while ago. That way, the number inside the absolute value signs will always have a nonzero imaginary part and therefore will have nonzero absolute value. It's not actually enough that this holds for infinitely many n, as I later realized; for instance the result doesn't (at least not obviously) follow if it's only true for the squares.
Anyway, do you have suggestions for proving this result? — Preceding unsigned comment added by Widener (talk • contribs) 20:07, 19 September 2012 (UTC)
exp(-i n b) - exp(- i n a) =
exp[-i n (a+b)/2] {exp(i n (a-b)/2) - exp(-i n (a-b)/2)}
So, the summation of the absolute value of the terms is:
the sum over n ≥ 1 of 2 |sin(n (a - b)/2)| / n.
If you replace infinity by N in the upper limit, you can say that the value is larger than what you get if you replace the absolute value of sin by sin^2. Then you can write sin^2 in terms of cos of the double argument, this yields one divergent summation (in the limit of N to infinity) and a convergent summation. Count Iblis (talk) 20:55, 19 September 2012 (UTC)
- Hey, that works! Thanks! Widener (talk) 21:16, 19 September 2012 (UTC)
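Count Iblis's argument can also be checked numerically (taking a = 0, b = 1 for concreteness): since |sin| has average value 2/π, the partial sums of Σ 2|sin(n(a−b)/2)|/n grow like (4/π)·ln N, i.e. without bound:

```python
import math

a, b = 0.0, 1.0   # illustrative values with a != b

def S(N):
    # Partial sum of the absolute values 2*|sin(n*(a-b)/2)|/n.
    return sum(2 * abs(math.sin(n * (a - b) / 2)) / n for n in range(1, N + 1))

# Logarithmic growth: a tenfold increase in N adds roughly
# (4/pi)*ln(10) ~ 2.9, so the partial sums never level off.
growth = S(10_000) - S(1_000)
assert growth > 2.0
```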
The maze on today's main page.
As I posted earlier on Wikipedia:Main Page/Errors, the caption for this was a little confusing. Can anyone here figure out what it was trying to say: "In these mazes, it will typically be relatively easy to find the origin point, since most paths lead to or from there, but it is hard to find the way out"? As far as I can tell, since everywhere is connected to everywhere else, and there is neither a marked 'start' nor a marked 'end' once the maze is constructed, the statement makes no sense.
Incidentally, I'm fairly sure that there is a name for this particular type of maze - it has no 'loops' so can be navigated in its entirety simply by 'following a wall' - does anyone know the name? AndyTheGrump (talk) 21:08, 19 September 2012 (UTC)
- It sounds like the "all roads lead to Rome" concept. In the case of the Roman Empire, all roads would actually also lead to any connected city. What they are really saying is that it's the central hub. Think of it like an airline's hub city, too. Yes, you can get to any city they service eventually, but the direct routes all lead to or from the hub city. StuRat (talk) 00:34, 20 September 2012 (UTC)
- The phrase "simply connected" comes to mind, and it would technically be correct for a certain view of this (no loops) maze, although I'm not sure if anyone in the maze community uses precisely that term. (Although simple graph would be more relevant from a graph theory perspective.) I would also imagine (though don't have any proof/calculations to show) that the depth-first generation method would produce a maze with a low average degree of branching, which should aid in maze traversal, though I don't know if it would differentially affect any given position. Although it might be that since branching preferentially happens at the tips of the growing maze, you might expect a gradient of degree, with points near the origin having lower average degree than those further from it, leading to search being simpler in the region of the starting point. -- 205.175.124.30 (talk) 02:11, 20 September 2012 (UTC)
- It appears that "perfect maze" is the most common name for loop-free mazes. -- BenRG (talk) 06:10, 20 September 2012 (UTC)
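A "perfect" (loop-free) maze is exactly a spanning tree of its cell graph, so perfectness is easy to test: connected with |E| = |V| − 1. A sketch on a hypothetical 2×2 maze given as an adjacency list:

```python
from collections import deque

# Hypothetical 2x2 maze: cells 0..3 joined by three passages (no loop).
maze = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def is_perfect(adj):
    """Loop-free and fully navigable == a tree: connected, |E| = |V| - 1."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    seen, queue = {0}, deque([0])
    while queue:                      # breadth-first reachability check
        for nb in adj[queue.popleft()]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == n and edges == n - 1

assert is_perfect(maze)
# Adding a fourth passage creates a loop, so the maze is no longer perfect:
assert not is_perfect({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]})
```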
- Why force SSL in the HTTP link...? http://www.google.com/search?q=perfect+maze
See also directly the pages Maze and Maze solving algorithm; they both say the same: Mazes containing no loops are known as "standard", or "perfect" mazes, and are equivalent to a tree in graph theory. --CiaPan (talk) 05:23, 21 September 2012 (UTC)
- Why force insecure plaintext in the HTTP link? Nil Einne (talk) 13:32, 25 September 2012 (UTC)
- Because it doesn't need any 'security' and there's no reason to hide it. There's nothing secret in using the Google search engine or in the query itself or in its results. Additionally, pasting the plain URL makes it easier for readers to see where they are directed. It is also short enough that it does not disturb reading of the surrounding text. --CiaPan (talk) 15:25, 25 September 2012 (UTC)
September 20
Lanchester's Law and Game Theory
Let us consider Lanchester's square law and three groups A,B,C which want to go to war. Making up numbers, let's say A,B,C have armies of sizes 45, 40, and 35 respectively. I am concerned with the optimal strategy for group C. From what I understand, B and C should form an alliance. They'll have a vastly superior force than A. They will beat A and both B and C will suffer casualties in proportion. After A has been beaten, then B and C can fight each other. It seems to be that if C has the smallest army, then it will always lose in the end. A,B,C can randomly all shoot at each other or B and C can form an alliance, wipe out A, and then turn against each other. Is there ever a case (with army sizes being A < B < C...notice strict inequalities) where Lanchester's Law and Multiparty Game Theory predict that C will win? A numerical counterexample is what I am looking for of course. If not then, C will always lose, right? So what is C to do? What incentive is there for C to form an alliance with B? It knows it will be killed in the end anyway so why help B (over A) survive? Is this the best strategy? If yes, then why, because C's destruction seems inevitable, so why should C care what it does and who it helps and what strategy it chooses? - Looking for Wisdom and Insight! (talk) 04:59, 20 September 2012 (UTC)
- I assume you meant "A > B > C" Dbfirs 07:54, 20 September 2012 (UTC)
- I can see three possible winning strategies for C:
- 1) The obvious one is to make peace with both A and B.
- 2) Form an alliance with one and hope that it holds after the other is defeated.
- 3) The most Machiavellian way for C to win is to secretly instigate a war between A and B, then, after one is destroyed and the other is weakened, attack. StuRat (talk) 05:18, 20 September 2012 (UTC)
- See truel for something very similar. Since peace treaties are unnecessary between friends, I view them as declarations of war, but not yet. Option 3 certainly seems the best option for C; the problem is what happens if A or B discover it. C could also send an army of size 5 to 'help' B. It would be interesting to try and figure out the strategies to stay alive the longest if they are all fighting each other. Say army A devoted a(t) of its effort to destroying B and 1-a(t) to destroying C, so C was being destroyed by a force of A(1-a(t))+Bb(t), B by a force of C(1-c(t))+Aa(t) and A by B(1-b(t))+Cc(t). Dmcq (talk) 12:40, 20 September 2012 (UTC)
- Peace should break out all over in ABC world, for no one can afford to attack alone, nor be the weaker ally in an attack. Rich Farmbrough, 21:10, 20 September 2012 (UTC).
- Yep as far as I can see if they have a three way war then all three will be completely destroyed at the same time if the two smaller ones could defeat the largest. What'll happen is that the smaller ones will gang up on the largest until it is no longer the largest, then the two larger will reduce each other till all three are the same size as the smallest. Dmcq (talk) 22:31, 20 September 2012 (UTC)
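The "gang up, then turn on each other" scenario from the question can be played out with the square law directly (using the question's assumption that allied casualties are shared in proportion): with sizes 45/40/35, C indeed loses the final duel.

```python
import math

# Lanchester square law with equal effectiveness: if X > Y fight to
# the finish, the survivor keeps sqrt(X**2 - Y**2) of its strength.
def survivor(x, y):
    return math.sqrt(x * x - y * y)

A, B, C = 45.0, 40.0, 35.0
remaining = survivor(B + C, A)        # allied B+C = 75 vs 45 -> 60 survive
B2 = B * remaining / (B + C)          # proportional share for B: 32
C2 = C * remaining / (B + C)          # proportional share for C: 28
assert abs(remaining - 60.0) < 1e-9
assert B2 > C2                        # C enters the final duel weaker...
assert abs(survivor(B2, C2) - math.sqrt(240.0)) < 1e-9   # ...and B wins
```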
League problem
For a game I'm trying to come up with a particular league system, and I'm unsure how to go about doing it. I tried to do it manually but I couldn't come up with an algorithm that led me to a solution. Basically I have 16 teams in a league, but for logistics' sake I need to break it up into four 4-team series. I figure if I can break it up right, then I can have each team face every other team in one of the series once, and only once. So in the first leg I have four series:
Series 1 | Series 2 | Series 3 | Series 4 |
---|---|---|---|
Team A | Team E | Team I | Team M |
Team B | Team F | Team J | Team N |
Team C | Team G | Team K | Team O |
Team D | Team H | Team L | Team P |
Each team has to face 15 other teams, and in each leg it faces 3 other teams, so then in 5 legs any given team will have faced all 15 other teams. I can come up with two other legs: one by simply flipping it around the diagonal (so Series 1 would be teams A, E, I, and M), and the other by taking diagonals for each series (so Series 1 would be teams A, F, K, and P), but beyond that I start running into problems. Maybe I'm backing myself into a corner, I don't know. And I calculate that there are 2386 possible ways to sort the 16 teams into 4 series, so I'm not about to check all of those, to find five "orthogonal" legs.
Is there a better way of going about this, in a more mathematical way? Thanks —Akrabbimtalk 16:47, 20 September 2012 (UTC)
- I think your approach is reasonable. Your 4th and 5th groups will be "what's left":
Series 1 | Series 2 | Series 3 | Series 4 |
---|---|---|---|
Team A | Team E | Team I | Team M |
Team B | Team F | Team J | Team N |
Team C | Team G | Team K | Team O |
Team D | Team H | Team L | Team P |
Series 1 | Series 2 | Series 3 | Series 4 |
---|---|---|---|
Team A | Team B | Team C | Team D |
Team E | Team F | Team G | Team H |
Team I | Team J | Team K | Team L |
Team M | Team N | Team O | Team P |
Series 1 | Series 2 | Series 3 | Series 4 |
---|---|---|---|
Team A | Team E | Team I | Team M |
Team F | Team J | Team N | Team B |
Team K | Team O | Team C | Team G |
Team P | Team D | Team H | Team L |
. . .
- Also note that there are many possible solutions. StuRat (talk) 17:12, 20 September 2012 (UTC)
- But the other diagonal will intersect. One direction will be A,F,K,P, and the other direction will be A,N,K,H. A and K are in both. —Akrabbimtalk 17:29, 20 September 2012 (UTC)
- Yes, you're right, I've revised my advice above. I'm going to write a program to solve this and post it tomorrow. StuRat (talk) 17:38, 20 September 2012 (UTC)
- Thanks, I'll be interested to see what you come up with and your algorithm to solve it. I was trying to work it out with Matlab, but it was just turning into a clumsy brute force algorithm, and I wasn't getting anywhere. But I charted it out, and the three legs that we have here are definitely not part of the solution. For example, for A and G to meet, the only teams that haven't met A and G would be J and N, and they have both played each other already, so I was right when I thought I had painted myself into a corner. —Akrabbimtalk 17:56, 20 September 2012 (UTC)
(Teams have numbers rather than letters, obviously)
Series 1: 1, 2, 3, 4
Series 2: 5, 6, 7, 8
Series 3: 9, 10, 11, 12
Series 4: 13, 14, 15, 16
Series 1: 1, 5, 9, 13
Series 2: 2, 6, 10, 14
Series 3: 3, 7, 11, 15
Series 4: 4, 8, 12, 16
Series 1: 1, 6, 11, 16
Series 2: 2, 5, 12, 15
Series 3: 3, 8, 9, 14
Series 4: 4, 7, 10, 13
Series 1: 1, 7, 12, 14
Series 2: 2, 8, 11, 13
Series 3: 3, 5, 10, 16
Series 4: 4, 6, 9, 15
Series 1: 1, 8, 10, 15
Series 2: 2, 7, 9, 16
Series 3: 3, 6, 12, 13
Series 4: 4, 5, 11, 14
81.159.104.182 (talk) 03:41, 21 September 2012 (UTC)
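The schedule above can be verified mechanically: across the five legs, each of the C(16,2) = 120 pairs of teams must share a series exactly once.

```python
from itertools import combinations
from collections import Counter

legs = [
    [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]],
    [[1, 5, 9, 13], [2, 6, 10, 14], [3, 7, 11, 15], [4, 8, 12, 16]],
    [[1, 6, 11, 16], [2, 5, 12, 15], [3, 8, 9, 14], [4, 7, 10, 13]],
    [[1, 7, 12, 14], [2, 8, 11, 13], [3, 5, 10, 16], [4, 6, 9, 15]],
    [[1, 8, 10, 15], [2, 7, 9, 16], [3, 6, 12, 13], [4, 5, 11, 14]],
]

# Count how often every pair of teams shares a series.
meetings = Counter(
    pair for leg in legs for series in leg
         for pair in combinations(sorted(series), 2)
)
assert len(meetings) == 120             # all C(16,2) pairs occur...
assert set(meetings.values()) == {1}    # ...exactly once each
```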
- You beat me to it, 81. Yes, that's a valid solution. 81's version of diagonals in the third grouping seems to work, while mine does not. (Running my program, I found one solution given his first 3 groupings, and no solution given my first 3 groupings.) StuRat (talk) 06:17, 21 September 2012 (UTC)
- BTW, Akrabbim, where did the 2386 number come from ? My program seemed to come up with 2,627,625 possible groupings. Here's the formula:
16! / (4!)^5
- StuRat (talk) 06:36, 21 September 2012 (UTC)
- (formerly 81.159) I agree: 16!/(4!^5). I don't understand where that 2386 number comes from either. 86.160.216.189 (talk) 11:28, 21 September 2012 (UTC)
- Hey, that's great! Thanks! Can I ask how you figured it out? And about the 2386: I was thinking it would be choose 4 for the first group, then 4 for the next group from the remaining, and so on. So (16 choose 4) × (12 choose 4) × (8 choose 4) × (4 choose 4) = 1820 × 495 × 70 × 1 = 63,063,000 (when I first punched it in I accidentally did + instead of ×, which is where I got 2386). I haven't thought about probability at all in years though, so I'm not surprised I'm wrong. —Akrabbimtalk 14:34, 21 September 2012 (UTC)
- Arrangements in which the split into groups is the same but groups are listed in a different order are counted separately in your count, but only count as one arrangement in StuRat's count. So your count is 4! = 24 times as large as StuRat's count. Gandalf61 (talk) 14:40, 21 September 2012 (UTC)
- Agreed, there are the 24 ways to arrange the series, but those are all equivalent. For example, these two are equivalent:
| Series 1 | Series 2 | Series 3 | Series 4 |
|---|---|---|---|
| Team A | Team E | Team I | Team M |
| Team B | Team F | Team J | Team N |
| Team C | Team G | Team K | Team O |
| Team D | Team H | Team L | Team P |

| Series 4 | Series 3 | Series 2 | Series 1 |
|---|---|---|---|
| Team A | Team E | Team I | Team M |
| Team B | Team F | Team J | Team N |
| Team C | Team G | Team K | Team O |
| Team D | Team H | Team L | Team P |
- As to the method I used, it wasn't very elegant, I just used brute force. Can we mark this Q resolved ? StuRat (talk) 16:31, 21 September 2012 (UTC)
- "when I first punched it in accidentally did + instead of *" ... Ooops!! 86.160.216.189 (talk) 17:28, 21 September 2012 (UTC)
x/y in IV quadrant, is the given ratio negative or positive?
question: "Suppose that the point (x,y) is in the indicated quadrant (IV). Decide whether the given ratio is positive or negative." Is it not positive, simply by sketching? How would it be negative? thanks.24.139.14.254 (talk) 20:08, 20 September 2012 (UTC)
- Just look at the signs of x and y. The ratio of two numbers with the same sign is positive; the ratio of two numbers with opposite signs is negative. Widener (talk) 20:28, 20 September 2012 (UTC)
September 21
Integration of 1/x in Quadrant I
Hi. First of all, this isn't an actual homework problem, but a conceptual question I asked a mixed crowd of people, some of whom know integration techniques and some of whom don't. Everybody seemed to know intuitively or by proof that the area underneath the curve in quadrant one is infinite, because since the graph never touches either axis, the area underneath each section is infinite. This didn't make sense to me, because I intuitively compared it to a converging sum such as 0.999..., but apparently the hyperbolic function does not converge. Therefore, I must be severely deluded. The integration technique also relies on the fact that zero invalidates the integration, resulting in infinity; if one of the axes were shifted away from the zero-point, the area would still be infinite. This brings up the question: since shifting both axes away from zero would immediately cause the function's area to become finite, is there a tipping point of some sort? This is hilarious because the hyperbola itself is tipping-pointential. Therefore, perhaps at the exact tipping point the area oscillates between finite and infinite, causing the creation of a complex number. However, since this is a bi-axial method, the convergence paradigm is both null and void.
I must admit, quite embarrassingly, that I've never done mathematical proofs for more than one hour in my entire life. Therefore, I may need to rely on intuitive techniques such as visual calculus to get the point across to myself about why this function has an infinite area. I incorrectly assumed that the total area underneath the hyperbola in the first quadrant is reducible to be equivalent to the area to the bottom left of a linear function with the slope -1. However, I then remembered my Cartesian plane-onto-sphere method, which was improperly answered because my method assumes an infinite Cartesian surface, whereas stereographic projection relies on a finite Earth. It would be more like taking the membrane of an open universe and reconstructing it to make it closed. Anyway, I proved to myself that in this projection, the four points of (0,0), (x-fin,x-infinlim), (y-fin,y-infinlim) and (±∞,±∞) together comprise the manifold geodesic of a sphere in 2.5 dimensions (clarification, citation, objectivity and sanity needed), in each of the four quadrants (actually, there are a total of eight). This is where {x,y-fin/infinlim} form the continuum where the grid system transforms from finite in one direction to infinite in the opposite direction (from the vantage point of the "anti-origin"), forming two points on a sort of central meridian line, though that's irrelevant largely to solve this problem. So in the Q-I space, the first quadrant forms a circle, and that circle's area is infinite. The perimeter of this circle must also be infinite. The area under the hyperbola, thus, represents an area around the circumference of this circle, although it is widest at one point of the circle and converges at the other end of the circle. Despite the distance from this circumference getting smaller as the function recedes from the 2-space origin, the area still goes onto infinity, thus allowing the total area by integration to be infinite. 
Phew. Though this may not be clear to most people, there has to be a better intuitive way of proving it.
Someone enlighten me by dialing 0 on the Hamiltonian operator. Thanks. ~AH1 (discuss!)
- If you're having trouble seeing that the area under the curve y = 1/x in the first quadrant is infinite, consider the following picture, similar to those commonly drawn to illustrate Riemann sums:
- All of the blue rectangles are included in the area under the red curve y = 1/x, so the area under the curve must be at least as large as the total area of all the rectangles. Every rectangle has width 1. The first rectangle has height 1, the second has height 1/2, the third has height 1/3, and so on. So the total area of the rectangles is the sum of the harmonic series 1 + 1/2 + 1/3 + …, and this sum is infinite: grouping the terms as 1 + 1/2 + (1/3 + 1/4) + (1/5 + … + 1/8) + …, each parenthesized group adds up to at least 1/2, so the partial sums grow without bound.
- Since the total area of the blue rectangles is infinite, the area under the curve y = 1/x must also be infinite. —Bkell (talk) 08:21, 21 September 2012 (UTC)
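A quick numerical check of Bkell's rectangle argument (a Python sketch, not part of the original reply) shows the partial sums of the harmonic series growing without bound, roughly like ln n:

```python
import math

def harmonic(n):
    """Partial sum 1 + 1/2 + ... + 1/n of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

# The partial sums keep growing (about ln(n) + 0.5772) and never level off:
for n in (10, 100, 1000, 10**6):
    print(n, harmonic(n), math.log(n))
```

Doubling n always adds roughly ln 2 ≈ 0.69 more area, which is the numerical face of the grouping argument above.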
- See also Gabriel's Horn for the nice paradox that the "horn" formed by rotation of y = 1/x (x>1) about the x axis has finite volume but infinite area, so it can be filled with paint but its surface can't be painted. AndrewWTaylor (talk) 09:51, 21 September 2012 (UTC)
- Good example, but Gabriel's horn has x^2 in the denominator, and 1/x^2 has a finite integral from 1 to infinity. SemanticMantis (talk) 15:15, 21 September 2012 (UTC)
- No, AndrewWTaylor was correct when he said that Gabriel's horn is constructed by rotating y = 1/x about the x-axis. The volume integral has a 1/x^2 in it, but that's not the defining curve. —Bkell (talk) 02:59, 22 September 2012 (UTC)
- Also, the "graph never touches either axis" is not a valid argument for infinite area, so perhaps your reluctance to accept it was more correct than you knew! Consider a piecewise defined function, f=1/x^(0.5), x in (0,1), f=1/x^2, x in [1,infinity). This function has a finite integral over (0,infinity), and the graph never touches either axis. SemanticMantis (talk) 15:15, 21 September 2012 (UTC)
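SemanticMantis's piecewise example can be checked numerically. This sketch (my construction, not part of the original reply) midpoint-integrates the function on (0,1) and on a large finite piece of [1,∞); the exact answer is 2 + 1 = 3:

```python
def f(x):
    # SemanticMantis's example: 1/sqrt(x) on (0,1), 1/x^2 on [1, infinity)
    return x ** -0.5 if x < 1 else x ** -2.0

def midpoint_integral(g, a, b, slices):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / slices
    return sum(g(a + (i + 0.5) * h) for i in range(slices)) * h

# Exact values: integral of x^(-1/2) over (0,1) is 2; of x^(-2) over (1,inf) is 1.
left = midpoint_integral(f, 0.0, 1.0, 200_000)
right = midpoint_integral(f, 1.0, 10_000.0, 200_000)  # tail beyond 10^4 adds < 10^-4
print(left + right)   # close to 3, even though f never touches either axis
```

The midpoint rule sidesteps the singularity at x = 0, and truncating the tail at 10^4 loses less than 10^-4 of the area.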
Limits of Integration
Is it true in general that ?
Widener (talk) 02:57, 21 September 2012 (UTC)
- Consider
Or
CiaPan (talk) 05:30, 21 September 2012 (UTC)
- Good answer. Interchange_of_limiting_operations has scant information, and could use some improving if anyone is interested. SemanticMantis (talk) 15:03, 21 September 2012 (UTC)
- Oh, of course, but what if you add the condition that the limits and exist? Widener (talk) 16:34, 21 September 2012 (UTC)
- . -- Meni Rosenfeld (talk) 17:04, 22 September 2012 (UTC)
- Hm. Okay. I think it is true that though.
- If you can interchange the limit and integral somehow you get
- Widener (talk) 18:12, 22 September 2012 (UTC)
- If the integrand in the last integral is finite (bounded) in the neighbourhood (x,ε) → (1,0), then ... — Quondum 20:56, 22 September 2012 (UTC)
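CiaPan's counterexamples did not survive transcription, but the classic "moving bump" makes the same point (this is my illustrative example, not necessarily the one CiaPan gave): take f_n(x) = n on (0, 1/n] and 0 elsewhere. Then the integral over [0,1] is 1 for every n, yet f_n(x) → 0 pointwise, so the limit of the integrals is 1 while the integral of the limit is 0.

```python
def f(n, x):
    """Moving bump: height n on the interval (0, 1/n], zero elsewhere."""
    return n if 0.0 < x <= 1.0 / n else 0.0

def integral_over_unit_interval(g, slices=100_000):
    """Midpoint-rule integral of g over [0, 1]."""
    h = 1.0 / slices
    return sum(g((i + 0.5) * h) for i in range(slices)) * h

# The integral stays 1 for every n ...
print([round(integral_over_unit_interval(lambda x: f(n, x)), 3) for n in (1, 10, 100)])
# ... but at any fixed x > 0, f(n, x) is eventually 0, so the pointwise limit is 0:
print([f(n, 0.25) for n in (1, 2, 4, 8, 16)])   # [1, 2, 4, 0, 0]
```

The bump gets taller and narrower but always encloses area 1, which is exactly why the limit and the integral cannot be interchanged here.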
Inverse birthday problem
The birthday problem of course describes the probability that two people in a given set of people will share birthdays.
But what are the odds in a group of n people that there are days where there are no birthdays?
The birthday problem's solution implies that birthdays often cluster. Would that mean that unbirthdays are also clustered?
This isn't homework of any sort; I'm just curious. I have ~500 Facebook "friends" — what are the odds that none of them have a birthday today? --Mr.98 (talk) 13:39, 21 September 2012 (UTC)
- Off the top of my head, I think this is easier than the birthday problem, because we don't have to be careful about things like Alice and Bob share some birthday, OR Alice and Charles share some birthday, etc... If we assume that birthdays are uniformly distributed, and each person's birthday is independent of all the others, then we can just multiply to answer your final question. Because they are uniformly distributed, the probability that person A's birthday is not today is 364/365. Likewise for person B. Because they are independent, we can multiply to get the probability that today is not A's birthday AND is not B's birthday is 364/365 X 364/365. So the probability (not odds) that no friend has a birthday today is (364/365)^500=0.2537. Note that your last question is different from your second (i.e. some B-day free day exists, vs today is a B-day free day); that one takes a little more work. SemanticMantis (talk) 16:15, 21 September 2012 (UTC)
- More concisely, the last question is the same as "if 500 people roll a 365-sided die, what is the probability that none of them roll a 265 (today)", while the second question is "if n people roll a 365-sided die, what is the probability that there is some number which does not occur." SemanticMantis (talk) 16:20, 21 September 2012 (UTC)
- It's not so easy to work out a formula, but the probability that there is some day with no birthday is near certainty until the number of people gets well up into the thousands. Looie496 (talk) 16:24, 21 September 2012 (UTC)
- Isn't the chance that a day exists on which nobody in a group of n people (n ≥ 365) has a birthday given by 1 − (1 − (364/365)^n)^365 ? So, when n = 500, we get 1 − (1 − (364/365)^500)^365 = 1 − (1 − 0.2537)^365 = 1 − (0.7463)^365 = 1 − 4.1779×10^−47, or, for all practical purposes, 1.0: a virtual certainty that at least one birthday-free day exists. Here's the calculations with different values of n plugged in:
| n | probability of a birthday-free day existing |
|---|---|
| 500 | ≈1.0 |
| 1000 | 0.99999999997 |
| 1500 | 0.9975 |
| 2000 | 0.78 |
| 2500 | 0.32 |
| 3000 | 0.093 |
| 4000 | 0.0062 |
| 5000 | 0.00040 |
| 6000 | 0.000026 |
| 7000 | 0.0000017 |
| 8000 | 0.00000011 |
| 9000 | 0.0000000069 |
| 10000 | 0.00000000044 |
- I am immediately suspicious of that formula 1 − (1 − (364/365)^n)^365 as it gives a smooth transition of probabilities at the point n = 365. 86.160.216.189 (talk) 17:59, 21 September 2012 (UTC)
- It does seem odd that it gives a probability of almost 1, but not exactly 1, when used for fewer than 365 people. It's within 1.5×10^−78 of 1, though. Is this the chance that the universe will end before the year ends ? :-) StuRat (talk) 18:08, 21 September 2012 (UTC)
- The formula is only an approximation, because it treats the days as independent, which they aren't really. Looie496 (talk) 18:40, 21 September 2012 (UTC)
- In what sense ? I suppose birthdays aren't completely evenly distributed, in that slightly more people are born at some parts of the year than others. However, I don't think that's the issue here. StuRat (talk) 18:44, 21 September 2012 (UTC)
- I agree with you StuRat, independence of birthdays is not the problem with your formula (that is why I phrased the problems with dice above, just to be clear about the assumptions.) Nevertheless, I also think your formula is wrong. Why should it be right? You haven't explained your reasoning at all. Also, you can easily write a program to get some good approximate answers. I bet if you do that (correctly), you'll get answers noticeably different from your table above. SemanticMantis (talk) 18:53, 21 September 2012 (UTC)
- Well, if the chance of a given day being birthday free is 0.2537, then the probability that it has at least one birthday is 1 − 0.2537, or 0.7463. The chance of 2 such independent days having one or more birthday each then becomes 0.7463^2. The chance of 365 independent days having one or more birthday each then becomes 0.7463^365. The chance of this not being the case then becomes 1 − 0.7463^365. StuRat (talk) 19:07, 21 September 2012 (UTC)
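StuRat's reasoning translates directly into a short function. This sketch (my code, assuming uniform, independent birthdays as in the thread) reproduces the table above:

```python
def p_birthday_free_day(n, days=365):
    """StuRat's approximation, treating the `days` dates as independent:
    probability that at least one date has no birthday among n people
    (uniform, independent birthdays assumed)."""
    miss_one_day = ((days - 1) / days) ** n   # one fixed day is birthday-free
    return 1 - (1 - miss_one_day) ** days

for n in (500, 1000, 2000, 2500, 5000):
    print(n, p_birthday_free_day(n))
```

As noted later in the thread, this is only an approximation, because the days are not truly independent, but it turns out to be a close one.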
- I think I see what Looie means, though. Once we know that one day contains (or doesn't contain) a birthday, this can change the count of potential birthdays that can fall on the other dates. So, yes, in this sense my formula is an approximation, with an error under 1.5×10^−78, in the case of 364 people. StuRat (talk) 19:12, 21 September 2012 (UTC)
- I haven't actually checked anything numerically, but the fact that the formula is inexact, and the error bounds have not been mathematically quantified, leaves open the possibility that it is badly wrong for other values. The 1.5×10^−78 example by itself is not necessarily confidence-inspiring. 86.160.216.189 (talk) 19:31, 21 September 2012 (UTC)
Consider the total number of ways to distribute the birthdays of the n persons over the 365 possible dates. You can represent each possible distribution as a string of the form
00|0|0000|0||00|0|00|....
where each "0" represents a birthday of a person and the "|" are the separations between different days. So, on January 1 you have two birthdays, on January 2 there is one birthday, etc. Then every possible string one can write down corresponds to a valid distribution and vice versa, so you just need to count the number of strings. We have n "0"s and we have 364 "|"s, so the total number is Binomial[364 + n, n].
Then if we demand that on every day there is at least one birthday, this means that you must leave at least one "0" at each position in the string. This means that you have n − 365 "0"s that you can freely shuffle around. You can, of course, also consider strings with n − 365 "0"s; the mapping to a birthday distribution is then the number of "0"s at each date plus one. So, the total number of such distributions is Binomial[n − 1, n − 365].
The probability that you always have at least one birthday on each date is thus Binomial[n − 1, n − 365] / Binomial[364 + n, n]; the probability that there are days where you don't have birthdays is thus given by:
1 - Binomial[n-1,n-365]/Binomial[364 + n,n]
Count Iblis (talk) 17:08, 21 September 2012 (UTC)
- I have doubts that this is correct. For a start, you seem to be assuming that your Binomial[364 + n,n] combinations are all equally likely, but it seems to me that they are not. 86.160.216.189 (talk) 17:46, 21 September 2012 (UTC)
- Yes, you are right, although it is easy to correct this by imposing an order on the birthdays. I'll correct the solution. Count Iblis (talk) 18:13, 21 September 2012 (UTC)
I corrected the above problem. I found that the probability p of there being one or more days on which there are no birthdays is given by:
p = 1 − ( ∑_{k=1}^{r} (−1)^(r−k) Binomial[r, k] k^n ) / r^n, where r = 365
Count Iblis (talk) 19:50, 21 September 2012 (UTC)
- What is k ? Also, would you like to run that against the numbers in my chart above, to see how the two compare ? StuRat (talk) 23:27, 21 September 2012 (UTC)
- I tried it earlier and they seem to correspond quite well, so assuming Count Iblis' formula is correct, it seems like yours is a pretty good approximation after all. (As far as evaluating the formula is concerned, k is just an internal counter which does not need to be assigned a meaning, but probably you know that and are asking about its meaning in terms of the formula derivation.) 86.160.216.189 (talk) 00:17, 22 September 2012 (UTC)
- I tried it earlier and they seem to correspond quite well, so assuming Count Iblis' formula is correct, it seems like yours is a pretty good approximation after all. (As far as evaluating the formula is concerned, k is just an internal counter which does not need to be assigned a meaning, but probably you know that and are asking about its meaning in terms of the formula derivation.) 86.160.216.189 (talk) 00:17, 22 September 2012 (UTC)
- Looks ok. There are 365^n possibilities. To get the ones with birthdays on every day, you have to subtract the ones where at least one day has no birthdays. There are 364^n ways to distribute them over 364 days, and 365 ways to choose those 364, so 365 × 364^n; however, now you removed the ones with two "birthless" days twice. So you add those: there's Binomial[365, 2] ways to choose those days, giving you Binomial[365, 2] × 363^n. Same thing again: some are counted twice, so you must subtract Binomial[365, 3] × 362^n; repeat the same process till you arrive at all birthdays on one day, divide by the total number of possibilities to get the odds of all days having birthdays, and subtract from 1 to get the odds you wanted. Ssscienccce (talk) 01:30, 22 September 2012 (UTC)
About the derivation, you can use inclusion/exclusion as Ssscienccce explained above. Another way is to directly evaluate the number of ways you can have one or more birthdays on each date by summing over each distribution of the birth dates over the year. So, if there are n_j birthdays on the jth date, then the total number of ways to have such a distribution is:
n! / (n_1! n_2! ⋯ n_r!)
If we sum this over all possible values of the n_j we should find r^n. In the summation, you have to impose the constraint n_1 + n_2 + ⋯ + n_r = n. You can do this using generating functions. You compute the function:
f(x) = n! ∏_{j=1}^{r} ∑_{m ≥ 0} x^m / m!
The coefficient of x^n is then the desired answer. Taking out the factor n!, the summation factors into r identical summations, and each summation is the series expansion of exp(x), so f(x) = n! exp(rx); the coefficient of x^n is thus r^n. If we now consider only cases where you have one or more birthdays on each date, then each of the summations starts at 1 instead of zero, so you get f(x) = n! [exp(x) − 1]^r. The coefficient of x^n can be obtained by first expanding [exp(x) − 1]^r and then extracting the coefficient of x^n from each term; you then get the above formula.
You can also use the generating function directly to find approximations for large n, you can write the coefficient of x^n as the contour integral of f(z)/z^(n+1) around the origin and then using the saddle point method obtain approximations to that integral. Count Iblis (talk) 02:00, 22 September 2012 (UTC)
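The inclusion–exclusion count of surjections described in this thread can be checked with exact integer arithmetic. This sketch is my code, not Count Iblis's; it assumes uniform, independent birthdays:

```python
from fractions import Fraction
from math import comb

def p_birthday_free_exact(n, r=365):
    """Exact probability that at least one of r days has no birthday
    among n people (uniform, independent birthdays).

    The alternating sum counts the ways to distribute n birthdays so that
    every one of the r days is hit (the surjections); dividing by r^n
    gives the coverage probability."""
    surjections = sum((-1) ** (r - k) * comb(r, k) * k ** n
                      for k in range(r + 1))
    return 1 - Fraction(surjections, r ** n)

# With fewer than 365 people a birthday-free day is certain:
print(p_birthday_free_exact(364) == 1)    # True
# For n = 2000 the exact value can be compared with the table above:
print(float(p_birthday_free_exact(2000)))
```

A small sanity check: for n = 4 people and r = 3 days, 36 of the 81 arrangements cover all three days, so the probability of an uncovered day is 45/81 = 5/9.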
- Sorry if I'm making a mistake here, but I think the formula
- is not correct. Let n=4 (number of people) and r=3 (number of days in the year). The number of ways to get at least one untaken day is the number of ways to get 2 untaken days (=3) plus the number of ways to get 1 untaken day (=3×2^4), for a total of 51 out of 3^4=81 possible arrangements. But the above formula gives
- due to the alternating signs term (−1)^(r−k), whereas every term in the numerator except the last should have a minus sign. Duoduoduo (talk) 13:56, 23 September 2012 (UTC)
cells within a table
| A | B |
|---|---|
| C | D |
| E | F |
| G | H |
The above table has 2 columns and 4 rows. In the above example I have assigned 8 different values to the 8 cells of that table. Let us say that we can rearrange the values at will. By rearranging the values, how many different arrangements are possible? Bus stop (talk) 19:46, 21 September 2012 (UTC)
- see permutation. Widener (talk) 21:53, 21 September 2012 (UTC)
- Thanks, Widener. That number surprises me as I would have guessed a far lower number. By the way I see the number at Factorial. Bus stop (talk) 01:31, 23 September 2012 (UTC)
A B C D E F G H
- Followup question: Does the same calculation apply in the case of the above arrangement? Bus stop (talk) 11:06, 23 September 2012 (UTC)
- Yes. The formula is not based on the physical arrangement. The idea is this: There are 8 choices for where to put A. After A has occupied a box, there are 7 choices for where to put B. So we could have had A in the first box and B in any of 7 other boxes, or we could have had A in the second box and B in any of 7 other boxes,..., so for each of 8 possibilities for A there are 7 possibilities for B, hence we have 8×7 possibilities so far. Then for each of those 8×7 possibilities there are 6 remaining possible choices for where to put C, giving 8×7×6 possibilities so far. And so on until the end, where we have 8×7×6×5×4×3×2×1 = 40320 possibilities. Duoduoduo (talk) 13:36, 23 September 2012 (UTC)
- Thank you. That is really interesting. I have to think about that. Bus stop (talk) 14:08, 23 September 2012 (UTC)
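Duoduoduo's counting argument can be verified by brute force; a short sketch (my code, not from the thread) enumerates every arrangement of the 8 values:

```python
from itertools import permutations
from math import factorial

# Enumerate every way to place the values A..H into the 8 cells:
count = sum(1 for _ in permutations("ABCDEFGH"))
print(count)             # 40320
print(factorial(8))      # 40320 = 8*7*6*5*4*3*2*1
```

The count is the same no matter how the 8 cells are laid out, which is the point of the follow-up question.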
Friendship combinations
Let say there are 6 people: A, B, C, D, E, F. Each of them has 2 internet friends in the group. None of them has any internet friend outside of the group. How many different ways can this happen? I already know the answer which is 70 but I'm confused on how to find the answer. Can anyone explain to me in details how to get the answer? Thanks!65.128.190.136 (talk) 21:23, 21 September 2012 (UTC)
- I find problems like this usually require some fiddling around. A special case is when you have two groups of three; there are C(6,3)/2 = 10 of those. WLOG you can look at the group of three containing A - then you vary the other two. Widener (talk) 22:05, 21 September 2012 (UTC)
- You could do this systematically by starting with A, then going through all possible pairs that could be both partnered with A. Then for each person in each of those pairs, you could go through all possible pairs that could be partnered with them - this process will terminate when you get back to someone you have already considered. That's the brute-force approach; there may be a quicker approach but I'm not aware of it off the top of my head. Widener (talk) 22:17, 21 September 2012 (UTC)
- Think of the 6 people as points/nodes; friendship is designated by connecting two points/nodes. Given the condition that each person has exactly 2 friends, you need to connect them with lines so that every point/node is paired off with 2 other points/nodes. Then there are only two possible shapes that these nodes can take that satisfy these conditions: 2 rings/circles of 3 points/nodes each, or 1 ring/circle of 6 points/nodes. Now consider each of those shapes in turn:
- 2 rings/circles of 3 points/nodes: You need to select 3 out of 6 points/nodes for the first ring/circle. Then, since there are two rings/circles and order doesn't matter, you need to divide by the possible number of ways to order those 2 rings/circles: C(6,3) / 2! = 20 / 2 = 10.
- 1 ring/circle of 6 points/nodes: The general formula for a ring permutation is (n - 1)! / 2 = 5! / 2 = 60. (To see why the formula is thus, consider that after choosing any arbitrary node in the ring as your starting point, you have 5 nodes left to order, so 5!, but then you have to account for the shape being a circle and looping back to the beginning, so you have to divide by 2 since clockwise and counterclockwise orderings are equivalent upon reflection.)
- 10 + 60 = 70, and there's your answer.
- P.S. The problem is equivalent to asking for the number of undirected 2-regular graphs given n labeled nodes (sequence A001205 in the OEIS), where in this case n = 6. —SeekingAnswers (reply) 04:28, 22 September 2012 (UTC)
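SeekingAnswers's count can be confirmed by exhaustive search over all graphs on 6 labeled nodes; a brute-force sketch (my code, not from the thread):

```python
from itertools import combinations

PEOPLE = range(6)
POSSIBLE_EDGES = list(combinations(PEOPLE, 2))   # 15 possible friendship pairs

def count_2_regular():
    """Count graphs on 6 labeled nodes where every node has exactly 2 edges."""
    total = 0
    # A 2-regular graph on 6 nodes has exactly 6 edges, so try all 6-edge subsets.
    for edges in combinations(POSSIBLE_EDGES, 6):
        degree = [0] * 6
        for a, b in edges:
            degree[a] += 1
            degree[b] += 1
        if all(d == 2 for d in degree):
            total += 1
    return total

print(count_2_regular())   # 70 = 10 (two triangles) + 60 (one hexagon)
```

Only C(15,6) = 5005 subsets need checking, so the search is instant, and it agrees with the 10 + 60 breakdown above.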
nPr?
How do I calculate nPr by hand? Let say in Ti 89 calculator, I type in nPr(9 , 2) = 72. So how can I do it without the calculator? Thanks!65.128.190.136 (talk) 23:06, 21 September 2012 (UTC)
- nPr = n!/(n−r)!, or n(n−1)⋯(n−r+1); in your case 9P2 = 9×8 = 72. Widener (talk) 23:23, 21 September 2012 (UTC)
- Or in simpler terms: P(15,4)=15*14*13*12, just multiply: the first number is where you start, the second one is the number of factors to use. P(12,3)= 12*11*10, you get the idea. Ssscienccce (talk) 01:41, 22 September 2012 (UTC)
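Ssscienccce's rule is easy to turn into code; a sketch (my code; Python's standard library does the same thing via math.perm):

```python
from math import perm

def npr(n, r):
    """n*(n-1)*...*(n-r+1): start at n and multiply r consecutive factors."""
    result = 1
    for i in range(r):
        result *= n - i
    return result

print(npr(9, 2), perm(9, 2))   # 72 72, matching the TI-89's nPr(9,2)
print(npr(15, 4))              # 15*14*13*12 = 32760
```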
- Wasn't this discussed on NPR ? :-) StuRat (talk) 01:44, 22 September 2012 (UTC)
- Facepalm... all right, I laughed a little. --Kinu t/c 04:36, 22 September 2012 (UTC)
September 22
a?
problem 20. I saw the answer key but am still unable to understand what is going on. I understand the binary expansion part. I get lost in the statement "it follows that the coefficient of x^2012 is equal to 2^0 · 2^1 · 2^5 = 2^6". answer key part 1, answer key part 2. Thanks! Pendragon5 (talk) 00:02, 22 September 2012 (UTC)
- A product of sums is the sum of all the possible products; say (a+b)*(c+d)*(e+f): the result will contain every product you can make by picking one term from each sum, so for example a*d*e, b*c*e, b*c*f and five others.
- In your polynomial, you know which factors supply the power-of-x part, so the rest must provide the coefficient. Showing only the parts that make up the x^2012 term: (x^1024+..)*(x^512+..)*(x^256+..)*(x^128+..)*(x^64+..)*(..+32)*(x^16+..)*(x^8+..)*(x^4+..)*(..+2)*(..+1). And 1*2*32 = 64 = 2^6. Ssscienccce (talk) 02:11, 22 September 2012 (UTC)
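The ".."s in the transcript hide the full factors, but the visible pieces are consistent with the product being ∏_{k=0}^{10} (x^(2^k) + 2^k) — that reconstruction is an assumption, not something the thread states. Under it, a short polynomial-multiplication sketch confirms the coefficient:

```python
# Assumed polynomial: product over k = 0..10 of (x^(2^k) + 2^k).
# This is a guess matching the visible terms in the thread, not a quote.
poly = {0: 1}                        # polynomial stored as {exponent: coefficient}
for k in range(11):
    factor = {2 ** k: 1, 0: 2 ** k}  # x^(2^k) term and the constant 2^k
    product = {}
    for e1, c1 in poly.items():
        for e2, c2 in factor.items():
            product[e1 + e2] = product.get(e1 + e2, 0) + c1 * c2
    poly = product

print(poly[2012])   # 64 = 2^6: the constants 1, 2 and 32 come from bits 0, 1, 5
```

Because the exponents 2^0, …, 2^10 are distinct powers of two, each total exponent arises in exactly one way, so the coefficient of x^2012 is just the product of the constants from the unused factors.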
- Oh okay, I think I got it now. Thanks! I also have another question: how can I convert 2012 into a binary number without the answer key? Pendragon5 (talk) 19:37, 22 September 2012 (UTC)
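One standard by-hand method for this follow-up is repeated division by 2, collecting the remainders; a sketch (my code, not a reply from the thread):

```python
def to_binary(n):
    """Repeatedly divide by 2; the remainders, read in reverse, are the bits."""
    bits = []
    while n > 0:
        bits.append(n % 2)   # remainder is the lowest bit
        n //= 2
    return "".join(str(b) for b in reversed(bits))

print(to_binary(2012))   # 11111011100
print(bin(2012)[2:])     # same result via the built-in
# Reading off the 1-bits: 2012 = 1024+512+256+128+64+16+8+4
```

Equivalently, by hand you can greedily subtract the largest power of 2 that fits (1024, then 512, …) and mark a 1 for each power used.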
Commutability of integration
Is the same as ? 71.207.151.227 (talk) 21:04, 22 September 2012 (UTC)
September 23
Seeking function
Does anyone have any feeling whether any function f exists to satisfy the following:
- for all 1/2 < z < 1
Of course, actual solution(s) would be even better, but failing that an idea of whether there are likely to be no solutions, one solution, or many solutions would be of interest. 81.159.105.97 (talk) 00:59, 23 September 2012 (UTC)
Topology
Is there any "recognized" metric which satisfies the Pythagorean Theorem even when the lengths of [vectors representing] sides in a triangle are not real numbers? (e.g. the hypotenuse will be 0 in that metric if the lengths of [vectors representing] the other sides are 1 and i; in other words: the length of the vector (1,i) will be zero, because 1^2 + i^2 = 0^2). Additionally, can such a metric be regarded as Euclidean (or pseudo-Euclidean) in some sense? Or rather, can it even be regarded as the simple Euclidean metric on a space over the complex field? Is it a legitimate "recognized" metric? 77.126.50.25 (talk) 08:58, 23 September 2012 (UTC)
- Would the pseudo-Euclidean Lorentz metric (Minkowski space) qualify? This still uses real numbers, but modifies signs in the Pythagorean theorem. The question is essentially circular, inasmuch as what is considered to be orthogonal (a "right angle") is defined by the metric (usually a symmetric bilinear form), including in any number of dimensions. Pure imaginary numbers can be used to produce the same effect. — Quondum 10:11, 23 September 2012 (UTC)
- The circularity you've pointed at can be avoided by defining the space over the complex field (rather than over the real field), so that the two vectors (x,0) and (0,y) are orthogonal - because their inner product is zero - yet x may equal 1 and y may equal i, in which case the metric I'm looking for determines the length of their sum (1,i) to be zero (because 1^2 + i^2 = 0^2).
- I suspect Minkowski space won't be sufficient, for two reasons: first, it's not defined over the complex field - as opposed to what's expected from the space having the metric I'm looking for; second, even if one really tried to use Minkowski space over the complex field, its metric still doesn't fully obey the Pythagorean Theorem - but rather modifies signs - as you have already pointed out.
- The more I think about this, the more it seems that the metric I'm looking for is the simple Euclidean metric on a space over the complex field, isn't it? 77.126.50.25 (talk) 11:32, 23 September 2012 (UTC)
- You are describing a space of two real dimensions with a metric. If you choose to separately consider it as the complex field, this is irrelevant, since the complex multiplication is not used here except as you choose within the metric, and you are choosing to dismember it into its real and imaginary parts (i.e. two one-dimensional spaces over the real numbers) before defining the metric in terms of the pieces. You are separately defining (x,0) and (0,y) to be orthogonal; you can't do this, it is determined by the metric (i.e. you have to find whether they are orthogonal under the metric you've defined). The metric you have described is exactly the Lorentz metric in two dimensions, where you have simply labelled the units on one axis with an "i". The two axes do turn out to be orthogonal, but vectors away from the axes behave a little counterintuitively, with those at 45° being self-orthogonal. This matches the behaviour of split-complex numbers rather than the normal complex numbers. If you were to try to define a "simple Euclidean metric" over the complex field in two dimensions, each "axis" would allow values from the whole field (you'd be dealing with 4 real dimensions). This construction works perfectly well, but I doubt that it is what you're looking for. — Quondum 13:00, 23 September 2012 (UTC)
- Thanks to your comment, I've just fixed some mistakes in the old version of my original question, in order to make it clear that what I've been trying to describe was not "a space of two real dimensions", nor "two one-dimensional spaces over the real numbers", but rather one space of two complex dimensions. Btw, I didn't understand why I can't state (x,0) and (0,y) to be orthogonal; doesn't this follow from the very definition of orthogonality and the inner product? 77.126.50.25 (talk) 14:14, 23 September 2012 (UTC)
- The complex metric is "unrecognised" not because it is impossible to construct. It is "unrecognised" because it fails to satisfy the triangle inequality, not to mention that it almost certainly admits a lot of null vectors. And the existence of non-zero null vectors is a hint at the impossibility of the Pythagorean Theorem in complex-valued "metric spaces" (which are actually not metric spaces by their core properties). BTW, I still do not realize why this section is named "Topology". Incnis Mrsi (talk) 13:28, 23 September 2012 (UTC)
- For this I implicitly assumed a definition of "metric" as a quadratic form. Mathematicians typically require the triangle inequality of a "metric", whereas physicists don't. And in physics, the Lorentz "metric" is most certainly recognized and is used extensively. And yes, the section is misnamed; care to suggest another? — Quondum 14:08, 23 September 2012 (UTC)
- See my response to your previous comment. 77.126.50.25 (talk) 14:15, 23 September 2012 (UTC)