

September 15

connected graphs

Is there a known pathological set of vertices for which the Urquhart graph, the Gabriel graph or the relative neighborhood graph is not connected? If not, is nonexistence proven? —Tamfang (talk) 07:06, 15 September 2012 (UTC)[reply]

Hmm. If the number of points is finite, then you can prove by induction that the relative neighborhood graph is connected. There are only finitely many pairwise distances between the points. Consider these pairwise distances in increasing order to prove that any two points are in the same connected component: Two points that are separated by the minimum pairwise distance (among all pairs of points) must be joined by an edge. Now consider two points A and B that are separated by more than the minimum pairwise distance. Either they are joined by an edge, or else there is a third point C whose distances from A and from B are strictly less than the distance between A and B. By induction, A and C are in the same connected component, and B and C are in the same connected component; so A and B are in the same connected component. —Bkell (talk) 08:31, 15 September 2012 (UTC)[reply]
Or how about this for the Urquhart graph: to separate two figures you'd need to have a cycle of triangles which each had two sides removed. One of those sides in the cycle must be the smallest, so why was it removed? Dmcq (talk) 08:38, 15 September 2012 (UTC)[reply]
If you have a graph where all the lengths round the cycle are the same, so there is no consistent ordering, then you can separate it in two, but that's rather straining it. E.g. you can do that with a large seven-sided regular figure with a smaller one at its centre. Dmcq (talk) 14:53, 15 September 2012 (UTC)[reply]
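Bkell's induction above is easy to test empirically. Here is a brute-force construction of the relative neighborhood graph plus a connectivity check (an illustrative O(n³) sketch added for experimentation, not part of the original thread):

import itertools, math, random

def rng_edges(pts):
    # Keep edge (a, b) unless some third point c is strictly closer
    # to both a and b than a and b are to each other.
    edges = []
    for a, b in itertools.combinations(range(len(pts)), 2):
        dab = math.dist(pts[a], pts[b])
        if not any(max(math.dist(pts[a], pts[c]),
                       math.dist(pts[b], pts[c])) < dab
                   for c in range(len(pts)) if c not in (a, b)):
            edges.append((a, b))
    return edges

def is_connected(n, edges):
    # Simple depth-first connectivity check.
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {0}, [0]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

pts = [(random.random(), random.random()) for _ in range(50)]
print(is_connected(len(pts), rng_edges(pts)))   # expect True, per the proof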

Recurrence relation

I was thinking about what would happen if a player keeps scoring one more than his average. Let's say a cricketer has an average L_n after n matches. Let's say in the next match, he scores L_n + 1. His new average will be

L_{n+1} = (n·L_n + L_n + 1) / (n + 1)

Hence we have the recurrence

L_{n+1} = L_n + 1/(n + 1)

Now since L_{n+1} − L_n tends to zero as n tends to infinity, I know that the sequence converges. But how do I find out the number it converges to? Can someone please solve this recurrence for me?

Also, more generally, if a cricketer scores k·L_n + c instead of just one more than his average, what would his average converge to? Note that if we have k = c = 1, it reduces to the case we talked about above.

Help will be appreciated ! Rkr1991 (Wanna chat?) 11:01, 15 September 2012 (UTC)[reply]


I was working some more on this, and I find that the general series seems to converge for any value of k less than 2. Is this correct? And if so, how can I find the limit? Rkr1991 (Wanna chat?) 11:01, 15 September 2012 (UTC)[reply]

You cannot conclude that a sequence converges if the change tends to zero. The initial recurrence relation you give is, aside from an initial additive constant L_0, the harmonic series, which diverges (albeit very slowly). — Quondum 11:48, 15 September 2012 (UTC)[reply]
Yeah, there's no convergence at all. You can determine this just by thinking about the problem without actually doing any calculations. If a cricketer plays forever and keeps scoring any constant positive value more than his average every game, then his score for each specific game will diverge toward infinity, and his average score for all games will follow and diverge toward infinity as well (though ever more slowly, requiring ever more games for the same amount of increase). An increasing sequence does not necessarily converge even if the steps get smaller; the steps have to get smaller at a sufficient rate, and the rate of decrease described in your problem isn't sufficient. —SeekingAnswers (reply) 16:02, 17 September 2012 (UTC)[reply]
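A quick numerical illustration of the divergence (an added sketch, taking L_0 = 0, so L_n is just the n-th harmonic number):

import math

L = 0.0
for n in range(1, 10**6 + 1):
    L += 1.0 / n              # the recurrence step: L_n = L_{n-1} + 1/n
    if n in (10, 10**2, 10**4, 10**6):
        print(n, round(L, 4), round(math.log(n), 4))

# The average tracks ln(n) plus the Euler–Mascheroni constant 0.5772...,
# growing without bound, exactly as Quondum and SeekingAnswers say.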


September 16

Probability question

Hi, this is for personal interest and is not a homework question.

1. Say I choose n independent random numbers uniformly in the range 0 to 1, and then sort them in ascending order, calling the resulting sequence x_1, x_2, ..., x_n.

2. Now suppose I start with x_0 = 0 and then recursively set x_{i+1} = x_i + r, where r is a random variable from a distribution P. When I get to x_{n+1} I stop, and then I divide all the x_1 through x_n by x_{n+1}. Is there any distribution P that will ensure that the resulting distribution of x_1, x_2, ..., x_n is identical to that obtained in method 1? Failing that, is there any sequence of different (i-dependent) distributions* that would achieve that result?

* But not x_i-dependent, since I already know how to do that.

86.179.116.254 (talk) 00:15, 16 September 2012 (UTC)[reply]

I don't quite understand part 2, could you provide an example, please ? StuRat (talk) 17:04, 16 September 2012 (UTC)[reply]
I think it is clear. Which bit don't you understand? I can't provide an example because I do not know any solution. 86.176.214.83 (talk) 17:35, 16 September 2012 (UTC)[reply]
To attempt to rephrase 2 slightly (from someone other than the OP), he's generating a one-dimensional random walk starting from zero, drawing the step size from an unspecified distribution P. (My guess is that there is an implicit assumption that P is always non-negative.) He's then normalizing the points such that the total distance traveled (in n+1 points) is 1. His question is: what distribution P would make the set of the first n points visited during this random walk be uniformly distributed across the unit interval? -- 71.35.125.16 (talk) 20:05, 16 September 2012 (UTC)[reply]
(OP) Right. Over many trials, the sets of x's delivered by method 2 should be statistically indistinguishable from those delivered by method 1. By the way, the non-negative constraint is not an implicit assumption but follows from the requirements (if negative r is allowed then you are not guaranteed to end up with all the x's between 0 and 1, so it could never be a solution). Also by the way, interestingly (or perhaps obviously, I don't know), making P the same distribution as the intervals between the x's in (1) does not seem to work. 86.176.214.83 (talk) 20:42, 16 September 2012 (UTC)[reply]
Hmm. I'm not going to give an answer here, because I don't know; but I'm going to restate the question a bit. If we just consider the case n = 1, then we are seeking a probability distribution P such that if X and Y are independent random variables distributed according to P, then X/(X + Y) is uniformly distributed on [0, 1]. Or maybe we'll settle for two probability distributions P1 and P2 such that if X and Y are independent random variables with X distributed according to P1 and Y distributed according to P2, then X/(X + Y) is uniformly distributed on [0, 1]. Right? —Bkell (talk) 05:41, 17 September 2012 (UTC)[reply]
Yes, that is correct in the simple case of n = 1. Obviously the elegant and desirable solution is to have just one distribution P, and even more so if it worked for any n.86.128.1.50 (talk) 11:35, 17 September 2012 (UTC)[reply]
  • I won't attempt a formal proof, but I'm convinced that the answer is no. The central limit theorem says that if n is reasonably large, the distribution of x_n obtained by method 2 will be approximately Gaussian. The distribution of x_n obtained by method 1 will be nothing like Gaussian. Looie496 (talk) 06:29, 17 September 2012 (UTC)[reply]
I'm not sure, but I think you may have misunderstood the question. This is not about the distribution of x_n alone. It is about the distribution of the whole set of n variables, once they have been normalised into the range 0 to 1 using the method explained. There appears to be nothing "Gaussian" about this for any choice of P or n. 86.128.1.50 (talk) 11:31, 17 September 2012 (UTC)[reply]
Sure, but if we can get the distribution of x_1, x_2, …, x_n to be the same in the two methods, then, in particular, the distribution of x_n alone will be the same in the two methods. So, if it is impossible to do the latter, then it is also impossible to do the former. —Bkell (talk) 14:23, 17 September 2012 (UTC)[reply]
Oh, but I see what you mean: after normalization, the distribution of x_n in the second method is not Gaussian. —Bkell (talk) 14:30, 17 September 2012 (UTC)[reply]
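For what it's worth, there is a classical choice of P that does work: the exponential distribution. If E_1, ..., E_{n+1} are i.i.d. exponentials with partial sums S_i, then (S_1/S_{n+1}, ..., S_n/S_{n+1}) has exactly the joint law of the order statistics of n uniforms (the standard "uniform spacings" fact; note this is not the same as reusing the marginal distribution of the spacings from method 1, which is Beta-distributed, consistent with the OP's failed experiment). A quick empirical check, as a sketch:

import random

def method1(n):
    return sorted(random.random() for _ in range(n))

def method2(n):
    total, xs = 0.0, []
    for _ in range(n + 1):                 # n+1 exponential steps
        total += random.expovariate(1.0)
        xs.append(total)
    return [x / xs[-1] for x in xs[:-1]]   # normalize, drop x_{n+1}

n, trials = 5, 100000
for method in (method1, method2):
    means = [0.0] * n
    for _ in range(trials):
        for i, x in enumerate(method(n)):
            means[i] += x / trials
    print([round(m, 3) for m in means])
# Both lines print approximately [0.167, 0.333, 0.5, 0.667, 0.833],
# the means i/(n+1) of the order statistics of five uniforms.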

Space battle with only one shot

Here is a mathematical question. In a space war, there are two ships fighting each other to the death: a large ship and a small ship. Both ships have a single weapon which can only be fired once. The weapon is so powerful that one hit from it would result in total destruction. The small ship has a probability of hitting of S_hit(x) = 0.9^x, where x is the distance in some fixed unit; the large ship has a probability of hitting of L_hit(x) = 0.6^x.

Assume that both ships are approaching each other in a straight line. What is the best strategy for the small ship? What is the best strategy for the large ship? It is obvious that the large ship should wait until the small ship approaches as close as possible before firing its weapon, but if it waits too long, the small ship will fire first and destroy it. The small ship wants to get as close as possible before firing its weapon, because if it fires while too far away, it will likely miss, giving the large ship a chance to shoot it at a closer distance. 220.239.37.244 (talk) 03:48, 16 September 2012 (UTC)[reply]

Here you get into game theory, as you would probably want to fire just before your opponent, who presumably will do the same. StuRat (talk) 03:54, 16 September 2012 (UTC)[reply]
You haven't stated what the payoffs are, so I'm going to make some assumptions. Each ship derives utility 1 from surviving, and utility epsilon from destroying the other ship while surviving (small enough to be ignored in our calculations, but positive to ensure that if one ship misses, the other ship will fire at distance 0). If the large ship fires at distance x, then the small ship has two options: fire at distance x as well, or wait to fire until distance 0 (assuming it survives). One can evaluate the payoffs for each of these. Similarly if the small ship chooses to fire at distance x. Finding equilibrium then comes down to solving the equation 0.9^x + 0.6^x = 1. Let a be the solution: the equilibria occur when one ship fires at distance a and the other fires at distance 0 (if it survives). Of course, this all changes if you change the payoff structure.--121.73.35.181 (talk) 06:36, 16 September 2012 (UTC)[reply]
That distance is approximately 2.72 (thanks Wolfram Alpha!). I agree, that is the distance at which the two captains will be indifferent to firing or waiting (and will simply do whatever the other one doesn't). So, there are two equilibriums, one with the large ship firing at 2.72 and the small ship waiting, and one the other way around. In either case, plugging in 2.72 gives us a probability of the small ship winning of 75% (and the large ship 25%). --Tango (talk) 13:44, 16 September 2012 (UTC)[reply]
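A quick numerical check of that indifference distance (an added sketch; the equation 0.9^x + 0.6^x = 1 is the one from the reply above):

def f(x):
    return 0.9**x + 0.6**x - 1.0

lo, hi = 0.0, 10.0                  # f(0) = 1 > 0, f(10) < 0
for _ in range(60):                 # bisection
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round(lo, 4), round(0.9**lo, 4))   # ~2.7224 and ~0.75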
It's pretty clear that the objective is to destroy the enemy ship. Therefore the strategy is one where the probability of the destruction of the enemy ship is maximised. However, mathematically that occurs when the distance is zero, which makes this a paradox: no real captain would wait until the distance is zero before firing the weapon, because it would most certainly mean the enemy would fire first.
It's obvious the strategy is as follows.
If the enemy has already fired (and you are still alive), then wait until the distance is zero, then fire the weapon.
If the enemy has not fired yet, then wait until the optimum distance Xoptimum, then fire the weapon.
The distance Xoptimum is the one at which the probability that the enemy ship will be destroyed is greatest. But how does one calculate Xoptimum?
It seems that Xoptimum must be a function of the current distance to the enemy. Obviously at the start the distance is infinity and Xoptimum has some value. Later, when the distance is 10, Xoptimum can be a completely different value, because you are still alive at distance 10. 220.239.37.244 (talk) 10:34, 16 September 2012 (UTC)[reply]
The optimum distance is only a function of the current distance in as much as if you've already crossed it you can't fire at that distance. As long as you are at a distance greater than 2.72, then you shouldn't fire until you reach 2.72 (and then you should fire if your enemy doesn't and wait if your enemy does fire - although basic game theory breaks down a bit there since you won't actually know what they are going to do until they do it). If you start out closer than 2.72, then everyone just wants to fire straight away and it isn't entirely clear what happens because the rules didn't set out what happens if you fire simultaneously. --Tango (talk) 13:44, 16 September 2012 (UTC)[reply]
It goes outside the realm of math when game theory human behavior is added in. Knowing how your enemy is likely to behave becomes important. Also, is there any delay between when the enemy commits to firing and when you fire ? StuRat (talk) 17:01, 16 September 2012 (UTC)[reply]
I thought the whole point of "game theory" is that it does use mathematical models and methods to solve these kinds of problems. Therefore it is not outside the realm of math. 86.176.214.83 (talk)
But that assumes that your opponent behaves rationally and can do the math. If not, then it comes down to what you guess your opponent will do. Then there's also an opponent who is too smart, so also considers what you will do in response to what he will do, etc. StuRat (talk) 21:54, 16 September 2012 (UTC)[reply]
I'm not disputing that real-life game problems may involve imponderable aspects of human behaviour that are not amenable to exact mathematical analysis. My quibble was about your statement "It goes outside the realm of math when game theory is added in". As I understand it, game theory covers exactly those aspects of the subject that are amenable to mathematical (or logical) analysis. Even the game theory article defines the subject as "the study of mathematical models of ... etc." (my italics). 86.176.214.83 (talk) 22:33, 16 September 2012 (UTC)[reply]
Point taken and correction made. StuRat (talk) 22:53, 16 September 2012 (UTC)[reply]
You don't need your opponent to be able to do the maths. There are plenty of examples where people (and other animals) have naturally come to the same solution as game theory would suggest simply be empirical observation, or even evolutionary pressures. --Tango (talk) 23:07, 17 September 2012 (UTC)[reply]
When the ships are further apart than the critical distance of c. 2.72, both ships have a lower chance of survival if they fire first, compared to if the other ship fires first. Therefore, neither should fire. When the ships are closer than c. 2.72, both ships have a greater chance of survival if they fire first. However, both also further increase their chances if they can fire first as late as possible, so the behaviour for <= 2.72 may not be so simple as just firing as fast as possible. 86.176.214.83 (talk) 17:33, 16 September 2012 (UTC)[reply]
There was an analogy I was trying to think of at the time, which came to me later, which is the game of "chicken". In fact, we have an article Chicken (game), but I haven't looked at it yet, and I don't know how relevant it actually is to this problem. 86.128.1.50 (talk) 12:04, 17 September 2012 (UTC)[reply]
There may be more complications than already mentioned. First of all, there are no turns in this game, and it's ill-defined in that we don't know what happens if both fire at the same time. This can be worked around by assuming that at distances of, say, 0.01n, the captain of the small ship gets their opportunity to fire, and at 0.01(n - 1/2) the other captain does. The question is what happens if we replace the constant 0.01 and repeat the problem with smaller constants. We can start at values of n which are so huge that firing during the first turn would be "a bad idea" by any standards.
The fundamental dilemma is that waiting is insofar beneficial as it will increase the chances of hitting later on, but detrimental as it adds to the chances the other captain will hit should they fire during their upcoming turn. We can assume that the captain with the better chance to hit (in this example the one going up against the bigger target) will never want to fire at a chance of <50%. It will boil down to the question what would happen if the enemy fired during the next turn, vs. if I fire first. As one can see quite easily, it is advantageous to fire if the risk to miss is below the enemy's chance to hit me during their turn: 1 - a^x = b^x, with a = 0.9 and b = 0.6 or vice versa, determines the critical value x. If one adds b^x, one gets the equation posted by IP user 121.73
Oops. It must read, 'If one adds a^x...' - ¡Ouch! (hurt me / more pain) 06:35, 19 September 2012 (UTC)[reply]
If one solves the equation for x, one does unfortunately not get the solution to the continuous problem, as both captains would fire at the same instant, and we don't know what happens in that case. It does, however, solve the turn-based problem, as it provides a very good approximation of the distance at which one captain will fire. This will still assume that there are only two choices, fire and hold, but no "run" option for the captain of the faster ship if they shot first and missed. - ¡Ouch! (hurt me / more pain) 06:05, 19 September 2012 (UTC)[reply]

Root of cubic formula

Root of quadratic formula is

x = (-b ± √(b² - 4ac)) / (2a)

but what is the root of cubic formula? Sunny Singh (DAV) (talk) 16:06, 16 September 2012 (UTC)[reply]

The more terms are added, the more possible solutions. With the quadratic formula we already have the possibility of 2 roots, introduced by the ± sign, but larger formulas get unwieldy fast. However, since you asked, here it is, along with the reasons it's not recommended for common use: [1]. Wikipedia has a slightly different version: Cubic_function#General_formula_of_roots. StuRat (talk) 16:38, 16 September 2012 (UTC)[reply]
"Larger" formulas cease to exist fast! 86.176.214.83 (talk) 17:42, 16 September 2012 (UTC)[reply]
Actually the brilliant but tragically short-lived Évariste Galois showed via Galois theory that it is impossible to have algebraic solutions for general polynomials of order 5 and higher. His theory also provides the tools for telling whether or not a given higher-order polynomial may be a special case with algebraic roots (e.g. x^8 - 2 = 0). Dragons flight (talk) 17:31, 17 September 2012 (UTC)[reply]

Before the computer, tables of square roots and cube roots and trigonometric functions were used. So a problem was considered solved when reduced to such functions. But now the computation of a square root is not much easier or faster than solving the equation in the first place like this. The root of cubic formula has lost importance. Bo Jacoby (talk) 06:30, 17 September 2012 (UTC).[reply]

I sort of doubt that the importance of that formula ever had much to do with getting concrete numerical approximations to the root. It's too awkward to use. I can practically guarantee that the quartic formula was never important for that reason. The formulas are important for theoretical reasons, not because you can look up roots in tables. --Trovatore (talk) 20:07, 20 September 2012 (UTC)[reply]
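For completeness, a sketch of how one can actually extract a real root of a general real cubic a·x³ + b·x² + c·x + d numerically, via the depressed-cubic substitution and Cardano's formula (the three-real-roots case handled trigonometrically; illustrative code, not from the thread):

import math

def cbrt(u):
    return math.copysign(abs(u) ** (1.0 / 3.0), u)

def real_cubic_root(a, b, c, d):
    # Depress the cubic: x = t - b/(3a) gives t^3 + p t + q = 0.
    p = (3 * a * c - b * b) / (3 * a * a)
    q = (2 * b**3 - 9 * a * b * c + 27 * a * a * d) / (27 * a**3)
    shift = -b / (3 * a)
    disc = (q / 2) ** 2 + (p / 3) ** 3
    if disc >= 0:                    # one real root (Cardano)
        s = math.sqrt(disc)
        t = cbrt(-q / 2 + s) + cbrt(-q / 2 - s)
    else:                            # casus irreducibilis: use cosines
        r = math.sqrt(-p / 3)
        theta = math.acos(3 * q / (2 * p * r))
        t = 2 * r * math.cos(theta / 3)
    return t + shift

print(real_cubic_root(1, 0, 0, -2))    # 1.2599... = 2^(1/3)
print(real_cubic_root(1, -6, 11, -6))  # 3.0, a root of (x-1)(x-2)(x-3)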


Thanks, I got my answer. I have another question which has been revolving in my mind, but it is not from mathematics. The question is:
"We, in India, read quadratic equations in standard 10. In which standard is the quadratic equation taught to the students of the USA?" Sunny Singh (DAV) (talk) 17:16, 17 September 2012 (UTC)[reply]

What's a "standard"? If you mean the same as grade level, then my recollection was that it was taught around 7th or 8th grade in the USA. Dragons flight (talk) 17:31, 17 September 2012 (UTC)[reply]
However, that's for college prep students. Those who don't intend to go to college may not encounter the quadratic formula at all, depending on state minimum curriculum requirements. StuRat (talk) 18:48, 17 September 2012 (UTC)[reply]

By quadratic equation I mean root of quadratic formula. Did you, Dragons flight, mean the same? Sunny Singh (DAV) (talk) 14:50, 18 September 2012 (UTC)[reply]

Yes, he probably did, because that is the context of the question and it sounds about right to me too - I think I learned it around 7th grade. As StuRat mentioned, mathematics teaching standards will vary from state to state in the US, but I think that they all follow pretty much the same schedule. It would be great if someone could find a reference with a state-by-state summary of basic mathematics education requirements, but I haven't found anything with a few quick searches. 209.131.76.183 (talk) 12:57, 20 September 2012 (UTC)[reply]

A problem with units

The following differential equation came up when I was solving a simple electrostatics problem: dV/dr = k/r, where V (which represents the electrical potential) is a function of r (which represents a distance) and k is a constant. The solution is obviously V = k·ln(r) + C. But unit-wise, this answer doesn't make sense because you can't take the logarithm of a quantity with units. How can this dilemma be resolved? 65.92.7.148 (talk) 17:34, 16 September 2012 (UTC)[reply]

Rewrite the solution as V = k·ln(r/C), where C is an arbitrary constant that has the same units as r. — Quondum 18:10, 16 September 2012 (UTC)[reply]
You can just take the logarithm of your units, it won't hurt them. Or in other words, formally calculating with logarithms of units works fine. You'll have that ln(r) = ln(r/m) + ln(m) for r measured in meters m, and you can use the additive constant to cancel the logarithm of the meter. —Kusma (t·c) 12:36, 17 September 2012 (UTC)[reply]


September 17

L'Hopital's rule

How do I apply L'Hopital's rule to a limit function with two or more variables? Plasmic Physics (talk) 10:45, 17 September 2012 (UTC)[reply]

With two or more independent variables, I think you need to define the path along which you approach the limit point (and so reduce the problem to a one-variable problem) before applying L'Hopital's rule. This is because the limit can depend on the path taken. To take a simple example (which doesn't really need L'Hopital's rule, although you can use it), consider the limit of the function

f(x,y)/g(x,y) = (x + 2y) / (2x + y)

at (0,0). If we approach (0,0) along the line x = 0 then the limit is 2; along the line y = 0 then the limit is 1/2; and along the line y = x then the limit is 1. Gandalf61 (talk) 11:11, 17 September 2012 (UTC)[reply]
What if I use the mixed double partial derivatives of both the numerator and denominator? If I do that for the limit of f(x,y)/g(x,y) as (x,y) → (0,0), then this approach would give 2/2, which simplifies to 1. Is this just a coincidence? Plasmic Physics (talk) 13:17, 17 September 2012 (UTC)[reply]
Not sure what you mean by "mixed double partial derivatives". If you mean

(∂f/∂x + ∂f/∂y) / (∂g/∂x + ∂g/∂y)

then you are implicitly choosing the path y = x (or at least, a path that is tangent to y = x at (0,0)). Gandalf61 (talk) 13:51, 17 September 2012 (UTC)[reply]
No, I mean the product not the sum, as in (∂f/∂x)·(∂f/∂y) over (∂g/∂x)·(∂g/∂y). Plasmic Physics (talk) 21:16, 17 September 2012 (UTC)[reply]
Roughly, expand f and g in their multivariate Taylor series at (0,0). If the Taylor series have a common factor, then you can cancel this common factor and evaluate the limit. Of course, this works much less often than in one variable, since two functions can be zero at a point yet have no common factor (e.g., f(x,y)=x and g(x,y)=y are both zero at the origin, with no common factor). Sławomir Biały (talk) 00:03, 18 September 2012 (UTC)[reply]
To show that a limit exists in the multivariate context, though, you need to show that the limit is the same along all paths. I'm not sure l'Hopital's rule is going to be much help (it's really just a short-cut anyway - if you can find a limit using the rule then you can find it using elementary techniques almost as quickly). --Tango (talk) 23:12, 17 September 2012 (UTC)[reply]
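A numerical illustration of Gandalf61's path dependence, using the example quotient as reconstructed above (an added sketch):

def q(x, y):
    return (x + 2 * y) / (2 * x + y)

paths = [("x = 0", lambda t: (0.0, t)),
         ("y = 0", lambda t: (t, 0.0)),
         ("y = x", lambda t: (t, t))]
for label, path in paths:
    print(label, q(*path(1e-8)))    # prints 2.0, 0.5, 1.0 respectively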

Definite integral from -1 to 1 of 1/x

Since 1/x is discontinuous and undefined at x = 0, my understanding is that the definite integral therefore has no limit and is undefined (and claiming that

$\int_{-1}^{1} \frac{dx}{x} = 0$

is sketchy, dubious, and unrigorous). However, I have heard that it is possible to rigorously evaluate $\int_{-1}^{1} \frac{dx}{x}$ so that the result is not undefined. Can someone explain how this is done? —SeekingAnswers (reply) 15:34, 17 September 2012 (UTC)[reply]

See Cauchy principal value. Your integral is discussed in the Examples section. --Wrongfilter (talk) 16:18, 17 September 2012 (UTC)[reply]
See also Principal value for the complex number case and list of integrals#Integrals with a singularity which has a note about both cases. Dmcq (talk) 16:42, 17 September 2012 (UTC)[reply]
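Written out, the principal-value computation for this example (per the linked articles) is

$\operatorname{PV}\int_{-1}^{1}\frac{dx}{x} = \lim_{\varepsilon\to 0^{+}}\left(\int_{-1}^{-\varepsilon}\frac{dx}{x} + \int_{\varepsilon}^{1}\frac{dx}{x}\right) = \lim_{\varepsilon\to 0^{+}}\bigl(\ln\varepsilon + (-\ln\varepsilon)\bigr) = 0,$

whereas the ordinary improper integral requires the two one-sided pieces to converge separately, and each diverges logarithmically; that is the precise sense in which writing 0 without the symmetric limit is unrigorous.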

simple Differential equation

hi,

I would like to know the solution for:

f(x)*f^2(x)=1 15:50, 17 September 2012 (UTC) — Preceding unsigned comment added by Exx8 (talkcontribs)

What have you tried so far on solving the problem? —SeekingAnswers (reply) 16:28, 17 September 2012 (UTC)[reply]
Is this supposed to be f'(x)·f(x)^2 = 1? Looie496 (talk) 21:40, 17 September 2012 (UTC)[reply]

no, it is f''(x)·f(x)^2 = 1 Exx8 (talk) —Preceding undated comment added 23:16, 17 September 2012 (UTC)[reply]

Really the second derivative? This would be easy to deal with if it was the first derivative, but with the second derivative, it is essentially the equation for the motion of a charged particle in a repulsive inverse square field, which doesn't have a solution that can be written in closed form (as far as I know). Looie496 (talk) 19:23, 18 September 2012 (UTC)[reply]
I agree that it seems implausible that this is the correct equation. It does have a solution though: f(x) = -(9/2)^(1/3)·x^(2/3). Dragons flight (talk) 19:41, 18 September 2012 (UTC)[reply]

Correct you are!! Do you have some software or did you do it alone? Thank you! Exx8 (talk) 21:26, 19 September 2012 (UTC)[reply]
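A quick symbolic check of that power-law solution (an added sketch using SymPy; note it is a particular solution, not the general one):

import sympy as sp

x = sp.symbols('x', positive=True)
f = -sp.Rational(9, 2) ** sp.Rational(1, 3) * x ** sp.Rational(2, 3)
print(sp.simplify(sp.diff(f, x, 2) * f**2))   # prints 1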

Skewed distribution with given median, mean and SD

For what skewed generalizations of the normal distribution can parameters be estimated given the mean, median and standard deviation of a sample but no other information about the sample? NeonMerlin 16:37, 17 September 2012 (UTC)[reply]

Are you asking for named types of distributions? Otherwise the answer is banal: you could do it for almost any family of distributions that has three parameters. Looie496 (talk) 18:36, 17 September 2012 (UTC)[reply]
These are not skewed, but the elliptical distributions are a generalization of the normal distribution in which one parameter is the median (which equals the mean if the mean exists) and the other parameter is the standard deviation. Duoduoduo (talk) 18:53, 17 September 2012 (UTC)[reply]
If you type three parameter distribution into the Wikipedia search box, a number of promising hits come up. Duoduoduo (talk) 19:03, 17 September 2012 (UTC)[reply]
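To make Looie496's point concrete, here is a sketch of numerically backing the three parameters of one such family (the skew-normal, chosen purely for illustration) out of a given mean, median and standard deviation, using SciPy; the function name, target values and starting guess are all illustrative:

from scipy import stats, optimize

def fit_skewnorm(mean, median, sd):
    def residuals(params):
        a, loc, scale = params
        d = stats.skewnorm(a, loc=loc, scale=abs(scale))
        return [d.mean() - mean, d.median() - median, d.std() - sd]
    a, loc, scale = optimize.fsolve(residuals, x0=[1.0, median, sd])
    return a, loc, abs(scale)

print(fit_skewnorm(mean=1.0, median=0.8, sd=1.2))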

When does additivity imply homogeneity?

There are usually two requirements for R-linearity of a function f : U → V, where U and V are modules over a ring R (here x, y ∈ U, α ∈ R):

f(x + y) = f(x) + f(y)   (additivity)
f(αx) = αf(x)   (homogeneity)

The two requirements are clearly closely related (partially redundant conditions). When does the first imply the second? For example, if R = ℤ, it clearly does. Wherever R is not commutative, it clearly doesn't. Does the implication extend to when R = ℚ? To other commutative rings? The reverse implication clearly holds whenever the module U can be generated by a single element (i.e. when it is one-dimensional over R). — Quondum 21:15, 17 September 2012 (UTC)[reply]

Yes, it holds for ℚ. As you noted, f(nx) = n·f(x) for n an integer. So q·f((p/q)x) = f(px) = p·f(x), and thus f((p/q)x) = (p/q)·f(x). So additivity implies ℚ-homogeneity.
It doesn't hold for ℝ, however. Let B be a basis for ℝ as a vector space over ℚ, and let f be projection onto the first basis element.--121.73.35.181 (talk) 22:38, 17 September 2012 (UTC)[reply]
Resolved
Thanks. Nice counterexample, which by analogy severely restricts the rings for which the implication holds. — Quondum 03:44, 18 September 2012 (UTC)[reply]

September 18

The continuum and cardinality

Hi. Not homework, but this is something I'm curious about. It is easy to show that ℝ and (0,1) have the same cardinality, by taking the function f : (0,1) → ℝ, f(x) = tan[π(x − 1/2)], which clearly is a bijection. How would one show that (0,1) and [0,1] have the same cardinality, by a similar argument (constructing a function)? Thanks. 24.92.74.238 (talk) 00:33, 18 September 2012 (UTC)[reply]

Well, it can't be quite that simple, because (0,1) and [0,1] are not homeomorphic (for example, [0,1] is compact, and (0,1) is not). So the function can't be bicontinuous. But you can find a general method for doing that sort of thing at Schroeder-Bernstein theorem. Someone may be able to give a cleverer answer. --Trovatore (talk) 01:12, 18 September 2012 (UTC)[reply]
I don't know about clever, but here's another: map 1/n to 1/(n+1) and −1/n to −1/(n+1) for n a positive integer, and leave every other point fixed. This is a bijection from [0,1] to (0,1).--2404:2000:2000:5:0:0:0:C2 (talk) 03:05, 18 September 2012 (UTC)[reply]
That's not quite a map from one of the sets to the other, but the general idea works. Essentially if you have an infinite set M and a finite or countable set S, then to find a bijection between M and M ∪ S you take a countable set L ⊆ M and construct a bijection from L to L ∪ S using the idea of Hilbert's Hotel, a construction that uses the enumeration of L like the 2404 IP above has suggested. —Kusma (t·c) 07:41, 18 September 2012 (UTC)[reply]
Whoops, for some reason I was thinking [-1,1] instead of [0,1].--121.73.35.181 (talk) 08:06, 18 September 2012 (UTC)[reply]
Just define a sequence in (0, 1), e.g. a_n = 1/(n+1), and shift it by two positions. The two endpoints of [0, 1] then land in the two just released points:

f(0) = a_1, f(1) = a_2, f(a_n) = a_{n+2}, and f(x) = x for every other x.

CiaPan (talk) 09:04, 18 September 2012 (UTC)[reply]
Some people think I am homeomorphic, but I am actually bicontinuous. μηδείς (talk) 18:50, 18 September 2012 (UTC)[reply]

Complexity class of minimalist pangram construction

Some time ago, I solved a word puzzle which asked the puzzler to find the minimal number of names necessary out of a given list of 100 names to construct a pangram. I was able to solve the problem without too much difficulty by inspection alone; what I did was I looked for the least common letters, which appeared in only a few names, and then worked backwards from the names containing the least common letters to construct the minimal pangram.

I was thinking some more about the problem, and while the 100-name problem was easy to solve manually, I suspect that the same problem with large inputs might be quite difficult, even for a computer.

Consider the following generalized version of the problem:

You have an alphabet consisting of c characters and a list of s strings of variable lengths which altogether contain at least 1 of every character of the alphabet. Your task is to construct a minimalist (defined as requiring the fewest possible strings) pangram out of those strings.

The difficulty is the "minimalist" part. It is fairly easy to construct a pangram; indeed, you can easily see that there is an upper bound of c on the number of strings required, as, at most, you need one string that represents each character of the alphabet. The lower bound, though, seems very hard. The algorithm I outlined above of working backwards from least common letters seems extremely inefficient when c and s are large numbers, as the branching factor of possibilities is so large even after minimizing the tree by working starting from the least common letters. It can heuristically produce a close-to-minimal pangram very quickly, but the absolute minimal pangram with that approach would require going through that entire huge tree.

So my question is, does anyone know the complexity class of this problem? Is this pangram-construction problem a well-known problem in the literature which already has many papers written on it? If so, does Wikipedia have an article on it? (Perhaps linked somewhere from list of NP-complete problems, if my intuition that the problem is computationally difficult is correct; there are so many problems in that list, and I don't really know what to look for.) Or perhaps I am overestimating the difficulty of the problem, and some efficient algorithm is known; if so, how does that efficient algorithm work?

SeekingAnswers (reply) 02:43, 18 September 2012 (UTC)[reply]

Assuming that there are no grammatical constraints on the sentence, it's a set cover problem. 130.188.8.27 (talk) 10:14, 18 September 2012 (UTC)[reply]
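To illustrate the set-cover connection: exact minimisation is NP-hard, but the classic greedy heuristic (repeatedly take the string covering the most still-missing letters) is fast and provably within a factor of about ln c of optimal. A sketch, with a made-up name list:

def greedy_pangram(strings, alphabet):
    missing, chosen = set(alphabet), []
    while missing:
        best = max(strings, key=lambda s: len(missing & set(s)))
        if not missing & set(best):
            raise ValueError("alphabet not coverable by these strings")
        chosen.append(best)
        missing -= set(best)
    return chosen

names = ["quincy", "alexandra", "bob", "fitzgerald", "jack", "mavis",
         "william", "kendrick", "sophie", "hubert", "joaquin", "trudy"]
print(greedy_pangram(names, "abcdefghijklmnopqrstuvwxyz"))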

see humanities reference desk

i'm asking purely in terms of elegance and what is a good definition. i understand the current definitions. 80.98.245.172 (talk) 19:02, 18 September 2012 (UTC)[reply]

Summarizing the question stated there: If the count of hours turns over at 12:00 and the count of months turns over at 1/1, one of them must be wrong, so which is it? —Tamfang (talk) 20:11, 18 September 2012 (UTC)[reply]
Inconsistent, yes. Wrong, no. Mathematicians cannot agree on whether to start the natural numbers at 0 or 1. Dbfirs 20:21, 18 September 2012 (UTC)[reply]
You misspelled "some people, even some mathematicians, erroneously start the natural numbers at 1". Hope this helps. --Trovatore (talk) 22:34, 18 September 2012 (UTC) [reply]
Naturally! — but only in the last 200 years. I recall a lecturer who insisted on using "zeroth" when most people said "first". Dbfirs 07:16, 19 September 2012 (UTC)[reply]
I happen to know what question you folks are talking about, because I participated in the responses. But in general, please provide a link to whatever it is you're referring to. The above would completely bamboozle many people. -- ♬ Jack of Oz[your turn] 22:14, 18 September 2012 (UTC)[reply]
Agreed, but I wish you would have provided the link, rather than just complain about the lack of one. Here it is: Wikipedia:Reference_desk/Humanities#Either_the_Calendar_is_wrong_or_the_Clock_is._It.27s_that_simple.. StuRat (talk) 22:18, 18 September 2012 (UTC)[reply]
Have to agree, an RD star for that one, as actually providing the needed link. μηδείς (talk) 22:22, 18 September 2012 (UTC)[reply]
I wanted to explain why it's important for the instigator of the thread, or the first respondent, to provide a link in these circumstances. I wasn't complaining per se. I identified the issue and suggested a solution. Now, I'm in trouble. Interesting. And someone who did actually complain gets a star. Interesting. -- ♬ Jack of Oz[your turn] 22:30, 18 September 2012 (UTC)[reply]
This reminds me of the debate over where computer arrays should start counting. In Fortran, they traditionally start with one, so the first object in the array is the first element: ARRAY(1). In C, they start counting at zero, so the first object in the array is the zeroth element: ARRAY(0). (The C convention does make sense in terms of the first element having a zero offset, however.) StuRat (talk) 22:25, 18 September 2012 (UTC)[reply]
Anything that involves modular arithmetic is a nuisance when arrays start at 1, and, as you say, in languages where actual memory locations and offsets are visible, it makes sense for the first element to be at offset zero. Other than that, I cannot think of any situation where arrays starting at 0 are helpful from a programmer's perspective. 86.176.210.77 (talk) 00:08, 19 September 2012 (UTC)[reply]
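A tiny illustration of the modular-arithmetic point (an added sketch in Python, where indexing is 0-based):

size = 8
buf = [None] * size
for tick in range(20):
    buf[tick % size] = tick        # slot for item i is just i % size
print(buf)                         # [16, 17, 18, 19, 12, 13, 14, 15]
# With 1-based arrays the same wraparound needs ((i - 1) % size) + 1.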

September 19

Cohen and the axiom of choice

The article on Paul Cohen mentions that:

Cohen is noted for developing a mathematical technique called forcing, which he used to prove that neither the continuum hypothesis (CH), nor the axiom of choice, can be proved from the standard Zermelo–Fraenkel axioms (ZF) of set theory.

I am trying to find out the exact year in which Cohen first proved the result concerning the axiom of choice (not the continuum hypothesis), and the year and the paper in which it was first published, if ever. Thanks--Shahab (talk) 11:21, 19 September 2012 (UTC)[reply]

In 1963 according to this:
In 1942 Gödel attempted to prove that the axiom of choice and continuum hypothesis are independent of (not implied by) the axioms of set theory. He did not succeed, and the problem remained open until 1963. (In that year, Paul Cohen proved that the axiom of choice is independent of the axioms of set theory, and that the continuum hypothesis is independent of both.)
It was published in two parts, but the result had previously appeared in April 1963 and was presented May 3, 1963 at Princeton, according to the book note at the end of the second paper: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC300611/?page=6
First paper: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC221287/?page=1
To work out where exactly the independence of the axiom of choice from ZF is proven, I would first have to figure out what all this means...
Other info: http://www-history.mcs.st-andrews.ac.uk/Biographies/Cohen.html Ssscienccce (talk) 17:26, 19 September 2012 (UTC)[reply]

Solve this please

(Moved by User:JackofOz from Miscellaneous Ref Desk)

Questions 1 - 5

There are six soccer teams - J, K, L, M, N and O - in the Regional Soccer League. All six teams play each Saturday at 10 a.m. during the season. Each team must play each of the other teams once and only once during the season.

Team J plays team M first and team O second.
Team K plays team N first and team L third.
Team L plays team O first.

On the first Saturday, which of the following pairs of teams play each other ?

(A)    J and K ; L and O ; M and N
(B)    J and K ; L and N ; M and O
(C)    J and L ; K and N ; M and O
(D)    J and M ; K and L ; N and O
(E)    J and M ; K and N ; L and O

Which of the following teams must K play second ?

(A)    J
(B)    L
(C)    M
(D)    N
(E)    O

What is the total number of games that each team must play during the season ?

(A)    3
(B)    4
(C)    5
(D)    6
(E)    7  

— Preceding unsigned comment added by 175.110.112.185 (talk) 19:43, 19 September 2012 (UTC)[reply]

You are essentially told the answer to the first question directly. I'm tempted to say it's obviously (E) (can you see why that's the case?)
Question two is done by process of elimination. K does not play N or L second because it plays them first and third respectively. J and O compete against each other in the second match so neither of them can play K. So the answer is M.
The answer to the third question is 5, obviously. For each team there are 5 other teams, and each team must compete against each of the others exactly once. Widener (talk) 20:22, 19 September 2012 (UTC)[reply]
Creating a chart typically helps on this type of problem:
        O P P O N E N T S
TEAM   1st 2nd 3rd 4th 5th
----   --- --- --- --- ---
 J
 K
 L
 M
 N
 O
Try filling that in with what you know already. (Although, this doesn't look like so much of a logic problem as a reading comprehension problem.) StuRat (talk) 00:37, 20 September 2012 (UTC)[reply]

Show that a series is not Absolutely convergent.

Show that this series is not absolutely convergent: $\sum_{n=1}^{\infty} \frac{e^{-inb} - e^{-ina}}{n}$.

This is my approach:

$\sum_{n=1}^{\infty} \left| \frac{e^{-inb} - e^{-ina}}{n} \right| = \sum_{n=1}^{\infty} \frac{|e^{-inb} - e^{-ina}|}{n}$

This is almost the harmonic series. The desired result follows if you can show that $|e^{-inb} - e^{-ina}| \ge \varepsilon$ for some $\varepsilon > 0$ for every $n$ in an arithmetic progression, for example. Equivalently,

$|1 - e^{in(b-a)}| \ge \varepsilon$

might be easier to show.

Then it suffices that $e^{in(b-a)}$ is non-real for all $n$ in an arithmetic progression, which is why I asked a similar question a while ago. That way, the number inside the absolute value signs will always have a nonzero imaginary part and therefore will have nonzero absolute value. It's not actually enough that this holds for infinitely many $n$, as I later realized; for instance the result doesn't (at least not obviously) follow if it's only true for the squares.
Anyway, do you have suggestions for proving this result? — Preceding unsigned comment added by Widener (talkcontribs) 20:07, 19 September 2012 (UTC)[reply]


exp(-i n b) - exp(- i n a) =

exp[-i n (a+b)/2] {exp(i n (a-b)/2) - exp(-i n (a-b)/2)}

So, the summation of the absolute value of the terms is:

$\sum_{n=1}^{\infty} \frac{2\,|\sin(n(a-b)/2)|}{n}$

If you replace infinity by N in the upper limit, you can say that the value is larger than what you get if you replace the absolute value of sin by sin^2. Then you can write sin^2 in terms of cos of the double argument; this yields one divergent summation (in the limit of N to infinity) and a convergent summation. Count Iblis (talk) 20:55, 19 September 2012 (UTC)[reply]

Hey, that works! Thanks! Widener (talk) 21:16, 19 September 2012 (UTC)[reply]
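For the record, Count Iblis's last step written out, with θ = (a - b)/2 (a short worked bound, added for clarity):

$\sum_{n=1}^{N}\frac{2|\sin n\theta|}{n} \;\ge\; \sum_{n=1}^{N}\frac{2\sin^{2} n\theta}{n} \;=\; \sum_{n=1}^{N}\frac{1-\cos 2n\theta}{n} \;=\; \sum_{n=1}^{N}\frac{1}{n} \;-\; \sum_{n=1}^{N}\frac{\cos 2n\theta}{n}.$

The harmonic part diverges as N → ∞, while the cosine part converges by Dirichlet's test whenever 2θ is not a multiple of 2π, so the partial sums of the absolute values are unbounded.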

The maze on today's main page.

As I posted earlier on Wikipedia:Main Page/Errors, the caption for this was a little confusing. Can anyone here figure out what it was trying to say: "In these mazes, it will typically be relatively easy to find the origin point, since most paths lead to or from there, but it is hard to find the way out"? As far as I can tell, since everywhere is connected to everywhere else, and there is neither a marked 'start' or 'end' once the maze is constructed, the statement makes no sense.

Incidentally, I'm fairly sure that there is a name for this particular type of maze - it has no 'loops' so can be navigated in its entirety simply by 'following a wall' - does anyone know the name? AndyTheGrump (talk) 21:08, 19 September 2012 (UTC)[reply]

It sounds like the "all roads lead to Rome" concept. In the case of the Roman Empire, all roads would actually also lead to any connected city. What they are really saying is that it's the central hub. Think of it like an airline's hub city, too. Yes, you can get to any city they service eventually, but the direct routes all lead to or from the hub city. StuRat (talk) 00:34, 20 September 2012 (UTC)[reply]
The phrase "simply connected" comes to mind, and it would technically be correct for a certain view of this (no loops) maze, although I'm not sure if anyone in the maze community uses precisely that term. (Although simple graph would be more relevant from a graph theory perspective.) I would also imagine (though I don't have any proof/calculations to show) that the depth-first generation method would produce a maze with a low average degree of branching, which should aid in maze traversal, though I don't know if it would differentially affect any given position. Although it might be that, since branching preferentially happens at the tips of the growing maze, you might expect a gradient of degree, with points near the origin having lower average degree than those further from it, leading to search being simpler in the region of the starting point. -- 205.175.124.30 (talk) 02:11, 20 September 2012 (UTC)[reply]
It appears that "perfect maze" is the most common name for loop-free mazes. -- BenRG (talk) 06:10, 20 September 2012 (UTC)[reply]
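A minimal sketch of the depth-first ("recursive backtracker") generation method discussed above: it carves a spanning tree of the grid, so the result has no loops, i.e. a perfect maze navigable by wall-following (illustrative code, not from the article):

import random

def perfect_maze(w, h):
    visited = {(0, 0)}
    passages = set()                       # carved edges between cells
    stack = [(0, 0)]
    while stack:
        x, y = stack[-1]
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < w and 0 <= y + dy < h
                and (x + dx, y + dy) not in visited]
        if nbrs:
            nxt = random.choice(nbrs)
            passages.add(frozenset({(x, y), nxt}))
            visited.add(nxt)
            stack.append(nxt)              # go deeper
        else:
            stack.pop()                    # dead end: backtrack
    return passages

print(len(perfect_maze(10, 10)))           # 99 passages: a tree on 100 cells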

September 20

Lanchester's Law and Game Theory

Let us consider Lanchester's square law and three groups A, B, C which want to go to war. Making up numbers, let's say A, B, C have armies of sizes 45, 40, and 35 respectively. I am concerned with the optimal strategy for group C. From what I understand, B and C should form an alliance. They'll have a vastly superior force to A; they will beat A, and both B and C will suffer casualties in proportion. After A has been beaten, B and C can fight each other. It seems that if C has the smallest army, then it will always lose in the end. A, B, C can randomly all shoot at each other, or B and C can form an alliance, wipe out A, and then turn against each other. Is there ever a case (with army sizes being A < B < C... notice strict inequalities) where Lanchester's Law and multiparty game theory predict that C will win? A numerical counterexample is what I am looking for, of course. If not, then C will always lose, right? So what is C to do? What incentive is there for C to form an alliance with B? It knows it will be killed in the end anyway, so why help B (over A) survive? Is this the best strategy? If yes, then why, because C's destruction seems inevitable, so why should C care what it does and who it helps and what strategy it chooses? - Looking for Wisdom and Insight! (talk) 04:59, 20 September 2012 (UTC)[reply]

I assume you meant "A > B > C" Dbfirs 07:54, 20 September 2012 (UTC)[reply]
I can see three possible winning strategies for C:
1) The obvious one is to make peace with both A and B.
2) Form an alliance with one and hope that it holds after the other is defeated.
3) The most Machiavellian way for C to win is to secretly instigate a war between A and B, then, after one is destroyed and the other is weakened, attack. StuRat (talk) 05:18, 20 September 2012 (UTC)[reply]
See truel for something very similar. Since peace treaties are unnecessary between friends, I view them as declarations of war, but not yet. Option 3 certainly seems the best option for C; the problem is what happens if A or B discover it. C could also send an army of size 5 to 'help' B. It would be interesting to try and figure out the strategies to stay alive the longest if they are all fighting each other. Say army A devoted a(t) of its effort to destroying B and 1-a(t) to destroying C, so C was being destroyed by a force of A(1-a(t))+Bb(t), B by a force of C(1-c(t))+Aa(t) and A by B(1-b(t))+Cc(t). Dmcq (talk) 12:40, 20 September 2012 (UTC)[reply]
Peace should break out all over in ABC world, for no one can afford to attack alone, nor be the weaker ally in an attack. Rich Farmbrough, 21:10, 20 September 2012 (UTC).[reply]
Yep, as far as I can see, if they have a three-way war then all three will be completely destroyed at the same time, provided the two smaller ones together could defeat the largest. What'll happen is that the smaller ones will gang up on the largest until it is no longer the largest, then the two larger will reduce each other till all three are the same size as the smallest. Dmcq (talk) 22:31, 20 September 2012 (UTC)[reply]
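A crude numerical sketch of the three-way square-law fight (Euler integration; the even fire-splitting is an illustrative targeting choice, not an optimal strategy):

dt = 0.001
A, B, C = 45.0, 40.0, 35.0
while min(A, B, C) > 0.5:                  # stop when someone is wiped out
    dA = -(0.5 * B + 0.5 * C)              # firepower directed at A
    dB = -(0.5 * A + 0.5 * C)
    dC = -(0.5 * A + 0.5 * B)
    A, B, C = A + dA * dt, B + dB * dt, C + dC * dt
print(round(A, 1), round(B, 1), round(C, 1))
# Under even targeting the smallest army is destroyed first; Dmcq's point
# is that anti-largest targeting instead drags all three down together.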

Fourier coefficients of bounded Monotonic functions.

Show that the Fourier coefficients of bounded monotonic functions on $[0, 2\pi]$ are $O(1/n)$.
There is a hint which says to approximate the function with characteristic functions on intervals, because the characteristic function of an interval has Fourier coefficients that are $O(1/n)$. First, what approximation should I use? The approximation I have been using is the obvious but naive step-function approximation $\sum_k f(x_k)\,\chi_{[x_k, x_{k+1})}$, but actually you don't necessarily have equality at the endpoint, do you? It also says to manipulate the series to get a telescoping series which can be bounded by twice the bound of the function, but I don't know why or how you should/would do this.
How do you do this? Widener (talk) 07:41, 20 September 2012 (UTC)[reply]
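One way the hinted manipulation can go (a sketch under my own reading of the hint, not necessarily the intended solution): for a step function $g = \sum_{k=0}^{N-1} c_k \chi_{[x_k, x_{k+1})}$ approximating $f$, the $n$-th Fourier coefficient is

$\hat g(n) = \frac{1}{2\pi}\sum_{k=0}^{N-1} c_k \int_{x_k}^{x_{k+1}} e^{-int}\,dt = \frac{1}{2\pi i n}\sum_{k=0}^{N-1} c_k\left(e^{-inx_k} - e^{-inx_{k+1}}\right),$

and summing by parts (the telescoping step), using $|e^{-inx_k}| = 1$ and the monotonicity of the $c_k$,

$\left|\sum_{k=0}^{N-1} c_k\left(e^{-inx_k} - e^{-inx_{k+1}}\right)\right| \le |c_0| + |c_{N-1}| + \sum_{k=1}^{N-1}|c_k - c_{k-1}| = |c_0| + |c_{N-1}| + |c_{N-1} - c_0| \le 4\sup|f|,$

where the telescoped middle sum is the part "bounded by twice the bound of the function". Hence $|\hat g(n)| \le \frac{2}{\pi}\cdot\frac{\sup|f|}{|n|}$, uniformly over the approximating step functions, and the $O(1/n)$ bound passes to $f$.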

League problem

For a game I'm trying to come up with a particular league system, and I'm unsure how to go about doing it. I tried to do it manually but I couldn't come up with an algorithm that led me to a solution. Basically I have 16 teams in a league, but for logistics' sake I need to break it up into four 4-team series. I figure if I can break it up right, then I can have each team face every other team in one of the series once, and only once. So in the first leg I have four series:

Series 1 Series 2 Series 3 Series 4
Team A Team E Team I Team M
Team B Team F Team J Team N
Team C Team G Team K Team O
Team D Team H Team L Team P

Each team has to face 15 other teams, and in each leg it faces 3 other teams, so then in 5 legs any given team will have faced all 15 other teams. I can come up with two other legs: one by simply flipping it around the diagonal (so Series 1 would be teams A, E, I, and M), and the other by taking diagonals for each series (so Series 1 would be teams A, F, K, and P), but beyond that I start running into problems. Maybe I'm backing myself into a corner, I don't know. And I calculate that there are 2386 possible ways to sort the 16 teams into 4 series, so I'm not about to check all of those, to find five "orthogonal" legs.

Is there a better way of going about this, in a more mathematical way? Thanks —Akrabbimtalk 16:47, 20 September 2012 (UTC)[reply]

I think your approach is reasonable. Your 4th and 5th groups will be "what's left":
Series 1 Series 2 Series 3 Series 4
Team A Team E Team I Team M
Team B Team F Team J Team N
Team C Team G Team K Team O
Team D Team H Team L Team P
Series 1 Series 2 Series 3 Series 4
Team A Team B Team C Team D
Team E Team F Team G Team H
Team I Team J Team K Team L
Team M Team N Team O Team P
Series 1 Series 2 Series 3 Series 4
Team A Team E Team I Team M
Team F Team J Team N Team B
Team K Team O Team C Team G
Team P Team D Team H Team L
 .
 .
 .
Also note that there are many possible solutions. StuRat (talk) 17:12, 20 September 2012 (UTC)[reply]
But the other diagonal will intersect. One direction will be A,F,K,P, and the other direction will be A,N,K,H. A and K are in both. —Akrabbimtalk 17:29, 20 September 2012 (UTC)[reply]
Yes, you're right, I've revised my advice above. I'm going to write a program to solve this and post it tomorrow. StuRat (talk) 17:38, 20 September 2012 (UTC)[reply]
Thanks, I'll be interested to see what you come up with and your algorithm to solve it. I was trying to work it out with Matlab, but it was just turning into a clumsy brute force algorithm, and I wasn't getting anywhere. But I charted it out, and three legs that we have here are definitely not part of the solution. For example, for A and G to meet, the only teams that haven't met A and G would be J and N, and they have both played each other already, so I was right when I thought I had painted myself into a corner. —Akrabbimtalk 17:56, 20 September 2012 (UTC)[reply]

(Teams have numbers rather than letters, obviously)

Series 1: 1, 2, 3, 4
Series 2: 5, 6, 7, 8
Series 3: 9, 10, 11, 12
Series 4: 13, 14, 15, 16

Series 1: 1, 5, 9, 13
Series 2: 2, 6, 10, 14
Series 3: 3, 7, 11, 15
Series 4: 4, 8, 12, 16

Series 1: 1, 6, 11, 16
Series 2: 2, 5, 12, 15
Series 3: 3, 8, 9, 14
Series 4: 4, 7, 10, 13

Series 1: 1, 7, 12, 14
Series 2: 2, 8, 11, 13
Series 3: 3, 5, 10, 16
Series 4: 4, 6, 9, 15

Series 1: 1, 8, 10, 15
Series 2: 2, 7, 9, 16
Series 3: 3, 6, 12, 13
Series 4: 4, 5, 11, 14

81.159.104.182 (talk) 03:41, 21 September 2012 (UTC)[reply]
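Incidentally, the schedule above is an instance of a resolvable design: the 16 teams can be viewed as the points of the affine plane of order 4, the five legs as its five parallel classes of lines, and any two points lie on exactly one line, so every pair of teams meets exactly once. A sketch generating such a schedule from GF(4) arithmetic (it yields a valid schedule of the same type, though not necessarily the identical one above):

def gf4_add(a, b):          # addition in GF(4): XOR of bit patterns
    return a ^ b

def gf4_mul(a, b):          # multiplication in GF(4), with w^2 = w + 1
    table = [[0, 0, 0, 0],
             [0, 1, 2, 3],
             [0, 2, 3, 1],
             [0, 3, 1, 2]]
    return table[a][b]

points = [(x, y) for x in range(4) for y in range(4)]
team = {p: i + 1 for i, p in enumerate(points)}      # teams 1..16

legs = []
for m in range(4):   # one parallel class per slope m: lines y = m*x + b
    legs.append([[team[(x, gf4_add(gf4_mul(m, x), b))] for x in range(4)]
                 for b in range(4)])
legs.append([[team[(c, y)] for y in range(4)] for c in range(4)])  # x = c

for i, leg in enumerate(legs, 1):
    print("Leg", i, leg)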

x/y in IV quadrant, is the given ratio negative or positive?

question: "Suppose that the point (x,y) is in the indicated quadrant (IV). Decide whether the given ratio is positive or negative." Is it not positive, simply by sketching? How would it be negative? thanks.24.139.14.254 (talk) 20:08, 20 September 2012 (UTC)[reply]

Just look at the signs of x and y. The ratio of two numbers with the same sign is positive; the ratio of two numbers with opposite signs is negative. Widener (talk) 20:28, 20 September 2012 (UTC)[reply]

September 21

Integration of 1/x in Quadrant I

Hi. First of all, this isn't an actual homework problem, but a conceptual question I asked a mixed crowd of people, some of whom know integration techniques and some of whom don't. Everybody seemed to know intuitively or by proof that the area underneath the curve in quadrant one is infinite: since the graph never touches either axis, the area underneath each section is infinite. This didn't make sense to me, because I intuitively compared it to a converging sum such as 0.999..., but apparently the hyperbolic function does not converge. Therefore, I must be severely deluded. The integration technique also relies on the fact that zero invalidates the integration, resulting in infinity; if one of the axes were shifted away from the zero-point, the area would still be infinite. This brings up the question: since shifting both axes away from zero would immediately cause the function's area to become finite, is there a tipping point of some sort? This is hilarious because the hyperbola itself is tipping pointential. Therefore, perhaps at the exact tipping point the area oscillates between finite and infinite, causing the creation of a complex number. However, since this is a bi-axial method, the convergence paradigm is both null and void.

I must admit, quite embarrassingly, that I've never done mathematical proofs for more than one hour in my entire life. Therefore, I may need to rely on intuitive techniques such as visual calculus to get the point across to myself about why this function has an infinite area. I incorrectly assumed that the total area underneath the hyperbola in the first quadrant is reducible to be equivalent to the area to the bottom left of a linear function with the slope -1. However, I then remembered my Cartesian plane-onto-sphere method, which was improperly answered because my method assumes an infinite Cartesian surface, whereas stereographic projection relies on a finite Earth. It would be more like taking the membrane of an open universe and reconstructing it to make it closed. Anyway, I proved to myself that in this projection, the four points of (0,0), (x-fin,x-infinlim), (y-fin,y-infinlim) and (±∞,±∞) together comprise the manifold geodesic of a sphere in 2.5 dimensions (clarification, citation, objectivity and sanity needed), in each of the four quadrants (actually, there are a total of eight). This is where {x,y-fin/infinlim} form the continuum where the grid system transforms from finite in one direction to infinite in the opposite direction (from the vantage point of the "anti-origin"), forming two points on a sort of central meridian line, though that's irrelevant largely to solve this problem. So in the Q-I space, the first quadrant forms a circle, and that circle's area is infinite. The perimeter of this circle must also be infinite. The area under the hyperbola, thus, represents an area around the circumference of this circle, although it is widest at one point of the circle and converges at the other end of the circle. Despite the distance from this circumference getting smaller as the function recedes from the 2-space origin, the area still goes onto infinity, thus allowing the total area by integration to be infinite. Phew, though this is not as clear to most people, so there has to be a better intuitive way of proving it.

Someone enlighten me by dialing 0 on the Hamiltonian operator. Thanks. ~AH1 (discuss!)
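For what it's worth, the standard computation behind all of this (added as a worked note):

$\int_{1}^{\infty}\frac{dx}{x} = \lim_{b\to\infty}\ln b = \infty, \qquad \int_{1}^{\infty}\frac{dx}{x^{p}} = \frac{1}{p-1} < \infty \quad (p > 1),$

and symmetrically $\int_{0}^{1} dx/x$ diverges while $\int_{0}^{1} dx/x^{p}$ converges for $p < 1$. So y = 1/x is exactly the borderline case of the p-test: its quadrant-I area diverges logarithmically at both ends, and the genuine "tipping point" is the exponent p = 1, not the position of the axes. The comparison with 0.999... fails because the slices don't shrink fast enough: the areas over [n, n+1] behave like the terms of the divergent harmonic series, not like a geometric series.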

Limits of Integration

Is it true in general that ?


Widener (talk) 02:57, 21 September 2012 (UTC)[reply]