Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia


Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.



March 16

Sum of i?

Look at Kullback–Leibler divergence: the first formula in the definition defines a sum from i. Not from i to something, just from i. Is that a typo? If not, what is it the sum of? Is it assumed that P(i) is a set of values? -- kainaw 01:32, 16 March 2010 (UTC)[reply]

The notation means "sum over all values of i ". Of course the meaning of all is context-dependent. --Trovatore (talk) 01:37, 16 March 2010 (UTC)[reply]
In this case it's over all real numbers i for which P(i) > 0. There are countably many such i's. Note that the distribution is assumed to be discrete, so ∑_i P(i) = 1. It can probably be shown that the sum converges absolutely. -- Meni Rosenfeld (talk) 08:14, 16 March 2010 (UTC)[reply]
I don't see anything that says i has to range over real numbers. i could be taken from the set {red,green,blue}, and then you'd have the perfectly sensible expression P(red)·log(P(red)/Q(red)) + P(green)·log(P(green)/Q(green)) + P(blue)·log(P(blue)/Q(blue)). --Trovatore (talk) 18:42, 16 March 2010 (UTC)[reply]
Indeed. While it should be made more explicit, I think it is clear that i ranges over the sample space of the random variables in question. There is no restriction on what that sample space can be, other than that it must be countable in order for a discrete distribution to exist on it. --Tango (talk) 18:47, 16 March 2010 (UTC)[reply]
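
For readers who want to see the sum written out concretely, here is a minimal sketch (not part of the original thread) of the discrete Kullback–Leibler sum over a finite sample space such as {red, green, blue}; the distributions P and Q below are made-up example values.

```python
import math

# Two made-up discrete distributions over the same finite sample space.
P = {"red": 0.5, "green": 0.3, "blue": 0.2}
Q = {"red": 0.4, "green": 0.4, "blue": 0.2}

# D_KL(P || Q) = sum over all i with P(i) > 0 of P(i) * log(P(i) / Q(i)).
kl = sum(p * math.log(p / Q[i]) for i, p in P.items() if p > 0)
print(kl)  # a small non-negative number (about 0.025 nats here)
```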

four dimensional space.

If S^3= { (x,y,z) : x^2+y^2+z^2+w^2=1} and S^2= { (x,y,z) : x^2+y^2+z^2} and h: S^3→S^2, where S^3 is circle in four dimensional space and S^2 is circle in three dimensional space. Then check for h(a,b,c,d) ε S^2. Find h^(-1)(0,0,1). —Preceding unsigned comment added by 147.174.75.241 (talk) 01:48, 16 March 2010 (UTC)[reply]

You need to be more specific about what the function h is. You've given its domain and codomain, but not the actual definition. Staecker (talk) 11:12, 16 March 2010 (UTC)[reply]

Our original poster is not asking a question; he's doing stenography. If he'd bother to read and understand what he wrote, he'd realize it doesn't make sense because he left something out. Michael Hardy (talk) 02:07, 17 March 2010 (UTC)[reply]

Was 2 always prime?

Is it true that the number 2 was not always considered prime? If so, when was this and what were the arguments against its primality? Thanks.Antheafor (talk) 11:17, 16 March 2010 (UTC)[reply]

From the article Prime number: Until the 19th century, most mathematicians considered the number 1 a prime. You might be thinking of this fact instead of the number 2 ??? 195.35.160.133 (talk) 11:44, 16 March 2010 (UTC) Martin.[reply]
I agree, the OP is probably thinking of the dispute over whether 1 should be considered prime (these days it isn't, simply because it makes the definition more useful - that's how definitions are decided in maths). As far as I know, 2 has always been considered prime and I can't think of any reason for it to be excluded. --Tango (talk) 11:51, 16 March 2010 (UTC)[reply]
There are some cases in which considering 1 to be a prime would be more useful: For a typical example, just think about the common version of Goldbach conjecture: every even number greater than 2 is a sum of two primes. The restriction "greater than 2" spoils the aesthetics of the conjecture; However, if we considered 1 to be a prime then we would get the original conjecture (being slightly different from the better-known conjecture), which is much simpler to formulate: every even number is a sum of two primes. Note that even the original conjecture hasn't been resolved yet. HOOTmag (talk) 12:10, 16 March 2010 (UTC)[reply]
Wouldn't that change the content? Is it known that if p is a prime greater than 2 then (p+1) is the sum of two primes (neither of which is equal to 1)? If not, it seems like including 1 would change the meaning of the conjecture. — Carl (CBM · talk) 12:20, 16 March 2010 (UTC)[reply]
Yes, that would really change the content, but I have already made a distinction between what I've called "the original conjecture" and what I've called "the better-known conjecture" (see above what I wrote in parentheses). Note that Goldbach used originally what I called "the original conjecture", which really considers 1 to be a prime. The better-known conjecture, which does not consider 1 to be a prime, is due to Euler, rather than Goldbach. For more details, see Goldbach conjecture. HOOTmag (talk) 13:23, 16 March 2010 (UTC)[reply]
Let's also report the "little Goldbach conjecture": every even prime is a sum of two odd numbers. --78.13.143.108 (talk) 15:03, 20 March 2010 (UTC)[reply]
Notice that considering 1 to be a prime does not change the content of your conjecture... HOOTmag (talk) 18:06, 22 March 2010 (UTC)[reply]
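
As a purely illustrative aside (not from the original discussion), the two formulations can be compared by brute force for small even numbers; the bound of 200 below is arbitrary.

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def sum_of_two(n, allowed):
    # Is n a sum of two elements of the set `allowed`?
    return any(a in allowed and (n - a) in allowed for a in range(1, n))

limit = 200
primes = {p for p in range(2, limit) if is_prime(p)}
primes_with_1 = primes | {1}          # "primes" in the older sense, counting 1

for n in range(2, limit, 2):
    euler_form = sum_of_two(n, primes)            # every even n > 2
    original_form = sum_of_two(n, primes_with_1)  # every even n
    if not original_form or (n > 2 and not euler_form):
        print("unexpected failure at", n)
print("checked all even numbers below", limit)
```
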
If 1 is prime there is no more unique factorization. I'm amazed that the idea that 1 is prime persisted til the 19th century. 66.127.52.47 (talk) 12:50, 16 March 2010 (UTC)[reply]
It takes some development of abstract algebra to realize that unique factorization is a useful property.—Emil J. 13:15, 16 March 2010 (UTC)[reply]
According to our article, Fundamental theorem of arithmetic, the first complete and correct proof of the theorem was in Disquisitiones Arithmeticae, which was published in 1801. So, unique factorisation wasn't even properly proven until the 19th century. --Tango (talk) 13:36, 16 March 2010 (UTC)[reply]


You still have unique factorization; it's just more awkward to state. You have to keep talking about "primes other than 1" or some such. There are several things that seem more natural if 1 is included in the primes; I recall a list on some Usenet group, but I don't remember them right now. --Trovatore (talk) 02:01, 17 March 2010 (UTC)[reply]

Everybody knows 2 is the odd prime, being the only prime that is even. Gabbe (talk) 15:00, 16 March 2010 (UTC)[reply]

Not so long ago I heard a formidable mathematician object to calling 2 a prime number on the grounds that so many theorems have a hypothesis that p is an odd prime (for example, recall from undergraduate algebra that the quadratic formula works in the field of integers mod p if p is an odd prime). Michael Hardy (talk) 00:57, 17 March 2010 (UTC)[reply]

Yes, lots of quadratic things require you to exclude p=2, but don't the equivalent cubic things require you to exclude p=3, etc.? We just work with quadratics more often, but the principles are the same. --Tango (talk) 01:49, 17 March 2010 (UTC)[reply]
A theorem in finite group theory due to Thompson asserts that in order to prove that a finite group has a normal p-complement (for p an odd prime), it suffices to check that the normalizers of just two p-subgroups each possess a normal p-complement (out of interest, the subgroups are the center of a Sylow p-subgroup of G, and the Thompson subgroup of this Sylow p-subgroup). A theorem due to Frobenius asserts that a finite group G has a normal p-complement if and only if the normalizer of every non-identity p-subgroup of G has a normal p-complement; the result due to Thompson is clearly very strong in comparison to Frobenius's result. However, the strengthening is valid only for odd primes (one particular application of Thompson's result is that a non-identity p-group cannot be a maximal subgroup of a finite simple group if p>2; another instance of how "2" differs from other primes). Results of this nature are dispersed across the field of finite group theory, and perhaps add further evidence supporting the idea that 2 "should not be a prime". PST 13:44, 17 March 2010 (UTC)[reply]

Request for input at our Entertainment desk

This question might benefit from your thoughts, O oracles of problem-solving! ---Sluzzelin talk 13:42, 16 March 2010 (UTC)[reply]

First of all, note that there is a difference between (1) a sudoku puzzle, (2) a sudoku puzzle with a unique solution, and (3) a designed sudoku puzzle. It's also not quite defined what you mean by having to guess; most steps of solving sudoku puzzles are based on guessing, i.e. we know that the 1 goes in this slot because if it does not, there is no slot it can go into. So it's more a question of degree of guessing: how much will you have to "solve" the puzzle past your guess to resolve the question, and how many steps will you have to backtrack, that is, how many separate points will you have to guess on. That said, (1) will frequently require guessing because, since more than one solution is possible, there will be no way to exclude a solution. (3) will, if the design is good, be solvable with a degree of guessing that most will consider as not guessing. This leaves (2) as the interesting case. Unless I am completely mistaken sudoku is NP-complete, so the answer is that most believe that any method of solving will be at least as much work as guessing and backtracking, but it's possible that effective algorithms exist which will solve things faster. This is known as the P=NP problem, and is considered one of the great unsolved problems of computing today. Taemyr (talk) 12:33, 17 March 2010 (UTC) I have seen no proof of sudoku NP-hardness and could be completely wrong when I claim it's hard. Taemyr (talk) 12:37, 17 March 2010 (UTC)[reply]
Sudoku is ASP-complete under parsimonious poly-time reductions[1], which means the following: for every NP-problem of the form x ∈ L ⟺ ∃y P(x,y), with P poly-time and |y| implicitly bounded by a polynomial in |x|, there exists a poly-time function f whose outputs are partially filled Sudoku grids (m×m for some m polynomial in |x|) such that for every x the number of y satisfying P(x,y) equals the number of solutions of f(x). Thus in this (very strong) sense, Sudoku is indeed NP-complete.
However, if you strictly take the task of solving Sudoku in the usual sense, the situation is different. First, it is not even a decision problem, but a search problem: you are not looking for any yes-no answer, you are looking for a solution of the grid (i.e., given a Sudoku puzzle with a unique solution, find the solution). Now, this search problem is mutually poly-time Turing reducible with the following promise problem: given a Sudoku puzzle with at most one solution, decide whether it has a solution. By the completeness result above, this is a complete problem for promise-UP (the class of promise problems of the form ∃y P(x,y), with |y| polynomially bounded in |x| and with the promise that there is at most one y which satisfies P(x,y)). This is similar to NP-completeness, but it is not quite the same. There is a randomized poly-time reduction of NP to UP, hence you can consider Sudoku solving to be "NP-complete" by randomized poly-time Turing reductions.
Of course, all the above relies on using variable-sized grids. If you stick to the usual 9×9 grid, it becomes a finite problem, hence it is trivial from the viewpoint of computational complexity (it is solvable in constant time).—Emil J. 14:11, 17 March 2010 (UTC)[reply]

I find no link to ASP-complete, and a quick Google search finds nothing on the first page that explains it. What is it? Michael Hardy (talk) 18:41, 17 March 2010 (UTC)[reply]

.....oh, maybe your explanation answers that. Should we have a Wikipedia article on that concept? Michael Hardy (talk) 18:43, 17 March 2010 (UTC)[reply]
My explanation actually only explained what parsimonious is (as that's what's relevant here), see the original paper for definition of ASP-reductions and the general framework (basically, the reduction also has to provide a poly-time bijection between the two solution sets). I put the link there so that people can click on it instead of searching on Google. Should we have an article? I don't know; it does not seem to be a widely used concept. Parsimonious reduction may be a better candidate for an article, or at least for a definition included in sharp-P.—Emil J. 19:01, 17 March 2010 (UTC)[reply]
The Complexity Zoo doesn't have an ASP-completeness entry either, so it sounds pretty obscure. 66.127.52.47 (talk) 19:54, 17 March 2010 (UTC)[reply]
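
To make the "guessing and backtracking" description above concrete, here is a minimal, purely illustrative backtracking solver sketch for an ordinary 9×9 grid (0 marks an empty cell). It says nothing about the asymptotic complexity discussion; for a fixed grid size the problem is finite anyway.

```python
def solve(grid):
    """Backtracking Sudoku solver: guess a digit, recurse, undo on failure."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if allowed(grid, r, c, d):
                        grid[r][c] = d          # guess
                        if solve(grid):
                            return True
                        grid[r][c] = 0          # backtrack
                return False                     # dead end: no digit fits here
    return True                                  # no empty cells left

def allowed(grid, r, c, d):
    if any(grid[r][j] == d for j in range(9)):   # row check
        return False
    if any(grid[i][c] == d for i in range(9)):   # column check
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)          # 3x3 box check
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))
```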

Inner Product Spaces

I asked almost this question before, but I wasn't quite clear enough. I know the assumption would be enough to show that Cauchy sequences converge, but:

In an inner product space, does ZF prove that if all absolutely convergent series converge then all Cauchy nets/filters converge? (I know nets and filters are equivalent.) JumpDiscont (talk) 21:18, 16 March 2010 (UTC)[reply]

Actually your previous version here did seem clear enough (at least, no less clear than the new one). Since in the new version of your question Cauchy filters replace Cauchy sequences, I guess you want to know what is used to show the equivalence. Let me recall to you:
  • A normed space is complete as a metric space (all Cauchy sequences converge) if and only if all absolutely convergent series converge; the equivalence just uses natural induction.
Therefore I guess you are asking whether for inner product spaces sequential completeness is equivalent to filter completeness, and whether AC is needed for the proof. Is this what you want? Recall that
  • If a metric space (more generally, a uniform space) is filter-complete, it is obviously sequentially complete.
  • Conversely, (Cantor argument) any Cauchy filter-base in a complete metric space (M,d) does converge; however, for this you do use a choice (for each n you pick a set B_n in the filter-base with diameter less than 1/n, and a point x_n in B_n. It follows that (x_n) is a Cauchy sequence, thus convergent, and that the filter-base converges to the same limit).
I guess one can't avoid using AC or some weaker form of it, even in the case of Hilbert spaces. --pma 00:43, 17 March 2010 (UTC)[reply]
"Therefore I guess you are asking whether for inner product spaces sequential completeness is equivalent to filter completeness, and whether AC is needed for the proof." Exactly.
I think choice would not be needed for separable spaces (use the obvious embedding into the space's filter-completion, take the first member of the countable dense subset less than 1/n from the limit), although I'm wondering if the full thing is any weaker than CC, even if it was weakened to a normed space.
For example does ZF+"All sequentially complete normed vector spaces are filter-complete" prove Countable Choice?
JumpDiscont (talk) 02:07, 17 March 2010 (UTC)[reply]

X/Y

Consider a random variable X with mean µ1 and standard deviation σ1. Consider a random variable Y with mean µ2 and standard deviation σ1. What is X/Y? I know it has a Cauchy distribution--203.22.23.9 (talk) 22:08, 16 March 2010 (UTC)[reply]

If X is a random variable and Y is a random variable then X/Y is also a random variable (as long as Y is not zero). Gabbe (talk) 00:16, 17 March 2010 (UTC)[reply]
See Ratio_distribution#Gaussian_ratio_distribution, if X & Y are known to be normally distributed. --173.49.81.5 (talk) 00:34, 17 March 2010 (UTC)[reply]

You can't conclude it's Cauchy-distributed without assuming more than what you've said. Often that additional assumption is simply that X and Y are independent, although some weaker assumptions will get you the same conclusion. Michael Hardy (talk) 00:58, 17 March 2010 (UTC)[reply]

.... geeeeeeeeez..... I just realized the question didn't say anything about normality or anything like that. So you can't say it's Cauchy-distributed. Michael Hardy (talk) 02:11, 17 March 2010 (UTC)[reply]
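
As a quick numerical illustration of the point about extra assumptions: when X and Y are independent standard normals, the sample quantiles of X/Y track those of a standard Cauchy distribution. This simulation sketch (sample size and seed chosen arbitrarily) is only a demonstration, not part of the original answers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
ratio = x / y

# Compare sample quantiles with the standard Cauchy quantile function tan(pi*(p - 1/2)).
for p in (0.75, 0.90, 0.95):
    print(p, np.quantile(ratio, p), np.tan(np.pi * (p - 0.5)))
```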


March 17

Planck Time

It says

planck time = 5.39124(27) × 10^−44 s

What is the

(27)

?174.3.107.176 (talk) 01:40, 17 March 2010 (UTC)[reply]

It's the uncertainty. It means (5.39124 +/- 0.00027) *10^-44. The last two digits (the 24) are +/- the two digits in the brackets. 0.00027*10^-44 is the standard error. --01:47, 17 March 2010 (UTC)
So 5.39127 *10^-44, ± 0.00024?174.3.107.176 (talk) 03:23, 17 March 2010 (UTC)[reply]
5.39124 *10^-44, ± 0.00027*10^-44. HOOTmag (talk) 08:22, 17 March 2010 (UTC)[reply]
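
A short sketch of the arithmetic behind the concise notation, for anyone who wants to automate it: the digits in parentheses are an uncertainty on the last digits of the significand. The helper below is purely illustrative.

```python
def parse_concise(significand, exponent):
    """Turn e.g. '5.39124(27)' with exponent -44 into (value, uncertainty)."""
    mantissa, unc = significand.rstrip(")").split("(")
    decimals = len(mantissa.split(".")[1])            # digits after the decimal point
    value = float(mantissa) * 10.0 ** exponent
    sigma = int(unc) * 10.0 ** (exponent - decimals)  # uncertainty sits on the last digits
    return value, sigma

print(parse_concise("5.39124(27)", -44))
# (5.39124e-44, 2.7e-48), i.e. (5.39124 +/- 0.00027) x 10^-44
```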

Local connection between measure preserving transformations

Suppose we have two maps F and G which preserve the Lebesgue measure and such that d(F,G)<ε (say in Ck).

Problem: does there exist a continuous family of maps Ft such that

  • Ft preserves the Lebesgue measure
  • d(Ft,F)<ε
  • F0=F and F1=G?

--Pokipsy76 (talk) 19:37, 17 March 2010 (UTC)[reply]

Character question

Working on a proof from "Introduction to Elliptic Curves and Modular Forms" by Koblitz. It involves characters and Gauss sums, which I have little experience with and I don't know what's going on. It's Prop 17 on P127 if you happen to have the book. The proof is on P128. There is a limited preview on Google Books but it does not include P128 so the proof is not included.

is a primitive Dirichlet character modulo N, so a multiplicative character, and is an additive character. is the Gauss sum, though I don't know if this is important yet. We define a function , where and the come from a modular form we start out with, . So, that's just the background and I am stuck on step 1 of the proof. It's probably not very hard. The claim is

.

I guess they're just rewriting in some other form??? I have no idea. Thanks for any help. StatisticsMan (talk) 20:50, 17 March 2010 (UTC)[reply]

The expression (1/N) ∑_{a=0}^{N−1} e^{2πi·a(l−n)/N} is 0 when l and n are different mod N, and is 1 when they are the same. So that form is just stacking up the terms for each equivalence class mod N. Rckrone (talk) 06:38, 18 March 2010 (UTC)[reply]
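
A quick numerical check of that orthogonality fact (not from the book or the thread): averaging e^{2πi·a(l−n)/N} over a = 0, …, N−1 gives 1 when l ≡ n (mod N) and 0 otherwise.

```python
import cmath

def indicator(l, n, N):
    # (1/N) * sum over a of exp(2*pi*i*a*(l-n)/N)
    s = sum(cmath.exp(2j * cmath.pi * a * (l - n) / N) for a in range(N))
    return s / N

N = 7
for l in range(3):
    for n in range(3):
        print(l, n, round(abs(indicator(l, n, N)), 10))  # ~1 if l == n mod N, else ~0
```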

Lambert W function for a base other than e

y = xn^x

solve for x in terms of y and n?--203.22.23.9 (talk) 21:22, 17 March 2010 (UTC)[reply]

Never mind, I've figured it out--203.22.23.9 (talk) 21:23, 17 March 2010 (UTC)[reply]
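
For anyone else landing on this thread: reading the equation as y = x·n^x with n > 1, one standard route is the Lambert W function, since y·ln n = (x·ln n)·e^(x·ln n) gives x = W(y·ln n)/ln n. A sketch using SciPy (branch choice and positivity assumptions are glossed over):

```python
import numpy as np
from scipy.special import lambertw

def solve_x(y, n):
    """Solve y = x * n**x for x on the principal branch, assuming n > 1."""
    ln_n = np.log(n)
    return np.real(lambertw(y * ln_n)) / ln_n

x = solve_x(12.0, 3.0)
print(x, x * 3.0 ** x)   # the second number should come back as ~12.0
```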


March 18

Integrating differential terms independently of each other in a single equation

Hi, I'm having trouble understanding a derivation of Bernoulli's equation given by my lecturer.

The derivation basically gets to the point of:

dp/ρ + d(v²)/2 + g dz = 0

And the next line simply integrates the d(v²) and dz terms independently to become:

∫ dp/ρ + v²/2 + g z = constant

Is this a legitimate operation? To the best of my knowledge you can't integrate one term in an equation independently (with respect to a different variable) to the others. In a triple integral for example you integrate the entire equation with respect to each of the three variables.

I'd appreciate any clarifications. --118.139.13.221 (talk) 00:25, 18 March 2010 (UTC)[reply]

Integration is additive, so ∫(f + g) = ∫f + ∫g. --Tango (talk) 01:06, 18 March 2010 (UTC)[reply]
Note that there are already differentials present in the original equation. You can integrate both sides just by adding an integral sign alone. What we're doing is basically summing up the values of the differentials over some range. If you've got an equation without differentials that you want to integrate, you would need to introduce some differentials. In that case you can't attach whatever differentials you like to different terms; it has to be consistent. So for example if x + y = 0 you can't say that ∫xdx + ∫ydy = 0. You would have to do something like ∫xdx + ∫ydx = 0. On the other hand, if you have dx + dy = 0, then it's fine to say that ∫(dx + dy) = ∫0, which is ∫dx + ∫dy = 0. Rckrone (talk) 04:52, 18 March 2010 (UTC)[reply]
Thanks heaps for that - very useful thing to find out! --118.139.13.221 (talk) 23:13, 18 March 2010 (UTC)[reply]
Forget them differentials - they ain't nothin' but trouble. The derivation of the equation of motion for the fluid should lead you to
d/dt ( p/ρ + v²/2 + g z ) = 0
then you can integrate between times t1 and t2 to get:
p1/ρ + v1²/2 + g z1 = p2/ρ + v2²/2 + g z2
where p1, p2 are pressures at times t1, t2 etc. As this is true for any two times t1, t2, then we must have
p/ρ + v²/2 + g z = constant
Gandalf61 (talk) 10:13, 18 March 2010 (UTC)[reply]
Thanks heaps for that derivation, was a much 'nicer' way to think about it :)

formulas which induce functions

If φ(x,y) is a formula defined by: φ(x,y) ::= x=y², then φ induces a function from the set X of positive numbers x to the set Y of negative numbers y. However, φ does not strongly induce a function from X to Y, i.e. I can only make sure that every positive x has a negative y such that φ(x,y), and that there isn't any other negative y satisfying φ(x,y), but I can't make sure that there isn't any other arbitrary y satisfying φ(x,y). Here is a counter example - for strongly-inducing a function: the same formula φ does strongly induce a function from the set Y of negative numbers y to the set X of positive numbers x, namely: not only can I make sure that every negative y has a positive x such that φ(x,y), and that there isn't any other positive x satisfying φ(x,y), but I can also make sure that there isn't any other arbitrary x satisfying φ(x,y).

The notion of a formula which strongly induces a function, sounds very intuitive to me. Unfortunately, the very expression "strongly induces a function" - is not familiar (I invented it). Do you think that there is any familiar brief expression for indicating this intuitive notion?

HOOTmag (talk) 08:29, 18 March 2010 (UTC)[reply]

See Implicit and explicit functions. Bo Jacoby (talk) 00:45, 19 March 2010 (UTC).[reply]
I didn't find there anything that may help me to find out any familiar brief expression for indicating the intuitive notion of a formula which strongly induces a function. HOOTmag (talk) 11:07, 19 March 2010 (UTC)[reply]
Why does x = y² not "strongly induce" a function from X to Y? Sure, (4, 2) is a point on the curve x = y², but 2 is not in Y, so why does the existence of that point matter? If you're specifying the codomain of the implicit function ahead of time, then "strongly induce" seems to mean the same as "induce a function." If the specified codomain is actually the reals rather than the negative reals, then it seems that what you are doing is defining a principal branch of a multivalued function, which is usually considered to be an arbitrary choice; if there is no such choice, then the set of points satisfying the formula implicitly defines a function in the ordinary sense. What I am saying is that I don't understand the "intuitive notion" of "strongly induces." Can you provide a precise definition for what you mean? —Bkell (talk) 13:29, 19 March 2010 (UTC)[reply]
Perhaps what you mean is that x = y² implicitly defines a relation from X to the reals, but does not define a function unless the codomain is restricted to Y? —Bkell (talk) 13:59, 19 March 2010 (UTC)[reply]
Exactly!
To put it more formally: If φ(x,y) is a formula having two free variables only (over the universal domain of discourse), then: to state that φ induces a function from R to S, means that:
  • Every r in R has an s in S such that every v in S satisfies: φ(r,v) iff v=s.
That's a simple definition of a function (from a given set to a given set) induced by a given formula. But what happens if we let v be totally arbitrary, i.e. let it belong to the universal domain of discourse, not necessarily to a given set? Then we can say that φ strongly-induces a function from R to S, namely:
  • Every r in R has an s in S such that every v satisfies: φ(r,v) iff v=s.
Can you think of any idea about how to express briefly the fact that φ strongly-induces a function from R to S, without using the term "strongly", and without having to get into too many details (like those I've indicated above when I defined the notion of "strongly-inducing a function")?
HOOTmag (talk) 14:14, 19 March 2010 (UTC)[reply]
Hmm. Your earlier example of "strongly induces" was somewhat tautological, then—the reason x = y² strongly induces a function from Y to X is that x is in fact explicitly defined by the formula. (So my current working hypothesis is that "strongly induces" means "explicitly defines.") Do you have an example of a formula that strongly induces a function without explicitly defining it? —Bkell (talk) 14:29, 19 March 2010 (UTC)[reply]
I agree with you that every formula, explicitly defining a given function from X to Y, strongly induces the function, but it's not necessarily the other way around. For example, check: x=y*2. HOOTmag (talk) 18:24, 20 March 2010 (UTC)[reply]
You didn't specify the values of R or S, so I'm going to assume you intend R = X and S = Y. Now, what is the operation in "y*2"? Is that real multiplication? If so, how does your definition for "strongly induces" make sense if v is not a real number? —Bkell (talk) 20:02, 20 March 2010 (UTC)[reply]
What I'm saying is that the formula x = y² doesn't strongly-induce a function from the set X of positive numbers x to the set Y of negative numbers y, while the formula x = -2y does strongly-induce a function from the set X of positive numbers x to the set Y of negative numbers y.
More generally, given the binary relation R which is determined by a given formula, I'm looking for a shorter way to express the idea that both the restriction X ◁ R is a function and the image of X ◁ R is Y.
HOOTmag (talk) 20:23, 20 March 2010 (UTC)[reply]
And what I'm saying is that I still don't understand your definition of "strongly induces." A binary relation requires the specification of a domain and a codomain. If you haven't specified those, you haven't specified a relation.
The definition you gave allows v to "be totally arbitrary," to come from "the universal domain of discourse, not necessarily to a given set." But I don't understand how you plan to interpret the expression "-2v" if v is totally arbitrary. For example, suppose we consider the line with two origins, interpreted as the real line with an "extra" origin (call it @, say). Now we define multiplication on this space to be ordinary real multiplication, with the additional definition that @ × a = a × @ = 0 for all a. This is certainly a valid binary operation. But in this space, with this binary operation, there are two values v for which it is true that 0 = −2v. So then, apparently, x = −2y actually does not "strongly induce" a function from X to Y.
I suppose you will object to this example, saying that it's contrived or that v isn't really a number or something. But your definition is what is allowing v to be totally arbitrary.
If by "strongly induces" you mean "both the restriction X ◁ R is a function and the image of X ◁ R is Y," as you say, then your original example of does not "strongly induce" a function from to , because the restriction of the domain to X does not define a function. (It defines a multivalued function, for which you must choose a principal branch.) So I am still confused about what you are trying to say. —Bkell (talk) 20:54, 20 March 2010 (UTC)[reply]
I thought it was clear that the universal domain of discourse was (in my example) the set of real numbers.
When did I say that the formula x=y² strongly induces a function from X to Y? On the contrary: I said that it doesn't! See the first paragraph of my previous response! On the other hand, the formula x= -2y (defined on the set of real numbers) does strongly-induce a function from the set X of positive numbers x to the set Y of negative numbers y.
Again, I'm looking for a shorter way to express the idea that both the restriction X ◁ R is a function and the image of X ◁ R is Y. How can this simple question confuse you?
HOOTmag (talk) 21:12, 20 March 2010 (UTC)[reply]
Oh, oops, you're right. Sorry, I had it mixed up in my mind. So what's wrong with simply saying that the restriction of R to X is a function and R(X) = Y? That seems pretty short to me. —Bkell (talk) 21:23, 20 March 2010 (UTC)[reply]
Nothing wrong with that, I'm just asking if there is a shorter way to express that. If you think there isn't then it's ok. HOOTmag (talk) 21:26, 20 March 2010 (UTC)[reply]

Expected value of a difference

Please explain if there is something wrong with the following logic (I think there is).

Say the expected value of random variable a is a constant b,

Then the expected value of (a-b) should be E(a-b) = E(a) - b = b - b = 0.

In that case the expected value of the cube of the difference should also be zero, right?

I know this can't be right (because this has to equal some non-zero term in the case of skewness)... but I don't know where the logic fails. —Preceding unsigned comment added by Damian Eldridge (talkcontribs) 10:18, 18 March 2010 (UTC)[reply]

(a-b) and (a-b)² are not independent, so you can't say E((a-b)³) = E(a-b)·E((a-b)²) = 0. Zain Ebrahim (talk) 10:27, 18 March 2010 (UTC)[reply]

Thanks! —Preceding unsigned comment added by Damian Eldridge (talkcontribs) 10:43, 18 March 2010 (UTC)[reply]

(ec) Nope, E(a-b) = 0 does not imply E((a-b)³) = 0. Consider a variable which takes each of three suitably chosen values with equal probability of 1/3 — its mean can be zero while the mean of its cubes is certainly not zero. --CiaPan (talk) 10:45, 18 March 2010 (UTC)[reply]
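
A concrete toy example of the same point (the values below are chosen purely for illustration): a variable with mean zero whose cube has a non-zero mean, i.e. a skewed distribution.

```python
import numpy as np

values = np.array([-2.0, 1.0, 1.0])   # each taken with probability 1/3
print(values.mean())         # 0.0  -> E[a - b] = 0
print((values ** 3).mean())  # -2.0 -> E[(a - b)^3] != 0 (negative skew)
```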

Distance along a parabola

I have a parabola y=x^2, in the range -x0<=x<=x0 (x0 is guaranteed finite) and the arc length d between (-x0, (-x0)^2) to the (x1, (x1^2)) (x1<=x0). How do I find x1?

I looked through wolfram, and the "parabola" and "arc length" pages, but I really couldn't find an answer that made sense to me. This is not at all a homework question, by the way.

Thank you. 78.245.228.100 (talk) 11:59, 18 March 2010 (UTC)[reply]

The arc length of the function graph y = f(x) over a ≤ x ≤ b is L = ∫_a^b √(1 + f′(x)²) dx. For the function f(x) = x² you get f′(x) = 2x, so L = ∫_a^b √(1 + 4x²) dx. This can be expressed in an algebraic form with a and b as parameters (see List of integrals of irrational functions). Then substitute your given length and starting point −x0 as L and a respectively, and try to solve for b. --CiaPan (talk) 12:13, 18 March 2010 (UTC)[reply]
Thank you, that all makes perfect sense to me: I'll end up with d=f(x0, x') which I can rearrange to solve for x'. I consulted List of integrals of irrational functions in the hope of finding the algebraic representation of the integral, but couldn't find it there: whilst calculus brings back fond memories of my university math classes, I fear that twenty five years of disuse has relegated me to the neophyte level. 78.245.228.100 (talk) 12:29, 18 March 2010 (UTC)[reply]
Hint: √(1 + 4x²) = 2√(x² + (1/2)²). -- Meni Rosenfeld (talk) 12:34, 18 March 2010 (UTC)[reply]
Got it (for the moment, I think). The integral I was looking for was in fact the very first line, taking a = 1/2. So I expand all of that out, call it f(x), and then try to rearrange to get an expression for x' in terms of f(x0) and d.
Thank you both for making me think about it (a tiny bit) rather than just giving me the solution. I'm in engineering where we typically delight in spitting out the answer; why is it that mathematicians almost universally prefer to make the questioner work a little? 78.245.228.100 (talk) 13:00, 18 March 2010 (UTC)[reply]
Because we want the world to appreciate how beautiful it all is. I nearly lost my to be wife by explaining some maths to her :) Dmcq (talk) 13:12, 18 March 2010 (UTC)[reply]
  • I think mathematicians know that almost nothing in maths is clear and obvious for everybody. So there is always room to make mistakes. If they give you some general directions, you'll be able to better understand your problem and possible ways to solve it (as well as similar problems in future). You will also be able to check your solution if it seems wrong. On the other hand if you're given a final answer, you could do nothing except use it. If it is wrong, or you misunderstand it, you fail to solve your problem.
  • Another possible reason is that asking the questioner to show some of his/her own work lets them recognize the questioner's skill level, and express their advice at an appropriate level of formalism.
  • Asking for some work also allows them to filter out those lazy pupils who want to get their homework done by someone else. At least I think so. CiaPan (talk) 13:27, 18 March 2010 (UTC)[reply]
Engineers place a high value on getting stuff done--they expect the questioner wants the answer with as little fuss as possible, so s/he can get onto the next thing for some practical purpose. Mathematicians are more in pursuit of understanding things and seeking mental stimulation for its own sake, so they tend to expect the questioner to also want that. Of course in any well-functioning culture, both approaches are necessary, and having everyone follow their own tendency seems to result in a pretty good mix. 66.127.52.47 (talk) 20:40, 18 March 2010 (UTC)[reply]
In my expression d=f(x')-f(-x0), I set x0 to 1 and got f(-x0) as -2.9366. Does this value have any meaning? (notice how engineers want their numbers to mean something?) 78.245.228.100 (talk) 13:34, 18 March 2010 (UTC)[reply]
Not really. As you probably know, an antiderivative is only defined up to an additive constant, so when taking integrals you can choose which constant to add. The formula in the list is for a particular choice of constant (which makes the expression the simplest). The value of f at any particular point will depend on this arbitrary constant. What matters is the differences between values, which don't depend on the constant. -- Meni Rosenfeld (talk) 14:22, 18 March 2010 (UTC)[reply]
My goodness. Now I have d = f(x′) − f(−x0), where f and x0 are as above. If I keep on trying to rearrange this will I ultimately end up with an algebraic expression for x′, or is this going to end up with an iterative solution? Perhaps I'm going about what I really want to do in an overly complicated way.78.245.228.100 (talk) 08:05, 19 March 2010 (UTC)[reply]
What I have is some values in the range umin to umax which I want to map to a different coordinate system vmin to vmax such that the entire range umin to umax is visible but the ones toward the center of the range seem bigger. My idea was to use ua-umin as the arc length and then work out the corresponding v, but this all suddenly seems like a lot of work.78.245.228.100 (talk) 08:28, 19 March 2010 (UTC)[reply]
Is there a reason to use specifically a parabola? If not, you can choose a transformation which will be much easier to work with; other possibilities exist, depending on the requirements. -- Meni Rosenfeld (talk) 11:50, 21 March 2010 (UTC)[reply]
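
If the algebra gets unwieldy, a numerical route is also an option. A sketch assuming SciPy is available: compute the arc length of y = x² from −x0 to a candidate x1 by quadrature, then root-find for the x1 giving the requested length d.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def arc_length(a, b):
    """Arc length of y = x^2 between x = a and x = b (a <= b)."""
    return quad(lambda x: np.sqrt(1.0 + 4.0 * x * x), a, b)[0]

def find_x1(x0, d):
    """Find x1 in [-x0, x0] so that the arc length from -x0 to x1 equals d."""
    return brentq(lambda x1: arc_length(-x0, x1) - d, -x0, x0)

x0 = 1.0
total = arc_length(-x0, x0)
print(find_x1(x0, total / 2))   # half the total length lands at x1 = 0 by symmetry
```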

Drive time commute

How long would it take to drive from Colorado to San Jose, Bolivia? --67.134.239.205 (talk) 12:45, 18 March 2010 (UTC)[reply]

This isn't really a maths question. Google Maps should be able to give you an idea (click "Get Directions" in the top left). However, you will need to be more precise - there are apparently two Colorados and five San Joses in Bolivia. --Tango (talk) 12:56, 18 March 2010 (UTC)[reply]
I tried Google maps. I'm looking for an estimate, really. Any part of Colorado, USA to any San Jose in Bolivia. My roommate and I watched an episode of South Park where the kids drove to Bolivia ("Getting Gay with Kids") and wondered if a drive like this is even possible. If so, how long would it take. --67.134.239.205 (talk) 15:14, 18 March 2010 (UTC)[reply]
Just a "back of the envelope calculation": It looks to be about 6000 miles by bus, which, at an average of 30 MPH, would take 200 hours, or about 8 days. Of course, keeping the bus moving most of the time would require driving in shifts. Note that this doesn't include customs stops at the various borders. StuRat (talk) 16:38, 18 March 2010 (UTC)[reply]
You're forgetting the (enormous) difficulties of getting a bus across the Darién Gap. Algebraist 17:37, 18 March 2010 (UTC)[reply]
Good point, that's probably why Google declined to give a time and distance estimate. StuRat (talk) 03:45, 19 March 2010 (UTC)[reply]
No, that's not why. Google Maps must only cover a certain area. I tried to get directions from Denver, Colorado to México City, México and it couldn't do it. It couldn't give directions from México City to Acapulco either. The Mexican Google Maps doesn't even have a cómo llegar (Get Directions) option. •• Fly by Night (talk) 13:59, 19 March 2010 (UTC)[reply]

Rank-Nullity

Hi. I am currently revising a course on Vectors and Matrices and am a bit confused by part of my notes. I have been given the example of a linear map T(x,y) such that T(x,y) = (x+y, x, y−3x, y), and it then goes on to say that the rank of T is 2 and that the nullity is 0, which would work by the Rank-Nullity Theorem. My problem is that I can't see how the rank is 2. It actually says in the question that it is a map from R2 to R4 so surely the rank is 4. But if so, how does this fit with the Rank Nullity? Thanks. 92.11.208.237 (talk) 20:12, 18 March 2010 (UTC)[reply]

See if you can write T as a matrix. It takes a vector in R2 to a vector in R4, so it should have 2 columns and 4 rows. Then find the rank and nullity of that matrix. (The rank really is 2 and nullity really is 0.) Staecker (talk) 20:36, 18 March 2010 (UTC)[reply]
Think of the image in 4-space of R2 under that transformation, or if it's easier to visualize, think of a comparable 3-d transformation instead of a 4-d one. What does that image look like? What is its dimensionality? Also, what does T's kernel (the set of vectors that T maps to zero) look like? Remember that the rank is just the dimension of T's image, and the nullity is just the dimension of T's kernel. 66.127.52.47 (talk) 20:47, 18 March 2010 (UTC)[reply]
A linear map to Rn has rank n if and only if it is surjective. Your map isn't. --Tango (talk) 21:28, 18 March 2010 (UTC)[reply]

The rank cannot be bigger than the dimension of the domain. The fact that it maps INTO a 4-dimensional space does not mean that it maps ONTO a 4-dimensional space. And it doesn't, so the rank is not 4. Your words "surely the rank is 4" prove that you don't understand what rank is. The rank is the dimension of the image space, not the dimension of the target space. For example, consider the map that takes every point in the 2-dimensional space to (0,0,0,0) in the 4-dimensional space. Its rank is 0, not 4. Michael Hardy (talk) 23:13, 18 March 2010 (UTC)[reply]

The set of all (x,y) such that T(x,y) is the zero vector in R4, is precisely the kernel of T. Since T(x,y)=(0,0,0,0) if and only if (x+y,x,y−3x,y)=(0,0,0,0), it follows that only the zero vector of R2 is in the kernel of T ((x+y,x,y−3x,y)=(0,0,0,0) implies that the second and fourth coordinates of (x+y,x,y−3x,y) are zero, in particular, and that therefore x=0 and y=0). From this we conclude that the kernel of T is trivial; that is, the map T is injective. By the rank-nullity theorem, r(T)+n(T)=dim(R2), where r(T) and n(T) are the rank and nullity of T respectively. Since the nullity of T is 0, as we just observed, the rank of T must be 2.

Another way to determine the dimension of the image of T (i.e. the rank of T), is to determine the images of all elements in a set of basis vectors for R2 under T (any set of basis vectors will do, but for simplicity's sake, we will choose the set of standard basis vectors of R2). Note that T(1,0)=(1,1,−3,0) and that T(0,1)=(1,0,1,1). Can you see that T(1,0) and T(0,1) are linearly independent? (Suppose k1T(1,0)+k2T(0,1)=(0,0,0,0) for some scalars k1 and k2. Then we have that (0,0,0,0)=k1T(1,0)+k2T(0,1)=k1(1,1,-3,0)+k2(1,0,1,1)=(k1,k1,−3k1,0)+(k2,0,k2,k2)=(k1+k2,k1,k2−3k1,k2). In particular, k1=k2=0 and linear independence is established. Alternatively, observe that two vectors in Euclidean space are linearly independent if and only if one is not a scalar multiple of the other, and that this is clearly the case for T(1,0) and T(0,1)) However, if T maps linearly independent vectors to linearly independent vectors, the dimension of its image should be the dimension of its domain, and as such, T has rank 2. Without necessarily applying the rank-nullity theorem, the fact that T has nullity 0 also follows from this fact (if T maps linearly independent vectors to linearly independent vectors, its kernel must be trivial, and in particular, its nullity must be 0).

Hope this helps. PST 01:10, 19 March 2010 (UTC)[reply]
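
A quick numerical cross-check of the answers above (illustrative only), writing T as the 4×2 matrix whose columns are T(1,0) and T(0,1):

```python
import numpy as np

# Columns are the images of the standard basis vectors: T(1,0) and T(0,1).
T = np.array([[ 1, 1],
              [ 1, 0],
              [-3, 1],
              [ 0, 1]])

rank = np.linalg.matrix_rank(T)
nullity = T.shape[1] - rank   # rank-nullity: rank + nullity = dim of the domain (2)
print(rank, nullity)          # 2 0
```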

March 19

Fractions

I was reading a math book and it said that if a/b = c/d, then (a+c)/(b+d) = a/b. Is this true? If so, why? --70.250.214.164 (talk) 01:29, 19 March 2010 (UTC)[reply]

It's true. I'm not sure this is the simplest way, but it works. The assumption a/b = c/d can be rewritten as

a/c = b/d

If you multiply both sides of this equality by cd, you get

ad = bc

Then, dividing both sides by ab, you get

d/b = c/a

and let us call this number k. Thus we have

c/a = d/b = k

so c = ak and d = bk. Then

(a+c)/(b+d) = (a+ak)/(b+bk) = a(1+k)/(b(1+k)) = a/b.

Michael Hardy (talk) 02:16, 19 March 2010 (UTC)[reply]

I think this is probably one of the cases where a proof doesn't really capture the intuition. Here's a visual way to understand it:
Think about these numbers as lengths of line segments. The assumption is that the ratio of a to b is the same as that of c to d, which is to say, that d can be obtained by stretching/shrinking c by the same factor that b can be obtained from a. Visualize that, and visualize joining the line segments a and c together (this new line segment will have length a+c), and the line segments b and d together (length b+d). Do you see why these joined segments also have to have the same ratio? If it helps, draw some line segments.
That demonstrates it for positive real numbers. --COVIZAPIBETEFOKY (talk) 02:54, 19 March 2010 (UTC)[reply]
Well, I consider Michael's explanation pretty intuitive—possibly it's a matter of what intuition each of us has. See the image below; it presents in graphical form what Michael wrote in algebraic form. The 'k' coefficient is the proportion ratio between the green and red triangles.

Generally we can say, if we add proportional values we get proportional sums. --CiaPan (talk) 08:32, 19 March 2010 (UTC)[reply]

Short proof: a/b=(a+c)/(b+d)⇔ a(b+d)=b(a+c) ⇔ ab+ad=ba+bc ⇔ ad=bc ⇔ a/b=c/d. Bo Jacoby (talk) 15:33, 19 March 2010 (UTC).[reply]

This, of course, is all true only after you add the prerequisite that b+d, b and d all be nonzero. --COVIZAPIBETEFOKY (talk) 18:28, 19 March 2010 (UTC)[reply]
That is pretty obvious for everybody who knows what division is, I suppose. If any of them were zero, some or all of the fractions would not make sense, so there would be no numbers to compare in the proportions. This is simply outside the scope of the question (outside its domain, so to speak). --CiaPan (talk) 08:41, 21 March 2010 (UTC)[reply]

"Half" of the real numbers

Resolved

Is there a set X ⊆ R such that m(X ∩ I) = m(I)/2 for every interval I, where m is Lebesgue measure? If so, is there a nice construction for it? —Bkell (talk) 02:43, 19 March 2010 (UTC)[reply]

I don't think there is, but I'm pretty rusty at this. Michael Hardy (talk) 03:12, 19 March 2010 (UTC)[reply]
Suppose such an X exists. Its complement, Y, has the same property. For any ε > 0 there's an open set S with X ⊂ S and m(S - X) < ε. Since S - X = S ∩ Y and S is the disjoint union of countably many open intervals, m(S - X) = m(S ∩ Y) = m(S)/2 so m(S) < 2ε. Then m(X) = 0, which leads to a contradiction. Rckrone (talk) 05:16, 19 March 2010 (UTC)[reply]
Wonderful --pma 06:42, 19 March 2010 (UTC)[reply]
Also, by assumption all points of X have density 1/2, so m(X)=0 as a.e. point of X has density 1; for the same reason m(Xc)=0, a contradiction.--pma 06:42, 19 March 2010 (UTC)[reply]

Ah, I see. Very nice arguments. Thank you. —Bkell (talk) 07:27, 19 March 2010 (UTC)[reply]

sum of fractional differences

When a sequence has nth term of the form 1/(n+a) − 1/(n+b),

what is the sum to n terms?
Thanks, --Wikinv (talk) 05:21, 19 March 2010 (UTC)[reply]

Assuming the sum starts at 1 and goes to n, we're adding up terms from 1/(1+a) to 1/(n+a) and subtracting terms from 1/(1+b) to 1/(n+b). Consider which terms are in common and therefore cancel out, and which terms are left over. Rckrone (talk) 05:30, 19 March 2010 (UTC)[reply]
Hmm... doing that leaves a few terms remaining at the beginning of the sequence. That would give me the sum to infinity, but when doing the sum to n terms there are a few terms at the end of the sequence that remain uncancelled, and I'm don't know how to get them in terms of n. Is there a formula for the sum to n terms given a and b?--Wikinv (talk) 05:50, 19 March 2010 (UTC)[reply]
Yeah, that's right. You'll have some terms left at the beginning and also some terms at the end. Let's assume for now that a < b. Then the ones at the beginning are all the terms from 1/(1+a) up to 1/b. The terms at the end are all the ones from -1/(n+a+1) up to -1/(n+b). That first set we can write as ∑_{k=a+1}^{b} 1/k. See if you can formulate a similar sum for the negative terms at the end. Rckrone (talk) 06:05, 19 March 2010 (UTC)[reply]
−∑_{k=n+a+1}^{n+b} 1/k ?

In which case the sum to n terms is ∑_{k=a+1}^{b} 1/k − ∑_{k=n+a+1}^{n+b} 1/k.

--Wikinv (talk) 06:26, 19 March 2010 (UTC)[reply]

That's correct. If you want, you can also simplify a bit more by phrasing it as ∑_{k=a+1}^{b} 1/(n+k), since then we can combine the two sums to get ∑_{k=a+1}^{b} (1/k − 1/(n+k)). Rckrone (talk) 06:43, 19 March 2010 (UTC)[reply]


Is this also true?: ∑_{k=1}^{n} (f(k+a) − f(k+b)) = ∑_{k=a+1}^{b} f(k) − ∑_{k=n+a+1}^{n+b} f(k) --Wikinv (talk) 07:14, 19 March 2010 (UTC)[reply]
Yeah, you can generalize the same argument used here for 1/x to any function f(x). Rckrone (talk) 07:58, 19 March 2010 (UTC)[reply]

Quickly glancing at this discussion, I don't see any links to telescoping series yet. Michael Hardy (talk) 18:39, 19 March 2010 (UTC)[reply]
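
A small numerical sanity check of the telescoping identity derived above, with arbitrary small values of a, b and n chosen for the demo (exact arithmetic via fractions):

```python
from fractions import Fraction

def direct(n, a, b):
    return sum(Fraction(1, k + a) - Fraction(1, k + b) for k in range(1, n + 1))

def closed_form(n, a, b):
    # sum_{k=a+1}^{b} 1/k  -  sum_{k=n+a+1}^{n+b} 1/k, assuming a < b <= a + n
    head = sum(Fraction(1, k) for k in range(a + 1, b + 1))
    tail = sum(Fraction(1, k) for k in range(n + a + 1, n + b + 1))
    return head - tail

print(direct(10, 2, 5) == closed_form(10, 2, 5))   # True
```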

Alternative notations for exponentiation

It struck me that the standard way we write exponentiation (and particularly scientific notation) uses an ordering and typography that de-emphasises what (at least from a scientific if not mathematical perspective) seems to be the most important parts. If, for example, we take the Avogadro constant of roughly 6.022 × 10^23 we're writing things in the order sign;significand;base;signOfExponent;exponent, and writing both the exponent and its sign in small letters - even though (again from a scientific perspective) they're much more important than the significand. If the Avogadro constant was 7.022 × 10^23 instead, that wouldn't make that much difference; if it was 6.022 × 10^24 that would make a huge difference. Our exponentiation#History of the notation section is pretty thin, but it seems to show that things have always been denoted this way (since people cared about writing down the mathematical concept of exponentiation formally). One might think it would be smarter, at least for scientific uses, to write the same number down in a way that emphasises the exponent, say [23]6.022 or 23E6.022. I appreciate that we're stuck with the notation we have, and that mathematicians (and particularly number theorists) won't necessarily agree that the exponent is necessarily the most "important" bit, but has any serious mathematician or scientist used or proposed another, more exponent-emphatic, notation? -- Finlay McWalterTalk 13:47, 19 March 2010 (UTC)[reply]

How about this: 6.022 × 1023 ? StuRat (talk) 14:12, 19 March 2010 (UTC)[reply]
A good source for mathematical notations that have been used in the past is Florian Cajori's work A history of mathematical notations. I'm sure he has a section on exponentiation in there. But it dates from the 1920s, I believe, so you won't get information about proposed replacements for standard exponential notation since then. —Bkell (talk) 14:14, 19 March 2010 (UTC)[reply]
That looks just the ticket, thanks. It does indeed have a section on exponentiation (para 481) but unfortunately the Google Books preview omits that. I'll see if the library has it. -- Finlay McWalterTalk 16:57, 19 March 2010 (UTC)[reply]

Not only ordering and typography, but also pronunciation, de-emphasises the important parts. Some people use a logarithmic unit for big or small positive numbers to emphasise the important parts. Avogadro's number indicates that a mole is 238 dB greater than a molecule. Bo Jacoby (talk) 15:54, 19 March 2010 (UTC).[reply]

Yes, that occurred to me, and I'd presume that an exponent-first notation might encourage people to say it differently, like "23 raises 6.022". But that left->right reading order isn't universal, and I was pondering whether the de-emphasis of the exponent might not be felt so much in a r->l language. Our mathematics in medieval Islam article doesn't go into this sufficiently, and I note that Abū al-Hasan ibn Alī al-Qalasādī#Symbolic algebra presents a polynomial written in l->r order; so I don't know if medieval arab mathematicians still wrote algebra in l->r despite writing text r->l (which would seem incongruous). -- Finlay McWalterTalk 16:50, 19 March 2010 (UTC)[reply]

Automorphisms of the Riemann sphere

Why is complex conjugation not an automorphism of the extended complex plane? It's an automorphism of the complex plane, and if we define the conjugate of the point at infinity to be itself, I don't see why this isn't an isomorphism from the extended complex plane to itself. Isn't this just a reflection of the Riemann sphere in the great circle which the real axis is mapped to? Thanks, Icthyos (talk) 14:01, 19 March 2010 (UTC)[reply]

An automorphism is a structure preserving permutation. Thus, whether something is or is not an automorphism is very sensitive to the choice of the structure, it's not just a property of the domain as a set. What structure do you impose on the Riemann sphere and on the complex plane, respectively?—Emil J. 14:10, 19 March 2010 (UTC)[reply]
Ah, so complex conjugation is an automorphism of the complex numbers as a field, but not of the Riemann sphere because it is not a holomorphic function, and the Riemann sphere is a Riemann surface, so any automorphism must be biholomorphic? Thanks! Icthyos (talk) 15:18, 19 March 2010 (UTC)[reply]
Yeah, an automorphism is an isomorphism from one space to itself; so it is necessarily bijective. I think that conformal mappings of the Riemann Sphere are, or at least were, what people are most interested in. For example, the extended complex plane is conformally equivalent to a sphere (hence the Riemann Sphere as an analogue of the extended complex plane). A map of the extended complex plane onto itself is conformal if and only if it is a Möbius Transformation. •• Fly by Night (talk) 15:30, 19 March 2010 (UTC) Actually, I just found this: Riemann_sphere#Automorphisms. •• Fly by Night (talk) 15:42, 19 March 2010 (UTC)[reply]
To see the Möbius Transformations in action the take a look at this. •• Fly by Night (talk) 15:47, 19 March 2010 (UTC)[reply]
...although complex conjugation is an antiholomorphic function, as is the composition of complex conjugation with a Möbius transformation, such as inversion in a circle - these mappings preserve the magnitude of angles but reverse their sense. 86.136.246.229 (talk) 12:53, 20 March 2010 (UTC)[reply]

q test not for idiots ... (a q test table for a 99.9999% confidence level?)

I don't know why I can't find a statistical basis for the tables presented for the Q test. Just what functions/distributions are the Q tests describing? I'm trying to look up the function because I can't find any tables for a 99.9999% confidence level. John Riemann Soong (talk) 15:24, 19 March 2010 (UTC)[reply]

The Dean and Dixon paper cited in the Q test article says "In this paper all conclusions are based on a normally distributed population." Of course, most "normal" distributions in the real world are only approximately normal, so doing anything at the 99.9999% confidence level is tricky. See for example kurtosis risk. 66.127.52.47 (talk) 23:14, 19 March 2010 (UTC)[reply]
Or Black swan theory for a more colourful version. Dmcq (talk) 10:19, 20 March 2010 (UTC)[reply]
Can I ask why you need such a high confidence level? At that confidence, there is a significant chance of accepting an outlier when you shouldn't - a Type II error. Zain Ebrahim (talk) 10:18, 20 March 2010 (UTC)[reply]
I think the whole idea of rejecting outliers is a bad one and the statistics I've seen supports normally keeping them. It's the sort of thing that kept the ozone hole hidden. Dmcq (talk) 10:25, 20 March 2010 (UTC)[reply]
Because the Q-value I'm getting is so high that the confidence level I will get should be high as well. John Riemann Soong (talk) 17:42, 20 March 2010 (UTC)[reply]
I think you're looking for a p value then, not the confidence level (which is independent of the data). Usually people just say the p-value was less than some significantly small level (i.e. you could just say p < 0.00001) to indicate the result of the test was significant. Zain Ebrahim (talk) 19:07, 20 March 2010 (UTC)[reply]

rack

Calculate how to make a pinion sized to a rack: if I have a rack with a tooth spacing of .050, how would I calculate the diameter of the pinion and teeth to match the rack? I am a retired die maker and I have no use for this, but I can't figure this out. I would like a nice simple explanation. I have a Ford shop theory book, but I can't figure out which formula I should use. Any help would be appreciated, or where I could go to ask. Thanks, and have a nice day. ---------ted----email removed by User:Coneslayer to prevent spam —Preceding unsigned comment added by 64.131.46.188 (talk) 16:43, 19 March 2010 (UTC)[reply]

The complication is that, unlike the rack, the teeth on the pinion don't all point in the same direction, as they are mounted on a circle. But as a first approximation, a pinion of diameter d will have a circumference πd which can accommodate πd/0.050 teeth. If you want the diameter d for a given number of teeth n, it will be d = 0.050n/π. For any other tooth spacing, just replace the 0.050 figure.→86.155.185.122 (talk) 20:38, 19 March 2010 (UTC)[reply]
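
Putting the approximation above into a couple of lines of arithmetic (illustrative only; real gear design also involves module or diametral pitch, pressure angle, and so on):

```python
import math

tooth_spacing = 0.050            # rack tooth spacing (circular pitch)
teeth = 20                       # desired number of pinion teeth

pitch_diameter = tooth_spacing * teeth / math.pi
print(round(pitch_diameter, 4))  # ~0.3183 for a 20-tooth pinion with 0.050 spacing
```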

March 20

Integration by substitution

Hey, it's me again. I read the article about Trigonometric substitution#Integrals containing a² − x² but there is one part that it was very vague on. "For a definite integral, one must figure out how the bounds of integration change. For example, as x goes from 0 to a/2, then sin(θ) goes from 0 to 1/2, so θ goes from 0 to π/6. Then we have" How does one figure out how the bounds of integration change? Thanks, and sorry if this question is a lot simpler than the ones you usually get! ~ ~ ~ ~ —Preceding unsigned comment added by 76.230.225.102 (talk) 03:26, 20 March 2010 (UTC)[reply]

Well in this case you have x = a sinθ. The bounds 0 to a/2 indicate that we are starting at x = 0, and ending at x = a/2. So if we want to integrate in terms of θ instead, we need to find what θ is in those places. Using our formula for converting x into θ, we find that when x = 0 we have θ = 0 and when x = a/2 is when θ = π/6, so integrating from where x = 0 to x = a/2 is the same as integrating from where θ = 0 to θ = π/6. Rckrone (talk) 04:16, 20 March 2010 (UTC)[reply]
Using Iverson brackets makes things easier. Substituting x=g(u), where g is an increasing differentiable function, gives
∫ f(x)·[a < x < b] dx = ∫ f(g(u))·g′(u)·[a < g(u) < b] du. Bo Jacoby (talk) 16:34, 20 March 2010 (UTC).[reply]

We had

x = a sin θ

so as x goes from 0 to a/2, then a sin θ goes from 0 to a/2. Therefore sin θ goes from 0 to 1/2. You need to remember some trigonometry: sin 0 = 0 and sin(π/6) = 1/2. Michael Hardy (talk) 16:52, 20 March 2010 (UTC)[reply]
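
A numerical spot check of the bound change described above, with a = 2 chosen arbitrarily: integrating sqrt(a² − x²) in x from 0 to a/2 agrees with integrating a²·cos²θ in θ from 0 to π/6 after the substitution x = a·sin θ.

```python
import numpy as np
from scipy.integrate import quad

a = 2.0
lhs = quad(lambda x: np.sqrt(a**2 - x**2), 0.0, a / 2)[0]
rhs = quad(lambda t: a**2 * np.cos(t)**2, 0.0, np.pi / 6)[0]
print(lhs, rhs)   # the two values agree
```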

Interest computation- when compounding frequency is lesser than pay out frequency

How should interest computation be done when the interest is required to be paid out earlier than the contracted pay out date, considering compound interest? For example, a fixed deposit is placed for 1 year for $10000 with 5% interest p.a., compounded monthly. The amount (principal+interest) is to be paid at the end of the term. Now if the interest so compounded was to be paid out quarterly, how should the discounting of interest be done? What is the formula for doing this calculation? —Preceding unsigned comment added by Motuammu (talkcontribs) 08:35, 20 March 2010 (UTC)[reply]

The accumulation after 3 months (a quarter) will be 10,000 × (1 + 0.05/12)^3, but I think we need more details. What do they pay out at each quarter? If it's the increase in the account, it will be the above number less 10,000. In that case, after the payout the account goes back to 10,000. Zain Ebrahim (talk) 10:11, 20 March 2010 (UTC)[reply]
Dividing 5% by twelve rather than taking the twelfth root of 1.05 is only an approximation, although we don't know exactly how the bank etc does its calculations. 92.29.149.119 (talk) 20:55, 21 March 2010 (UTC)[reply]
No, that's wrong. If the interest is compounded m times per period, then the annual rate per period compounded m times is the effective rate for periods multiplied by m. This is the definition of compounding multiple times per period. Zain Ebrahim (talk) 21:45, 21 March 2010 (UTC)[reply]
That was not how my credit-card debt used to be calculated each month. In any case, the exact method of calculation is going to vary from bank to bank - they are probably going to choose a method which works most in their favour. 84.13.47.185 (talk) 16:04, 22 March 2010 (UTC)[reply]
I think the article Annual percentage rate will help the OP. The nominal APR, vs. the effective APR, represents the spread between equivalent interest rates that are either compounded or single-payout at end of term. Also keep in mind that some financial contracts penalize pre-term withdrawal. Nimur (talk) 11:37, 20 March 2010 (UTC)[reply]
Reading the OP's question above, you could use a monthly formula as the basis for both calculations. First calculate the monthly interest rate - it will be the twelfth root of 1.05, minus 1. I make it about 0.4 percent. So each month you get about an extra 0.4% of your principal added to it, making the principal slightly bigger. You use this revised principal in your calculations for the next month. In the first case you do this (without withdrawing any interest) for twelve months. In the second case you do this for only three months, then withdraw all the interest, so you are back with your original principal, and repeat this procedure four times. Note that the total amount of interest you get in the second case will be less than what you would get at the end of the year in the first case, as you've spent the interest rather than leaving it in the account to grow, even though the interest rate in the two cases is the same.
If you are asking this question because you want to check if a bank is correct or not, then be aware that the details of how they do their calculations can vary considerably, and can be difficult to understand (continuous compounding for example). So as someone suggested above, best to use the APR of each account to compare them. APR is carefully standardised in the EU and UK, although in non-EU countries such as the USA that is not so true. Similarly the interest rate they tell you is usually substantially different from the APR. 84.13.41.17 (talk) 15:10, 21 March 2010 (UTC)[reply]
See above. The effective rate per month is 5%/12. Zain Ebrahim (talk) 21:47, 21 March 2010 (UTC)[reply]
As I wrote above, that was not how my credit card debt used to be calculated every month. But the custom and legislation regarding the advertising and calculation of interest rates is likely to vary from country to country. So best to use APR to make comparisons, although from reading the APR article, even that does not seem too reliable in the USA. 84.13.47.185 (talk) 16:10, 22 March 2010 (UTC)[reply]
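For the original question, here is a small, purely illustrative Python sketch comparing the two situations (everything paid at the end of the term versus the accrued interest withdrawn every quarter). It uses the nominal "5%/12 per month" convention discussed above; swap in (1.05)^(1/12) − 1 for the other convention, and remember that, as noted, real banks differ in how they actually do this:

def end_of_term(principal, r_month, months=12):
    # accumulate for the whole term, pay everything at the end
    return principal * (1 + r_month) ** months

def quarterly_payouts(principal, r_month, months=12):
    # withdraw the accrued interest every three months, resetting the balance
    balance, paid = principal, 0.0
    for m in range(1, months + 1):
        balance *= 1 + r_month
        if m % 3 == 0:
            paid += balance - principal
            balance = principal
    return paid

r = 0.05 / 12                                  # nominal convention
print(end_of_term(10000, r) - 10000)           # about 511.6 of interest at year end
print(quarterly_payouts(10000, r))             # about 502.1 of interest paid out over the quarters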

Could someone learn all the notation from all the branches of mathematics in a year?

Say someone was talented at math, but didn't pursue it after high school. They often find mathematical notation on Wikipedia and in some math papers that are of interest to them, but even where they could understand the concepts, they don't yet know them and certainly don't know the notation. If, fed up with ever seeing alien-looking notation that is "Greek to them", they decided to spend a year learning enough math to understand all the notations in every branch of math, could they do it? Or is math now so expansive, with so many branches and sub-branches, that such an endeavor would be akin to not wanting to see a word they didn't know when reading any major language - which is obviously a totally quixotic undertaking, for which, if it can be completed at all, one year is clearly not sufficient? Thank you. 84.153.237.214 (talk)

No. To understand the notation, you need to understand at least some of the maths it represents, and you could never learn all of maths, even to a very low level, in a year (it would be a challenge in a lifetime). There is also the problem of new notation being invented - every time a mathematician comes up with some new maths they invent new notation in order to write it down. I expect more notation is invented in a year than you could familiarise yourself with in a year. There is also the issue of how widespread a notation needs to be before you learn it - do you want to learn notation that is only used in one paper (which is probably most notation). --Tango (talk) 19:58, 20 March 2010 (UTC)[reply]

No, I (the OP) meant standard notation - notation that many, many (tens of thousands of) mathematicians can understand. The reason I say tens of thousands, and not the millions of people who understand more basic mathematical notation, is that I am leaving room for a degree of specialization. However, I am intrigued by your assertion that someone would have to understand the maths it represents to understand the notation. This seems very false to me. Can you give examples? For example, I studied some college calculus, and I can understand integral and differential notation. However, I failed the course, and I can't take the integral of or differentiate anything at all. So, by my standard, I understand the symbols used in that course, which probably took me less than two hours to achieve. The vast, vast majority of the course was learning METHODS -- HOW to take derivatives and how to integrate, including many formulas and laws one had to memorize, and so forth. The actual notation itself was very minor. Actually, this example goes lower; we can take it down to algebra and arithmetic. It's super-easy to understand what the log notation is, and then the next 90% of the high school algebra class on logs is about various laws and methods and how to use them. Or a very basic example: the factorial. Every mathematician knows what it means, it takes 10 seconds to learn, it gets used in high school for permutations, and then not so much during the college classes I took. Maybe other math USES it, but there is never anything more to learn about the symbol ! when it means "factorial". The same goes for elementary school, when we learn how to turn a(b+c) into a simple sum (no parentheses). It's a method, and 99% of math seems to be about methods, techniques, proofs, etc. Very little is notation. This is my impression, but I welcome your copious counterexamples if I am wrong. Thank you. 92.229.14.140 (talk) 20:22, 20 March 2010 (UTC)[reply]

When you learn a topic in math there are typically new structures that are introduced, and there are properties of those structures and relationships between those structures that you learn. Knowing the notation for a particular structure is pretty tied up with knowing what the structure is. For example, if you know what lim_{x→a} f(x) means, then you probably know what a limit is, and if you know what a limit is then you've doubtlessly learned what the notation means. I guess theoretically you could go about learning all the structures that are studied in various branches of math without learning any properties, but (a) you'd still have to learn a lot of math in the process and (b) I don't know why anyone would want to do that, since the structures aren't very meaningful or interesting by themselves. Typically the structures are motivated by the properties that people want.
Just learning all the names is even more useless. What good is it to know that ⊗ is used to mean "tensor product" if you don't know what a tensor product is?
On the other hand, I think you could make the point that there is a basic mathematical canon that gets used everywhere, like set notation and a few other things. That isn't too hard to get a handle on, as any course that introduces proof-based math will have to get that stuff out of the way pretty quickly. Rckrone (talk) 21:55, 20 March 2010 (UTC)[reply]
OP, you're right that it's very easy to know what notations mean while not being able to use them at all. You might say you can understand adjoints in categories (search for adjoint functor), but be unable to work out, or even comprehend, a trivial question that's asked about them. Math isn't about learning notation and looking cool when you write it; when you actually fully understand the notation, you'll feel the "cool" symbols are just like the letters a, b, c. When you are more experienced you'll know it normally takes 3 days to understand 2 lines of a really abstract/strange definition. You need to think and have lots of mental images in your head to be comfortable with it. So, if you just want to understand the symbols, I think you can do it for a particular subject in a month. But you won't understand even the most trivial remarks about it. Money is tight (talk) 06:49, 21 March 2010 (UTC)[reply]
If you're trying to become a mathematical typist (using TeX) then there are some cheat sheets with the names of all the symbols, and finding the right ones isn't that hard with a little bit of practice. It's more work to learn the weirdness of the software, and writing macros requires some programming-like skill. But friends of mine have done it without really knowing much math, and it can pay pretty well once you build up a client list. I can't think of any other reason to be interested in mathematical notation but not in the actual mathematics. 66.127.52.47 (talk) 09:10, 21 March 2010 (UTC)[reply]
If you want to read things and understand what they mean, it would not be possible to learn the meaning of all the commonly used mathematical symbols and terminology in a year. On the other hand, that's a bit like saying it's not possible to watch every movie ever made in a year. If you did devote an entire year to watching and analyzing a well-selected list of movies, you would come out with both a much deeper appreciation for moviemaking, and a head bursting with new experiences and ideas. If you devoted an entire year to learning a few well-selected areas of mathematics, you would end up knowing vastly more than you do now. What things have you been trying to read that you were not able to? Black Carrot (talk) 23:33, 21 March 2010 (UTC)[reply]

Given a relation R:

  1. Is there a shorter way to express the idea that A ◁ R = R ▷ B?
  2. Is there a shorter way to express the idea that both A ◁ R is an injection and A ◁ R = R ▷ B?
  3. Is there a shorter way to express the idea that A ◁ R is a function onto B?

HOOTmag (talk) 19:16, 20 March 2010 (UTC)[reply]

What does the triangle (◁) stand for? --pma 20:59, 20 March 2010 (UTC)[reply]
Yeah, I didn't understand the notation either. It seems to be Z notation. So A ◁ R means the relation resulting from the restriction of the domain of R to A, and R ▷ B means the relation resulting from the restriction of the codomain of R to B. —Bkell (talk) 21:05, 20 March 2010 (UTC)[reply]
Oh, I guess these symbols are described in the Restriction (mathematics) article. Whaddya know. —Bkell (talk) 21:07, 20 March 2010 (UTC)[reply]
(ec) Are you considering the domain and codomain to be part of the relation, or are you considering the relation to be just the graph? (See Binary relation#Formal definition). If the domain and codomain are part of the relation, then in general A ◁ R = R ▷ B will be false, because the domain and codomain of A ◁ R will be different from those of R ▷ B. If the domain and codomain are not part of the relation, then what does it mean to say that A ◁ R is a bijection? Surjectivity makes no sense unless a codomain is specified. So do you just mean that A ◁ R is injective? —Bkell (talk) 21:04, 20 March 2010 (UTC)[reply]
Yes, the domain and codomain are not part of the relation. See again the new version of my question above. HOOTmag (talk) 21:21, 20 March 2010 (UTC)[reply]
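If it helps to see the Z-style restrictions concretely, here is a small Python sketch that treats a relation simply as a set of ordered pairs; it only illustrates the definitions discussed above, and the identifier names are made up:

def domain_restrict(A, R):
    # A ◁ R: keep only the pairs whose first component lies in A
    return {(x, y) for (x, y) in R if x in A}

def range_restrict(R, B):
    # R ▷ B: keep only the pairs whose second component lies in B
    return {(x, y) for (x, y) in R if y in B}

R = {(1, 'a'), (2, 'b'), (3, 'a'), (4, 'c')}
A = {1, 2, 3}
B = {'a', 'b'}

print(domain_restrict(A, R) == range_restrict(R, B))  # True for this particular example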

March 21

Finding the nth term of a sequence given a bunch of terms (statistics question)

Is there a technique which allows you to find the nth term, or an approximation of the nth term, of a sequence, given a bunch of terms, or approximations thereof, in that sequence? The technique must give a good approximation regardless of the function for the nth term (i.e. not linear regression, which is completely unusable for the more complex sequences).

For instance, the first 5 terms of a sequence are 2, 5, 10, 17, 26 - Find the nth term, or an approximation thereof (this is {n2+1} btw).--220.253.247.165 (talk) 06:27, 21 March 2010 (UTC)[reply]

There can't be any general method to do that, but the OEIS is an online database of a lot of interesting sequences, so you can enter your terms into it and it will tell you what sequences match. Your example is (sequence A002522 in the OEIS). 66.127.52.47 (talk) 09:14, 21 March 2010 (UTC)[reply]
(ec) See extrapolation. If you know the terms exactly, and you have reason to believe it's a polynomial sequence, you can use finite difference methods to find a formula. For an introduction to one such method, see Finding a formula for a sequence of numbers. This method won't work if the sequence isn't given by a polynomial, if you don't have enough terms (you need at least one more term than the degree of the polynomial), or if the terms aren't known exactly.
Of course you should recognize that any attempt at extrapolation from a limited number of data points may produce a totally incorrect or unrealistic formula, even if it matches the given numbers exactly. Also, there is not a unique formula for a finite sequence of numbers. Any finite sequence can be extended to a "formulaic" infinite sequence of numbers in infinitely many ways. For example, n² + 1 is not the only formula that gives 2, 5, 10, 17, 26, …; another formula that gives a sequence that starts the same way is
n² + 1 + (n − 1)(n − 2)(n − 3)(n − 4)(n − 5).
Bkell (talk) 09:16, 21 March 2010 (UTC)[reply]
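For the polynomial case described above, here is a short Python sketch of the finite-difference method. It assumes the terms are exact and come from a polynomial, which are exactly the caveats already given, and the function name is just illustrative:

from math import comb

def extend_by_differences(terms, extra):
    # Build the difference table until a row is constant, then use Newton's
    # forward-difference formula a(n) = sum_k C(n, k) * (k-th difference at 0).
    table = [list(terms)]
    while any(x != table[-1][0] for x in table[-1]):
        row = table[-1]
        table.append([b - a for a, b in zip(row, row[1:])])
    firsts = [row[0] for row in table]
    return [sum(comb(n, k) * firsts[k] for k in range(len(firsts)))
            for n in range(len(terms) + extra)]

print(extend_by_differences([2, 5, 10, 17, 26], 3))  # [2, 5, 10, 17, 26, 37, 50, 65]

Here the recovered formula is n² + 2n + 2 with the first term at n = 0, which is the same sequence as n² + 1 starting at n = 1.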

Arc length extension

What would the extension of the arc length formula to three dimensions (giving the area of a region of a surface) look like? 149.169.212.68 (talk) 09:25, 21 March 2010 (UTC)[reply]

See Surface area. For a function z = f(x, y), the analogue of the arc length formula is A = ∬ √(1 + (∂f/∂x)² + (∂f/∂y)²) dx dy over the region in question. -- Meni Rosenfeld (talk) 10:14, 21 March 2010 (UTC)[reply]

Probability question

Three distinct vertices are chosen at random from the vertices of a given regular polygon with (2n+1) sides. All such choices are equally likely, and the probability that the centre of the given polygon lies in the interior of the triangle determined by these three chosen points is 5/14.

Q. No. 1 The number of diagonals of the polygon is equal to (a) 14 (b) 18 (c) 20 (d) 27

Q. No. 2 The number of points of intersection of the diagonals lying exactly inside the polygon is equal to (a) 70 (b) 35 (c) 126 (d) 96

Q. No. 3 Three vertices of the polygon are chosen at random. The probability that these vertices form an isosceles triangle is (a) 1/3 (b) 3/7 (c) 3/28 (d) None of these —Preceding unsigned comment added by Prathamesh D T (talkcontribs) 13:10, 21 March 2010 (UTC)[reply]

This is an obvious homework Q. While we don't just do your homework for you, if you show us your work or at least tell us what approach you'd take, we can tell you if that's right or not. StuRat (talk) 13:59, 21 March 2010 (UTC)[reply]

I don't necessarily mind seeing a homework question here, but why doesn't the poster ask us his own questions about it rather than just doing stenography? Michael Hardy (talk) 23:11, 21 March 2010 (UTC)[reply]

How to copy equations and other mathematical terms from Wikipedia to MS Word?

I need your kind help to copy the equations and any other maths terms from Wikipedia to a Word document —Preceding unsigned comment added by 117.197.184.148 (talk) 13:18, 21 March 2010 (UTC)[reply]

It depends on the level of math-awareness of your browser. The source of such a formula image is TeX math code; the result is (for me) an inline Portable Network Graphics image. You should be able to just drag that out of the rendered wiki page and into your document (or into some folder, and then use "Insert->Image" in Word). Alternatively, you can recreate the formulas in Word's formula editor. The typesetting will suck, but the rendering of the typeset formula will be better. Or you can use a small helper program (I use LaTeXiT on the Mac, but there are similar tools for Windows, I've been told) to handle the source code. Of course, doing anything complex (like writing a text) in Word is painful, and writing a text with serious maths in it is unbearable. It pays to learn LaTeX to escape this pain... --Stephan Schulz (talk) 13:39, 21 March 2010 (UTC)[reply]
If all else fails, you can copy the image. This will leave you with a bitmap of the formula, instead of the formula itself. The advantage is that it's quick, easy, and accurate. The disadvantage is that it can't easily be edited (other than for aspect ratio and scale). Under Firefox, I right click on the formula, select "Copy Image", go to Word, and do an Edit + Paste. StuRat (talk) 13:51, 21 March 2010 (UTC)[reply]
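As an illustration of the "source is TeX" route: opening the page in the edit view shows wiki markup like the following for each formula (this particular integral is just an example), and that TeX code is what helper tools such as LaTeXiT take as input:

<math>\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}</math>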

Formula to undistort the image from a shiny sphere

These images are examples: http://www.flickr.com/photos/ch4os1337/400399785/ http://www.flickr.com/photos/stuart100/3096435218/ http://www.flickr.com/photos/mag3737/2229618975/ What formula could I use to undistort/distort the image so that it looked like a reflection from a flat mirror rather than a spherical one? Thanks. 84.13.41.17 (talk) 15:25, 21 March 2010 (UTC)[reply]

See map projection - you have quite a lot of choice! 94.168.184.16 (talk) 16:55, 21 March 2010 (UTC)[reply]

I don't think that is the answer - map projection is about representing the surface of a sphere as a flat surface. This is about undoing the transformation that a convex mirror makes to what it reflects. 92.29.149.119 (talk) 20:06, 21 March 2010 (UTC)[reply]

It is described in a very different context, but I think this is relevant, as it describes the reflection of a point in a sphere. Integrating over all the points in the object plane, you should get the shape of the image plane; the relationship between the two should allow you to flatten out your images (with a fair bit of grunt work). —Preceding unsigned comment added by 92.22.125.66 (talk) 21:27, 21 March 2010 (UTC)[reply]
I do think you will run into a problem of having around 180° visible on the surface of the sphere, which can't be flattened out without major distortions or rips, just like the spherical map projection problem. Perhaps it could be changed into a VR model where you rotate the camera to look around (this would be like you were in the center of the sphere looking out), although the resolution of the image would go down near the edges. StuRat (talk) 00:48, 22 March 2010 (UTC)[reply]
Some pixels in the transformed image will be stretched bigger than they were in the original image, but there should not be any rips. 84.13.47.185 (talk) 15:58, 22 March 2010 (UTC)[reply]
That would be distortions. Those can be minimized by strategically adding rips. StuRat (talk) 17:36, 22 March 2010 (UTC)[reply]
You seem stuck on the idea that it's like a map projection, which it is not; it is more analogous to the distortion you get with a lens. 78.149.193.98 (talk) 20:41, 22 March 2010 (UTC)[reply]

The grunt work for me would include several years of studying maths unfortunately, as I stopped studying it when I was 16. This applet illustrates the problem http://www.phys.ufl.edu/~phy3054/light/mirror/applets/convmir/Welcome.html - you see from the top of the red line, and the problem is to transform the image into what you would see from the top of the blue line. If that link does not work, use this and click Convex Mirrors: http://www.phys.ufl.edu/~phy3054/light/mirror/applets/Welcome.html 92.29.149.119 (talk) 21:49, 21 March 2010 (UTC)[reply]

It's not years of math, just some high school geometry. The ray tracing article might help. 66.127.52.47 (talk) 04:08, 22 March 2010 (UTC)[reply]

Maybe someone could help me figure out a formula for transforming the geometry given in the applet above. I would not be able to understand or implement matrix methods. Thanks 84.13.47.185 (talk) 15:58, 22 March 2010 (UTC)[reply]
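In case a concrete starting point helps: if you are willing to assume the camera is far enough away that the view of the ball is roughly orthographic, and that the ball is a perfect mirror sphere, then the standard "light probe" unwrapping is this: for each pixel of the ball image compute the sphere's surface normal there, reflect the viewing direction in that normal, and the reflected direction tells you which part of the surroundings that pixel shows. A rough Python sketch of just that mapping (the function name is made up, and this is a sketch under those assumptions, not a polished tool):

import math

def reflected_direction(u, v):
    # u, v in [-1, 1]: pixel position measured from the ball's centre in units
    # of the ball's radius, with the camera looking along the -z axis.
    r2 = u * u + v * v
    if r2 > 1.0:
        return None                              # outside the ball
    nx, ny, nz = u, v, math.sqrt(1.0 - r2)       # surface normal at that pixel
    # reflect the view direction d = (0, 0, -1) in the normal: r = d - 2(d.n)n
    dot = -nz
    return (-2 * dot * nx, -2 * dot * ny, -1 - 2 * dot * nz)

print(reflected_direction(0.0, 0.0))   # centre of the ball: (0, 0, 1), looking back at the camera
print(reflected_direction(1.0, 0.0))   # rim of the ball: roughly (0, 0, -1), looking directly away

Turning those directions into a flat picture still leaves you to choose a layout (a panorama, or the view a flat mirror would have given), and resampling the photo accordingly is where the remaining grunt work lies.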

March 22

Statistics question: Estimating a probability distribution given sample trials

Is there a technique for estimating the probability distribution of a continuous random variable given the results of many trials?

For instance, a continuous random variable was sampled 10 times, and the results were 4.03, 1.99, 3.2, 3.119, 4.21, 0.87, 4.14, 2.02, 3.324, 4.39 - Find the probability distribution, or an approximation of the probability distribution.

The article on recursive Bayesian estimation originally looked promising, but it may not be relevant.--220.253.247.165 (talk) 09:50, 22 March 2010 (UTC)[reply]

There are many techniques for this - choosing one depends on the particular application. If you know absolutely nothing about the distribution, you can't do much better than the empirical cumulative distribution function. If you assume the density function is smooth, then a good way to estimate it is kernel density estimation. If you assume the distribution is from some parametric family, you can use the method of moments or preferably, the MLE. -- Meni Rosenfeld (talk) 11:17, 22 March 2010 (UTC)[reply]
Wouldn't it make sense to plot it first, then try to determine the type of distribution by inspection ? If I plot the ranges 0-1, 1-2, 2-3, 3-4, and 4-5, I get this:
  4^                 *
  3|             *
n 2|
  1| *   *   *
  0+------------------->
    0-1 1-2 2-3 3-4 4-5
            range
I also notice data clusters around 2, 3.2, and 4.2, although more data points are needed to confirm this. StuRat (talk) 12:58, 22 March 2010 (UTC)[reply]
Compute the cumulant generating function of the sample. Bo Jacoby (talk) 18:13, 22 March 2010 (UTC).[reply]
Wouldn't that just give the empirical distribution? -- Meni Rosenfeld (talk) 18:25, 22 March 2010 (UTC)[reply]
Yes, but the cumulants of the sample approximate the cumulants of the population. See also multiset. Bo Jacoby (talk) 18:44, 22 March 2010 (UTC).[reply]
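In case it is useful, here is a minimal Python sketch of two of the approaches mentioned above, applied to the ten numbers in the question: SciPy's gaussian_kde for a smooth non-parametric density estimate, plus the sample mean and standard deviation for use if some parametric family (a normal distribution, say) is assumed:

import numpy as np
from scipy.stats import gaussian_kde

data = np.array([4.03, 1.99, 3.2, 3.119, 4.21, 0.87, 4.14, 2.02, 3.324, 4.39])

kde = gaussian_kde(data)                 # kernel density estimate of the pdf
xs = np.linspace(0.0, 5.5, 12)
print(np.round(kde(xs), 3))              # estimated density at a grid of points

print(data.mean(), data.std(ddof=1))     # sample moments for a parametric fit

With only ten observations any of these estimates will be very rough, which is consistent with the point above that more data points would be needed to confirm the apparent clusters.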

Iterations of a multiplicative function

For those who are interested in such things, here is an oddment from recreational number theory that has been puzzling me (not homework, and not a competition problem either !).

If the prime factorization of a positive integer n is n = p1^a1 · p2^a2 · … · pk^ak,

define the function f(n) as

Then f is a multiplicative function, although not completely multiplicative. We can iterate f; for example:

As the above sequence shows, f(n) may be less than, greater than or equal to n. However, the second iterate f(f(n)) always seems to be less than or equal to n.

Can you either prove that f(f(n)) ≤ n for all n or find a counterexample such that f(f(n)) > n? Gandalf61 (talk) 11:10, 22 March 2010 (UTC)[reply]

If p is an arbitrary prime number, then . However, if we let (qi prime for all i), we can compute . Now use the formula to test your conjecture. PST 15:03, 22 March 2010 (UTC)[reply]
Okay, I follow that - but I am not sure where it goes next. And what if n does not have the form p^k? Gandalf61 (talk) 15:48, 22 March 2010 (UTC)[reply]

Inverse functions and

I worked out that the second function in my question's title was the inverse of the first by taking the first, flipping my x- and y-values, rearranging to get a quadratic in e^y, applying the quadratic formula and discarding the solution with the minus, and finally taking the log to get back to y. But when I graphed these two functions on the online grapher at walterzorn.com, they only looked like mirror images over the line y=x for positive x-values. Are these two not inverses, meaning I did something wrong, or is there some restriction here I'm not remembering? I know you can't take the log of a negative value, but the expression inside my log doesn't go below zero until x equals approximately -1.56, where the asymptote of my second function is. 20.137.18.50 (talk) 13:23, 22 March 2010 (UTC)[reply]

Should be . Gandalf61 (talk) 13:36, 22 March 2010 (UTC)[reply]
Thanks, I see how I forgot to square my middle term. 20.137.18.50 (talk) 14:05, 22 March 2010 (UTC)[reply]
Should be . (minus not plus in the square root) Staecker (talk) 13:46, 22 March 2010 (UTC)[reply]
Sorry- you're right. I screwed it up twice so it still looked right when I double-checked. Staecker (talk) 14:39, 22 March 2010 (UTC)[reply]
Since no one gave the WP link yet, I will: hyperbolic function#Inverse functions as logarithms. The inverse of e^x − e^(−x) is then ln((x + √(x² + 4))/2). —Emil J. 14:54, 22 March 2010 (UTC)[reply]

If and only if symbol to use in thesis

I want to use the symbol for iff in my theoretical framework, but I am not sure which one would be the right one to use in this instance. Should it be ≡? --160.36.39.157 (talk) 14:27, 22 March 2010 (UTC)[reply]

If you must use a symbol, I've most often seen ⇔. But English words are usually better in my opinion. Staecker (talk) 14:34, 22 March 2010 (UTC)[reply]
I agree, words are often better than symbols (I don't know about "usually better"). The abbreviation "iff" is very commonly used, as well. --Tango (talk) 15:35, 22 March 2010 (UTC)[reply]
See iff. -- SGBailey (talk) 16:28, 22 March 2010 (UTC)[reply]
I'd say the symbol is OK, but you need to define it first: "Throughout this thesis I will use 'iff' to mean 'if, and only if,' ". StuRat (talk) 17:08, 22 March 2010 (UTC)[reply]
I think "iff" is sufficiently widely used not to need to be defined. --Tango (talk) 17:11, 22 March 2010 (UTC)[reply]
I agree with that -- definitely do not define the word iff; that would be somewhere on the continuum from silly to offensive.
But I don't actually agree with using it. Iff is a blackboard abbreviation; it's not the right linguistic register for a thesis. That said, some good people do use it in print, but I still don't like it.
As for symbols versus words, it depends on whether you're writing prose or logical formulae. It's true that would be odd in prose, but it's just the right thing if the rest of the formula is in symbols. (Well, that or .)
If you find that the repetition of if and only if gets tedious, a couple of synonyms that you could throw in for elegant variation are just in case or exactly when, or you could reword to use necessary and sufficient. --Trovatore (talk) 17:20, 22 March 2010 (UTC)[reply]
You're both assuming that the audience has a math or science background. This could be a thesis in another area, with just a bit of math or science thrown in. StuRat (talk) 17:32, 22 March 2010 (UTC)[reply]
That strikes me as unlikely. --Trovatore (talk) 17:58, 22 March 2010 (UTC)[reply]

It is a thesis for an M.S. in Agricultural Economics. I have my logic explained in words and I want to also show it in equation form.--160.36.39.157 (talk) 18:58, 22 March 2010 (UTC)[reply]

Optimal location of a bridge

Can anybody suggest an algorithm for finding the optimal location of a bridge joining two sets of populated places on either side of a river? —Preceding unsigned comment added by Amrahs (talkcontribs) 15:23, 22 March 2010 (UTC)[reply]

For an optimisation problem you need a cost function (some way of comparing different places) and a set of constraints (things that must be satisfied by the place). Which algorithm is best will depend on what form those things take. --Tango (talk) 15:38, 22 March 2010 (UTC)[reply]
Obvious criteria to include are: minimising the sum of (journey length over the bridge) × (relative journey frequency), and minimising the construction cost of the bridge (e.g. build it over the narrowest part of the river) -- 16:26, 22 March 2010 (UTC)
This would get so complex, with so many variable weighting factors, that human judgment might be better than an algorithm. Other factors might be the depth of the river in various locations, ground quality (bedrock or swamp ?) at the adjacent land, what would need to be demolished to build the bridge (new skyscraper or old slum ?), environmental impact, attractiveness of the bridge in various locations, effect on river traffic, etc. Something which they don't always consider, but should, is the ability to connect to existing highways. In many places you must exit the highway, drive on local roads to the bridge, cross the bridge, then drive on local roads again to get to the highway on the other side. And, if political reasons are included, like getting government money for your area that will otherwise go elsewhere, you can even get a bridge to nowhere. StuRat (talk) 17:15, 22 March 2010 (UTC)[reply]
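To make the cost-function idea concrete, here is a toy Python sketch. It assumes a straight east-west river (the line y = 0), towns with populations on each side, a cost equal to each town's straight-line distance to the bridge weighted by population, and the bridge's position along the river as the only decision variable. All of the data and the cost function are invented for illustration; a real siting study would fold in the factors listed above as constraints or extra cost terms:

import math

north_towns = [((2.0, 3.0), 5000), ((8.0, 1.5), 1200)]    # ((x, y), population)
south_towns = [((1.0, -2.0), 3000), ((6.0, -4.0), 7000)]

def total_cost(bridge_x):
    # population-weighted distance from every town to a bridge at (bridge_x, 0)
    cost = 0.0
    for (tx, ty), pop in north_towns + south_towns:
        cost += pop * math.hypot(tx - bridge_x, ty)
    return cost

# crude 1-D search over candidate positions; a real model might use scipy.optimize
best_x = min((x / 100.0 for x in range(0, 1001)), key=total_cost)
print(best_x, total_cost(best_x))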