Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia


Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


June 12

Normal Deviation

The menstrual cycle of a female is, on average, 28 days long and has a standard deviation of 2 days. A) In what % of women will the cycle differ by more than 3 days from the mean? B) Mark out symmetrically around the mean the range in which 80% of women will lie — Preceding unsigned comment added by 49.200.54.220 (talk) 16:40, 12 June 2011 (UTC)[reply]

This is obviously a homework question. We don't do people's homework for them here. Could you show us what you have done so far, and then we will help you to understand what you need to do next? As it says at the beginning of this page: "If your question is homework, show that you have attempted an answer first, and we will try to help you past the stuck point. If you don't show an effort, you probably won't get help. The reference desk will not do your homework for you." Fly by Night (talk) 17:49, 12 June 2011 (UTC)[reply]
I'll just point out that from a fully technical position, the question is underspecified. The answer depends *highly* on which probability distribution you use. Average and standard deviation are really only enough to fully specify a distribution if you're assuming a normal distribution (and some other, simple distributions). While it's possible that the length of the menstrual cycle is normally distributed (Central Limit Theorem and all that), as the problem is stated we can't be sure if the assumption is completely valid. -- 174.31.219.218 (talk) 19:14, 12 June 2011 (UTC)[reply]
Well, the fact that the OP entitled this section Normal Deviation is a bit of a clue, perhaps. Looie496 (talk) 23:18, 12 June 2011 (UTC)[reply]
The distribution can't be normal because that would imply the possibility of a negative time. The question also seems to assume (wrongly, as far as I know) that the cycle time is constant for any particular woman. AndrewWTaylor (talk) 19:36, 13 June 2011 (UTC)[reply]
A negative value would be 14 standard deviations away from the mean. I think we can safely ignore that problem. Few things are actually normally distributed, but a great many things are well approximated by the normal distribution within about 3 sd of the mean, which is what makes it such a useful distribution. --Tango (talk) 23:23, 13 June 2011 (UTC)[reply]
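Under the normality assumption Tango endorses above, both parts reduce to standard normal lookups; a minimal sketch in Python (scipy assumed, mean 28 and sd 2 taken from the question):

    from scipy.stats import norm

    mu, sd = 28.0, 2.0
    # A) fraction whose cycle differs from the mean by more than 3 days
    p_outside = 2 * norm.cdf(-3.0 / sd)            # two-sided tail, z = 1.5
    # B) symmetric range around the mean containing 80% of women
    lo, hi = norm.ppf(0.1, mu, sd), norm.ppf(0.9, mu, sd)
    print(p_outside, (lo, hi))                     # ~0.1336 and ~(25.44, 30.56)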

Newton's Second Law

"The second law states that the net force on a particle is equal to the time rate of change of its linear momentum p in an inertial reference frame:

where, since the law is valid only for constant-mass systems the mass can be taken outside the differentiation operator by the constant factor rule in differentiation. Thus,

"

This is an extract from your article on Newton's laws of motion, specifically the section on the second law. I don't understand why it says that the law is valid only for constant mass systems, not least of all because on Isaac Newton, in the section entitled 'Laws of Motion', the second law is written as $\mathbf{F} = \frac{\mathrm{d}(m\mathbf{v})}{\mathrm{d}t}$, the form that I am familiar with.

I'm partly asking what the subtlety is that demands we treat the mass as constant in the first case, assuming it's not a mistake, and partly suggesting that someone who knows what they're doing clarifies the issue. I have studied mathematics and physics at university and if I am left confused by this then the layman will have no idea what's going on. Thanks. meromorphic [talk to me] 16:44, 12 June 2011 (UTC)[reply]

Can I just say, as the layman of your words, I can follow the first version with mass constant, but would have more trouble with the rule if it is equally true in changing-mass systems. (I do understand the product rule, but still.) I couldn't say which is more familiar to people with physics or mathematics degrees, but I'm certainly more aware of f=ma than one adapted for use on changing-mass systems. I don't know which Newton used, but if the second is more "true", then it may still be preferable to keep the first and say "if mass is constant" rather than "since mass has to be constant", at least introductory-wise. Grandiose (me, talk, contribs) 16:52, 12 June 2011 (UTC)[reply]
The problem with $\mathbf{F} = \frac{\mathrm{d}(m\mathbf{v})}{\mathrm{d}t}$ is that if the mass of an object is changing it must be picking up or expelling mass (or energy) from outside. The mass that's entering or leaving has momentum of its own, and the formula doesn't tell us how to accurately deal with that. For example if I have a moving object and a piece breaks off, but the original object and the piece both keep moving off at the same velocity, then I end up with a non-zero $\frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}$, but can we really say there's a force acting on the object? Rckrone (talk) 18:24, 12 June 2011 (UTC)[reply]
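To make that point concrete, expanding the momentum form with the product rule gives

$\mathbf{F} = \frac{\mathrm{d}(m\mathbf{v})}{\mathrm{d}t} = m\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} + \mathbf{v}\frac{\mathrm{d}m}{\mathrm{d}t}$

so the broken-off piece above has $\mathrm{d}\mathbf{v}/\mathrm{d}t = 0$ but $\mathrm{d}m/\mathrm{d}t \neq 0$, and the formula would report a nonzero "force" even though nothing is pushing on anything.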
I think the term for non-constant mass comes in when you take relativity into account. You can apply all the force you want but the velocity will always be less than c. So when you get close to c what the force does mostly is increase mass, which is where the dm/dt comes in. But this is the math helpdesk, we don't normally deal with soft science like physics.--RDBury (talk) 22:53, 12 June 2011 (UTC)[reply]
No response yet from physicists who might object to calling their science "soft"? Dbfirs 07:51, 17 June 2011 (UTC)[reply]
See here:

Hilbert said "Physics is too hard for physicists", implying that the necessary mathematics was generally beyond them; the Courant-Hilbert book made it easier for them.

Count Iblis (talk) 02:10, 18 June 2011 (UTC)[reply]
While you can do relativity using relativistic mass (and I often do when trying to give people an intuitive feel for how it works), it isn't the normal way of formulating it mathematically. You can see a derivation of the more common relativistic equivalent of F=ma here: Special_relativity#Force. --Tango (talk) 23:30, 13 June 2011 (UTC)[reply]


June 13

Ogden's lemma examples

Our article on Ogden's lemma (and many other pages I've found on the web) use the example $\{a^i b^j c^k d^l : i = 0 \text{ or } j = k = l\}$, but this language can be proven non-context-free using the pumping lemma and closure of CFLs under intersection by regular languages (take $ab^*c^*d^*$). Are there examples of languages which can be proven to be non-CFLs with Ogden's lemma, but not with the pumping lemma and closure of CFLs under regular intersection and gsm mappings? --146.141.1.96 (talk) 10:51, 13 June 2011 (UTC)[reply]

nilpotent cone of a Lie algebra

The article nilpotent cone claims that "The nilpotent cone...is invariant under the adjoint action of $\mathfrak{g}$ on itself." Take the example of $\mathfrak{g} = \mathfrak{sl}_2$ as in the article, so the nilcone N is spanned by E and F. Then $\mathrm{ad}_E(F) = [E,F] = H$, which is not in the nilcone, so N is not ad-invariant. Have I misunderstood something, or is the article wrong? Tinfoilcat (talk) 11:15, 13 June 2011 (UTC)[reply]

If $\mathfrak{g}$ is a Lie algebra and x is an element of $\mathfrak{g}$, then the adjoint action $\mathrm{ad}_x : \mathfrak{g} \to \mathfrak{g}$ is defined by $\mathrm{ad}_x(y) = [x,y]$ for all y in $\mathfrak{g}$. To prove that the nilpotent cone, say N, is invariant under the adjoint action you just need to show that for all x and y in N, the Lie bracket [x,y] is again in N. Fly by Night (talk) 14:47, 13 June 2011 (UTC)[reply]
FbN, I know what the adjoint map is and what it means to be ad-invariant (it's not quite what you say - ad-invariant means a Lie ideal, not a Lie subalgebra). The example I gave in my post seems to show that the nilcone is not ad-invariant since $[E,F] = H$, which is not in the nilcone, whereas E and F are. Tinfoilcat (talk) 14:54, 13 June 2011 (UTC)[reply]
I don't recall mentioning subalgebras or ideals. I'm sorry I wasn't able to help. Fly by Night (talk) 15:50, 13 June 2011 (UTC)[reply]
FbN, I was very bitey above. Sorry. When you say "for all x and y in N, the Lie bracket [x,y] is again in N " that's equivalent to it being a subalgebra. Being an ideal (= being ad-invariant) is that for n in N, x in the Lie algebra, $[x,n] \in N$ - a little stronger. Tinfoilcat (talk) 15:59, 13 June 2011 (UTC)[reply]
No problem. Proving it's a subalgebra means that it's automatically an ideal too, by definition (because of the way the nilpotent cone is defined in the first place). Or at least I think so… Fly by Night (talk) 16:06, 13 June 2011 (UTC)[reply]

The statement is wrong. The nilpotent cone is invariant under the action of Int(g) on g (interior automorphisms). Sławomir Biały (talk) 15:53, 13 June 2011 (UTC)[reply]

Thanks. I'll fix the page (if you haven't got there first). Tinfoilcat (talk) 15:59, 13 June 2011 (UTC)[reply]
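The $\mathfrak{sl}_2$ counterexample above is easy to check numerically; a small sketch (using the standard matrix basis, an assumption consistent with the article's conventions):

    import numpy as np

    # sl_2 basis: E and F are nilpotent (they span the nilcone); H = [E, F] is not.
    E = np.array([[0, 1], [0, 0]])
    F = np.array([[0, 0], [1, 0]])
    H = E @ F - F @ E                     # the Lie bracket [E, F]
    print(H)                              # diag(1, -1)
    print(np.linalg.matrix_power(H, 2))   # the identity, so H is not nilpotent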

Normalizing Associated Legendre Polynomials to get Spherical Harmonics

Hi all. I've been working on this problem for like weeks, and I can't seem to figure it out. I'm trying to normalize Associated Legendre polynomials to turn them into Spherical harmonics. The integral comes out to:

$\int_{-1}^{1} \left[P_\ell^m(x)\right]^2 \mathrm{d}x = \frac{1}{\left(N_\ell^m\right)^2}$

where $N_\ell^m$ is the normalization constant. $N_\ell^m$ can be found in Spherical harmonics#Orthogonality and normalization. I know that it involves integrating by parts $m$ times, and that the boundary terms vanish in each case, but I'm not sure why they vanish. Can anyone point me to a (very detailed) discussion of how to actually do the integral, or maybe a better way than by parts? Thanks!--Dudemanfellabra (talk) 23:32, 13 June 2011 (UTC)[reply]

Hi, I had a quick look in Boas - 'Mathematical Methods in the Physical Sciences'. She has a problem (prob 10.10 in second edition) about the normalisation; first some results:

a) $P_\ell^m(x) = \frac{(1-x^2)^{m/2}}{2^\ell\,\ell!}\,\frac{\mathrm{d}^{\ell+m}}{\mathrm{d}x^{\ell+m}}(x^2-1)^\ell$

This comes from substituting Rodrigues' formula (see Legendre_polynomials) for the Legendre polynomials into the definition of $P_\ell^m$. Next Boas says derive

b) $P_\ell^m(x) = (-1)^m\,\frac{(\ell+m)!}{(\ell-m)!}\,\frac{(1-x^2)^{-m/2}}{2^\ell\,\ell!}\,\frac{\mathrm{d}^{\ell-m}}{\mathrm{d}x^{\ell-m}}(x^2-1)^\ell$

Multiply these two results together (this forms your normalisation integrand) then integrate by parts repeatedly until both l+m and l-m derivatives are just l derivatives. Then use

$\int_{-1}^{1}\left[P_\ell(x)\right]^2\mathrm{d}x = \frac{2}{2\ell+1}$

- I assume because you end up having an integrand that looks like two Legendre polynomials multiplied together.
Now, the derivation of result (b) is a task in itself. Apparently one can show that

$\frac{\mathrm{d}^{\ell-m}}{\mathrm{d}x^{\ell-m}}(x^2-1)^\ell = \frac{(\ell-m)!}{(\ell+m)!}\,(x^2-1)^m\,\frac{\mathrm{d}^{\ell+m}}{\mathrm{d}x^{\ell+m}}(x^2-1)^\ell$

by starting with $(x^2-1)^\ell = (x-1)^\ell(x+1)^\ell$ and finding derivatives using Leibniz' rule. Good luck, I'd find a copy of Boas and work from that if I was you. Christopherlumb (talk) 19:55, 14 June 2011 (UTC)[reply]

I found a copy of Boas, but unfortunately the method for part b) is not given. After applying Leibniz' rule, I get

$\frac{\mathrm{d}^{\ell+m}}{\mathrm{d}x^{\ell+m}}(x^2-1)^\ell = \sum_{i=0}^{\ell+m}\binom{\ell+m}{i}\left[\frac{\mathrm{d}^{i}}{\mathrm{d}x^{i}}(x-1)^\ell\right]\left[\frac{\mathrm{d}^{\ell+m-i}}{\mathrm{d}x^{\ell+m-i}}(x+1)^\ell\right]$

I have no idea where to go from here...--Dudemanfellabra (talk) 23:13, 14 June 2011 (UTC)[reply]
I'm really struggling myself, I think you have to get expressions for the $(\ell+m)$th and $(\ell-m)$th derivatives and compare them term-by-term, so

$\frac{\mathrm{d}^{\ell+m}}{\mathrm{d}x^{\ell+m}}(x^2-1)^\ell = \sum_{i=0}^{\ell+m}\binom{\ell+m}{i}\left[\frac{\mathrm{d}^{i}}{\mathrm{d}x^{i}}(x-1)^\ell\right]\left[\frac{\mathrm{d}^{\ell+m-i}}{\mathrm{d}x^{\ell+m-i}}(x+1)^\ell\right]$

we note that the derivatives on the right-hand side are just acting on $(x\pm 1)^\ell$, so that any derivative higher than $\ell$ differentiates to zero. This means the sum gets truncated: we must have $i \le \ell$ (else the $\mathrm{d}^i/\mathrm{d}x^i$ goes to zero) and $i \ge m$ (else the $\mathrm{d}^{\ell+m-i}/\mathrm{d}x^{\ell+m-i}$ will go to zero). So:

$\frac{\mathrm{d}^{\ell+m}}{\mathrm{d}x^{\ell+m}}(x^2-1)^\ell = \sum_{i=m}^{\ell}\binom{\ell+m}{i}\left[\frac{\mathrm{d}^{i}}{\mathrm{d}x^{i}}(x-1)^\ell\right]\left[\frac{\mathrm{d}^{\ell+m-i}}{\mathrm{d}x^{\ell+m-i}}(x+1)^\ell\right]$

now we have no more terms in this sum than for the $(\ell-m)$ sum; next evaluate the derivatives:

$\frac{\mathrm{d}^{\ell+m}}{\mathrm{d}x^{\ell+m}}(x^2-1)^\ell = \sum_{i=m}^{\ell}\binom{\ell+m}{i}\frac{\ell!}{(\ell-i)!}(x-1)^{\ell-i}\,\frac{\ell!}{(i-m)!}(x+1)^{i-m}$

I've written the derivatives as fractions of factorials where you'd normally just write falling factorials. Next job is to make this look like the l-m derivative: let $k = i-m$, then our sum runs from $0$ to $\ell-m$:

$\frac{\mathrm{d}^{\ell+m}}{\mathrm{d}x^{\ell+m}}(x^2-1)^\ell = \sum_{k=0}^{\ell-m}\binom{\ell+m}{k+m}\frac{\ell!}{(\ell-m-k)!}(x-1)^{\ell-m-k}\,\frac{\ell!}{k!}(x+1)^{k}$

Pull a factor of $(x^2-1)^m$ out the front of the corresponding $(\ell-m)$ expansion:

$\frac{\mathrm{d}^{\ell-m}}{\mathrm{d}x^{\ell-m}}(x^2-1)^\ell = (x^2-1)^m\sum_{k=0}^{\ell-m}\binom{\ell-m}{k}\frac{\ell!}{(\ell-k)!}(x-1)^{\ell-m-k}\,\frac{\ell!}{(m+k)!}(x+1)^{k}$

It's looking very similar to the l+m version above, I haven't the energy to go through to check the factorials all work out but I think this is on the right track 213.249.173.33 (talk) 21:09, 15 June 2011 (UTC)[reply]
AH! THANK YOU THANK YOU THANK YOU THANK YOU! Haha I had figured out what the (l+m-i)th and (l-m-i)th derivatives were (btw, the factorials in your above derivation are a little off. I'll show the correct one below.), but I didn't think about shifting the index of one of the sums so that they can be compared properly. By shifting that index and using falling factorials for the derivatives, I was able to show the correct relationship. I'll show it below and continue to work on the rest of the problem. Thanks for all your help!
Derivation showing alternative way to write $P_\ell^m(x)$

$(x\pm 1)^\ell$ has only $\ell$ non-vanishing derivatives, so $\frac{\mathrm{d}^i}{\mathrm{d}x^i}(x-1)^\ell = 0$ for $i>\ell$ and $\frac{\mathrm{d}^{\ell+m-i}}{\mathrm{d}x^{\ell+m-i}}(x+1)^\ell = 0$ for $i<m$. Therefore, the sum can be written from i=m to l. Using falling factorial notation for the derivatives and expanding the binomial coefficients in factorial form yields

$\frac{\mathrm{d}^{\ell+m}}{\mathrm{d}x^{\ell+m}}(x^2-1)^\ell = \sum_{i=m}^{\ell}\frac{(\ell+m)!}{i!\,(\ell+m-i)!}\,\frac{\ell!}{(\ell-i)!}(x-1)^{\ell-i}\,\frac{\ell!}{(i-m)!}(x+1)^{i-m}$

Doing the Leibniz formula for the l-m derivative yields

$\frac{\mathrm{d}^{\ell-m}}{\mathrm{d}x^{\ell-m}}(x^2-1)^\ell = \sum_{k=0}^{\ell-m}\binom{\ell-m}{k}\left[\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}(x-1)^\ell\right]\left[\frac{\mathrm{d}^{\ell-m-k}}{\mathrm{d}x^{\ell-m-k}}(x+1)^\ell\right]$

Again, expanding the coefficients and using falling factorials,

$\frac{\mathrm{d}^{\ell-m}}{\mathrm{d}x^{\ell-m}}(x^2-1)^\ell = \sum_{k=0}^{\ell-m}\frac{(\ell-m)!}{k!\,(\ell-m-k)!}\,\frac{\ell!}{(\ell-k)!}(x-1)^{\ell-k}\,\frac{\ell!}{(m+k)!}(x+1)^{m+k}$

Let $k = i - m.$ Plugging in:

$\frac{\mathrm{d}^{\ell+m}}{\mathrm{d}x^{\ell+m}}(x^2-1)^\ell = \sum_{k=0}^{\ell-m}\frac{(\ell+m)!}{(k+m)!\,(\ell-k)!}\,\frac{\ell!}{(\ell-m-k)!}(x-1)^{\ell-m-k}\,\frac{\ell!}{k!}(x+1)^{k}$

Now that the two sums have the same indices (i and k are just dummy variables, so we may as well say they're the same thing; let's choose k), we can divide the two sums to find the ratio of the l+m and l-m derivatives. That is,

$\frac{\mathrm{d}^{\ell+m}(x^2-1)^\ell/\mathrm{d}x^{\ell+m}}{\mathrm{d}^{\ell-m}(x^2-1)^\ell/\mathrm{d}x^{\ell-m}} = \frac{\dfrac{(\ell+m)!}{(k+m)!\,(\ell-k)!}\,\dfrac{\ell!}{(\ell-m-k)!}(x-1)^{\ell-m-k}\,\dfrac{\ell!}{k!}(x+1)^{k}}{\dfrac{(\ell-m)!}{k!\,(\ell-m-k)!}\,\dfrac{\ell!}{(\ell-k)!}(x-1)^{\ell-k}\,\dfrac{\ell!}{(m+k)!}(x+1)^{m+k}}$

which, after cancelling and observing that all k's disappear so the sum can be dropped, yields:

$\frac{\mathrm{d}^{\ell+m}}{\mathrm{d}x^{\ell+m}}(x^2-1)^\ell = \frac{(\ell+m)!}{(\ell-m)!}\,(x^2-1)^{-m}\,\frac{\mathrm{d}^{\ell-m}}{\mathrm{d}x^{\ell-m}}(x^2-1)^\ell$

This gives the relationship desired in part b) of Christopherlumb's comment above (with the exception of the $(-1)^m$ factor... I'm hoping that doesn't matter though haha). I'll now attempt to work through the integral using this identity. Again, thanks for all the help!--Dudemanfellabra (talk) 23:34, 15 June 2011 (UTC)[reply]
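As a sanity check on the normalization integral discussed in this thread, scipy can evaluate it directly against the closed form $\frac{2}{2\ell+1}\frac{(\ell+m)!}{(\ell-m)!}$; a quick sketch ($\ell$ and $m$ chosen arbitrarily):

    from math import factorial
    from scipy.integrate import quad
    from scipy.special import lpmv

    l, m = 5, 3
    integral, _ = quad(lambda x: lpmv(m, l, x) ** 2, -1, 1)
    closed_form = 2.0 / (2 * l + 1) * factorial(l + m) / factorial(l - m)
    print(integral, closed_form)   # both ~3665.45 for l=5, m=3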


June 14

Software

WHAT IS THE COMPUTER SOFTWARE? — Preceding unsigned comment added by 117.197.165.191 (talk) 05:38, 14 June 2011 (UTC)[reply]

Try asking at The Computing Helpdesk. Perhaps a little more detail about your question would help. ―JuPitEer (talk) 06:50, 14 June 2011 (UTC)[reply]
Or read our article on computer software. Gandalf61 (talk) 09:20, 14 June 2011 (UTC)[reply]
I think the OP might be wondering how Wikipedia works. I don't know if Wikipedia has a software per se. But, as JuPitEer said: the computer reference desk would be a better place to ask. Maybe try the Wikipedia article too. Fly by Night (talk) 14:28, 14 June 2011 (UTC)[reply]
The software that runs wikipedia is MediaWiki. Staecker (talk) 23:27, 15 June 2011 (UTC)[reply]

Evaluating the matrix exponential of a tridiagonal matrix

I am working on a classification problem based on Bayesian inference, where I observe the discrete state of an object at non-equidistant points in time (the evidence). My objective is to keep updated probabilities that the object belongs to different classes Ck, k=1,...m using Bayes' theorem. The evolution of the state can be modelled as a continuous-time Markov process. The states can be ordered such that the state only jumps to a neighboring state. The rates at which the jumps occur are a characteristic of the object class, and I have collected data from objects with known classes and based on this I have good estimates of the class specific rate matrices Rk. The rate matrices are tridiagonal since the state jumps are restricted to nearest neighbours.

The number of states n is in the range 5-20 and the number of classes m is less than 10. For the Bayesian inference update I need to evaluate the state-transition probability conditioned on the class hypothesis Ck

$P\left(s_i(t_n) \,\middle|\, s_j(t_{n-1}), C_k\right)$

where

$s_i(t_n)$: Posterior state at time $t_n$
$s_j(t_{n-1})$: Prior state at time $t_{n-1}$
$\Delta t = t_n - t_{n-1}$: Time difference

In my problem I need to evaluate the matrix exponential

$e^{R_k \Delta t}$

regularly for each of the m object classes but with different update intervals. One way to do that is to diagonalize the rate matrices as

$R_k = P_k \Lambda_k P_k^{-1}$

where $\Lambda_k$ is a diagonal matrix containing the eigenvalues (eigenrates), as then

$e^{R_k \Delta t} = P_k e^{\Lambda_k \Delta t} P_k^{-1}$

which makes recalculating the matrix exponential for different update times easier, as calculating $P_k$ and its inverse only needs to be done once per object class, so what I basically need to do is to make n scalar exponential calculations and then make the matrix product, which scales as $O(n^3)$. From the eigenrates, I also know the characteristic times in the system and I may even precalculate the matrix exponentials for some logarithmically distributed update times and simply pick the closest match in update time.

However, I am concerned about the numerical stability in the diagonalization. The largest eigenvalue should always be zero (corresponding to the steady state solution), but this is susceptible to roundoff error. (Until now I have post-corrected the eigenvalues, setting the highest one to zero.) For the remaining eigenvalues the ratio between min and max can be assumed to be smaller than 1000. I read in the matrix exponential article that doing this right is far from trivial.

Can I use the fact that my rate matrices are tridiagonal to make a more robust or elegant computation of the matrix exponentials?

--Slaunger (talk) 09:06, 14 June 2011 (UTC)[reply]
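(For later readers: the diagonalization route described above looks like this in Python/numpy, with an invented 5-state nearest-neighbour rate matrix; the rates are placeholders, not from the poster's data.)

    import numpy as np
    from scipy.linalg import expm

    n, up, down = 5, 0.3, 0.2                # hypothetical size and jump rates
    R = np.zeros((n, n))
    for i in range(n - 1):
        R[i + 1, i] = up                     # rate of jumping i -> i+1
        R[i, i + 1] = down                   # rate of jumping i+1 -> i
    R -= np.diag(R.sum(axis=0))              # make each column sum to zero

    w, P = np.linalg.eig(R)                  # eigenrates and eigenvectors
    Pinv = np.linalg.inv(P)                  # done once per object class

    dt = 2.5
    E_diag = ((P * np.exp(w * dt)) @ Pinv).real   # P diag(e^{w dt}) P^{-1}
    print(np.abs(E_diag - expm(R * dt)).max())    # ~1e-15 for this tame case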

Try the formula $e^M \approx \left(1 + 2^{-n}M\right)^{2^n}$ to reduce the matrix exponential to n squarings of the matrix $1 + 2^{-n}M$, and do not use the fact that M is tridiagonal. Bo Jacoby (talk) 15:56, 14 June 2011 (UTC).[reply]
Thank you for your response. Ok, I think I understand, but if I diagonalize once, I only have to do three matrix multiplications for each different $\Delta t$, whereas using the approach you suggest I need to make n multiplications, where n depends on how fast the expression converges, assuming I understand correctly? I realize it will converge very fast though... Obviously n should be selected such that $2^{-n}\|M\| \ll 1$, and this is mathematically correct, but will it be more numerically stable than the diagonalization approach? What I am concerned about is the numerical stability and reliability of the diagonalization approach. It is mentioned in Matrix exponential#Computing the matrix exponential that
Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis.
In mathematical libraries, also the Padé approximant is often used as a robust way to do the matrix exponential for general matrices (except if it is ill-conditioned), but it seems like overkill for my case where the rate matrices are constant and I only multiply it with a scalar prior to taking the exponential.
In the Octave documentation for expm I noticed a reference to Moler and Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, SIAM Review, 1978, which seems very interesting in this context. Unfortunately I do not have access to that paper, but the abstract seems very interesting.
And I was thinking that maybe there was a nifty, reliable and elegant method for calculating the matrix exponential if the matrix was tridiagonal and multiplied by some scalar - at least I seem to recall that solving the eigenvalue problem for a tridiagonal matrix is much easier than for a general diagonalizable matrix. --Slaunger (talk) 19:12, 14 June 2011 (UTC)[reply]
Even when a scalar exponential $e^M$ is approximated by $\left(1 + 2^{-n}M\right)^{2^n}$, n should be chosen with care. If n is too small, then the approximation has insufficient precision, and if n is too big, then round-off errors make $1 + 2^{-n}M$ equal to 1. Floating-point numbers have fixed precision, and not everything can be done. Bo Jacoby (talk) 08:49, 15 June 2011 (UTC).[reply]
Yes, that is why the question is not so simple. It is more about the most efficient and reliable algorithm than about the maths (although you need some linear algebra maths to arrive at the best algorithm). Maybe the Computer or science help desk is a better place for my question? I was in doubt whether numerical methods/algorithms are in scope of this help desk when I asked the question, but decided to try here as the question was pretty mathematically oriented... In my problem I need to do these calculations in real time for many objects on a computational platform with rather limited resources, so I need to understand how to find the best processing compromise. --Slaunger (talk) 10:11, 15 June 2011 (UTC)[reply]
As far as I'm concerned, asking at the math desk is fine. I think at the computing desk they're more used to answering questions about programming languages and systems administration than numerical algorithms. --Trovatore (talk) 10:30, 15 June 2011 (UTC)[reply]
In order to find the most efficient and reliable algorithm you need to test some algorithms. Have you tried my method before you think that something else is better? Bo Jacoby (talk) 15:51, 15 June 2011 (UTC).[reply]
A valid point. I have tried experimenting with the Padé approximant implementation in scipy, which seems to converge very fast for the examples I have tried. I have implemented the diagonalization approach in Python, which also seems to work and which is efficient as the number of computations is fixed once the rate matrices have been diagonalized. I have not tried your approach yet, but my feeling is that it is not as stable as the Padé approximant. However, I need to go to a production language like C to properly profile the different options, and this takes a considerable effort. I think I will try to get hold of the reference I mentioned earlier, where I have noticed there is also a 25-years-later updated review article, and study what others have done instead of spending a lot of time reinventing the wheel. Thanks for your time! --Slaunger (talk) 19:40, 15 June 2011 (UTC)[reply]
The 25 years later reference is also publicly available at http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf. I am reading it with interest - a lot of goodies there.... --Slaunger (talk) 07:45, 16 June 2011 (UTC)[reply]
To compute the formula to good numerical precision, start with rather than and use to perform the squaring without adding in the . At some point, unless is very small, the intermediate value being stored gets large and then the can be added in and ordinary squaring can be used for the remaining steps. McKay (talk) 04:32, 16 June 2011 (UTC)[reply]
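A sketch of that trick in Python (simplified: here the 1 is only added back at the very end, and the first-order seed $e^{M/2^n}-1 \approx M/2^n$ is assumed accurate enough for modest $\|M\|$):

    import numpy as np
    from scipy.linalg import expm

    def expm_by_squaring(M, n=30):
        A = M / 2.0 ** n                  # A ~ e^{M/2^n} - 1, kept without the 1
        for _ in range(n):
            A = 2.0 * A + A @ A           # (1+A)^2 - 1 = 2A + A^2
        return np.eye(M.shape[0]) + A     # add the 1 back at the end

    M = np.array([[-0.3, 0.2], [0.3, -0.2]])
    print(np.abs(expm_by_squaring(M) - expm(M)).max())  # small, ~1e-10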
After having studied the review article, it appears to me that series expansion and multiplicative methods as suggested here will not be stable for my rate matrix problem. In my case the highest eigenvalue is always exactly zero, whereas all others are negative real numbers, which implies that

$\lim_{\Delta t \to \infty} e^{R_k \Delta t} = W_k,$

where $W_k$ is the steady-state state-transition probability matrix containing n identical column vectors. Any column vector in $W_k$ will represent the steady-state (or a priori) probability distribution between the states. As the time difference grows, the elements of the matrix to exponentiate also grow, and so will the number of multiplications and roundoff errors. In the specific case where I have this eigenvalue property of the rate matrices, a matrix decomposition method seems to be a good path. However, the decomposition does not necessarily have to be an eigenvalue decomposition (although that is very easy for a tridiagonal matrix). If the condition number of the matrix Pk containing the eigenvectors is too high (if it is nearly defective), one should be careful. I also learned that other decomposition methods such as QR decomposition into an orthogonal and a triangular matrix can be a useful alternative to eigenvalue decomposition, as calculating the matrix exponential of a triangular matrix should be fairly simple (have not studied the details).
Another observation is that the decomposition methods require of the order $O(n^3)$ computations to make the decomposition, but only $O(n^2)$ for each subsequent $\Delta t$. --Slaunger (talk) 09:48, 16 June 2011 (UTC)[reply]
McKay's trick above is a clever one. The review article is scholarly and inconclusive. High floating point precision was not available in 1978. If you know that no eigenvalue has a positive real part, then use $e^{Mt} = \left(e^{-Mt}\right)^{-1}$. Bo Jacoby (talk) 12:31, 16 June 2011 (UTC).[reply]
It is the updated 25-years-after article from 2003 I have been reading, albeit it still contains quite some old stuff reminiscent of 1978 single-precision computing possibilities. I agree it is not very conclusive; the bottom line is that all methods have their pros and cons, although the pro/con balance differs from method to method and to some extent depends especially on how well-behaved the matrix you want to exponentiate is.
Thanks for the inversion trick. But does the inversion trick help when the largest eigenvalue in the rate matrix is identically zero? The rate matrices fulfill the condition that the sum of elements in each column is zero, the diagonals are negative and the offdiagonals are positive. Does a sign reversal on all elements really make it easier to exponentiate, and is the matrix inversion needed afterwards not an $O(n^3)$ process, or am I missing something? --Slaunger (talk) 12:54, 16 June 2011 (UTC)[reply]
The series expansion of $e^{-5}$ is unstable, due to round-off errors, while $\left(e^{5}\right)^{-1}$ is stable. $e^{0}$ is the easier case. Yes, matrix inversion is $O(n^3)$, so don't do it if anything else works. The squaring method is stable, I think. The tridiagonal property is apparently not helpful. Good luck on the project! Bo Jacoby (talk) 16:34, 16 June 2011 (UTC).[reply]
Ah, I see. Thanks for all the helpful advice, some of which helped me to go in a reasonable direction myself. I think I will go for an eigenvalue decomposition $O(n^2)$ approach first, and if that fails I have a couple of good $O(n^3)$ ideas to pursue here. Thanks again. --Slaunger (talk) 18:42, 16 June 2011 (UTC)[reply]

Dimension and Cardinality

Let M be a smooth, real, n−dimensional manifold. I'd like to know the dimension and cardinality of C(M,R), i.e. the space of smooth functions from M to R. I believe that the dimension is at least ℵ1, and maybe even ℵ2. I have no idea what the cardinality might be; although that isn't of paramount importance. Can anyone confirm, or deny, that the dimension is at least ℵ1, and maybe even ℵ2? If someone could supply a reference then that'd be great. Fly by Night (talk) 15:45, 14 June 2011 (UTC)[reply]

I may be being stupid, but isn't a disjoint union of as many open balls as you like a perfectly good smooth n-dimensional manifold which has as many such functions as you like? Or is there an implicit connectedness assumption somewhere in that? 128.232.241.211 (talk) 16:23, 14 June 2011 (UTC)[reply]
You're not being stupid at all, in fact you raise an interesting point. I guess you're right. I should have mentioned that it's connected. Although I'm not sure that it matters. You'd get a Cartesian product of the form

$C^\infty(M_1,\mathbf{R}) \times \cdots \times C^\infty(M_k,\mathbf{R})$

I'm not sure that that would change the dimension, because k⋅ℵ1 = ℵ1. Or at least I think so… Fly by Night (talk) 16:42, 14 June 2011 (UTC)[reply]
I think (part of) 128's point was that k need not be finite. AndrewWTaylor (talk) 17:04, 14 June 2011 (UTC)[reply]
I was just thinking of a nice hypersurface like we use in differential geometry all the time. Fly by Night (talk) 17:37, 14 June 2011 (UTC)[reply]
Okay, so that's at least σ-compact, which still gets you separable (unless I was wrong about compact manifold => separable). --Trovatore (talk) 18:22, 14 June 2011 (UTC)[reply]
For an abstract topological space, compact does not imply separable; for example the so-called Lexicographic order topology on the unit square. It's compact and connected, but it's not separable. As for a manifold, I'm not sure. It's usual to assume connectedness and paracompactness (that makes it a metrizable space). Fly by Night (talk) 18:47, 14 June 2011 (UTC)[reply]
It's actually pretty trivial; I just didn't see it right away. Given a compact manifold, each point has an open neighborhood that's homeomorphic to Rn, so that neighborhood has a countable dense subset. By compactness, finitely many such neighborhoods cover the whole manifold. --Trovatore (talk) 19:04, 14 June 2011 (UTC)[reply]
Good work. It's easy when you know how… Fly by Night (talk) 19:53, 14 June 2011 (UTC)[reply]
OK, first of all, I kind of suspect that you're using ℵ1 and ℵ2 to mean $2^{\aleph_0}$ and $2^{\aleph_1}$ respectively, is that true? That's a bad habit, unfortunately one encouraged by a lot of popularizations. The GCH is not some harmless little notational thing but rather a strong combinatorial assertion. You can write instead $\beth_1$ and $\beth_2$, although these are somewhat less universally recognized and suffer from the problem that $\aleph$ and $\beth$ look awfully similar.
Are we assuming M is compact? Then the cardinality of C(M,R) is $2^{\aleph_0}$. The lower bound should be pretty trivial; for the upper bound, note that M is separable (I think that follows from "compact manifold"), and pick a countable dense subset; then any continuous function from M to R is determined by its values on that subset. If you drop "compact" then you can find non-separable (topological) manifolds like the long line and things get a little more complicated, but if I recall correctly there's no way to make the long line into a smooth manifold. But again that's more complicated.
For the dimension, what sort of dimension are you interested in? Topological dimension? Dimension as a real vector space? --Trovatore (talk) 16:40, 14 June 2011 (UTC)[reply]
Ermm, errr. I wasn't assuming M to be compact; I didn't think that it would have mattered. But it obviously does. I'm thinking of it as a ring, with point-wise addition and multiplication as the operations. So (I guess) that the vector space dimension would be of more useful for me. You obviously know your stuff, so go easy. Try not the fry my mind. Fly by Night (talk) 16:59, 14 June 2011 (UTC)[reply]

Assuming M to be connected and paracompact, $C^\infty(M)$, equipped with the topology of uniform convergence of all derivatives on compact subsets, is a separable Frechet space. That's probably more useful to know than the linear dimension (which is $2^{\aleph_0}$). Sławomir Biały (talk) 17:55, 14 June 2011 (UTC)[reply]

Is there an example of a connected infinitely differentiable manifold that's not separable? Did I remember correctly that there's no infinitely differentiable structure on the long line? --Trovatore (talk) 18:24, 14 June 2011 (UTC)[reply]
I think the Lexicographic order topology on the unit square gives a topology on a manifold (with boundary) that is compact and connected, but which is not separable. I'm not sure if that's what you were asking. Fly by Night (talk) 18:53, 14 June 2011 (UTC)[reply]
There is a smooth structure on the long line (according to our article long line). The lexicographic order topology is not a manifold with boundary. No neighborhood of a point (x,1) for 0<x<1 is homeomorphic to an interval. Sławomir Biały (talk) 19:21, 14 June 2011 (UTC)[reply]
Also, every connected paracompact manifold is separable. (Paracompactness and local compactness implies that each component is sigma compact, hence Lindeloef, hence separable.) Sławomir Biały (talk) 19:46, 14 June 2011 (UTC)[reply]
Okay, that makes sense. Would it just be a (topological) manifold then? Fly by Night (talk) 19:55, 14 June 2011 (UTC)[reply]
It's not any sort of manifold. Sławomir Biały (talk) 19:57, 14 June 2011 (UTC)[reply]
The article on topological manifold says that each point in the space must have a neighbourhood homeomorphic to an open subset of Rn. But to define a homeomorphism you need to know the topology in both the source and the target. What must the topology be on Rn to be able to talk about a homeomorphism? The unit square with the lexicographic order topology is compact, connected and Hausdorff. Fly by Night (talk) 21:20, 14 June 2011 (UTC)[reply]
It's $\mathbf{R}^n$ with the usual Euclidean topology. Sławomir Biały (talk) 23:57, 14 June 2011 (UTC)[reply]

Back to the Question

My question has blossomed into a very interesting topological discussion. But could we bring it back to the question at hand. What are the main conclusions? It seems that the space of smooth functions on a manifold M to R is a Fréchet space. Does that tell us anything about the dimension? Have we decided that the dimension is $2^{\aleph_0}$ (which may or may not be the same as ℵ1)? Fly by Night (talk) 21:23, 14 June 2011 (UTC)[reply]

Any infinite-dimensional separable Frechet space has dimension as a linear space equal to the continuum. However it might be more natural to think of the dimension as countable, since there is a dense subspace whose linear dimension is countable. This is in much the same way that the space $\ell^2$ has continuum dimension as a linear space, but its "Hilbert dimension" is countable. Sławomir Biały (talk) 23:57, 14 June 2011 (UTC)[reply]

June 15

What's the probability of getting twenty heads in a row if you flip a coin one million times?

Here's my reasoning: the twenty in a row could happen in the first twenty tosses, or between the second toss and the twenty first toss, and so on. So there are 10^6 - 19 ways to get twenty heads in a row. So the probability would be (10^6 - 19)/2^20 ~ 95%. Is this right? Incidentally, this isn't a homework question, just something that came up in a conversation. 65.92.5.252 (talk) 01:12, 15 June 2011 (UTC)[reply]

No, there are many more than $10^6 - 19$ ways to get twenty heads in a row. To see this, consider how many ways there are to have the first twenty tosses come up heads: the remaining tosses can come up in $2^{10^6-20}$ different ways! (Of course, some of these possibilities also include other runs of twenty heads in a row, besides the first twenty, so you'll need to correct for that overcounting; see Inclusion–exclusion principle.) Your denominator isn't right either—there are $2^{10^6}$ possible outcomes for the coin tosses, which is vastly larger than $2^{20}$. —Bkell (talk) 01:25, 15 June 2011 (UTC)[reply]
Alright...so how would you do it? 65.92.5.252 (talk) 01:55, 15 June 2011 (UTC)[reply]
This is actually a hard problem. The solution to it is described on this page from Wolfram Mathworld -- it involves some pretty advanced concepts. Looie496 (talk) 02:25, 15 June 2011 (UTC)[reply]
Well, as you say there are $10^6 - 19$ possible places for a run of 20. If we treat each of these possibilities as independent (they aren't, but let's pretend), you get $1 - \left(1 - 2^{-20}\right)^{10^6-19} \approx 61.5\%$. Of course, as mentioned, they aren't independent: having tosses 1 to 20 all come up heads increases the odds that tosses 2 to 21 all come up heads. So outcomes with multiple runs of 20 should be over-represented compared to what you would expect if they were all independent. Since the expected total number of runs of 20 is the same, independent or not, this means that the actual answer to your question should be less than 61.5%.--Antendren (talk) 02:31, 15 June 2011 (UTC)[reply]
Thanks. 65.92.5.252 (talk) 15:59, 15 June 2011 (UTC)[reply]

The average number of rows of twenty heads if you flip a coin twenty times is $2^{-20}$.

The average number of rows of twenty heads if you flip a coin a million times is $999981 \cdot 2^{-20} = 0.954$.

The probability that the number is equal to k is $P_k = e^{-0.954}\,0.954^k / k! = 0.385 \cdot 0.954^k / k!$

    k:    0      1      2      3       4       5
    P_k:  0.385  0.367  0.175  0.0557  0.0133  0.00254

The probability of not getting twenty heads in a row if you flip a coin one million times is $P_0 = 0.385$.

The probability of getting twenty heads in a row once or more, if you flip a coin one million times, is $1 - P_0 = 0.615$. Bo Jacoby (talk) 12:56, 15 June 2011 (UTC).[reply]

Cool, thanks. 65.92.5.252 (talk) 15:59, 15 June 2011 (UTC)[reply]
Um, wrong. The events are not independent (for example, 1-20 all being heads is not independent of 10-30 all being heads), so the calculation is not valid. There simply is no easy answer to this question; the calculation is genuinely hard. Looie496 (talk) 18:40, 15 June 2011 (UTC)[reply]
See our article Perfectionism (psychology). The proper way to criticize a calculation is to improve it. Bo Jacoby (talk) 19:26, 15 June 2011 (UTC).[reply]
Well, I already said above that the solution is explained on this page. Looie496 (talk) 21:08, 15 June 2011 (UTC)[reply]
Yes, and what is your improved calculation? Bo Jacoby (talk) 06:46, 16 June 2011 (UTC).[reply]

(1 000 000 - 19) * 0.5^20 = 95.37..% Cuddlyable3 (talk) 08:59, 16 June 2011 (UTC)[reply]

No, that's nowhere near the answer. A fairly reasonable estimate is got by looking for the chance of not having any; so the chance of having one at least would be about $1 - \left(1 - 1/2^{20}\right)^{1000000}$, i.e. about 1 − 1/e, or 1 − 0.37, or about 0.63. That's a very rough estimate and doesn't deal with the problem of overlap of intervals at all. Dmcq (talk) 11:14, 16 June 2011 (UTC)[reply]

Note: this page contains a very thorough explanation of the problem and how to solve it, by Mark Nelson. As it happens, he worked it out for the exact values given in the question here, and obtained an answer of approximately 37.9%. To get that number he had to set up a Java program that took about an hour to run. Looie496 (talk) 23:48, 16 June 2011 (UTC)[reply]
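The ~37.9% figure is also quick to cross-check without bignums by tracking the current run length as a small Markov chain (a sketch of that reformulation; the absorbing "run of 20 reached" state is simply dropped, so the lost probability mass is the answer):

    import numpy as np

    T = np.zeros((20, 20))        # state k = current run of heads, k = 0..19
    T[0, :] = 0.5                 # tails resets the run from any state
    for k in range(19):
        T[k + 1, k] = 0.5         # heads extends the run; the 20th head is absorbed
    v = np.zeros(20)
    v[0] = 1.0
    survive = np.linalg.matrix_power(T, 10 ** 6) @ v
    print(1.0 - survive.sum())    # ~0.3792539..., in a fraction of a second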

I think the answer can be computed exactly, but I haven't worked the details out yet. My approach would be to compute the number of coin configurations with one or more such rows using inclusion-exclusion. When you consider configurations with k rows, you deal with the overlap problem by ordering the rows you put in by hand from left to right and multiplying the result by k!. You then get the following expression for the probability:

This doesn't work, I do think however, that a derivation using generating functions should be straightforward.

Count Iblis (talk) 00:36, 17 June 2011 (UTC)[reply]

Very nice Looie496! Note that the formula from your link http://marknelson.us/2011/01/17/20-heads-in-a-row-what-are-the-odds/ is

$P_0 = F^{(20)}_{10^6+2} \big/ 2^{10^6}$

where $F^{(20)}_n$ is defined in http://mathworld.wolfram.com/Fibonaccin-StepNumber.html . The author then spends an hour of computer time to compute $0.3792\ldots$, but he forgets to subtract from one, so the result is P = 0.621. The elementary approximation P ≃ 0.615 above is not "wrong" but amazingly precise! Bo Jacoby (talk) 10:34, 17 June 2011 (UTC).[reply]
No he doesn't. Note that $F^{(20)}_{10^6+2}/2^{10^6} = 0.6207\ldots$ and $1 - 0.6207\ldots = 0.3792\ldots$. So $0.379\ldots$ is the correct value. "DIv" doesn't represent what you're assuming it does.--Antendren (talk) 12:11, 17 June 2011 (UTC)[reply]

If anyone cares, my friend ran a computer simulation and got around a 33% success rate. 65.92.5.252 (talk) 16:23, 17 June 2011 (UTC)[reply]

Despite the problem having been "solved", the issue is still that one should be able to use paper, pencil and a not so powerful calculator and be able to compute the answer to arbitrary precision. In this respect, this problem has not been solved. Also, no simple derivation has been given, so that's another issue with the "solution".

So, let's try to find the solution using generating functions. We can count strings of heads and tails by giving both a head and a tail a weight of x. We then don't have to constrain the length of the string, we just need to extract the coefficient of $x^{10^6}$ from the result. We can count all strings that don't contain rows of heads of 20 or more, by multiplying generating functions corresponding to r rows of heads of length 1 to 19 and tails of arbitrary length (zero to infinity at the start and end and 1 to infinity inbetween the r rows of heads) and summing over r.

For r = 0, we have

$\frac{1}{1-x}$

as we have only one row of arbitrary length of tails.

For r = 1, we have a row of tails of arbitrary length, a row of heads of length less than 20 and then another row of tails:

$\frac{1}{1-x}\cdot\frac{x-x^{20}}{1-x}\cdot\frac{1}{1-x}$
For r > 1, we have in addition to the two rows of tails of length zero or larger at the start and the end, r-1 rows of tails of length 1 or larger inbetween the rows of heads:

$\frac{1}{(1-x)^2}\left(\frac{x-x^{20}}{1-x}\right)^{r}\left(\frac{x}{1-x}\right)^{r-1}$
The generating function is thus:

$f(x) = \frac{1}{1-x} + \sum_{r=1}^{\infty}\frac{1}{(1-x)^2}\left(\frac{x-x^{20}}{1-x}\right)^{r}\left(\frac{x}{1-x}\right)^{r-1} = \frac{1-x^{20}}{1-2x+x^{21}}$
Then to extract the coefficient of $x^{10^6}$ is easy: just find the roots of the denominator, and expand the function in partial fractions. The root closest to the origin yields the dominant contribution. So, you can just approximate the roots numerically and then use simple calculus to estimate the coefficient of $x^{10^6}$. I found an answer close to that given above, but I made some approximations that I need to check. Count Iblis (talk) 18:20, 17 June 2011 (UTC)[reply]

$x \approx \tfrac12$ is a root in the polynomial $1-2x+x^{21}$ as $1-2\cdot\tfrac12+\left(\tfrac12\right)^{21} = 2^{-21} \approx 0$. No other root is closer to the origin according to http://www.wolframalpha.com/input/?i=1-2x%2Bx^21 . Both numerator and denominator in $f(x)$ are divisible by $1-x$. Bo Jacoby (talk) 23:27, 17 June 2011 (UTC).[reply]
Ok, then what I have below should be correct. The answer depends sensitively on how far this root is removed from 1/2. Count Iblis (talk) 23:57, 17 June 2011 (UTC)[reply]

I only get the first 5 digits of the answer from Looie's link ( 0.379253961388950068663971868...) correct, so perhaps I'm making some stupid mistake  :( . This is what I did. The coefficient of x^N of a function f(x) can be written as:

$a_N = \frac{1}{2\pi i}\oint \frac{f(z)}{z^{N+1}}\,\mathrm{d}z$
Where the integration contour encircles only the pole at z = 0 counter clockwise. The point of this exercise is to avoid having to expand $f(x)$ to large orders. The residue at zero is minus the sum of the residues at all the other poles (the contour integral over a circle with radius R obviously tends to zero if R is sent to infinity, so the sum of all residues is zero). Since those other poles are simple poles, calculating these is a piece of cake. We can also do this via a change of variables z --> 1/z. Then the integral becomes:

$a_N = \frac{1}{2\pi i}\oint z^{N-1} f(1/z)\,\mathrm{d}z$
The pole at zero of f(z) is now at infinity and all the other poles are now inside the contour. In our case, we have:

$f(1/z) = \frac{z^{21}-z}{z^{21}-2z^{20}+1}$
If there is a simple pole at $z = \lambda$, then that makes a contribution to the probability (which is the coefficient divided by $2^N$) of:

$\left(\frac{\lambda}{2}\right)^{N}\frac{\lambda^{20}-1}{21\lambda^{20}-40\lambda^{19}}$
For N = 10^6, it is obvious that we only need to consider poles at points with a modulus close to 2. Now, there is a pole at:

$\lambda = 2 - 9.5368\ldots\times 10^{-7}$
And assuming that this is the only one with a modulus close to 2, I find using my calculator (by using the log(1+x) function for numbers close to 1, the (exp(x) -1) function for small x whenever necessary to prevent loss of significant digits) that one minus the probability of no rows with 20 or more heads is 0.379251854769, which is not the correct answer. Perhaps I'm missing a pole... Count Iblis (talk) 23:50, 17 June 2011 (UTC)[reply]

Using Mathematica I do find the correct answer; it turns out that the root of the denominator was not determined accurately enough with my calculator (only the first few digits of the deviation from 2 were accurate, so I guess I wasn't careful enough using my calculator when switching to 2 + x to avoid loss of significant digits; there was some spillover to the deviation from 2).

So, the first 3 hundred thousand digits or so of the probability are given by:

$P = 1 - \left(\frac{\lambda}{2}\right)^{10^6}\frac{\lambda^{20}-1}{21\lambda^{20}-40\lambda^{19}}$

where $\lambda$ is the zero closest to 2 of the polynomial:

$z^{21}-2z^{20}+1$
And I think one can get 12 digits for the probability using a simple calculator by computing the root with some care. Count Iblis (talk) 01:07, 18 June 2011 (UTC)[reply]

I've now verified that I can get 10 significant digits correct doing all computations including the root finding process using my antique HP-28s calculator. Also, the probability function can be simplified using the equation for lambda to:

$P = 1 - \left(\frac{\lambda}{2}\right)^{10^6}\frac{\lambda(\lambda-1)}{21\lambda-40}$
To find the root, you can write z = 2 + u; the equation becomes:

$u\,(2+u)^{20} = -1$
And then you can use that $(1+x)^N = \exp\left[N \log(1+x)\right]$, so

$(2+u)^{20} = 2^{20}\exp\left[20\log\left(1+\frac{u}{2}\right)\right]$
So, we can then evaluate this using the log(1+x) function and the exp(x) − 1 function (which are easy to program in case your calculator doesn't have them). The root finding function (which can also easily be done by hand using Newton-Raphson) gives $u = -9.53683411446\times 10^{-7}$, which then yields the final answer to 10 digits accuracy, so I lost 2 significant digits in the whole process. I computed $(\lambda/2)^{10^6}$ also using the Log(1+x) function by writing it as $\exp\left[10^6 \log(1 + u/2)\right]$.

Count Iblis (talk) 02:36, 18 June 2011 (UTC)[reply]
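For reference, the whole root-plus-residue recipe fits in a few lines of Python with mpmath; the simplified factor $\lambda(\lambda-1)/(21\lambda-40)$ is the one reconstructed above, so treat this as a sketch rather than the poster's exact computation:

    from mpmath import mp, mpf, findroot

    mp.dps = 40
    u = findroot(lambda u: u * (2 + u) ** 20 + 1, mpf('-1e-6'))
    lam = 2 + u                         # zero of z^21 - 2 z^20 + 1 near 2
    P0 = (lam / 2) ** (10 ** 6) * lam * (lam - 1) / (21 * lam - 40)
    print(u, 1 - P0)                    # u ~ -9.53683411446e-7, P ~ 0.379253961...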

90!

Greetings, learned amigos (: I took a recreational maths test once that had a question: what are the last two nonzero digits of 90!, in order from the left? (That is, if 90! were equal to 121, they would be 21.) I did not get to this question, but looking back, how would I have solved it? WA gives a solution (12) but even simple calculators were not allowed, let alone Alpha. I suppose I could have written 90!'s prime factorization out, crossed out 2 and 5 in pairs, and then multiplied all the remaining numbers mod 10, but is there a faster and less tedious way to do this? (Remembering that all I had at my disposal were literally a pen and paper.) Cheers. 72.128.95.0 (talk) 03:36, 15 June 2011 (UTC)[reply]

De Polignac's formula may or may not be helpful-Shahab (talk) 05:53, 15 June 2011 (UTC)[reply]
Since we are only interested in the two least-significant digits that are non-zero, we can reduce the calculations to products of 2-digit numbers. Implement the following algorithm:
      t = 1
      FOR i = 2 TO 90
         t = t * i
         IF (t MOD 10) = 0 THEN t = t / 10
         DO WHILE (t MOD 10) = 0 : t = t / 10 : LOOP
         t = t MOD 100
      NEXT
which though tedious is doable by hand and gives the answer 52 (not 12). Is this right? -- SGBailey (talk) 11:37, 15 June 2011 (UTC)[reply]
That algorithm fails at i=25, 50, 75. You need to repeat t=t/10 while (t mod 10) = 0 still holds. Algebraist 12:25, 15 June 2011 (UTC)[reply]
Algorithm repaired a la Algebraist and this gives 12. -- SGBailey (talk) 13:01, 15 June 2011 (UTC)[reply]
From an IPython shell using scipy I get
In [1]: factorial(90, exact=1)
Out[1]: 14857159644817614973095227336208257378855699612846887669422168637049
85393094065876545992131370884059645617234469978112000000000000000000000L
So, it is 12. --Slaunger (talk) 12:01, 15 June 2011 (UTC)[reply]
I gather you wanted a way to do it by hand without a program. In that case, I would have noticed that there are a lot more powers of 2 than 5 in 90! and hence even after removing all the trailing zeros, the number will still be divisible by 4. Hence, we only need to work out the result mod 25. I would then write out every number with those divisible by 5 having their factors of 5 removed, and then write this list out again taken mod 25. Then I would cancel out those which are multiplicative inverses (so 2 and 13 would cancel each other out, 3 and 17, etc.; this might take a while to work out but can be done), and finally from what is left it should be easy enough to calculate the product by hand mod 25. Then multiply by 4 and you're done. --AMorris (talk)(contribs) 14:24, 15 June 2011 (UTC)[reply]
Don't multiply by 4 at the end. Just see which of the 25n+k is divisible by 4, where k is the result modulo 25. And by the way you can remove powers of 20 when calculating modulo 25. Dmcq (talk) 00:11, 16 June 2011 (UTC)[reply]
Just had another think of this and if I was doing it by hand I wouldn't split into prime factors as suggested above. Instead I'd work base 25 as mentioned before and use Gauss's generalization of Wilson's theorem: the product of the numbers below 25 prime to 25 is −1 modulo 25. Applied three times up to 75 this can remove a whole pile of numbers quickly, and then just remove powers of 5 and a corresponding number of powers of 2 from the remainder and multiply what remains modulo 25 - that might be an appropriate time to split into factors. Dmcq (talk) 08:52, 16 June 2011 (UTC)[reply]
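For completeness, the brute-force check is a one-liner in Python, thanks to exact integers:

    from math import factorial

    digits = str(factorial(90)).rstrip('0')   # drop the trailing zeros
    print(digits[-2:])                        # '12'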

Riemann integral of a circle

Is there a way to finish this sum of rectangles? For a semicircle of radius r cut into n slices, the area of the kth slice is:

$A_k = \frac{r}{n}\sqrt{r^2 - \left(\frac{kr}{n}\right)^2}$

The sum of all of them is:

$\sum_{k=1}^{n}\frac{r}{n}\sqrt{r^2 - \frac{k^2 r^2}{n^2}} = \frac{r^2}{n}\sum_{k=1}^{n}\sqrt{1 - \frac{k^2}{n^2}}$

And extending to infinity:

$\lim_{n\to\infty}\frac{r^2}{n}\sum_{k=1}^{n}\sqrt{1 - \frac{k^2}{n^2}}$

Somehow this has to equal $\frac{\pi r^2}{4}$ (these slices cover half the semicircle), which means

$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\sqrt{1 - \frac{k^2}{n^2}} = \frac{\pi}{4}$

Is it possible to show that this is true? This isn't for homework, just curiosity. I've been able to do this with a parabola, cone, sphere, just about everything but a circle, so it seems weird to me that it's so difficult. KyuubiSeal (talk) 03:53, 15 June 2011 (UTC)[reply]

I haven't taken a look at the math, but there's an easier way to show that the area of a circle is πr^2: http://en.wikipedia.org/wiki/Area_of_a_disk#Onion_proof. 65.92.5.252 (talk) 03:58, 15 June 2011 (UTC)[reply]
Yeah, plus there's the 'chop into sectors and rearrange into almost-parallelogram' method. Also, the integral of the circumference is area? That looks like it works for spheres too. Weird... — Preceding unsigned comment added by KyuubiSeal (talkcontribs) 04:09, 15 June 2011 (UTC)[reply]

[wrong answer removed] Ignore me, I confused your sum for an integral ... in my defence it was about midnight when I posted that. 72.128.95.0 (talk) 15:11, 15 June 2011 (UTC)[reply]

Well, if you divided a circle like an onion, each segment would have a width dr. The area between r and r + dr is approximately (circumference at that point) * (width) = 2πrdr. Then, you can just integrate. (To the experts: please don't smite me for ignoring rigor). 65.92.5.252 (talk) 04:16, 15 June 2011 (UTC)[reply]

Maybe this has something to do with the Basel problem? That does create a pi from a sum, but I don't know how I would apply it. Or if it's even relevant. — Preceding unsigned comment added by KyuubiSeal (talkcontribs) 17:12, 15 June 2011 (UTC)[reply]

This site has the same sum. I agree that it would be nice to know if you can show it sums to using another technique. To me the series looks quite elegant, and clearly isn't difficult to derive, so I'm surprised it doesn't seem to be on the long list of Pi formulas at mathworld. Perhaps it is somehow trivially equivalent to some other series? 81.98.38.48 (talk) 21:13, 15 June 2011 (UTC)[reply]

This is the Riemann sum for

$\int_0^1 \sqrt{1-x^2}\,\mathrm{d}x = \frac{\pi}{4}$

which is not an easy integral to evaluate. I know of the methods which involve integration by substitution of $x = \sin t$ or $x = \cos t$ or $x = \operatorname{sech}(t)$ with use of hyperbolic or trigonometric identities. (Igny (talk) 02:20, 16 June 2011 (UTC))[reply]

It should be possible to compare to a polygonal approximation of the arc length of the part of the unit circle in the first quadrant, and so prove that in the limit this Riemann sum tends to the arc length. (I think this only involves elementary inequalities.) Then, using the geometric definition of π as half the circumference of the unit circle, this should then show

$\lim_{n\to\infty}\sum_{k=1}^{n}\sqrt{\frac{1}{n^2}+\left(\sqrt{1-\frac{k^2}{n^2}}-\sqrt{1-\frac{(k-1)^2}{n^2}}\right)^2} = \frac{\pi}{2}$

which can then be used to evaluate the aforementioned limit. I've not really tried to write out the details of this, though. Sławomir Biały (talk) 14:14, 16 June 2011 (UTC)[reply]
Polygonal approximation? Like a circumscribed polygon? I still don't know how to get that. KyuubiSeal (talk) 20:31, 17 June 2011 (UTC)[reply]
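Numerically, the (reconstructed) sum heads to π/4 quite visibly; a quick check:

    from math import sqrt, pi

    for n in (100, 10_000, 1_000_000):
        s = sum(sqrt(1.0 - (k / n) ** 2) for k in range(1, n + 1)) / n
        print(n, s, pi / 4)        # converges to 0.7853981...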

Highest number

Suppose there are N balls in a bin, each distinctly labeled with a number from 1 to N. I draw D balls from the bin, and note the highest numbered ball I get, which I call S. Is there a way to calculate the probability distribution of S? I wouldn't mind a general answer, but in the problem I'm dealing with, D<<N. 65.92.5.252 (talk) 17:07, 15 June 2011 (UTC)[reply]

There are $\binom{S-1}{D-1}$ different ways that a highest number S can arise. So the distribution is

$P(S) = \frac{\binom{S-1}{D-1}}{\binom{N}{D}}$

--Sławomir Biały (talk) 18:49, 15 June 2011 (UTC)[reply]
Sorry, do you mind explaining how you got $\binom{S-1}{D-1}$? 65.92.5.252 (talk) 20:12, 15 June 2011 (UTC)[reply]

For a fixed S, the remaining balls are drawn from the set {1,...,S-1}. Sławomir Biały (talk) 20:53, 15 June 2011 (UTC)[reply]

Gotcha, thanks. 65.92.5.252 (talk) 09:39, 16 June 2011 (UTC)[reply]
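The formula is easy to test by simulation (a sketch; N, D, and the spot-checked value of S are arbitrary choices):

    import random
    from math import comb

    N, D, trials = 1000, 5, 200_000
    counts = {}
    for _ in range(trials):
        s = max(random.sample(range(1, N + 1), D))
        counts[s] = counts.get(s, 0) + 1

    s = 900
    print(counts.get(s, 0) / trials)            # empirical P(S = 900)
    print(comb(s - 1, D - 1) / comb(N, D))      # formula above, ~0.0033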

If Visible Light Represented by People/Person, what's the Ratio

Just a thought, if the ELECTROMAGNETIC SPECTRUM was represented by population/people, how many people would see the Light? --i am the kwisatz haderach (talk) 17:41, 15 June 2011 (UTC)[reply]

I think it would just be (highest wavelength of visible - lowest of visible) / (highest of spectrum - lowest of spectrum), then multiply by earths population. KyuubiSeal (talk) 18:19, 15 June 2011 (UTC)[reply]

The Universe, in an episode on nebulas, says visible light is represented by only 1 inch on an electromagnetic scale of over 2000 miles. According to this show, I would take 63,360 inches/1 mile × 2000 to get my 126.72 million inches. With 1/126,720,000, w/ world's pop at 6.92 bil, I take 6.92 billion and divide by 126.72 million to get about 55 people seeing the light. --i am the kwisatz haderach (talk) 20:48, 15 June 2011 (UTC)[reply]

June 16

Geometry problem

Hello all, it's me from yesterday. This is the other question that I did not answer (pretty good, considering this was a 25 question test). Unlike the 90! one, I tried on this one but did not get it and so refrained from answering (guessing is penalized). It goes: Let R be a square region and n>3 an integer. A point P inside R is called n-ray partitional if there are n rays emanating from P dividing R into n triangles of equal area (for example, there is one 4-ray partitional point at the center of the square). How many points are 100-ray partitional but not 60-ray partitional? I didn't even know where to begin on this one- can someone point me right? Thanks. 72.128.95.0 (talk) 00:48, 16 June 2011 (UTC)[reply]

Suppose wlog that R is the unit square. Orient R so that the sides are parallel to the coordinate axes and the lower left hand point is at the origin. Call the point (x,y). The rays from the point to the vertices of the square divide the square into four triangles of areas $\frac{x}{2}$, $\frac{y}{2}$, $\frac{1-x}{2}$, and $\frac{1-y}{2}$. Now, (x,y) is a (2n)-ray point if and only if $\frac{y}{2} = \frac{p}{2n}$ and $\frac{x}{2} = \frac{q}{2n}$ for some integers $p, q$ (since these two triangles need to be divisible into equal pieces of area $\frac{1}{2n}$). Sławomir Biały (talk) 02:38, 16 June 2011 (UTC)[reply]
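Applying that criterion mechanically reproduces the count asked for (a sketch; it assumes the characterization above, i.e. a point is 2n-ray partitional exactly when x and y are multiples of 1/n strictly inside the square):

    from fractions import Fraction

    def partitional_points(two_n):
        n = two_n // 2
        return {(Fraction(j, n), Fraction(k, n))
                for j in range(1, n) for k in range(1, n)}

    p100 = partitional_points(100)      # 49*49 = 2401 points
    p60 = partitional_points(60)        # 29*29 = 841 points
    print(len(p100 - p60))              # 2320 are 100-ray but not 60-ray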

Angle bisectors of a triangle

prove that the angle bisectors of a triangle is concurrent. — Preceding unsigned comment added by Qwshubham (talkcontribs) 07:55, 16 June 2011 (UTC)[reply]

You need to put in a title otherwise it looks like it follows from the previous discussion. Have you had a look at triangle? Dmcq (talk) 08:42, 16 June 2011 (UTC)[reply]

Spot the error?

$0 = 0 + 0 + 0 + \cdots$

$= (1-1) + (1-1) + (1-1) + \cdots$

$= 1 - 1 + 1 - 1 + 1 - 1 + \cdots$

$= 1 + (-1+1) + (-1+1) + \cdots$

$= 1 + 0 + 0 + \cdots$

$= 1$

Widener (talk) 13:29, 16 June 2011 (UTC)[reply]

The third and fourth equals signs are not justified. Indeed, the series is divergent. What this fallacious argument illustrates is that the summation of infinite series is not associative: one can't "insert and remove parentheses" into series in general. There is a discussion of this in Tom Apostol's textbook "Mathematical analysis", for instance. Sławomir Biały (talk) 13:47, 16 June 2011 (UTC)[reply]
See Grandi's series. Gandalf61 (talk) 13:52, 16 June 2011 (UTC)[reply]
…or the Eilenberg–Mazur swindle. Fly by Night (talk) 14:58, 16 June 2011 (UTC)[reply]
There is an extra −1 at infinity plus one that you missed out. Sorry just joking ;-) Dmcq (talk) 17:28, 16 June 2011 (UTC)[reply]
See also section 3 of this article. Count Iblis (talk) 20:43, 16 June 2011 (UTC)[reply]
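A two-line illustration of why those regroupings fail: the partial sums of Grandi's series never settle down, so there is no sum for the regrouping to preserve:

    from itertools import accumulate

    terms = [(-1) ** k for k in range(10)]      # 1 - 1 + 1 - 1 + ...
    print(list(accumulate(terms)))              # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]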

How wet will the windshield get?

Let us imagine a car which has to travel a set distance, say 100 meters, between two garages during a rainstorm. The rain is falling horizontally at a constant even rate for the entire period of the trip, say one liter per cubic meter per second. The car's windshield is a flat rectangular surface of, say, two square meters, at some angle, Θ between 0 and 90 degrees normal to the rainfall. The car is travelling at some speed v on a perfectly straight even course between the garages. Of what variables in what degree is the amount of rain that will hit the windshield (during a trip between the garages at any finite positive speed with the windshield at a given angle) a function? Can you provide a formula? Thanks. μηδείς (talk) 03:18, 17 June 2011 (UTC)[reply]

"falling horizontally", do you mean falling vertically? "one liter per cubic meter per second", do you mean one liter per square meter per second, or one liter per cubic meter ? If each raindrop falls u meter downwards per second, and the car moves v meter forwards per second, and there is ρ kilograms of water in each cubic meter of air, then there will fall ρ·u kilograms water per second per square meter horizontal surface, and ρ·v kilograms of water per second per square meter vertical surface. The car moves s meters from one garage to the other in t = s/v seconds, so the horizontal surface receives ρ·u·t = ρ·s·u/v kilograms of water per square meter, and the vertical surface receives ρ·v·t = ρ·s kilograms of water per square meter. The surface area of the windshield is A square meters, and the angle of the windshield against the vertical is Θ, so the horizontal area is A·sin(Θ) and the vertical area is A·cos(Θ). So the total amount of water falling on the windshield is
ρ·s·A·(u·sin(Θ)/v+cos(Θ))
kilograms. Bo Jacoby (talk) 09:14, 17 June 2011 (UTC).[reply]
This sounds like the perennial problem of what speed should I walk at or run through the rain, see [1] for instance for starters or just search for something like "walking in the rain maths". If you walk long enough you'll get so wet that it'll become impossible to get any wetter so at some stage you almost certainly get drier, now that's the sweet spot ;-) Dmcq (talk) 14:47, 17 June 2011 (UTC)[reply]
It occurs to me I should have defined the velocity of the vertically falling rain, rather than just giving the density per second, which leaves the falling velocity undefined. Let's say there is a liter of drops per cubic meter and they are falling straight downward at 10 meters per second. I suspect the angle of the windshield makes the question a little more complicated than the question as to whether a person should walk or run in the rain, and Bo Jacoby's equation strikes me as taking into account the relevance of the angle of the windshield, but I'll have to wait till I have time to sit down and visualize it. μηδείς (talk) 15:57, 17 June 2011 (UTC)[reply]
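Plugging the thread's numbers into Bo Jacoby's formula (assumed here: ρ = 1 kg of airborne water per m³, u = 10 m/s fall speed, s = 100 m, A = 2 m², and an arbitrary 45° windshield angle) shows how the total falls with speed:

    from math import sin, cos, radians

    rho, u, s, A = 1.0, 10.0, 100.0, 2.0
    theta = radians(45.0)                  # windshield angle from vertical
    for v in (1.0, 5.0, 20.0, 50.0):       # car speed, m/s
        water = rho * s * A * (u * sin(theta) / v + cos(theta))
        print(v, round(water, 1))          # kg of rain on the windshield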

June 17

Dotted variables

I came across some formulae I want to understand, which contain elements with single or double dots or horizontal lines above them. What are those? DirkvdM (talk) 06:21, 17 June 2011 (UTC)[reply]

They often denote the first and second derivatives of the variables, respectively, especially in the physical sciences (kinematics, dynamics, etc.). —Anonymous DissidentTalk 06:55, 17 June 2011 (UTC)[reply]
Ah, that gave me somewhere to look. Thanks. The dots are Newton's notation for derivatives.
That still leaves the lines. For example, Newton's second law (F=ma) is written as F = d/dt (mU), where the F and U have lines over them. U stands for speed here, and acceleration is change in speed, so that makes sense to me. But what do the lines mean? DirkvdM (talk) 08:42, 17 June 2011 (UTC)[reply]
Lines above a variable (often called bars) often indicate that the variable represents a vector. Force and velocity (not speed) have both magnitude and direction, so they are vectors.--Antendren (talk) 08:55, 17 June 2011 (UTC)[reply]
Ok, thanks! Now I can start looking things up and brush up on my secondary school maths (too long ago - I forgot). Oh, and that article says that speed is the magnitude of velocity, so I also get that remark.
The differentiation stuff is slowly starting to come back to me now. But I remember never really getting the vector-stuff. As I recall, I could 'do the sums', but without understanding what I was doing. How should I read $\bar{F} = \frac{\mathrm{d}}{\mathrm{d}t}(m\bar{U})$? "The direction of the force is the change in the direction of the impulse"? That doesn't make any sense to me. DirkvdM (talk) 10:08, 17 June 2011 (UTC)[reply]
The vector notation is just shorthand for each of the three directions, e.g., $\bar{F} = (F_x, F_y, F_z)$. Doing the differentiation does not mix the components (there are other vector operators, which do this, like the cross product and the divergence). Thus, Newton's second law in the form stated is equivalent to

$F_x = \frac{\mathrm{d}}{\mathrm{d}t}(mU_x), \quad F_y = \frac{\mathrm{d}}{\mathrm{d}t}(mU_y), \quad F_z = \frac{\mathrm{d}}{\mathrm{d}t}(mU_z)$

So the x-component of the resulting force is the time derivative of the x-component of the momentum, and so on. --Slaunger (talk) 10:32, 17 June 2011 (UTC)[reply]
Vectors are not just direction, they have both a magnitude and a direction. While you could talk about how the derivative of a vector affects its magnitude and direction separately (the parallel component changes magnitude, the orthogonal component changes direction), it's easiest to consider differentiation in standard components as Slaunger describes.
Philosophically speaking, vector notation is not "just shorthand" for components. Components are just an easy way to deal with them. -- Meni Rosenfeld (talk) 13:31, 17 June 2011 (UTC)[reply]
Be careful using the subscript letters, e.g. Fx, etc. This notation is often used to denote the partial derivative with respect to x, i.e. Fx = ∂F/∂x. A more common notation, at least in mathematics, is to use subscript numbers, so v = (v1,v2,…,vn) where v is a vector with n−components, and vk is the k-th component of v. Also, a dot is quite often reserved for differentiation with respect to time. Fly by Night (talk) 13:46, 17 June 2011 (UTC)[reply]

June 18

Hi, I don't know if this is the right place to ask, but some articles in computer science have (or had) some issues with people writing the characters for "union" (\cup), "intersection" (\cap), "is element of" (\in) and so on directly.

My questions: 1) how is LaTeX supported in Wikipedia? 2) is it supported in the "algorithm code text zone" like in this article DFA_minimization?

In the previously mentioned articles I needed to print it, and I didn't see any "is in set" character, so I changed them to "in", but I don't know if it's the standard way to do that here. And LaTeX would maybe be better.

thx 85.1.144.185 (talk) 10:02, 18 June 2011 (UTC)[reply]

mathematical characters and algorithms

Hi again, I'm the author of the previous untitled question.

Just adding this to make some title.

thanks