Wikipedia:Reference desk/Mathematics: Difference between revisions
: (ec) I'm not sure if this is what you want, but if I got it properly it is enough to mark any point P inside a parallelogram L, join P with lines to arbitrary three vertices of L, and then copy that in all parallelograms... --[[User:CiaPan|CiaPan]] ([[User talk:CiaPan|talk]]) 16:18, 20 December 2011 (UTC)
::That was my approach, except I require that the angles between each pair of lines meeting at P be <math>2\pi / 3</math>, so my hexagons are always regular. I was just finding it difficult to show that the (finite?) number of points P for which this held, all produced the same hexagonal lattice/tiling (up to isometries) when regluing. I suppose if I could do it for the bog-standard equal sided regular hexagon, then I could argue using something like shear transformations (is there a unique shear transformation mapping one parallelogram to another?) [[User:Icthyos|Icthyos]] ([[User talk:Icthyos|talk]]) 17:29, 20 December 2011 (UTC)
== ZFC being consistent if a proper sub-set of ZFC is consistent ==
Revision as of 17:29, 20 December 2011
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
December 13
Error with 1 − 2 + 3 − 4 + · · ·?
So, I was looking at the second manipulation process (which can be seen here). It goes like:
Now, I understand the manipulation, but the last part doesn't make sense. The article claims that 1−2+3−4... is equal to 1/4. But the last part of this equation essentially is:
Sorry for my non-alignment but my point remains. Am I missing something that makes this equation correct, or is there really something wrong with it? 64.229.180.189 (talk) 04:06, 13 December 2011 (UTC)
- Did you read articles Grandi's series and Summation of Grandi's series? They say that 1-1+1-1... is divergent in the strict sense of the word (hence, manipulations with terms make it possible to "compute" many different values of the "sum"), but the value of 1/2 is the most "natural" of these, in some (very vague) sense of the word. For example,
--Itinerant1 (talk) 05:54, 13 December 2011 (UTC)
- See analytic continuation for a less vague sense of the word "natural". Bo Jacoby (talk) 07:26, 13 December 2011 (UTC).
- 1−2+3−4+··· = <math>\textstyle\sum_i i(-1)^{i-1}</math> = f(1)
where
- <math>f(x) = \textstyle\sum_i i(-x)^{i-1} = 1-2x+3x^2-4x^3+\cdots</math>
But
- <math>\textstyle\sum_i i(-x)^{i-1} = \sum_i \tfrac{d\left((-x)^i\right)}{d(-x)} = -\tfrac{d}{dx}\left(\sum_i (-x)^i\right) = -\tfrac{d}{dx}\left((1+x)^{-1}\right) = (1+x)^{-2}.</math>
So
- <math>f(1) = (1+1)^{-2} = 1/4</math>
or
- 1−2+3−4+··· = 1/4
Q.E.D.
Note also for example that
- 1−4+12−32+80−192+··· = f(2) = 1/9
Bo Jacoby (talk) 10:35, 13 December 2011 (UTC).
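The argument above is easy to check numerically: partial sums of f(x) approach (1+x)^−2, which tends to 1/4 as x → 1. A minimal Python sketch (the 100,000-term cutoff is an arbitrary choice, taken large enough that the truncated tail is negligible for the x values used):

```python
def f(x, terms=100_000):
    """Partial sum of f(x) = sum_{i>=1} i * (-x)**(i-1)."""
    return sum(i * (-x) ** (i - 1) for i in range(1, terms + 1))

for x in (0.9, 0.99, 0.999):
    print(x, f(x), (1 + x) ** -2)  # the two values agree; both tend to 1/4
```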
Sample median uncertainty from normal distribution contaminated with symmetric outliers
I have n sample data points. A few percent of the samples are false outlier measurements, which may be spread more or less symmetrically up to 10 standard deviations away from the mean of the true underlying distribution, which can be assumed normal with unknown mean and standard deviation. For some sample sets I may be unlucky that the few outliers do not cancel each other when calculating the sample mean. Thus, the usual uncertainty of the sample mean for a normal distribution with unknown mean will be an underestimate of the true sample mean uncertainty, due to the presence of the outliers in the sample. I therefore reckon the sample median will be a more robust estimator of the mean of the distribution, but I would like to estimate the uncertainty of this sample median. How do I do that? --Slaunger (talk) 10:25, 13 December 2011 (UTC)
- In the absence of outliers and for large sets, the standard error of the median is <math>\sqrt{\pi/2} \approx 1.2533</math> times the standard error of the sample mean. This estimate should be valid as long as either the distribution of outliers is symmetric, or the expected asymmetry is small (where m is the number of outliers and p is the probability for an outlier to be above the underlying mean). --Itinerant1 (talk) 11:02, 13 December 2011 (UTC)
- Cool, thanks. Just what I needed. I tried to find this information in a Wikipedia article but did not manage to find it; maybe I just did not look in the right place? If it is not there, maybe it would be relevant to add it, including a source, as I think it is a rather useful quantity. --Slaunger (talk) 11:07, 13 December 2011 (UTC)
- This information isn't easy to find in Wikipedia, but is in Efficiency (statistics)#Example. Qwfp (talk) 17:02, 13 December 2011 (UTC)
- Thank you! That section was instructive reading. It also alludes to the fact that although the sample median is not as efficient an estimator for the mean, it is a sensible estimator to use when a pdf is only approximately equal to a normal distribution and has outliers. That section could use a source for the <math>\sqrt{\pi/2}</math> factor though, which I guess is not so trivial to derive. (I guess it involves writing out an expression for the log-likelihood of the median conditioned on N samples, and finding the curvature, or the Fisher information.) --Slaunger (talk) 07:36, 14 December 2011 (UTC)
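The <math>\sqrt{\pi/2}</math> factor is also easy to confirm by simulation. A minimal Monte Carlo sketch in Python (the sample size and replication count are arbitrary illustrative choices):

```python
import random
import statistics

def se_of(estimator, n=1001, reps=2000):
    """Standard error of an estimator over repeated N(0,1) samples of size n."""
    values = [estimator([random.gauss(0, 1) for _ in range(n)]) for _ in range(reps)]
    return statistics.stdev(values)

ratio = se_of(statistics.median) / se_of(statistics.mean)
print(ratio)  # close to sqrt(pi/2) ~ 1.2533 for large n
```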
Uncertainty of a sample fractile difference robust std dev estimate of a normal distribution contaminated by a few outliers
A related question. I also calculate the sample variance, but this is even more susceptible to the outliers. So instead I estimate the standard deviation by taking half the difference between the 15.9% and 84.1% sample fractiles (because for a normal distribution this converges to the standard deviation of the underlying distribution). However, this quantity is again uncertain due to the limited number of samples, and I would like to have an idea of the confidence I can put into the sample standard deviation found by this more robust fractile-difference approach. Now, if the pdf had been normal and I had used the sample variance, I know the variance would be distributed as a Pearson Type III distribution, and I know there would be a nice closed-form formula for the variance of the variance, cf. Sample Variance Distribution on Wolfram. My best guess for the variance of the fractile-difference-based estimate of the standard deviation would be that it is close to that form, but probably a little larger (as was the case for the sample mean vs. median uncertainty discussed above). Is there a closed-form expression that I can use, given N measurements? --Slaunger (talk) 13:40, 13 December 2011 (UTC)
- I've never been happy with removing outliers and would always be very careful about that. The ozone hole over Antarctica was ignored because the actual measurements had been discarded as outliers. More prosaically it seems in many cases that outliers are not errors and results would have been better if people had taken note of them. What's the problem with just accepting the distribution has fat tails? Dmcq (talk) 17:12, 13 December 2011 (UTC)
- I basically agree with your reluctance to ignore the outliers. In this case, however, I fully understand the origin of the outliers. They are the result of a process which will take place with low probability (relative to the normal data points). I even have a fairly good idea about the probability that a measurement will belong to this process, but I have no way of distinguishing them. My objective for using the method is that it is part of a negotiation process with a customer regarding the method to use for verifying that the precision of a quantity does not exceed a predefined limit. But this precision should only refer to the most frequently occurring "normal process" samples, and not to the other, unlikely process which produces the outliers. The customer has a good understanding of this as well. We also know that if we calculate the sample variance including all measurements, there will be a non-conformance. We know that the sample variance is completely dominated by a few outliers, and we also have a good mutual understanding that the sample variance is not a good measure of the actual precision: just looking at the sample pdf for some examples shows a clear and narrow peak, contaminated by a very broad, low-probability fat tail. We have cases where the probability for the anomalous process is very, very low, and from these we know that the "normal process" measurements are normally distributed. Thus it has been agreed with the customer that a more robust estimate for the precision, such as a fractile-difference approach or a sample mean deviation, is acceptable for verification purposes. Here we will be "punished" a little for the outliers, but they will not completely dominate the result. --Slaunger (talk) 07:20, 14 December 2011 (UTC)
- You might like to read Robust measures of scale, which suggests more efficient robust estimators. I guess the article it references (Rousseeuw and Croux JSTOR 2291267) must include expressions for their variances for the case of a Normal distribution (at least in the large-sample limit). It may be easier to use bootstrapping though and that would take account of your outliers, which presumably will increase the variance of any estimator of scale, even ones that are robust enough that the estimates themselves aren't influenced by the outliers. Qwfp (talk) 14:45, 14 December 2011 (UTC)
- Those were very helpful hints. Thanks. These things are really hard to find when you do not know the terms used to describe the methods. --Slaunger (talk) 21:17, 14 December 2011 (UTC)
- A colleague figured out that the Application: confidence intervals for quantiles section of the order statistic article deals with this problem. The Quantile article is also helpful. --Slaunger (talk) 11:38, 19 December 2011 (UTC)
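A minimal sketch of the bootstrap approach suggested above, applied to the fractile-difference scale estimate (the contamination model and sample sizes are illustrative assumptions, not the actual data from the thread):

```python
import random
import statistics

def robust_sigma(xs):
    """Half the spread between the 15.9% and 84.1% sample fractiles;
    for normal data this converges to the standard deviation."""
    ys = sorted(xs)
    q = lambda p: ys[round(p * (len(ys) - 1))]  # crude quantile, fine for a sketch
    return (q(0.841) - q(0.159)) / 2

# mostly N(0,1), contaminated with a few symmetric outliers up to 10 sigma out
data = ([random.gauss(0, 1) for _ in range(980)]
        + [random.choice((-1, 1)) * random.uniform(5, 10) for _ in range(20)])

# bootstrap: resample with replacement, re-estimate, take the spread
boot = [robust_sigma(random.choices(data, k=len(data))) for _ in range(1000)]
print(robust_sigma(data), statistics.stdev(boot))  # estimate and its uncertainty
```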
December 14
Summation beginner
Hi everyone. I'm extremely new to summation and I just had a question. Using <math>\textstyle\sum_{i=1}^{100} i</math>, how do we find that the answer is 5050 (as the Wikipedia article says)? Surely we don't have to add each of the numbers on the calculator to find the answer if summation is what is used to solve this. 64.229.180.189 (talk) 02:01, 14 December 2011 (UTC)
- See Carl_Friedrich_Gauss#Mythology for a famous story about this problem and a brief description of the solution. 98.248.42.252 (talk) 03:55, 14 December 2011 (UTC)
- There are two fairly intuitive ways to do it.
- You are summing 100 numbers, where the average number is 50.5 (equal to the average of the first and last numbers; in fact this is not trivial and is only guaranteed for arithmetic progressions, but I find it intuitive). The sum is therefore 100*50.5 = 5050.
- The sum <math>S = \textstyle\sum_{i=1}^{100} i</math> is equal to <math>\textstyle\sum_{i=1}^{100} (101-i)</math>, since in both cases you sum the integers from 1 to 100 (this is shown formally using a change of variables in the sum). You can add two sums over the same range element-by-element, so
- <math>2S = \textstyle\sum_{i=1}^{100} \bigl(i + (101-i)\bigr) = \sum_{i=1}^{100} 101 = 10100,</math> so <math>S = 5050</math>.
- Arithmetic progression deals with this more technically. -- Meni Rosenfeld (talk) 05:53, 14 December 2011 (UTC)
- While young Gauss's supposed method is quite elegant, our summation article specifically mentions the use of mathematical induction to prove that
- <math>\sum_{i=1}^{n} i = \frac{n(n+1)}{2},</math>
- and the proof is given in the latter article's Example section. Note that induction is typically used to prove that patterns observed in a few initial cases extend to the whole, not to discover those patterns in the first place. -- ToE 06:01, 14 December 2011 (UTC)
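For what it's worth, both routes are one line each in code; a trivial Python check of the closed form against brute-force addition:

```python
n = 100
brute = sum(range(1, n + 1))   # the 99 additions
closed = n * (n + 1) // 2      # Gauss's pairing: 50 pairs, each summing to 101
assert brute == closed == 5050
```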
- If <math>\textstyle\sum_{i=1}^{100} i</math> is hard, try a smaller sum for a start. Bo Jacoby (talk) 18:13, 14 December 2011 (UTC).
- By any chance, are you asking which buttons to press on a calculator (with stats functions) to give you the summation? StuRat (talk) 21:24, 14 December 2011 (UTC)
- Well, if there were a way to do that, that'd be great, but I was just thinking: what's the point of having this system of summation if we'd have to use a calculator to find the answer instead of using arithmetic steps? 64.229.180.189 (talk) 01:35, 15 December 2011 (UTC)
- I think you're confusing "notation" and "technique". "<math>\textstyle\sum_{i=1}^{100} i</math>" is notation. We write "the sum of all integers from 1 to 100" as "<math>\textstyle\sum_{i=1}^{100} i</math>" only because it's more compact and convenient. No calculating power is gained by doing this. The ability to compute 1 + 2 + 3 + ... + 100 without performing 99 additions is a technique. You can use the technique without using the "<math>\textstyle\sum</math>" notation, like young Gauss did in his head. Using the "<math>\textstyle\sum</math>" notation, that particular technique is written <math>\textstyle\sum_{i=1}^{100} i = \tfrac{100 \cdot 101}{2} = 5050</math>, but you could also describe the technique in English if you wanted. 98.248.42.252 (talk) 04:56, 15 December 2011 (UTC)
A relevant article is triangular number. Sławomir Biały (talk) 19:57, 15 December 2011 (UTC)
'Curved' objects
I'm trying to figure out how to generate some 3D geometry (namely railway track).
What I'd like to do is generate some curves, but I got stuck on how to generate appropriate vertex data.
There is a Diagram here -> File:Track curve.svg to which I will refer.
The particular 3D viewer I am using models railway track-types as 'blocks' based on a fixed distance (AB in diagram) along the nominal centre line of the track.
This distance AB can be considered as a chord of an arc through A and B as shown, which has a specific radius R and centre (shown as O); typically a 'curve' is represented as a pair of rails parallel to (AB), nominally separated by the track gauge, with further arcs representing the positions of the individual rails.
NB The diagram given is for a left-hand curve.
What I'd like to know is how to work out the four subscripted Z values (and the corresponding x offsets) for any given value of Z1, given R. I've tried a number of approaches and failed to get a formula that works consistently.
A worked solution for this would be much appreciated, as I was intending to generate the actual geometry using some kind of algorithmic approach based on steps w.r.t. z. Sfan00 IMG (talk) 16:04, 14 December 2011 (UTC)
- It's probably going to be easier to work in polar coordinates, and you'll get much better results if the curved sections get close to 180 degrees. First, find the angles of the endpoints.
- θ1 = atan2(Az,Ax), θ2 = atan2(Bz,Bx)
- Then choose some θs between θ1 and θ2 and evaluate
- z = R sin θ, x = R cos θ
- If you have to work in cartesian coordinates, use similar triangles to calculate ZRS and friends
- (ZRS+Z1)/(R+len(AJ)) = Az/R
- And the equation of the circle is just
- xc = sqrt(R^2 - z^2)
- 130.76.64.120 (talk) 19:06, 14 December 2011 (UTC)
- You'll have to go through this in idiot-level detail, as for the typical values I was using I get nonsensical results. Sfan00 IMG (talk) 19:32, 14 December 2011 (UTC)
- In the context of the sim, the point marked A is (0,0,0) if that helps Sfan00 IMG (talk) 19:39, 14 December 2011 (UTC)
- Ah, and here I had assumed that the origin was... at the origin. No matter. The thing is, it's hard to give you a detailed answer without knowing exactly what your inputs are. Do we know R? B? O? The slope of the tangent line at A? 130.76.64.116 (talk) 21:10, 14 December 2011 (UTC)
- Knowns: the length of AB (it's a constant in the simulator); R is known, as are H and J; AB is aligned with the Z axis, with A at (0,0) in this example.
(NB There's a different example for the diverging curve at switches, but I am trying to understand this example before I do anything more complex.)
I can see why you want to do the math in polar. I'm assuming that to get a more sensible answer I need to do a translation somewhere?
Sfan00 IMG (talk) 21:40, 14 December 2011 (UTC)
- Yes, a translation is what's needed. The z coordinate of O is len(AB)/2 by symmetry, and the x coordinate you can get from the Pythagorean theorem:
- Oz = len(AB)/2, Ox = sqrt(R^2 - (len(AB)/2)^2) (with the sign of Ox chosen for the direction of curvature)
- Then the angles involved are:
- θ1 = atan2(Az - Oz, Ax - Ox), θ2 = atan2(Bz - Oz, Bx - Ox)
- And the translated centerline points are:
- x = Ox + R cos θ, z = Oz + R sin θ, for θ between θ1 and θ2
- Replace R with len(OH) or len(OJ) to get the inner and outer track points. 130.76.64.116 (talk) 22:14, 14 December 2011 (UTC)
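A minimal Python sketch of this construction, assuming A = (0,0) and B = (0,L) in (x,z) coordinates; the gauge value, radii and point counts are illustrative assumptions:

```python
import math

def chord_arc(L, R):
    """Centre and angular sweep of the arc through chord AB,
    with A=(0,0) and B=(0,L); requires 2*R >= L."""
    oz = L / 2.0                             # z of centre O, by symmetry
    ox = math.sqrt(R * R - (L / 2.0) ** 2)   # x of O, by Pythagoras
    t1 = math.atan2(0 - oz, 0 - ox)          # angle of A as seen from O
    t2 = math.atan2(L - oz, 0 - ox)          # angle of B as seen from O
    # wrap the sweep into (-pi, pi] so we follow the minor arc, not the far side
    sweep = math.atan2(math.sin(t2 - t1), math.cos(t2 - t1))
    return ox, oz, t1, sweep

def arc(ox, oz, t1, sweep, r, n=20):
    """n+1 points on the arc of radius r about (ox, oz)."""
    return [(ox + r * math.cos(t1 + sweep * i / n),
             oz + r * math.sin(t1 + sweep * i / n)) for i in range(n + 1)]

ox, oz, t1, sweep = chord_arc(L=10.0, R=100.0)
gauge = 1.435
centreline = arc(ox, oz, t1, sweep, 100.0)
inner_rail = arc(ox, oz, t1, sweep, 100.0 - gauge / 2)  # len(OH)
outer_rail = arc(ox, oz, t1, sweep, 100.0 + gauge / 2)  # len(OJ)
```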
Thank You :) I now have enough information, I think, to build a general 'bend' function, unless it's already a library function :) Sfan00 IMG (talk) 22:27, 14 December 2011 (UTC)
A more generalised solution would involve having a function that converted a point in (x,z) relative to AB, to an equivalent (x,z) in the curve, but I think I should figure that out myself before asking here again Sfan00 IMG (talk) 22:56, 14 December 2011 (UTC)
- Hmm, looking at the diagram I've seen another possible method for getting the x offset for any z, using arccos to recover the angle. Sfan00 IMG (talk) 01:01, 15 December 2011 (UTC)
- Having tested it, the formula based on arccos doesn't work consistently, sorry. Sfan00 IMG (talk) 10:37, 15 December 2011 (UTC)
OK, having again tried a number of approaches, I am clearly not understanding something. So, given the diagram, A at x=0, z=0, with the +z axis along AB and the +x axis running from A vertically down the page, how do I map any general point (x,z) w.r.t. the chord to a point in a 'curved' space for the following cases:
- 'bend' - z distances parallel to chord AB get scaled.
- 'curve' - z distances parallel to chord AB are translated to 'arc' distances w.r.t. arc AB?
I'm sorry to sound like someone with no math ability at all :( Sfan00 IMG (talk) 11:09, 15 December 2011 (UTC)
- It also seems I can't code as I still fail to get consistent 'sane' values when converting the set of equations given by the IP to program code.
Sfan00 IMG (talk) 11:47, 15 December 2011 (UTC)
- Just noticed this discussion. If you want to do railway tracks well you should be aware of Track transition curve. They don't go directly from straight to curved because that would cause too much jerk (physics). Dmcq (talk) 12:08, 15 December 2011 (UTC)
- Yes, I am aware of that; this is about generating the 3D geometry. For the purposes of the simulation the 'transition' spiral is a set of chords of fixed length, and for each chord the radius is 'fixed', hence the discussion above.
If you know of a way of approximating a 'transition' spiral by means of fixed-length 'chords', feel free to put the explanation into the discussion :) Sfan00 IMG (talk) 12:37, 15 December 2011 (UTC)
- Bezier curves or some relatives are the way to go if you want to approximate 2D or 3D curves or surfaces. Dmcq (talk) 15:55, 15 December 2011 (UTC)
- That was going to be my next comment. From reading the relevant article the centre line of a railway track transition curve seems to be based on a Euler spiral, at which point my brain shut down :( Sfan00 IMG (talk) 16:53, 15 December 2011 (UTC)
- The poster has put some stuff on my talk page about what actually happens with railways where they do the approximation with curves with a given radius. I think I better check whether they've contacted some railway enthusiasts before looking deeper as they sound like the appropriate crowd if they want that level of detail! I deal more with spherical cows rather than the versions with four legs and udders ;-) Dmcq (talk) 17:07, 15 December 2011 (UTC)
Why do you want to base yourself on a chord? Doesn't your imaging package support arcs as a built-in primitive? I would expect that it does. Then you just input the parameters into it. If not, make a function to show arcs, and use the arcs internally. The two parallel arcs are very easy to get from the central one: the radii are + and - half a gauge, etc. Make yourself a polar(fromPoint, distance, atAngle) function and use it extensively. Your points can be {x,y} pairs. Forget deriving the ultimate math formula, just write a program instead, tracing the geometric derivation steps with a chain of calls to polar, collecting the points. If there are no built-in arcs, just generate a sequence of points along your arc and draw straight chords between them. You will need to assess the needed frequency with which to generate your points along the arc, according to the wanted precision (you don't need a shorter chord than the one where the height of a circular segment is less than half a pixel, say). To calculate any point on an arc, just call polar(centerPoint, _Radius, _O_A_angle + some_delta).
To show transitional curves, you'll need to calculate points along Euler spirals, preferably with precision-controllable density too, or you'd need an algorithm that approximates them with tangential circular arcs, and draw those. It's not something trivial that you could get just by asking for it here on RefDesk, I don't imagine.
Lastly, you mention 3D. Arcs are 2D. To show 3D arcs you could just trace them like a 3D polyline with a lot of {x,y,z} points. WillNess (talk) 20:14, 15 December 2011 (UTC)
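A minimal sketch of the polar-based approach just described, including the segment-height density rule (the sagitta tolerance and the example arc are illustrative assumptions):

```python
import math

def polar(from_pt, distance, at_angle):
    """Point at the given distance and angle from from_pt; points are (x, z) pairs."""
    return (from_pt[0] + distance * math.cos(at_angle),
            from_pt[1] + distance * math.sin(at_angle))

def max_step(R, sagitta):
    """Largest angular step whose chord deviates from an arc of radius R
    by at most `sagitta` (the height of the circular segment)."""
    return 2 * math.acos(1 - sagitta / R)

# e.g. a quarter arc, dense enough that each chord sags less than half a unit
centre, R, t0, t1 = (0.0, 0.0), 100.0, 0.0, math.pi / 2
n = max(1, math.ceil((t1 - t0) / max_step(R, 0.5)))
points = [polar(centre, R, t0 + (t1 - t0) * i / n) for i in range(n + 1)]
```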
- The chords are a limitation of the current model of trackwork used by the software simulation concerned (although the source code is available). The simulation concerned is Train Simulation Framework; part of the reason I was asking about this here was to try and get a basis on which appropriate generator functions in the software could be made.
- I'm trying to derive the 'formulae' so I can code the relevant generator functions consistently.
- I'm well aware that for 3D I'd need to consider the vertical component as well. Thankfully the 'vertical' curves for typical trackwork are so small as to be practically non-existent over the distances concerned here.
- I'm aware of tools like Templot (amongst others used professionally in engineering) which would do accurate track layout, but those are not 'free' software in a form which the authors of Train Simulation Framework would consider viable. I'm trying to do this the hard way because I don't know of 'free' tools that can do this at present.
- Sfan00 IMG (talk) 23:07, 15 December 2011 (UTC)
- Diagram No 2: File:TrackCurve Slide and Bend.svg - explaining what I mean by a 'slide' and 'bend'. I was trying to figure out how to generalise for Q->Q' and getting hopelessly confused in the process. I'm not even sure if what I've labelled is the correct point. :(
NB. In the attached diagram the point is not labelled, as it's obviously translated appropriately based on PQ.
I can also upload the GeoGebra workbook on this second diagram if someone wants to look at it in more depth. I'm sorry I am not seemingly capable of working out the hard math to do it all :( Sfan00 IMG (talk) 23:22, 15 December 2011 (UTC)
- First we need to understand the setting of the problem. In your 1. you say "chords are ... model of trackwork". Does that mean that you have linear diagrams of trackwork and want to translate them into the curved setting according to the true centerline? If so, the diagram probably refers to the running distance along the axis so your slide function is completely off-base, and each (s,o) point of distance/offset based on chord as axis' representation, should get translated into a point with same (s,o) according to the actual arc. Is this right? WillNess (talk) 23:32, 15 December 2011 (UTC)
- In the simulation, trackwork is approximated by a series of 'straight lines' of a fixed block length. By constructing 'nominal' arcs of the appropriate radius through the endpoints of these lines, a better 'graphical' appearance of the track can be generated for each block of track.
I am trying to get from the 'straight' line, namely AB, to the curved centre-line... A->B is along the track.
Your description sounds closer to a 'bend', which can be done by rotating Q about P by an appropriate amount and then translating; the latter has already been demonstrated in this thread. Meaning that to do a bend, the initial rotation needs a specific angle?
The 'slide' function ONLY modifies in 'x'; distances parallel to 'x' before the slide remain parallel to 'x' after the 'slide'.
I apologise if I am bad at explaining this.
BTW In writing this, it seems obvious my second diagram is incorrect. I'll upload an updated one that reflects the above explanation as a replacement soon. I'll also upload a 'visual diagram' that explains what happens with a slide and bend. Sfan00 IMG (talk) 00:30, 16 December 2011 (UTC)
- Explanation of 'slide' and 'bend' operations using a rectangle, File:Curve-Operations.svg apologies for the crude nature of this diagram. Sfan00 IMG (talk) 00:33, 16 December 2011 (UTC)
- Hmm. I think once 'slide' is worked out, that sorts out the rails.. Next problem would be placing the sleepers, which is thankfully not a 'math' question... Sfan00 IMG (talk) 00:50, 16 December 2011 (UTC)
- I will reply on my talk page because I don't think we're talking math anymore. :) WillNess (talk) 08:25, 16 December 2011 (UTC)
- I've now tried a number of approaches for both 'slide' and 'bend', none of which work consistently. Given the response, I am having to conclude I am not able to adequately explain (let alone attempt to code) what I am trying to do. This is unfortunate as it means I am back to relying on 'non-free' tools. Sfan00 IMG (talk) 20:27, 16 December 2011 (UTC)
- If anyone cares, I uploaded a modified diagram on Wikiversity - http://en.wikiversity.org/wiki/File:P_Slide_Solve.svg,
It's the translation for P->P`_SLIDE I'm trying to find. I already think I know how to do a 'bend', but given the response here I'm not inspired to continue. Sfan00 IMG (talk) 21:35, 16 December 2011 (UTC)
1917 arithmetic problem...
I'm not understanding. The question is:
- What will be the cost of lining the sides and bottom of a tank 8ft. 8in. long, 4ft. wide and 5ft. deep, with zinc weighing .5pound to the square foot, at $0.12 a pound?
I'm getting $232.32, which seems off. Best, Albacore (talk) 23:50, 14 December 2011 (UTC)
- I get US$ 9.68 (161+1/3 square feet at 6 ct per square foot). I assume labour is free ;-). --Stephan Schulz (talk) 00:05, 15 December 2011 (UTC)
- I agree with Stephan. Perhaps you should show us your calculations, Albacore. Widener (talk) 01:08, 15 December 2011 (UTC)
- How did you guys conclude it's in US dollars? -- Jack of Oz [your turn] 04:20, 15 December 2011 (UTC)
- Political incorrectness. I assumed someone who cannot solve this problem has to come from Texas. Since Texas has banned import of foreign books since 1903, chances are that the problem is from a US textbook. ;-) --Stephan Schulz (talk) 07:52, 15 December 2011 (UTC)
- Wow, what an amazingly rude thing to say...Phoenixia1177 (talk) 11:55, 15 December 2011 (UTC)
- Did they also ban reprints of foreign books, in Texas? WillNess (talk) 18:22, 15 December 2011 (UTC)
- Since there are 12 inches in a foot, with L = 8+(2/3) the length of the tank in feet, the total area is L*5 twice, for the two sides, L*4 for the bottom, and 5*4 twice, to close the front and the rear; in total L*(5*2+4) + 5*4*2 = 14*(8+2/3) + 40 = 112 + 28/3 + 40 = 161 + 1/3 square feet (121 + 1/3 if "sides" refers just to the two sides, without the front and the rear of the tank). You need to multiply this by the 6 cents that the half pound of zinc needed for each square foot costs. This comes to 968 cents (or 728 in the other case). WillNess (talk) 18:22, 15 December 2011 (UTC)
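The same arithmetic as a small Python sketch (just a restatement of the calculation above):

```python
L = 8 + 8 / 12                          # 8 ft 8 in, in feet
W, D = 4, 5                             # width and depth, in feet
area = 2 * L * D + L * W + 2 * W * D    # two sides, bottom, front and rear
cost = area * 0.5 * 0.12                # 0.5 lb per sq ft at $0.12 per lb
print(area, cost)                       # 161.33... sq ft -> $9.68
```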
- $9.68 is a correct first-order approximation, but it doesn't take into account overlap at the corners, a reasonable omission in this case as the lining is very thin compared to the dimensions of the tank, but something which needs to be taken into account for small tanks with thick linings. To see how insignificant the difference here is, consider that with a density of 7.14 g·cm⁻³ the zinc lining, at 0.5 lb·ft⁻², is only 0.0135 in thick (call this thickness T). If we did want an exact answer, we can do it either of two ways: calculating the overlap directly or taking a difference of volumes.
- To calculate the overlap directly, visualize the installation of the lining which you have already cut to the exact dimensions of the sides and bottom. You could install two sides opposite one another without problem, but to fit the next two sides you will have to trim off T from each end of each side, trimming off an area of 4(5 ft)T. To install the bottom piece, you need to trim off T from all four edges. For a second-order approximation of the cost you could approximate this trimmed area by multiplying the perimeter of 25 ft 4 in by T, but that overcounts the four T² areas in the corners, so the actual bottom material trimmed is (25 ft 4 in)T − 4T², for a total trimmed area of (45 ft 4 in)T − 4T² = 7.32 in², saving you $0.00305, or just under a third of a penny. (Note that the −4T² term only makes the difference between 7.3227 in² and 7.3220 in², well beyond the number of significant figures we are using for the density.)
- An alternate way of calculating the exact price is to compute the difference between the volume of the tank before and after it is lined, thus determining the amount of zinc lining. The "before" volume is (8 ft 8 in)(4 ft)(5 ft). The "after" volume is (8 ft 8 in − 2T)(4 ft − 2T)(5 ft − T) = (8 ft 8 in)(4 ft)(5 ft) − (161+1/3)(ft²)T + (45+1/3)(ft)T² − 4T³. Subtract the volumes to determine the volume of zinc lining and divide by T to get the area of the zinc sheeting required, which is (161+1/3)(ft²) − (45+1/3)(ft)T + 4T², the same result as above.
- While ignoring the overlap was the right thing to do here, it needs to be accounted for when the thickness of the lining is significant compared to the dimensions of the container, such as when lining a beverage cooler with an inch of styrofoam insulation. -- ToE 21:16, 16 December 2011 (UTC)
- Since 232.32 is 24 times 9.68, I'm wondering if the original calculation was done by converting feet and inches to inches and then forgetting that a square foot is 144 square inches (12×12). That still leaves a factor of 2 to explain; maybe forgetting the 0.5 lb/sq ft. (And if this were a modern book it would be US$ because the problem is in feet and inches rather than metric.) RJFJR (talk) 14:34, 16 December 2011 (UTC)
December 15
Berry paradox in second order logic
Why doesn't the Berry paradox apply to second order logic? I am familiar with self-referential arguments, but not with second-order logic, so I'm wondering why you can't state something like 'the first set of natural numbers not uniquely definable by a second-order statement with one free variable', given some well ordering of the set of sets of natural numbers. 129.97.215.245 (talk) 03:47, 15 December 2011 (UTC)
- Once you talk about a wellordering of P(P(N)), you're already in third-order arithmetic, and it might well be that no such wellordering is definable in third-order arithmetic. So maybe what you want to do is throw in a predicate symbol for the wellordering, and work with definability with the wellordering as a parameter?
- You could do that, I guess. I think what you'll find is that to define definability in this second-order language with the predicate for the wellordering, you'll need third-order logic, or some fragment of it, but in any case more than second-order. --Trovatore (talk) 09:45, 16 December 2011 (UTC)
- I guess I should have mentioned this is my question, but according to http://plato.stanford.edu/entries/logic-higher-order/#4, any sentence in third (or higher) order logic is equivalent (constructively) to some sentence of second-order logic, so a wellordering should be translatable into second-order logic. Also, it isn't that hard to construct a wellordering of sets of naturals informally: there is a bijection from them to strings of binary digits where the ith digit represents whether the ith number is in the set. I'm not sure how to express this formally, but I would expect that it would be representable in third-order logic and then a corresponding second-order statement could be found. 129.97.215.57 (talk) 17:13, 16 December 2011 (UTC)
- I haven't read your link carefully yet, but I think you have a misunderstanding here.
- Here's what's true: In second-order logic, you can give a theory that completely characterizes the natural numbers up to isomorphism, which in some sense is as much as you could ever want them to be characterized. That is, any two models of this theory (models in the sense of full second-order logic) have all the same properties, first-order, second-order, third-order, and every higher order.
- However that does not mean that third-order formulas are not more expressive than second-order formulas. They are more expressive. The second-order sentences have already fixed the objects of discourse; that doesn't mean they can say everything there is to be said about them. --Trovatore (talk) 19:49, 16 December 2011 (UTC)
- It is true that there is a procedure to test logical validity of higher-order statements in terms of logical validity of a second-order statement. But this does not help with Berry's paradox. A sentence like "There is a least set which is not definable by any second-order formula" is not directly a second-order sentence; if we convert it into an equivalent second-order sentence, that sentence will simply be true or false the same as the original sentence. The converted sentence will not directly give us a second-order definition of any set, which is what we would need to make the argument in Berry's paradox go through.
- You might think about trying to change the formula "x is the least set which is not definable by any second-order formula" into a second-order formula with a free variable x, but the theorem is only that sentences of higher-order logic can be changed into sentences of second-order logic in a way that preserves logical validity. The theorem does not give a way to change higher-order formulas with free variables into second-order formulas that are true of exactly the same objects. — Carl (CBM · talk) 20:41, 16 December 2011 (UTC)
- I see. Thanks for your help. One more thing: does this mean that second-order logic can express arbitrary properties of a set, i.e. can it express the truth or falsity of 'if x is the least set not definable by second-order logic, then P(x)', where P(x) is a general second-order expression involving x, not a predicate? 129.97.215.57 (talk) 03:48, 17 December 2011 (UTC)
Sieve of Atkin
There is an ambiguity in the lead of the article. It says "[it] marks off multiples of primes squared, rather than multiples of primes". It isn't clear whether it means the multiples of primes are squared, or is it the multiples of the squared primes that are marked. I don't understand it. Could someone please explain this? Thanks! -- WillNess (talk) 08:58, 15 December 2011 (UTC)
- It means multiples of the squared prime, i.e. np² rather than (np)². This is so that non-squarefree integers are excluded before the quadratic forms tests are applied. I have changed the wording of the lead paragraph to try to make this clearer. Gandalf61 (talk) 09:38, 15 December 2011 (UTC)
- Thanks! WillNess (talk) 17:21, 15 December 2011 (UTC)
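For concreteness, a minimal Python sketch of the sieve following its standard presentation (not optimised); the final loop is the step discussed above, striking multiples of p² rather than of p:

```python
def atkin(limit):
    sieve = [False] * (limit + 1)
    root = int(limit ** 0.5) + 1
    for x in range(1, root):
        for y in range(1, root):
            # quadratic-form tests toggle candidate flags
            n = 4 * x * x + y * y
            if n <= limit and n % 12 in (1, 5):
                sieve[n] = not sieve[n]
            n = 3 * x * x + y * y
            if n <= limit and n % 12 == 7:
                sieve[n] = not sieve[n]
            n = 3 * x * x - y * y
            if x > y and n <= limit and n % 12 == 11:
                sieve[n] = not sieve[n]
    # exclude non-squarefree numbers: mark off multiples of p*p, not of p
    for p in range(5, root):
        if sieve[p]:
            for m in range(p * p, limit + 1, p * p):
                sieve[m] = False
    return [2, 3] + [p for p in range(5, limit + 1) if sieve[p]]

print(atkin(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```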
Calculating average utilization of equipment
What is the correct method of calculating the average utilization of equipment over a period of time? I am confused between the two methods that can be used for the calculation:
Total number of hours available: 17 (May-11), 9 (Jun-11)
Total number of hours actually used: 13 (May-11), 6 (Jun-11)
Method 1:
Utilization calculated for each month and then averaged: 0.764705882 (= 13/17 for May-11), 0.666666667 (= 6/9 for Jun-11)
Average utilization: 72% (the average of 0.764705882 and 0.666666667, expressed as a percentage)
Method 2:
Total number of hours available (May+Jun): 26 (= 17 + 9)
Total number of hours actually used (May+Jun): 19 (= 13 + 6)
Utilization from the summed numerator and denominator: 73% (= 19/26, expressed as a percentage)
- The second would be normal. If you worked like a dog for one day of the week and lazed the other five, you wouldn't say you were working hard for half the time (well, a person might, but others wouldn't). Dmcq (talk) 15:00, 15 December 2011 (UTC)
- I believe the first method is called an "average of averages" and it's a common statistical error. It seems like WP should have something on this but I don't see a good source when I Google the phrase.--RDBury (talk) 08:49, 16 December 2011 (UTC)
- The first is sometimes done; the resulting figure doesn't mean an awful lot generally, but it's easy to calculate, and for periods like months, as you say, it should normally make little difference. Dmcq (talk) 02:05, 17 December 2011 (UTC)
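A quick sketch contrasting the two methods, using the figures from the question:

```python
available = {"May-11": 17, "Jun-11": 9}
used = {"May-11": 13, "Jun-11": 6}

# Method 1: average of monthly ratios ("average of averages")
m1 = sum(used[m] / available[m] for m in available) / len(available)

# Method 2: pooled totals -- every hour carries equal weight
m2 = sum(used.values()) / sum(available.values())

print(f"{m1:.1%} {m2:.1%}")  # 71.6% 73.1%
```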
December 16
P vs NP and Automated Theorem Proving
Hi, I've read that if P = NP this would make mathematical theorems open to automated theorem proving in a way that would "change" mathematics. However, I've been thinking about that and was wondering how accurate that really is. First, for the algorithm to be NP it would have to search for proofs that are at most f(n) bits, where n is the input size of the theorem. I've heard it phrased that it would be able to find proofs of reasonable length, which fits with the whole f(n) thing, except n and f(n) would be the size of formal proofs. Obviously, definitions in mathematics are layered upon others tracing all the way back to ZFC (the same for theorems), so wouldn't one expect that n would be a lot larger than the size of the informal statement, thus making this far more impractical? Would the expansion into formal proofs raise the expected degree of f? In other words, informally speaking, if k is the size of an informal statement and g(k) is the equivalent formal one, how does g grow? Related to all of this, since mathematicians can synthesize many layers of formal mathematics into a succinct definition by intuition, wouldn't it be reasonable to expect that many theorems with extremely large formal proofs have reasonable-length informal ones? And if we aren't using human intuition, we wouldn't know how to construct a formal system incorporating these new informal concepts? In short, wouldn't it be reasonable to expect that a large chunk of the interesting mathematics being done wouldn't be of a "reasonable size" if it were converted into formal statements (how would you be able to decide how large f(n) should be relative to n if all of your intuition is with informal proofs)? Finally, when you compound all of this with the possibility that if P = NP the algorithm might run in O(x ** 4), or bigger, doesn't the idea of machines doing a significant portion of mathematics (if P = NP) become unrealistic? Or did I overlook something obvious? Thank you for any thoughts or answers :-) Phoenixia1177 (talk) 08:56, 16 December 2011 (UTC)
- For my last point: it's not just that x ** 4 is fairly slow, but if moving from informal to formal simply doubles length, then we are looking at 16 x ** 4 time. If it should square it, then we would be looking at x ** 8 time, which is horrible. If the NP = P solution ran in x ** 5, then this gets even worse. And so on. Sorry if that was obvious; also sorry if I sound like I'm ranting and not being clear :-) Phoenixia1177 (talk) 10:52, 16 December 2011 (UTC)
- There's certainly no guarantee that some as-yet-unknown polynomial-time solution to an NP-complete problem would necessarily trivialize the finding of mathematical proofs, I agree with you there.
- There's something called Cobham's thesis that asserts that the class P is the same as the class of problems with feasible solutions. Taken literally, Cobham's thesis is obviously false; it just can't be taken remotely seriously. I assume it's not meant to be taken that literally, and that it's more a sort of observation that problems that arise naturally in practice tend to follow this rule. If that's true, and if the finding of proofs is considered a problem that arises naturally in practice, then one might expect that an automated-theorem-proving system that worked at least much of the time could be found. --Trovatore (talk) 09:35, 16 December 2011 (UTC)
- That makes sense. But suppose that we had a decent solution for some NP problem; how would the move from informal to formal proofs affect the speed of that? Wouldn't the inputs grow, and couldn't this impact the performance so that even a practical 3SAT solution might become an impractical proof finder? Even more importantly, how would you gauge what would be an unreasonable upper bound on proof length if you are dealing with formal proofs? If the algorithm didn't use any type of test cases, it would be impractical to run the algorithm repeatedly, increasing the proof length each time; the same goes for starting out with an extremely large number. I guess, ultimately, even if NP problems are in P and practical, it seems like automated proof finding would remain impractical due to size increases and limits on proof size. Or is there some nice correspondence between formal statement size and informal statement size (I have no idea, but it seems odd that there would be)? Phoenixia1177 (talk) 10:52, 16 December 2011 (UTC)
Is Gödel's incompleteness theorem really so dire?
Some people insist that Gödel's theorem spells death for a potential theory of everything, but is it really so bad? As far as I can tell, all Gödel's theorem proves is that the Gödel sentence cannot be proven in any axiomatic system, but the Gödel sentence is a pretty stupid sentence - basically just "this statement cannot be proven". To be quite frank, so what if an axiomatic system can't prove that statement? If a theory of everything could prove any statement in physics except for the Gödel sentence, I would be satisfied. Am I missing something? Is it possible in theory for there to be a theory of everything which could prove any statement except the Gödel sentence, or are there more dire consequences of the theorem (or other barriers in the search for a TOE)? Widener (talk) 12:42, 16 December 2011 (UTC)
- Gödel's theorem says that unprovable sentences exist. The proof of Gödel's theorem gives a specific example. You seem to assume that it is the only example. Provability may be a very exclusive property for a sentence. Nonprovability is the rule rather than the exception. Bo Jacoby (talk) 13:28, 16 December 2011 (UTC).
- I understand, but is that actually true? Widener (talk) 14:03, 16 December 2011 (UTC)
- Yes, that is true. Goedel's theorem is that any (sufficiently complex) set of consistent axioms leads to a system with unprovable statements. So even if you accept what you call "the Goedel sentence" as a new axiom, there will still be other unprovable statements. So nonprovability is unavoidable; even if you disregard certain statements that you think are "pretty stupid sentences". You probably know this, but see continuum hypothesis for an example of a nonprovable statement (in ZFC) which is not stupid at all. Staecker (talk) 16:47, 16 December 2011 (UTC)
- Any axiomatic system which contains the Gödel sentence as an axiom is clearly inconsistent. All Gödel's theorem demonstrates is that the Gödel sentence is unprovable; the nonprovability of the continuum hypothesis is not a consequence of Gödel's theorem. So if you don't count the Gödel sentence, you cannot use his theorem to dismiss the otherwise-completeness of any axiomatic system of theory of everything. Widener (talk) 22:10, 16 December 2011 (UTC)
- I think you're wrong about that- though I'm not an expert so somebody else should chime in. The Goedel sentence can be used as an axiom without violating any consistency. The statement "this statement is not provable from the other axioms" can be assumed to be true and added into any existing set of axioms and still be consistent. As for "the nonprovability of the continuum hypothesis is not a consequence of Gödel's theorem", you're right about that- I was just giving this as an example of a nonprovable statement that's not trivially self-referential. Staecker (talk) 03:19, 17 December 2011 (UTC)
- Can you give an outline of the argument that reaches the conclusion that "Gödel's theorem spells death for a potential theory of everything" - or give a link to a source that explains how one thing leads to the other ? I am not sure I see how that argument might be constructed. Gandalf61 (talk) 13:52, 16 December 2011 (UTC)
- Well, it is given in the article theory of everything. Widener (talk) 14:01, 16 December 2011 (UTC)
- Oh, I see. Well, the most that Gödel's theorem might tell us in this context is that we can never formally prove that a candidate TOE is actually a TOE i.e. is complete. But a physicist's benchmark is experiment, not formal proof. So when they say "theory of everything" they really mean "theory that explains the results we have observed in all the experiments we have done so far without needing to be fine tuned by picking arbitrary values for too many parameters". I don't think Gödel's theorem rules out that goal. Gandalf61 (talk) 14:20, 16 December 2011 (UTC)
- Even more fundamental than that: I understand theory of everything in a physics context to mean something like "theory that explains all fundamental interactions". It doesn't necessarily need to lead to an effective method of predicting what all those interactions add up to, in a mathematical sense.
- To put it another (perhaps more aggressive and provocative) way, "theory of everything" really means "theory of everything physical". But mathematics transcends physics, and therefore does not need to be explained by the theory in order for it to be a theory of everything. --Trovatore (talk) 19:37, 16 December 2011 (UTC)
- I'm interested in this topic as well, so I thought I'd chime in with what I think is a tolerable summary of everything so far. I assume Staecker is right about being able to add the Goedel statement to the axioms. I would say that a theoretical physicist's benchmark is mathematics, but not to the extent of abstract completeness that would interest a mathematician. The article theory of everything seems to suggest as much - they can agree the incompleteness theorem has implications for a TOE, but doesn't put it out of business, at least not unequivocally so. The biggest question that still remains, I think, is whether Goedel's theorem itself relates to statements other than self-referential ones. There is a list of statements undecidable in ZFC, but these all seem to be abstract. The question is, would there be any that touch a raw nerve in our consciousness? I for one have no personal connection to questions about levels of infinity, but if there were a statement about prime numbers, such as "There are infinitely many primes whose digits add up to 5" which could not be proven one way or the other, I would find that disconcerting. Do any mathematicians have an opinion on whether these statements are all provably true or false, or is that still an open question? IBE (talk) 04:36, 17 December 2011 (UTC)
- NB I know the OP's question was really about the TOE, but this would be one of the strongest mathematical ways of answering it. If incompleteness starts to get into the number system, it will run riot through mathematics, and potentially have implications for anything that can be computed. IBE (talk) 05:23, 17 December 2011 (UTC)
- As far as we know, the Riemann Hypothesis, Twin Prime Conjecture and P = NP might all be unprovable in ZFC. Unfortunately, since these are all statements of number theory (I believe RH can be reformulated arithmetically, although I'm not sure), proving them undecidable would require constructing non-standard models, and we don't have good ways to do that.--121.74.123.159 (talk) 06:12, 17 December 2011 (UTC)
- "...proving them undecidable would require constructing non-standard models" - is there a reason/good reference to explain this in a way a layperson/ordinary graduate could understand? IBE (talk) 06:48, 17 December 2011 (UTC)
- These are all statements of arithmetic; they depend only on the natural numbers and the operations plus and times. So in the standard model of arithmetic (the naturals), they have a truth value---either true or false. Creating or destroying subsets of the naturals won't affect them, since they don't talk about subsets of the naturals, only the naturals themselves. To build another model with a different truth value (which is how we show that sentences are independent) would require changing the naturals themselves, which means building nonstandard naturals.--121.74.123.159 (talk) 07:07, 17 December 2011 (UTC)
- "...proving them undecidable would require constructing non-standard models" - is there a reason/good reference to explain this in a way a layperson/ordinary graduate could understand? IBE (talk) 06:48, 17 December 2011 (UTC)
- There is a polynomial p in several variables with integer coefficients such that there is no proof of whether or not it has an integer-valued root. To show this: the solution to Hilbert's Tenth Problem is that there is no algorithm that can decide this for all such p; and we can construct a Turing machine that, given p, will try all possible proofs of "it has an integer root" and "it does not have an integer root", and stop only if one of the proofs works. If there were a proof for every such p, then this machine would halt for all inputs; however, this would make the tenth problem decidable. Thus, some such p must exist for which there is no proof. Clearly, this is not some self-referential statement (for whatever that p might be), nor is the actual statement overly artificial (though the statement that some such p exists may be). By the way, what did you mean by "If incompleteness starts to get into the number system, it will run riot through mathematics, and potentially have implications for anything that can be computed"? Incompleteness is already in the number system and computation. Phoenixia1177 (talk) 09:41, 17 December 2011 (UTC)
- I was exaggerating a little there. Usually, from what I have seen at any rate, when someone finds a proof of something incomplete in maths, it goes further, and seems to run through the structure of it like gristle (this was from something by Douglas Hofstadter). Perhaps it does in this case, and we haven't found particular polynomials with the property you suggested. At any rate, if you could prove that the twin primes conjecture cannot be proven or disproven within ZFC, I would be expecting this to have a huge knock-on effect. Your example is very strong, and of just the sort I am interested in (thanks for reminding me, as I had forgotten all about it), but less compelling than an explicit example of such a polynomial. IBE (talk) 11:36, 17 December 2011 (UTC)
- There is also a generalization of the Collatz Conjecture that is undecidable; this can, again, be converted into a check-for-proof type of argument showing that there is some unprovable sentence about a Collatz-like function; this sentence would be even more natural than the one about p above. Phoenixia1177 (talk) 09:46, 17 December 2011 (UTC)
- As for the original question: Godel's Theorem does not imply that no set of rules exists which encodes all physical interactions, though it could imply that there is some possible physical system whose outcome the rules cannot decide. However, I think most people would consider being able to write down the laws to be providing a TOE; I don't think they would require that the theory also be complete. On a side note, a few things: there are other objections to TOEs; since we could add Godel sentences in as axioms to any incomplete TOE, it is true that there would always be a stronger system, but I think the general idea is that if the theory can account for all of the fundamental interactions, then it is a TOE. Phoenixia1177 (talk) 09:56, 17 December 2011 (UTC)
- Something that would be very interesting indeed would be if it were possible to make an actual physical oracle machine for Turing machine halting. Quantum computing doesn't go anywhere near that; it doesn't look like it can even approach NP problems in P time, but it has shown that the physical world can do some quite strange things. 10:27, 17 December 2011 (UTC)
- It may be possible that there is some physical constant that we can measure that is a noncomputable real number; for example, define a binary number b of the form 0.b1b2b3... where bn is 1 iff the nth Turing machine halts. However, I don't think it likely we could determine if some constant is this since we could never accurately measure the entire sequence; and any theory that implied that it was such, would only be true to within experimental verification, thus, not 100% guaranteed. Moreover, unless physics is built layer upon layer infinitely, or there is some continuous aspect of it, I doubt that there is any constant that requires infinite precision; so, even if a theory provably yields such a number, I doubt that reality actually yields such a measurement (I would guess that there is some very large number so that we could truncate everything to that many decimals and never be able to physically tell the difference between that and the original system. I think this could be strengthened to saying that telling the difference would be physically impossible, not just for our means. Though the truncated theory would, probably, not be as mathematically nice.) On a side note, there may be other ways of constructing an oracle, or some hitherto unconsidered notion of computable, from physics, but I'm not sure how. In any case, it will still have undecidables; no system is able to decide its own version of the halting problem. So, short answer: probably not; might not even make sense; even if, probably wouldn't be able to prove it; and it wouldn't rule out undecidables in toto. Phoenixia1177 (talk) 10:43, 17 December 2011 (UTC)
- I can imagine just putting in the axioms and asking if the Collatz conjecture is true and it coming back with 'yes, I have discovered a truly marvellous proof of this, but the margins of this universe are too small to contain it' ;-) Dmcq (talk) 11:28, 17 December 2011 (UTC)
TAOCP Vol 4A page 5
Unsure whether to ask this on maths or IT, decided here better...
In TAOCP Vol 4A there is some stuff about pairs of latin squares. The Euler-Parker method uses "traversals", of which there are 808 in the instance mentioned. I don't understand what a traversal is. Can anyone elucidate please? -- SGBailey (talk) 14:02, 16 December 2011 (UTC)
- A traversal of a latin square is a set of entries, one in each row and column and one of each symbol.←109.145.12.18 (talk) 16:47, 16 December 2011 (UTC)
- I quote: "For example, one traversal is 0859734216, in Euler's notation, meaning that we choose the 0 in column 0, the 8 in column 1, ..., the 6 in column 9. Each traversal that includes k in L's leftmost column represents a legitimate way to place the ten k's into square M. The task of finding traversals is, in fact, rather easy, and the given matrix L turns out to have exactly 808 of them;". I understand 109's definition of traversal, but that is the matrix M that this exercise is attempting to determine. Still unsure what Knuth means here. -- SGBailey (talk) 19:26, 16 December 2011 (UTC)
- Matrix M in that example is the "rank" or "suit" of the elements of matrix L. From the earlier card example, Matrix L is:
- JAKQ
- QKAJ
- AJQK
- KQJA
- Then, Matrix M is:
- DHSC
- SCDH
- CSHD
- HDCS
- Each position in M, paired with the same position in L, forms an object. So, the top left object in both of them is the JD. Going to the example you are looking at, L has a unique sequence of 0-9 in each row and, separately, a unique sequence of 0-9 in each column. The goal is to match up a unique set of 0-9 per column and per row in M. But, there's a catch. Each shared position, such as the top left position, is a number pair that cannot be repeated. For example, if the second-from-left position in the top row of M is a 1, the object at that position becomes 1,1 (a 1 from L and a 1 from M). That is not allowed in the example shown because the left-most position of the second row down has 1,1 (a 1 in L and a 1 in M). That limits the traversals that are valid for the top row of M. -- kainaw™ 21:09, 16 December 2011 (UTC)
- Yes, I understand all that, but what I don't understand is what is special about 0859734216 and where it was magicked from. I'm sure there are more than about 80 different possible strings that could fit the top line of 'L', which is where this must go, as we have already established that it is the top line that starts with 0. What you have been describing is "the problem", which is well and readily understood. I am having difficulty understanding "the solution". Take the earlier example.
'L'    'M'
JAKQ   D---
QKAJ   S---
AJQK   C---
KQJA   H---
Now in the top line I can put [D][SH][SC][HC], which is DSCH or DHSC - ok, only two possibilities. Are you saying that in the 10*10 example we get limited to only ~80 possibilities rather than the several million my gut is telling me? -- SGBailey (talk) 08:21, 17 December 2011 (UTC)
- Yes. The top row is limited to much smaller than your gut is telling you because you cannot repeat a pairing on a position. Using that smaller example of the cards, you cannot use [AC] on the top row aligned with the A because AC is already used on the first position of the third line down. You cannot use [KH] on the next position because it is on the bottom row already. If you think of it as a tree of possibilities where each node is a selection of one item and all the child nodes are selections that could come next, disallowing [AC] prunes a whole branch from the tree. A lot of possibilities are removed. If you ignored this whole pairing thing, the first line of the cards thing would allow for [D][CHS][CHS][CHS], which would be six possibilities instead of just two. -- kainaw™ 16:19, 20 December 2011 (UTC)
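Kainaw's description translates directly into a brute-force search. Here is a minimal backtracking sketch in Python (my own illustration, not Knuth's Euler-Parker code): it picks one cell per row so that no column and no symbol repeats, which is exactly the definition of a traversal given above.

def traversals(square):
    # Collect every way to choose one cell per row such that the chosen
    # columns are all distinct and the chosen symbols are all distinct.
    n = len(square)
    found = []

    def extend(row, cols_used, syms_used, picks):
        if row == n:
            found.append(list(picks))
            return
        for col in range(n):
            sym = square[row][col]
            if col not in cols_used and sym not in syms_used:
                picks.append((col, sym))
                extend(row + 1, cols_used | {col}, syms_used | {sym}, picks)
                picks.pop()

    extend(0, set(), set(), [])
    return found

# The 4x4 card example from this thread:
L = [list("JAKQ"), list("QKAJ"), list("AJQK"), list("KQJA")]
print(len(traversals(L)))  # -> 8 traversals for this square

The same routine run on Knuth's 10x10 square L should report 808, per the quoted passage.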
integrals
Will anyone help me with changing the order of integration in double integrals? Though I can solve a few sums, I am not feeling it; please help by referring to an example. 59.165.108.89 (talk) 15:02, 16 December 2011 (UTC)
- See Order of integration (calculus). Or give a problem you're stuck on. Qwfp (talk) 16:59, 16 December 2011 (UTC)
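A standard textbook illustration (not from the OP, added just for concreteness): the inner integral of <math>\int_0^1 \int_x^1 e^{y^2}\, dy\, dx</math> has no elementary antiderivative, but the region is 0 ≤ x ≤ y ≤ 1, so describing it the other way around gives

<math>\int_0^1 \int_x^1 e^{y^2}\, dy\, dx = \int_0^1 \int_0^y e^{y^2}\, dx\, dy = \int_0^1 y e^{y^2}\, dy = \frac{e-1}{2}.</math>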
Exact trajectory of a point in a vector field
For any point in three-dimensional space a corresponding vector field exists such that its components are defined for <math>\dot{x}</math> as <math>ax - yz</math>, for <math>\dot{y}</math> as <math>by - bz + xz</math>, and for <math>\dot{z}</math> as <math>cz + xy</math>. Given an initial starting point <math>x_0</math>, <math>y_0</math>, <math>z_0</math>, and <math>t_0</math>, what would the equations be that would give the exact position of the point at any time <math>t</math> as it "flows" through the vector field? --Melab±1 ☎ 21:59, 16 December 2011 (UTC)
- I fixed your math tag for you - you were missing the ">" at the end. This looks like a homework question. We have a policy of only helping with homework if you show us that you've at least tried to do it yourself first. How far have you got? --Tango (talk) 22:08, 16 December 2011 (UTC)
- It is not a homework problem. --Melab±1 ☎ 00:00, 18 December 2011 (UTC)
- The three differential equations: dx/dt=ax−yz, dy/dt=by−bz+xz, dz/dt=cz+xy, are non-linear due to the product terms xy, yz, and xz. The very special case (x,y,z)=(0,0,0) is obviously a solution. Slightly more general cases are: (x0e^(at), 0, 0) and (0, y0e^(bt), 0) and (0, 0, z0e^(ct)). While the product terms are negligibly small, the linear approximations are: dx/dt=ax, dy/dt=by−bz, and dz/dt=cz. These equations can be written dX/dt=AX where X=(x,y,z) is the unknown vector and A is the known matrix ((a,0,0),(0,b,−b),(0,0,c)). It has the solution X = X0e^(At). Even if the product terms are not negligible the differential equations can be linearized by substituting (x,y,z) = (x0,y0,z0) + (x',y',z'). But I do not know if the nonlinear equations have elementary exact solutions. Bo Jacoby (talk) 06:25, 18 December 2011 (UTC).
- Can you explain how (x,y,z) = (x0,y0,z0) + (x',y',z') linearizes the equations that I gave? I just don't see it. --Melab±1 ☎ 02:16, 19 December 2011 (UTC)
- The idea is that even if yz is not small, y'z' is. dx/dt = ax−yz = a(x0+x')−(y0+y')(z0+z') = ax0+ax'−y0z0−y'z0 −y0z'−y'z'. So dx'/dt ≈ (ax0−y0z0)+ax'−z0y'−y0z'. This approximate equation is linear. The same for the two other equations. Bo Jacoby (talk) 10:25, 19 December 2011 (UTC).
- Can you think of any set of 3D non-linear equations that have an exact solution? --Melab±1 ☎ 02:22, 20 December 2011 (UTC)
- And how would I calculate the derivative of y and z? --Melab±1 ☎ 02:23, 20 December 2011 (UTC)
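No closed form for the general case surfaced in this thread, but the flow itself is easy to compute numerically. A sketch using scipy (the constants a, b, c, the initial point, and the time span are made-up examples, not anything from the question):

from scipy.integrate import solve_ivp

a, b, c = 1.0, -0.5, 0.2            # assumed example constants

def field(t, X):
    # the vector field from the question: (ax - yz, by - bz + xz, cz + xy)
    x, y, z = X
    return [a*x - y*z, b*y - b*z + x*z, c*z + x*y]

sol = solve_ivp(field, (0.0, 2.0), [0.1, 0.2, 0.3], dense_output=True)
print(sol.sol(1.0))                 # approximate position at t = 1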
The Twelve Days of Christmas
I wanted to link to this from another site, but the 2 is supposed to be on top of the sigma. Also, I wanted to link from the other site to something that just shows the mathematical formula and not the other stuff.Vchimpanzee · talk · contributions · 22:37, 16 December 2011 (UTC)
- To put the 2 on top of the sigma, you need to put curly brackets around the 12 - i.e.
- \sum_{j=1}^{12} \sum_{i=1}^j i.
- That'll give you: <math>\sum_{j=1}^{12} \sum_{i=1}^{j} i</math>
- As for the rest of your question, I don't understand what you mean. Do you mean you want to hotlink the image? Smurrayinchester 00:04, 17 December 2011 (UTC)
- He obviously wanted <math>\sum_{j=1}^{12} \sum_{i=1}^{j} i = \binom{14}{3}</math>, which is 364, which is the total number of gifts in The Twelve Days of Christmas (song). [1] --Itinerant1 (talk) 09:23, 17 December 2011 (UTC)
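A quick check of Itinerant1's identity in Python, for anyone who wants to see the 364 appear:

from math import comb  # Python 3.8+

# day j brings 1 + 2 + ... + j gifts; sum over the twelve days
total = sum(i for j in range(1, 13) for i in range(1, j + 1))
print(total, comb(14, 3))  # 364 364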
- What a nice application of binomial coefficients! Bo Jacoby (talk) 16:18, 17 December 2011 (UTC).
- Regarding linking the image, I wanted to link only the image, not all that goes with it. Thank you.Vchimpanzee · talk · contributions · 16:40, 17 December 2011 (UTC)
- Try one of these: http:/upwiki/wikipedia/en/math/d/2/8/d2811783303f111aadd797ea7efa4ed9.png or http:/upwiki/wikipedia/en/math/9/9/1/991cdcbdfb1b6a32fe994ae843142600.png. I'm not sure how long those links will last- they might be temporary. Staecker (talk) 17:16, 17 December 2011 (UTC)
- Thanks. We forgot one important detail so we need to do it again.Vchimpanzee · talk · contributions · 17:50, 17 December 2011 (UTC)
- Wait, the first link works. That will do it. Thank you.Vchimpanzee · talk · contributions · 17:51, 17 December 2011 (UTC)
- For future reference, all equations on wikipedia are PNG images. Depending on your browser/OS, you can typically right-click on the equation and either save the image to your hard drive or "copy image location" to get the address like the ones above. Staecker (talk) 19:48, 17 December 2011 (UTC)
December 17
Induction and integration by parts
Consider the proposition P(n):

<math>\frac{1}{(n-1)!} \int_0^x (x-u)^{n-1} f(u)\, du = \underbrace{\int_0^x \int_0^{x_1} \cdots \int_0^{x_{n-1}}}_{n \text{ integrals}} f(x_n)\, dx_n \cdots dx_1</math>

P(1) was simple enough to prove, but I'm having trouble with the inductive step, P(k) implies P(k + 1). It seems to necessarily involve differentiating something of the form of the original left-hand side. Interestingly, this is possible but long-winded if you expand <math>(x-u)^k</math> with the binomial theorem, extract each power of x from the resulting sum of integrals, and apply the fundamental theorem of calculus along with the product rule to differentiate each term of the sum one by one. If I'm not wrong, it is true that

<math>\frac{d}{dx} \int_0^x (x-u)^k f(u)\, du = k \int_0^x (x-u)^{k-1} f(u)\, du,</math>

which looks deviously simple. My question: is there some easy way of differentiating <math>\int_0^x (x-u)^k f(u)\, du</math> without multivariable calculus, or was my example exceptional (or exceptionally clean) for some reason? My observation is that it is only possible because the integral is reducible to a sum of terms of the form <math>x^m \int_0^x u^i f(u)\, du</math>; i.e., x and u can be separated sufficiently. This all falls under the general query of how best to pull out the induction; I'd be interested in a more elegant way if one exists. Thanks in advance. —Anonymous DissidentTalk 12:15, 17 December 2011 (UTC)
- I think the whole approach is making too much out of the problem. If you apply another integral to the P(k) case you get
<math>\int_0^x \frac{1}{(k-1)!} \int_0^{x_1} (x_1 - u)^{k-1} f(u)\, du\, dx_1 = \int_0^x \int_0^{x_1} \cdots \int_0^{x_k} f(x_{k+1})\, dx_{k+1} \cdots dx_1</math>
- and the job is to simplify the lhs. But if you swap the order of integration as in first year calculus the lhs of the P(k+1) pops right out.--RDBury (talk) 16:16, 17 December 2011 (UTC)
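Spelling the swap out (an added worked step, using the statement of P(n) as reconstructed above): the region of integration is 0 ≤ u ≤ x_1 ≤ x, so integrating over x_1 first gives

<math>\int_0^x \int_0^{x_1} (x_1 - u)^{k-1} f(u)\, du\, dx_1 = \int_0^x f(u) \int_u^x (x_1 - u)^{k-1}\, dx_1\, du = \frac{1}{k} \int_0^x (x-u)^k f(u)\, du,</math>

and the factor 1/(k−1)! already present turns this into the lhs of P(k+1).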
- I've never learned about swapping order of integration, but I suppose that makes sense and simplifies the induction considerably. I'm still interested in the differentiation question though. —Anonymous DissidentTalk 00:14, 18 December 2011 (UTC)
- I can't help with your actual question, but you may be interested in Fubini's theorem. Perhaps you already know about it - you just mentioned changing the order of integration, and this covers it. IBE (talk) 05:23, 18 December 2011 (UTC)
- I'm not sure I do understand this idea about changing the order of integration. I tried to prove my idea of what it means using integration by parts, but failed. Maybe Rdbury could be more explicit about what it means in this context, or how to derive it from integration by parts. —Anonymous DissidentTalk 11:15, 18 December 2011 (UTC)
- I don't want to do an exposition on swapping the order of integration when it should be given in any calculus textbook, so I'll just give this link to a public domain source. Quoting from Leibniz integral rule:
<math>\frac{d}{dx} \int_{a(x)}^{b(x)} f(x,t)\, dt = f(x, b(x))\, b'(x) - f(x, a(x))\, a'(x) + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x,t)\, dt</math>
- which in your case gives
<math>\frac{d}{dx} \int_0^x (x-u)^k f(u)\, du = (x-x)^k f(x) + \int_0^x k (x-u)^{k-1} f(u)\, du = k \int_0^x (x-u)^{k-1} f(u)\, du.</math>
- In other words the formula you gave is valid for integrals from a constant to x when the integrand is 0 at the upper limit of integration.--RDBury (talk) 14:51, 18 December 2011 (UTC)
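A concrete sanity check of that identity (k = 3 and f = cos are arbitrary choices of mine; sympy confirms one case symbolically, it is not a proof):

import sympy as sp

x, u = sp.symbols('x u')
k = 3
# differentiate the parameter-dependent integral, then compare with k
# times the integral with the exponent lowered by one
lhs = sp.diff(sp.integrate((x - u)**k * sp.cos(u), (u, 0, x)), x)
rhs = k * sp.integrate((x - u)**(k - 1) * sp.cos(u), (u, 0, x))
print(sp.simplify(lhs - rhs))  # expect 0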
December 18
Platonism vs formalism
As no doubt most of you know, in the philosophy of mathematics there is an ongoing debate between formalists and Platonists. I've often wondered whether the following has any relevance. The success of formal systems explains in part the popularity of formalism, but the axioms in any system consist of information. If the system is powerful enough to generate information theory, it must therefore relate back to the information in the axioms, insofar as it provides a mathematical foundation for the amount of information they contain. The ability to define information theory is fairly basic, I think, and at any rate is found in ZFC. It must be impossible to create two incompatible versions of information theory, for otherwise there would be two different ways of reading the information in the axioms. As a result, different systems must still produce the one theory of information. This at least appears to suggest that mathematics is already present before we define axiom systems, because information theory provides a necessary foundation for interpreting axioms in the first place. Can this put a dent in formalism, or is it going too far? IBE (talk) 17:25, 19 December 2011 (UTC)
- I don't see why it would be impossible to create two incompatible versions of information theory. Really I don't even understand what that statement means, in any precise way. As Wolfgang Pauli once put it (talking about something else), not only is this not right, it is not even wrong. Looie496 (talk) 17:41, 19 December 2011 (UTC)
- A second version of information theory might state, for example, that the wordcount until now for this thread, about 270 words, could be condensed into 5 characters. Of course no such theory is possible, but that's what I'm saying. If you could come up with any other information theory that has theorems that are false as we know them, I would be interested. IBE (talk) 17:55, 19 December 2011 (UTC)
- If you can come up with formal criteria for calling a body of statements an "information theory", and a schema for translating those statements into ordinary English (so that we can decide whether they are true), then your problem will be well-specified. Currently it is not. Looie496 (talk) 17:59, 19 December 2011 (UTC)
- I confess I should have linked the article information theory, but I did not anticipate the objection. I thought it had clear statements mathematically speaking, and as understood in normal English. I don't know what more is required other than to note that the information contained in a random variable is defined as the expected value of the informativeness of each outcome, which itself equals -log(p); p being its probability. This gives more. Claude Shannon's 1948 paper established the discipline, and I thought until now it was sufficiently rigorous, precise and clear. IBE (talk) 18:22, 19 December 2011 (UTC)
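For concreteness, the quantity described there (the expected value of −log p over the outcomes) takes a few lines of Python; the example distribution is arbitrary:

from math import log2

def entropy(probs):
    # Shannon entropy in bits: the expected value of -log2(p)
    return sum(-p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.25]))  # 1.5 bits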
- If I had to wager a guess, I'd say that you are confusing what it means to say that axioms contain information (which I imagine to be an informal notion about how well they capture some system, mental or physical) and information in information theory, which has nothing to do with having meaning. Moreover, taken in the most sensible fashion possible, you would at best be talking about applying information theory to a system of axioms, which wouldn't say anything either way about philosophy (if it could say anything about anything). Phoenixia1177 (talk) 22:23, 19 December 2011 (UTC)
December 20
Marcus Hutter versus Yuri Matiyasevich
Marcus Hutter claims to have found an algorithm for all well-defined problems, but such an algorithm would be able to ascertain whether a given Diophantine equation is solvable, and it could do this for any Diophantine equation. Yet, as Yuri Matiyasevich showed, no such algorithm exists - see Hilbert's tenth problem. How is this contradiction resolved?
At first I thought that it was because Marcus Hutter's algorithm M could only solve problems p for which there exists another algorithm (and could only do so five times slower too, as per the link), and since there is no algorithm for ascertaining if a general Diophantine equation has solutions, M would be unable to do so too. That makes no sense though - for any particular Diophantine equation, there exists an algorithm which can determine if it has solutions - a very trivial one in fact: simply the algorithm which ignores the input and just prints out the correct answer! That would imply that not only could M solve p for any p, but it could do so in constant time (if his claim in the link is correct) but of course this still contradicts Matiyasevich's solution to Hilbert's tenth problem. Widener (talk) 06:17, 20 December 2011 (UTC)
- Without reading more than the abstract, it looks like he's only considering algorithms with correctness proofs. While the algorithm for solving a Diophantine equation you describe exists, it lacks (in general) a correctness proof. This just seems to be a particularly efficient dovetailing.--121.74.123.159 (talk) 10:41, 20 December 2011 (UTC)
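A toy illustration of the dovetailing that reply mentions (my own sketch, not Hutter's construction): in round n the first n programs each get one more step, so every program eventually receives unboundedly many steps.

from itertools import islice

def dovetail(programs):
    # programs: list of zero-argument callables, each returning an iterator
    started = []
    n = 0
    while True:
        n += 1
        if n <= len(programs):
            started.append(programs[n - 1]())  # admit program n in round n
        for i, prog in enumerate(started):
            try:
                yield i, next(prog)            # one more step of program i
            except StopIteration:
                pass                           # program i has halted

progs = [lambda: iter(range(3)), lambda: iter("ab")]
print(list(islice(dovetail(progs), 5)))  # the five interleaved steps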
Statistical test for Gender, Race, and Socioeconomic status (three options) and a three-scale Likert
Hi, I was wondering if there is a test for significance to see whether Gender, Race, and Socioeconomic status (with 3 options of below poverty, around poverty, and above poverty), all used as separate IVs, can be seen as having an effect on the DV, the results of a three-option Likert item? What is the best option? Thanks. (By the way n=35 with some responses lacking, I know that isn't the best but the population was already extremely small... --109.66.119.146 (talk) 12:42, 20 December 2011 (UTC))--109.66.119.146 (talk) 12:28, 20 December 2011 (UTC)
Going between hexagonal and parallelogram tilings
Given any parallelogram or hexagonal tiling of the plane (where the hexagon has parallel opposite sides of equal length), we can form a lattice, whose action on the plane produces the torus, with fundamental domain the primitive tile. An induced flat metric can be put on the torus, via the hexagon or parallelogram. However, any lattice in the plane can be uniquely determined by just two vectors, i.e. we may convert any hexagonal lattice into an equivalent parallelogram lattice (producing a torus with the same, flat metric). I can see how to get from a hexagonal tile to a parallelogram tile (by taking three mutually non-adjacent vertices in the hexagon, joining two lines between them, then cutting along these, while gluing opposite sides of the hexagon). However, when I try to pass from a parallelogram tile to a hexagon, I seem to have too many degrees of freedom, so I can't come up with a canonical (up to lattice isometries) hexagonal tile to send it to. Is this possible? It seems like it should be, when thinking about the metrics induced on the torus -- non-equivalent parallelogram (or hexagonal) lattices give rise to different metrics on the torus, so it seems like there should be a bijection between the two sets of lattices/tilings. Any help much appreciated! Icthyos (talk) 12:33, 20 December 2011 (UTC)
- In a rather similar problem that I worked on long ago, to go from one lattice to another, I first went to a square grid lattice, then to the other lattice. Mapping to and from squares was much easier because the vectors are based on Euclidean space. The trick is to make the square lattice with smaller tiles (you won't be able to map to them all) so you can easily map back and forth between lattices where the borders don't line up easily. -- kainaw™ 16:09, 20 December 2011 (UTC)
- (ec) I'm not sure if this is what you want, but if I got it properly it is enough to mark any point P inside a parallelogram L, join P with lines to arbitrary three vertices of L, and then copy that in all parallelograms... --CiaPan (talk) 16:18, 20 December 2011 (UTC)
- That was my approach, except I require that the angles between each pair of lines meeting at P be <math>2\pi / 3</math>, so my hexagons are always regular. I was just finding it difficult to show that the (finite?) number of points P for which this held, all produced the same hexagonal lattice/tiling (up to isometries) when regluing. I suppose if I could do it for the bog-standard equal sided regular hexagon, then I could argue using something like shear transformations (is there a unique shear transformation mapping one parallelogram to another?) Icthyos (talk) 17:29, 20 December 2011 (UTC)
ZFC being consistent if a proper sub-set of ZFC is consistent
It has already been proved, that there is a proper sub-set A of ZFC, satisfying that - if A is consistent - then ZFC is consistent as well (Of course, A=ZF). Is there any other proper sub-set A of ZFC, satisfying that property? (I don't count the Axiom of Regularity) 77.127.51.217 (talk) 13:04, 20 December 2011 (UTC)