Wikipedia:Reference desk/Mathematics
- To your last question: I think not, though I welcome correction :) There may be relations among a few of the various hulls, but as you say, they are not all subclasses of one or the other. I guessed convex hull because I think of that as the most commonly used concept, but that is of course subjective. Like ring or field, I think hull is just an English word that has been co-opted into special service in math, perhaps to serve as an analogy or source of intuition. But usually (always?) in math, "hull" appears with an adjective to specify which concept is meant. I've never heard "hull" used in math talks or texts without an antecedent adjective (perhaps implicit after introduction). SemanticMantis (talk) 01:59, 4 April 2012 (UTC)
- I think that's a good exposition, and I would further add that, while there may be commonalities among different sorts of hulls (both the convex hull and the Skolem hull, for example, are closures of sets under a certain collection of operations), it would be a very bad idea to abstract those commonalities and write a Wikipedia article about them. It would be a form of original research, in the Wikipedia-term-of-art meaning of that phrase. If hull, as a term expressing those commonalities, ever becomes an accepted part of mathematical terminology outside of WP, then we can write an article about it. --Trovatore (talk) 02:33, 4 April 2012 (UTC)
- Thanks. 213.249.187.63 (talk) 15:41, 4 April 2012 (UTC)
April 4
Revision as of 15:41, 4 April 2012
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
March 26
Out of curiosity, what are the odds of clicking Special:Random, landing on an article, clicking again, and winding up on the exact same page? Assuming the system worked as it was supposed to and the repeated result wasn't simply a cache or browser error. I have an idea of how I would typically figure this out, but I suspect there are a few more variables than I can immediately ID. Thanks, Juliancolton (talk) 03:04, 26 March 2012 (UTC)
- If there are n articles from which random selects, and they are truly randomly selected, then the chance of hitting the same article again should be 1/n. So, if n = 1 million, the chances are one in a million. StuRat (talk) 03:27, 26 March 2012 (UTC)
- Of course assuming 'truly randomly' = 'independent of previous selections and with a uniform distribution'. --CiaPan (talk) 13:21, 26 March 2012 (UTC)
- You may need to look into how MediaWiki is set up; the developers might have included a specification preventing the link from sending you to the page whence you started, in which case you'd have no chance at all. Nyttend (talk) 15:26, 26 March 2012 (UTC)
- If it is a single server in one process then you might end up with the chances being far less than just one over the number of articles, because simple random number generators avoid picking the same number twice in succession. This is the sort of flaw that helped break the Enigma code during the Second World War. Dmcq (talk) 17:04, 26 March 2012 (UTC)
- Given the way MediaWiki finds random articles, if I'm not mistaken, the chances of getting the same article both times when clicking Special:Random twice is 2/n. -- Meni Rosenfeld (talk) 11:03, 27 March 2012 (UTC)
- 1/n surely? It does seem to be a very good random generator, the sort that people say can't be random because they get coincidences like that :) By the way the Low-discrepancy sequence is quite interesting I think, sometimes giving people what they think of as random is a good idea as you cover everything more uniformly when sampling. Dmcq (talk) 11:21, 27 March 2012 (UTC)
- I've had a look at that description of how the next random page is chosen and I've come to the conclusion that I don't understand it at all. They say the page is chosen randomly, and in that case the chance of getting the same article twice with two clicks should be 1/n, but I'll need to check the algorithm and try and figure out exactly what they are saying. Dmcq (talk) 11:29, 27 March 2012 (UTC)
- Every page is assigned upon creation a random real index in [0, 1]. Clicking on Special:Random generates a real number in [0, 1] and returns the page with the smallest index larger than the result. Thus the probability p_i that article i will be selected is equal to the difference between its index and the next lowest index. For large n this gap p_i is distributed approximately exponentially with mean 1/n, so E[p_i^2] = 2/n^2, and E[sum_i p_i^2] = 2/n. The chance that the two chosen articles are identical is sum_i p_i^2, which is about 2/n. -- Meni Rosenfeld (talk) 12:30, 27 March 2012 (UTC)
- Thanks for that. They'd be better off, if they want to actually show each page with about the same probability, doing something like multiplying the golden ratio by the index of the page (where the index goes up by one for each new page) and taking the fractional part of that, rather than assigning a random number to each page. Dmcq (talk) 12:52, 27 March 2012 (UTC)
- This suggestion is actually a very neat, simple replacement approach, and has the minor drawback that it uses additional state: the latest index number. It also does not deal with page deletions perfectly, but ignoring that, the distribution of page probabilities will, I guess, be quite tightly bounded, not extending to zero as with the exponential distribution. — Quondum☏✎ 13:12, 27 March 2012 (UTC)
- Okay I've put a suggestion at Wikipedia:Village_pump_(technical)#Special:Random, you never know, someone might be interested. Dmcq (talk) 23:09, 27 March 2012 (UTC)
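A quick sanity check of the selection model Meni describes (n pages with independent uniform indices, each click returning the page with the smallest index above a fresh uniform draw) can be run as a small simulation. This is only a sketch of that model, not MediaWiki's actual code, and the wrap-around when the draw exceeds every index is an assumption of the sketch:

using System;
using System.Linq;

class RandomPageModel
{
    static void Main()
    {
        const int n = 10000;         // number of "articles"
        const int trials = 200000;   // pairs of clicks
        var rng = new Random(1);

        // Each page gets an independent uniform index in [0, 1].
        double[] index = Enumerable.Range(0, n).Select(_ => rng.NextDouble()).ToArray();
        Array.Sort(index);

        // One click: draw r uniformly and take the page with the smallest index
        // larger than r (wrapping to the first page if r exceeds every index;
        // that wrap-around is an assumption of this sketch).
        int Click()
        {
            double r = rng.NextDouble();
            int pos = Array.BinarySearch(index, r);
            if (pos < 0) pos = ~pos;
            return pos == n ? 0 : pos;
        }

        int same = 0;
        for (int t = 0; t < trials; t++)
            if (Click() == Click()) same++;

        Console.WriteLine($"observed {(double)same / trials:F6}  vs  2/n = {2.0 / n:F6}");
    }
}

If the estimate above is right, the observed collision rate should come out close to 2/n rather than 1/n.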
TRI AN GLE IN E QUAL I TY
Given a metric , how do you show that ? It's pretty tricky and I can't seem to figure it out. Widener (talk) —Preceding undated comment added 12:42, 26 March 2012 (UTC).
- It follows by the fact that the function is increasing and concave. Sławomir Biały (talk) 12:47, 26 March 2012 (UTC)
- I'm afraid I still don't get it. Widener (talk) 19:45, 26 March 2012 (UTC)
- BTW, what's with the bizarre spacing of the question title ? StuRat (talk) 20:01, 26 March 2012 (UTC)
- Draw a graph of , then try to prove it. Sławomir Biały (talk) 00:12, 27 March 2012 (UTC)
- You want to prove that, for nonnegative a, b, c, if a + b ≥ c, then φ(a) + φ(b) ≥ φ(c), where φ is as defined by Slawomir. Since φ is increasing on [0,+∞), this amounts to proving that φ(a) + φ(b) ≥ φ(a + b). Now look at the graph and say why φ(a + b) - φ(a) ≤ φ(b) - φ(0). Formally, this can be done: (a) by viewing the members of this inequality as integrals of φ′ and comparing them by using the fact that φ′ is decreasing; or (b) by using suitable convexity inequalities for the function -φ. 64.140.121.160 (talk) 06:50, 27 March 2012 (UTC)
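The general fact behind Sławomir's hint, stated for an arbitrary increasing, concave φ with φ(0) = 0 (the properties spelled out by 64.140.121.160 above): concavity gives subadditivity, and subadditivity plus monotonicity gives the triangle inequality for φ composed with a metric d.

\begin{align*}
\varphi(a) &= \varphi\!\Big(\tfrac{b}{a+b}\cdot 0 + \tfrac{a}{a+b}\,(a+b)\Big) \ge \tfrac{a}{a+b}\,\varphi(a+b),
&
\varphi(b) &\ge \tfrac{b}{a+b}\,\varphi(a+b),
\end{align*}

so $\varphi(a)+\varphi(b)\ge\varphi(a+b)$ whenever $a+b>0$ (and trivially when $a=b=0$); hence

\[
\varphi\big(d(x,z)\big) \le \varphi\big(d(x,y)+d(y,z)\big) \le \varphi\big(d(x,y)\big)+\varphi\big(d(y,z)\big).
\]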
Adding exponentials
I'm tutoring this student, but there's this one problem that absolutely stumps me.
2^(5/2)-2^(3/2)
I know the answer is 2^(3/2) but I have no idea how to get it. I tried everything, logs, rules of exponents, etc. I know you can't combine them unless they have the same exponent but I don't know how to get there. ScienceApe (talk) 17:11, 26 March 2012 (UTC)
- 2^2.5 - 2^1.5 = 2^1.5 * (2^1 - 1) = 2^1.5 * (2 - 1) = 2^1.5 84.197.178.75 (talk) 17:22, 26 March 2012 (UTC)
- Thanks! I knew it was something so stupid and simple, but I couldn't figure it out. ScienceApe (talk) 18:54, 26 March 2012 (UTC)
- Apologies if this is already obvious, but a more intuitive expression of the identical math from above is:
- 2^2.5 - 2^1.5 = 2 * 2^1.5 - 2^1.5 = 2^1.5
- as in "two things minus one of the same things is one of those things." -- ToE 07:25, 28 March 2012 (UTC)
- The steps are 2^2.5 − 2^1.5 = 2^(1+1.5) − 2^(0+1.5) = 2^1 * 2^1.5 − 2^0 * 2^1.5 = (2^1 − 2^0) * 2^1.5 = (2 − 1) * 2^1.5 = 1 * 2^1.5 = 2^1.5. Intuitiveness is subjective. The OP did not need all these intermediate results. Bo Jacoby (talk) 21:39, 31 March 2012 (UTC).
Show that these metrics are equivalent
Given a metric , show that and are equivalent. You can find such that quite easily, but what about , or is there another way to show equivalence? Widener (talk) 20:02, 26 March 2012 (UTC)
- works for that direction. Are you sure you don't mean the other direction? For that one, it's helpful to remember that you don't need to do it for all , only sufficiently small . In particular, you can assume the denominator is at most 2.--121.74.125.218 (talk) 21:04, 26 March 2012 (UTC)
- Ouch, I meant this:. Sorry. Widener (talk) 21:13, 26 March 2012 (UTC)
- Well, it should still work. You can assume the denominator is between 1 and 2, which simplifies things.--121.74.125.218 (talk) 21:19, 26 March 2012 (UTC)
- How can you make that assumption? You don't get to choose , you can only choose . — Preceding unsigned comment added by Widener (talk • contribs) 21:25, 26 March 2012 (UTC)
- Because if you do it for small distances, you can do it for all distances.--121.74.125.218 (talk) 22:21, 26 March 2012 (UTC)
- Argh; I mixed up the inequalities as well. Widener (talk) 01:21, 27 March 2012 (UTC)
- Can't you use the sufficient condition given in Equivalence of metrics:
- there exists a strictly increasing, continuous, and subadditive f such that d2(x,y) = f(d1(x,y))
- Then all you have to do is show that x/(1+x^(1/2)) for x >= 0 is continuous, strictly increasing and concave, since that implies subadditive. Concavity can be proven by showing the derivative is monotonically decreasing. The advantage is you only have to deal with x/(1+x^(1/2)), not with a function d1(x,y) that you don't know. 84.197.178.75 (talk) 22:03, 26 March 2012 (UTC)
- and f(0) = 0, forgot that condition 84.197.178.75 (talk) 22:06, 26 March 2012 (UTC)
- Why do those conditions work? Widener (talk) 01:26, 27 March 2012 (UTC)
- Good question, I guess you'd have to prove that as well ...
- Other way then, starting from the definition of equivalency that you use, if I understand correctly you mean, , and the other way round? And I assume the right function is d/sqrt(d+1), not d/(1+sqrt(d)).
- take δ = ε in the first case; if d < δ then d/sqrt(1+d) < ε since the sqrt is 1 or bigger.
- Reverse case: take δ = minimum of (1/sqrt(2), ε/2). You can show that d/sqrt(d+1) is increasing (by calculating the derivative), so if d/sqrt(d+1) < 1/sqrt(2), then d must be smaller than 1; therefore d/sqrt(d+1) < δ gives d < sqrt(d+1) * δ < sqrt(2) * δ < 2 * δ <= ε; ie d < ε. Ssscienccce (84.197.178.75) (talk) 14:38, 27 March 2012 (UTC)
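The derivative computations asserted in this thread, for f(x) = x/√(1+x) with x ≥ 0 (taking, as Ssscienccce eventually did, this as the intended function):

\[
f'(x) = \frac{1+\tfrac{x}{2}}{(1+x)^{3/2}} > 0,
\qquad
f''(x) = \frac{-1-\tfrac{x}{4}}{(1+x)^{5/2}} < 0,
\]

so f is strictly increasing and concave on [0, ∞), which is what the monotonicity and subadditivity arguments above require.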
March 27
Trigonometry practice questions
Are there some websites where I can practice trigonometry topics like the right-angled triangle and Pythagorean theorem, sine, cosine, tangent, cosecant, secant, cotangent, degrees-minutes-seconds form, angle of elevation, angle of depression, trigonometric functions of any angle, converting between degree and radian measure, graphs of the sine, cosine, and tangent, applications of trigonometry, and the law of sines and law of cosines? — Preceding unsigned comment added by 70.54.64.180 (talk) 01:09, 27 March 2012 (UTC)
- All I can come up with is a search for "Trigonometry exercises", I'm guessing you can do that as well. Some results anyway:
- http://math.usask.ca/emr/menu_trig.html
- http://www.analyzemath.com/trigonometry_worksheets.html
- http://www.homeschoolmath.net/online/trigonometry.php
- math.bard.edu/belk/math141/TrigonometryExercises.pdf
- http://www.khanacademy.org/exercise/inverse_trig_functions Ssscienccce (84.197.178.75) (talk) 17:51, 27 March 2012 (UTC)
Help me name or find the existing name for this geometric concept!
This may have a proper name, if so - let's discuss. If not, let's name it. This is for a web application in C#, so whatever we call it I will start naming as such in my code.
I'm taking GPS data as a collection of n 'Points', each point has latitude (degrees), longitude (degrees), elevation (meters), and datetime(DateTime object). From this, I compute a list of n-1 'Segments.' In each Segment object, I have distance(meters), time(seconds), and velocity(meters/second), effectively taking the derivative between each consecutive point.
Now, I want to create n-2 '<name this object>s'. Each will have acceleration (meters/second/second). A good name for a collection of segments may be 'Path' or 'Track', but this is specifically an object that is two segments only (comprising three consecutive points), which holds the second derivative value.
I'm leaning toward 'Segue' and this will be the object's name as I flesh out the details. Is there an existing name for this concept, or does anyone have any suggestions? — Preceding unsigned comment added by Ehryk (talk • contribs) 03:42, 27 March 2012 (UTC)
- I'm not quite following. Why are you computing such values ? As for acceleration, wouldn't you also need an initial velocity (speed and direction) in order to ensure that you go through all 3 locations at the indicated times ? Or do you assume the initial velocity is always zero ? (If so, some combos of 3 locations and times might not work.) And don't you also want to know the velocity at points 2 and 3, as well ? Also note that your acceleration will be a 3D vector, so will have i, j, k components.
- Perhaps I'm not quite following. I am pulling the points from GPS data which stores all the points, I'm analyzing it to determine things like max acceleration, average velocity, etc. I don't assume initial velocity is zero, nor am I trying to find the velocity AT any point, or the acceleration AT any segment. I'm computing velocity BETWEEN points, and acceleration BETWEEN segments, and looking for the best name for 'Between two connected segments.' Ehryk (talk) 15:04, 27 March 2012 (UTC)
- In the first case, you determine the average velocity, which has some value. However, in the second case you're calculating the average acceleration, which doesn't seem of much use to me, say, when calculating a car trip, since the acceleration is so uneven (sometimes positive, negative when you brake, sometimes zero). Since periods of accel and decel are so short, the three GPS coords would have to be taken within maybe a second of each to give a good approximation for current accel, and at such resolution, inaccuracies in the GPS points would likely throw it off. So, is your application designed to find car accel, or something else ? StuRat (talk) 17:02, 27 March 2012 (UTC)
- Why does uneven lead you to unuseful? The time resolution can be set by the individual GPS units, but on a Garmin under 'Fine' update mode we're getting points every 0.5 - 3 seconds. I can then compile these values into 'Time spent braking', 'Time spent coasting', and 'Time spent accelerating', 'Max acceleration', 'Max deceleration', etc. - each of which seem useful to me.Ehryk (talk) 19:21, 27 March 2012 (UTC)
- Well, even at the 0.5 second increment, that's a second elapsed for 3 points, and the accel may change quite a bit over a second. Also, as I said before, the error in the GPS coords must be significant in such tightly spaced points. Do you know what the accuracy is ? I'd think, to get accurate results for instantaneous accel, you'd want to use an accelerometer. StuRat (talk) 04:24, 28 March 2012 (UTC)
- You're quite right, but the point of this isn't really for accuracy with regard to instantaneous acceleration, average will do just fine. All I'm trying to glean is more basic analysis: when were you, on average, increasing in speed, decreasing, staying roughly the same, and in the context of driving in this application, if you can't hold your accel/decel for over a second, I don't want it.Ehryk (talk) 09:19, 28 March 2012 (UTC)
- Without knowing anything else, I'd want to call the first group 'Speed' and the second group 'Accel' (even though more info is stored than just that). Or you could use the 3 point analogue to a 'Segment', and call the second group an 'Arc'. StuRat (talk) 05:44, 27 March 2012 (UTC)
- "Arc" sounds like a good word for this, though I don't know of a standard term for what you're describing. Or you may want to use Seg1, Seg2 and Seg3 for points, segments and arcs. -- Meni Rosenfeld (talk) 12:38, 27 March 2012 (UTC)
- I'm looking for the name of the geometric concept that will contain speed, acceleration, distance values, etc. Segment is precisely what I want for the first derivative. 'Arc' doesn't sit right with me, because I immediately think of an arc of a circle, so I'd store things like 'radius, start, end, degrees' in that. I can go with new terms here, as well - if there isn't a term that precisely describes this already. Ehryk (talk) 15:04, 27 March 2012 (UTC)
- Note that an arc isn't necessarily circular. For example, there are also arcs described by polynomial equations. See arc (geometry). StuRat (talk) 17:05, 27 March 2012 (UTC)
- If you want to focus on the betweenness, you can use the adjectives interstitial [1] or interpolated, so maybe "interstitial velocity" would be a good word for an estimated velocity between two known velocities. If you want to focus on the "completeness" of a set of values for acceleration, velocity, etc, you could try something with telos, which is really just greek for the end. As in, once you know all these variables in a deterministic Newtonian system, you know the telos of the object. I also like Meni's suggestion of arc. It (and telos) is often used in a sense of "trajectory" or "ultimate path", and is not limited to circles. SemanticMantis (talk) 16:51, 27 March 2012 (UTC)
- Arc was StuRat's suggestion, I merely concurred. -- Meni Rosenfeld (talk) 18:33, 27 March 2012 (UTC)
- Thanks, I appreciate these and I'll consider them. I was hoping for a more definite term, like how Segment immediately evokes connecting two points, that would mean two connected segments in itself. It's good to note that there is not a term that means this.Ehryk (talk) 19:21, 27 March 2012 (UTC)
- not so fast, Ehryk. There's the concept span, which in bridging and other fields can be thought to be composed of segments. Travel span analysis? --Tagishsimon (talk) 19:27, 27 March 2012 (UTC)
- Right, but if I wanted 'composed of segments' I could use: Span, Path, Track, Polygon (if the ends connect), Journey... and others. I don't mean "Contains one or more segments", I want a word: A geometric region is an <xxxx> iff it is TWO segments connected such that they share an endpoint. Unless we're all mistaken, there isn't a word that means specifically this.Ehryk (talk) 19:40, 27 March 2012 (UTC)
- Note that the term spline is used to describe two or more arcs blended together (and those arcs can include line segments, as in the first illustration in that article). StuRat (talk) 04:18, 28 March 2012 (UTC)
- I'd have to say that's the closest I've come to an answer. Keep in mind, I'm up for made up words, too: Bisegment, DualSegment, SegAngle, Kink, Turn, etc. etc. - throw some new ideas out!Ehryk (talk) 09:19, 28 March 2012 (UTC)
- Vertex ? Corner ? StuRat (talk) 03:23, 31 March 2012 (UTC)
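A minimal sketch of the three levels discussed in this thread, using 'Arc' (one of the suggestions above) as a stand-in for the still-unnamed two-segment object; every class and member name here is illustrative rather than established terminology:

using System;

public class TrackPoint
{
    public double Latitude;    // degrees
    public double Longitude;   // degrees
    public double Elevation;   // meters
    public DateTime Time;
}

public class Segment           // first derivative: two consecutive points
{
    public double Distance;    // meters
    public double Seconds;     // elapsed time
    public double Velocity => Seconds > 0 ? Distance / Seconds : 0.0;   // m/s
}

public class Arc               // second derivative: two consecutive segments (three points)
{
    public Segment First;
    public Segment Second;

    // Average acceleration in m/s^2: change in average velocity divided by
    // the time between the two segment midpoints.
    public double Acceleration
    {
        get
        {
            double dt = (First.Seconds + Second.Seconds) / 2.0;
            return dt > 0 ? (Second.Velocity - First.Velocity) / dt : 0.0;
        }
    }
}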
Homomorphisms from GL(n,Z) into a symmetric group?
Fix n > 1. Other than the homomorphism induced by looking at determinants, are there any non-trivial homomorphisms from GL(n,Z) into a (non-trivial) symmetric group? The determinant map is the only one I've been able to come up with. I know generators for the automorphism group of GL(n,Z), and have tried pre-composing the determinant map by these, but they either don't change it or make the map trivial. It's such a nicely phrased question, surely someone must have tackled it before...? Thanks for any tips! Icthyos (talk) 14:20, 27 March 2012 (UTC)
- Actually, it turns out the only target group I'm interested in is the group of order two. I'm thinking it might work to show that any homomorphism from GL(n,Z) to Z2 has the property of being a determinant-like function, then invoking uniqueness of such a function to show the only homomorphism is Det. Does that sound plausible? Icthyos (talk) 21:03, 27 March 2012 (UTC)
- The problem is equivalent to finding all normal subgroups of finite index. The mod n subgroups provide a great many examples. The normal subgroups have been classified to some extent by Newman in 1967. Sławomir Biały (talk) 22:16, 27 March 2012 (UTC)
- Ah-hah, what a clever way of thinking about it. I forgot about Cayley's theorem. Thanks! Icthyos (talk) 18:25, 28 March 2012 (UTC)
- Congruence subgroup may be of interest for Sławomir's suggested examples, though that article needs more on what the solution of the problem was, mostly, those are the only examples. Especially for the case of maps into Z2 or an abelian group, see also Algebraic_K-theory#K1 for the infinite general linear group over Z. Milnor's book IIRC should have enough on the finite-dimensional (sub)groups you are particularly interested in. (always approximate the finite by the easier infinite :-) ). The point is that these groups are simple enough, noncommutative enough, that for maps to abelian groups, only the determinant should be there. I.e. the commutator subgroup should be generated by the elementary matrices and should be all of SL. (see also Special_linear_group#Relations_to_other_subgroups_of_GL.28n.2CA.29 and other linked articles here and there).John Z (talk) 10:17, 1 April 2012 (UTC)
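John Z's abelianization remark can be written out as follows for n ≥ 3 (for n = 2 the group SL_2(Z) is not perfect, so the count of homomorphisms is different):

\[
\operatorname{Hom}\big(\mathrm{GL}_n(\mathbb{Z}),\,\mathbb{Z}/2\big)
\cong \operatorname{Hom}\big(\mathrm{GL}_n(\mathbb{Z})^{\mathrm{ab}},\,\mathbb{Z}/2\big),
\qquad
[\mathrm{GL}_n(\mathbb{Z}),\mathrm{GL}_n(\mathbb{Z})] = \mathrm{SL}_n(\mathbb{Z})
\;\Longrightarrow\;
\mathrm{GL}_n(\mathbb{Z})^{\mathrm{ab}} \cong \mathbb{Z}/2 ,
\]

so the determinant induces the only nontrivial homomorphism to the group of order two.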
March 28
Determining whether a given polynomial is square.
Given a polynomial p(x), is it possible to tell what integer values of x make p(x) prime? p(x) = x^4 + x^3 + x^2 + x + 1. The only solution I've found is x = 3, p(x) = 121, and I think it's the only one.
I've tried taking the square root, which gets me p(x) = (x^2 + 1/2 x + 3/8)^2 + 5/8 x + 55/64. However, this does not seem to be useful. I was attempting to get it in the form sqrt(p(x)) = sqrt(a(x)) + b(x), and then solve a(x) is a square and b(x) is an integer. But instead I got sqrt(p(x)) = sqrt(a(x) + b(x)).
Then I tried to use modular arithmetic to find restrictions on x. p(x) = x(x+1)(x^2 + 1) + 1, so I split n = sqrt(p(x)) into two cases, even and odd. If n is even, then p(x) is divisible by 4, which is impossible. So n must be odd. I tried other mods, but I couldn't get anything useful.
Is there another method of solving this, or should I stick with the modular arithmetic? Eaglenoise (talk) 18:11, 28 March 2012 (UTC)
- Most of them look prime to me:
  x    p(x)
 --    ----
 -6    1111
 -5     521
 -4     205
 -3      61
 -2      11
 -1       1
  0       1
  1       5
  2      31
  3     121
  4     341
  5     781
  6    1555
- Of that list, the only p(x)'s over 1 that aren't prime are 121 (11 squared), 205 (5×41) and 1555 (5×311). StuRat (talk) 19:54, 28 March 2012 (UTC)
- Clarification needed: Do you mean to check for it being squared, as you said in the title, or prime, as you said in the first line ? If you meant squared, then I only find three values, where x is between -999 and 999:
  x    p(x)
 --    ----
 -1       1
  0       1
  3     121
- Somebody else will have to help you with a proof that there are no more beyond that. StuRat (talk) 21:43, 28 March 2012 (UTC)
- Yes, I meant p(x) is a square. Sorry about that. And it says positive integers, so I think 3 is the only solution. Eaglenoise (talk) 23:47, 28 March 2012 (UTC)
- OK, do you need a proof, or can we mark this resolved ? StuRat (talk) 01:28, 29 March 2012 (UTC)
- I think it needs to be a proof. Not necessarily a formal one, but I need to show that no other solutions exist, using something better than proof by lots of samples. Eaglenoise (talk) 01:46, 29 March 2012 (UTC)
- You can use calculus to do it. If you let y = √(x^4+x^3+x^2+x+1), then you can write the Taylor series expansion for y in terms of 1/x. I just used Wolfram Alpha for this [2] to find y = x^2 + x/2 + 3/8 + 5/(16x) + O(1/x^2) but it can be done by hand also. The x^2 + x/2 + 3/8 part differs from an integer by at least 1/8, so I think if you can find a bound on x over which the tail of the series is below 1/8 using Taylor's theorem you'll prove what you need. Then check all x below this bound explicitly as StuRat was doing. Rckrone (talk) 14:34, 29 March 2012 (UTC)
- Okay, after several ninth degree polynomials, I've got it down to a simple inequality. Thanks! Eaglenoise (talk) 02:38, 30 March 2012 (UTC)
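Rckrone's bounding idea can also be made exact without the series, by trapping 4p(x) between consecutive squares; the auxiliary polynomial q below is a convenient choice for this sketch, not something taken from the thread. With q(x) = 2x^2 + x:

\[
4p(x) - q(x)^2 = 3x^2 + 4x + 4 > 0 \ \text{for all } x,
\qquad
\big(q(x)+1\big)^2 - 4p(x) = x^2 - 2x - 3 = (x-3)(x+1),
\]

so for x > 3 (or x < -1) the integer 4p(x) lies strictly between the consecutive squares q(x)^2 and (q(x)+1)^2 and cannot be a perfect square; checking x = -1, 0, 1, 2, 3 by hand then leaves x = -1, 0, 3 as the only integer solutions.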
An apparent counter-example against the completeness of Presburger Arithmetic:
Our Article claims that Presburger Arithmetic is complete. Isn't the following proposition a counter-example?
- ∀x ¬(x+x=1)
77.127.183.58 (talk) 19:06, 28 March 2012 (UTC)
- What makes you think that proposition is unprovable in Presburger Arithmetic? If I remember correctly, Presburger Arithmetic is more or less Peano Arithmetic without multiplication, so you still have induction. It should be a simple matter to prove that by induction. --Trovatore (talk) 19:20, 28 March 2012 (UTC)
- Hello Trovatore, I'm from the restrahnt (rather than the restrnt). Just think about the model of all rational non-negative numbers: This model satisfies Presburger axioms, yet not the proposition I've suggested. This proves that my proposition is not provable in Presburger Arithmetic. Similarly, the model of all non-negative integers, proves that the negation of my proposition is not provable (in Presburger Arithmetic) either. 77.127.183.58 (talk) 19:42, 28 March 2012 (UTC)
- I may be speaking out of turn, but it would seem to me that in the model of non-negative rational numbers that the axiom schema 5 (induction) is not satisfied. If it was, one could prove all sorts of mathematical weirdnesses for rational numbers. — Quondum☏✎ 20:04, 28 March 2012 (UTC)
Yes, I see. Thank you all. 77.127.183.58 (talk) 20:16, 28 March 2012 (UTC)
Implementing RK45 Correctly
Hey everyone, so I am trying to write my own RK45 and it will solve a 2x2 system of ODEs. The RHS will be F(x,y) returning (x',y'). For just a single ODE where our right-hand side is just a scalar x'=f(x), we compute k1,k2,k3,k4,k5,k6 and then we get (which is the 4th-order approximation) and (which is the 5th-order approximation). Then we get the scalar
and the optimal time step is then the scalar times the current time step.
First question, after I get the scalar, I am supposed to repeat the RK step with this new time step, right? Because the first attempt was just "experimentation" to see what would be the optimal step size. So for every step, I have to do RK twice?
Second, if I have to repeat twice, then the second time should I do RK4 or RK5? RK4 would require a total of 10 function calls and RK5 will require a total of 12 function calls.
Third, if I don't have to repeat twice, then why not? Do I just use one of the approximations I already computed, or ? And should I just use because I have already computed it AND it is fifth order accurate.
Fourth, finally for my vector function (x',y')=F(x,y), which norm should I use in the denominator for the scalar? Just the Euclidean norm or the max or what? Thanks! - Looking for Wisdom and Insight! (talk) 23:07, 28 March 2012 (UTC)
- Maybe a question for the computing ref desk? Or include some wikilinks to the relevant articles, so people know exactly what it's about... Ssscienccce (84.197.178.75) (talk) 12:19, 1 April 2012 (UTC)
Well, I thought it was more relevant here than at the computer science desk and I thought people here would know RK methods. Here is the Runge–Kutta–Fehlberg_method I was talking about. Either the questions are so obvious that no one thought they needed answering or the questions are so uncommon that no one has any answers or suggestions. I googled too and couldn't really come up with anything satisfactory for any of these questions. So any help would still be appreciated. - Looking for Wisdom and Insight! (talk) 10:22, 2 April 2012 (UTC)
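One conventional way to organise the step control is sketched below. The Fehlberg tableau step itself (Rk45Step) is assumed to be implemented separately and to return both the 4th- and 5th-order estimates from the same six function evaluations; the per-unit-step error test, the 0.84 safety factor, the clamping of the step-size factor, and the Euclidean norm are common textbook choices for RKF45, not requirements. In this scheme the trial step is redone only when it is rejected, and on acceptance the already-computed higher-order estimate is reused, so no extra function calls are needed.

using System;

// Assumed to exist elsewhere: one Fehlberg trial step returning both estimates.
delegate (double[] y4, double[] y5) Rk45Step(
    Func<double, double[], double[]> f, double t, double[] y, double h);

static class AdaptiveDriver
{
    // Euclidean norm of (a - b); a max norm is another common choice.
    static double ErrorNorm(double[] a, double[] b)
    {
        double s = 0;
        for (int i = 0; i < a.Length; i++) { double d = a[i] - b[i]; s += d * d; }
        return Math.Sqrt(s);
    }

    // One accepted step: returns the new time, state and suggested step size.
    public static (double t, double[] y, double h) Step(
        Rk45Step rk, Func<double, double[], double[]> f,
        double t, double[] y, double h, double tol)
    {
        while (true)
        {
            var (y4, y5) = rk(f, t, y, h);           // six evaluations of f
            double r = ErrorNorm(y4, y5) / h;        // error estimate per unit step
            double delta = r > 0 ? 0.84 * Math.Pow(tol / r, 0.25) : 4.0;
            delta = Math.Min(Math.Max(delta, 0.1), 4.0);

            if (r <= tol)                            // accept: keep the 5th-order value
                return (t + h, y5, h * delta);       // (some texts advance with y4 instead)

            h *= delta;                              // reject: delta < 1 here, so h shrinks
        }
    }
}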
Deciding whether a pentagon is cyclic
I'm reposting here this question from Georgia guy (talk) from Talk:Pentagon:
- A triangle ABC is always cyclic.
- A quadrilateral ABCD is cyclic if and only if A+C = 180 degrees (B+D will also always be 180 degrees.)
- A pentagon ABCDE is cyclic if and only if....
Duoduoduo (talk) 19:26, 29 March 2012 (UTC)
- A pentagon ABCDE is cyclic if and only if quadrilaterals ABCD and BCDE are both cyclic. Bo Jacoby (talk) 20:28, 29 March 2012 (UTC).
- And the next case, if you're interested: a hexagon ABCDEF is cyclic if and only if the sum of both sets of alternate angles (A + C + E and B + D + F) is 360 degrees.→31.53.233.105 (talk) 11:39, 30 March 2012 (UTC)
- So is it true whenever the number of sides n is even that a polygon is cyclic iff each set of alternate angles sums to 90°×(n–2)? And, is there any comparable rule when n is odd? Duoduoduo (talk) 14:40, 30 March 2012 (UTC)
- This condition is necessary but not sufficient. For a counterexample, start with a regular hexagon, then take a pair of opposite parallel sides and increase their length by a factor of 100. Angles are all still 120 degrees so this polygon satisfies the alternate angle condition, but it is clearly not cyclic. Gandalf61 (talk) 07:38, 31 March 2012 (UTC)
March 29
math onto graphmatica
How do I insert an equation like y=2-3 with the exponent 2 above 2, and y=5 to the power of x+3 and y=2 to the power of 2x-1, into the program Graphmatica? — Preceding unsigned comment added by 70.31.20.138 (talk) 19:50, 29 March 2012 (UTC)
- I haven't used graphmatica, but the usual way is to add parentheses, like this: y=5^(x+3). That works on Wolfram alpha, for instance. 130.76.64.118 (talk) 21:40, 29 March 2012 (UTC)
A notation for Hadamard product
I was curious if anyone knew of a common notation for the Hadamard product akin to capital sigma notation for summation. That is, I need to denote the Hadamard product over all i. Anyone seen something like this? Is there a common convention? Thanks, --TeaDrinker (talk) 19:54, 29 March 2012 (UTC)
- . Bo Jacoby (talk) 20:34, 29 March 2012 (UTC).
- Ah sure, just describe the product pointwise; that would certainly work. Thanks, --TeaDrinker (talk) 21:32, 29 March 2012 (UTC)
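One common workaround along the lines of Bo Jacoby's pointwise suggestion; the big-circle operator and the names A_1, …, A_m are purely illustrative here, not an established convention:

\[
\Big(\bigodot_{i=1}^{m} A_i\Big)_{jk} \;=\; \prod_{i=1}^{m} \big(A_i\big)_{jk} .
\]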
March 30
World Population
Roughly how do various agencies and organizations around the world estimate the world's population? I'm guessing that they mainly use censuses, but what calculations do they use to estimate uncounted areas or non-responders? Thanks, 207.6.208.66 (talk) 01:53, 30 March 2012 (UTC)
- I don't have much knowledge in this field, but you may find it of interest to read through this page on how the US Census Bureau estimates and projects population. From a quick skim-through, it seems that a lot of it for areas without a census is basically informed guessing, looking at things like immigration/emigration, any natural or health disasters, and data from public health efforts. 151.163.2.8 (talk) 18:22, 30 March 2012 (UTC)
- Births, deaths, school enrollments, and consumption of energy and food figure in to many calculations. Dru of Id (talk) 09:47, 31 March 2012 (UTC)
- You have to do sampling and a thorough check of the sampled areas. And when they say that hut in the back garden is just there so they can accommodate their aged grandmother on visits you have to reassure them you're not going to hand in illegal aliens or persecute them for renting it out. Dmcq (talk) 13:59, 31 March 2012 (UTC)
Lottery odds
Can someone check my calculation of the odds of winning the lottery described here? The article says one in 176 million (which would give a positive expectation, which is very unusual for a lottery, even with a rollover), but I calculate one in 21 billion (which would give a very negative expectation). I suspect that the 176 million figure is the chance of winning anything at all, rather than just the jackpot, but the article is only talking about the jackpot. --Tango (talk) 22:25, 30 March 2012 (UTC)
- Here is a more detailed article with the same odds. --Tango (talk) 22:27, 30 March 2012 (UTC)
- (56*55*54*53*52)/(5*4*3*2*1)*46 = 175,711,536 combinations. The 1 in 176 million chance of winning the jackpot is correct. Dragons flight (talk) 22:33, 30 March 2012 (UTC)
- Yeah, I forgot the 5 don't need to be in order - careless of me! In that case, it's a rather good lottery! Once you allow for diminishing marginal utility of money, there probably isn't a positive expected utility, but that's still the highest expectation I've seen in a lottery. Thanks for your help! --Tango (talk) 00:02, 31 March 2012 (UTC)
- Also, the expected return is not "the entire jackpot;" it's "the jackpot, divided by number of winners." If it were possible to win the entire jackpot, everybody with $176,000,000 would purchase one of each possible numeric tickets, and would win the $500,000,000 jackpot, netting some three hundred million dollars. Nimur (talk) 00:35, 31 March 2012 (UTC)
March 31
A maximal theory for solving physical problems
Hi everyone
This is a problem that I've wondered about and tried to solve for a very long time, but it's a bit abstract and difficult to explain. I'll start with the motivation for solving it - In the past I tried to find some general algorithm for solving all problems in mathematics. Ultimately I concluded that no such algorithm exists; in fact, I discovered Entscheidungsproblem (which I think can be re-framed into exactly this) and the proofs that it was not possible to solve Entscheidungsproblem. Giving up on that, I then considered if there was a "best possible approximation" for such an algorithm. I formally framed it like this: Does there exist an algorithm q such that for all algorithms p and problems x we have "p can solve x" implies "q can solve x". I'm still a bit unsure about this, but again I believe the answer is "no". Given any algorithm which can solve a subset of all problems, you can create a new improved algorithm from it just by adding the answer to a problem that it can't solve to the algorithm. You can limit the problems to ones for which there exist correctness proofs, which Marcus Hutter has done. However, for any particular problem, this algorithm takes over 5 times as long to solve as the most efficient algorithm for that problem, so it can again be improved by replacing it with the most efficient algorithm for this problem.
Can anything like this be done when limited to physical or synthetic problems? Widener (talk) 05:36, 31 March 2012 (UTC)
- Can the "algorithm" be a meta-algorithm which chooses the correct algorithm for each problem ? StuRat (talk) 05:41, 31 March 2012 (UTC)
- I don't see why not. Widener (talk) 05:45, 31 March 2012 (UTC)
- In that case, I'd say sure, we can make a meta-algorithm which will search through all the possible solution algorithms and find the appropriate one for each math problem. This would be quite complex, as there are many ways to solve a math problem, and it isn't always obvious which one is correct. In some cases it may be necessary to try multiple methods to find the one that works.
- Are you sure this is possible? Is the set of all the possible solution algorithms recursively enumerable (or even countable)? Widener (talk) 14:20, 31 March 2012 (UTC)
- In fact, what you described looks very similar to Marcus Hutter's algorithm. Widener (talk) 14:22, 31 March 2012 (UTC)
- Recognizing what type of problem you have would fall into the artificial intelligence area (especially if it starts as a story problem). The computer that could do this would be something like Watson. It might also need to ask follow up questions, like whether we want imaginary number solutions. StuRat (talk) 14:04, 31 March 2012 (UTC)
- That hutter1.net link seems to be broken. 81.98.43.107 (talk) 11:29, 31 March 2012 (UTC)
- Fixed it. Widener (talk) 13:17, 31 March 2012 (UTC)
- Apart from infinite memory, one can make a physical model of a turing machine or equivalent, so couldn't Gödel or Entscheidungsproblem be translated into a physical problem, leading to the same conclusion? Ssscienccce (84.197.178.75) (talk) 13:43, 1 April 2012 (UTC)
- That is a good point; it reminds me of Stephen Hawking's criticism of a theory of everything. Memory may be significant though. Widener (talk) 05:43, 2 April 2012 (UTC)
Differential Equations
Given the location of the (in my case, two, but I'd be interested in the general case as well) regular singular points of a second order linear ordinary differential equation, how do you go about constructing the differential equations with said singularities? In my particular example, I attempted to use the form of the second order linear ODE with three regular singular points and take the exponents (i.e. the roots of the associated indicial equation) as zero and 'work down' to two singular points, which happen to be zero and infinity, but then I realised that the equation I was given purposefully excludes consideration of the point at infinity. Can anyone help me? Thanks. 131.111.216.115 (talk) 15:56, 31 March 2012 (UTC)
- According to our article Regular singular point and Wolfram, the regular singularities are given by the functional coefficients having simple poles at the required points. If you want singularities at p1, …, pn then why not try
- — Fly by Night (talk) 21:45, 31 March 2012 (UTC)
- Thank you for your reply but, perhaps before addressing it, I should make clear precisely what I am attempting to do. I have to find all differential equations of the form y'' + p(z)y' + q(z)y = 0 such that zero and the point at infinity are regular singular points of the equation, meaning that p(z) and q(z) have, at most, a simple pole and a double pole respectively at these points, while every other point is an ordinary point. Now, to your answer: I can see how this would be helpful, after setting n=2, but it will only work for one case, e.g. we are assuming here that q(z) is analytic everywhere, which will not be true in general. Is this the best method or could it be done more efficiently? Thanks. 131.111.216.115 (talk) 09:35, 1 April 2012 (UTC)
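If the equation is taken in the standard form y'' + p(z)y' + q(z)y = 0 (an assumption, since the precise form is only implicit above), the constraints in the last comment pin it down completely: p may have at most a simple pole at 0 and q at most a double pole there, both must be analytic at every other finite point, and z p(z) and z^2 q(z) must stay bounded as z → ∞ for infinity to be at most a regular singular point. Liouville's theorem then kills the entire parts, forcing

\[
p(z) = \frac{a}{z}, \qquad q(z) = \frac{b}{z^{2}},
\qquad\text{i.e.}\qquad
z^{2}y'' + a\,z\,y' + b\,y = 0 ,
\]

the Euler (equidimensional) equation, for constants a and b.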
Highly leveraged small financial bets as an alternative to lottery tickets
I'm sure this issue has been addressed before. I have a Contract for difference account which I have yet to ever use. Can someone point me in the direction of any articles / information on using such accounts to make small, highly leveraged bets, instead of traditional lottery tickets (and the relative mathematics and odds involved)? — Preceding unsigned comment added by 58.111.224.202 (talk) 23:40, 31 March 2012 (UTC)
- Leveraged products such as CFDs appear attractive because they offer the potential for large gains on a small initial deposit. However, they also carry the risk of large losses. With a lottery ticket your maximum loss is limited to the price of the tickets that you buy. But with a CFD your maximum potential loss is very large indeed, and can certainly be many times your initial deposit. Another difference is that a lottery ticket is a "fire and forget" purchase, whereas a CFD account needs constant attention, as you must be continually deciding whether to close out your positions or roll them over. So two questions to ask yourself before you start trading CFDs are (i) can I afford large losses and (ii) can I afford the time required to manage my positions. Gandalf61 (talk) 01:05, 1 April 2012 (UTC)
- (From the OP, on a different computer) Aren't those two problems solved simply by setting a Stop price on your position? That being a given, how else do the two compare, favourably or otherwise? 114.78.181.219 (talk) 01:59, 1 April 2012 (UTC)
- Average return on investment in lottery tickets is almost always negative (I'm not sure of any case where it's actually in your interest to buy a lottery ticket, but theoretically, it could happen). Average return on investment in any stock fund is almost always positive (with the occasional exception, like if Bernie Madoff with your money). You might want to consider penny stocks, as they offer the possibility of high returns but with a minimal investment, much like the lottery. StuRat (talk) 01:09, 2 April 2012 (UTC)
- The return can be positive if a lottery produces no winner and the prize money rolls over -- especially if that happens repeatedly, as with the MegaMillions lottery that was recently covered in the US news media. In that case, a $1 investment produced an expected return of around $2 -- although the chances of winning were estimated at around 1 in 176 million. Looie496 (talk) 22:49, 2 April 2012 (UTC)
- Can you provide a source for those calcs ? I just did them myself, and figured it had a (negative) return of around 50 cents on a dollar. Did they count taxes, inflation for the installment plan, and having to split the prize with the other winners ? StuRat (talk) 23:42, 2 April 2012 (UTC)
- It's well-known that draws can be positive in certain situations, but it would take many lifetimes to win the prize. Anyways, this isn't my question. 23:56, 2 April 2012 (UTC) — Preceding unsigned comment added by 58.111.224.202 (talk)
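The rough arithmetic behind both positions, using the advertised $500,000,000 jackpot mentioned above and ignoring the lower-tier prizes:

\[
\frac{500{,}000{,}000}{175{,}711{,}536} \approx 2.85 \ \text{dollars per \$1 ticket},
\]

which is why the naive expectation looks positive; splitting the jackpot among multiple winners, taking the lump sum rather than the annuity, and taxes are exactly the corrections that can pull the figure back below one dollar.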
April 1
Probability - bound on a sum of binomial random variables
Hi,
I might have asked this before here sorry, I don't think I have but I can't recall - if I did I don't think I got a response so I thought I might try again either way! Thanks for your help in advance, it's what looks to be a reasonably simple probability problem I can't quite get my head around.
I have independent binomial random variables ; so the expectation of any is . It is easy enough to show (by Chernoff for example) that , for n large enough.
Now, a paper I'm reading claims it follows from this that with probability at least , the sum of the sizes of the is at most . (This will probably be "for all n sufficiently large")
However, I can't see how to get this result out; the looks like some sort of complement and union bound, but since we're working with m variables and m is a bit of an awkward number, I can't quite get this out nicely. Also there's a factor of 2 in the left hand side of of which didn't seem to fit quite right with directly applying a union bound (not that I could find the right one to apply anyway), so any help you could offer would be much appreciated. I should add, the paper is not completely error-free so it may be a mistake was made, but one thing I do know for certain is that we have to show "with large probability" the sum of the sizes of the is at most , as this specific number is important later on in the paper. Can anyone explain how we might arrive at this result? I don't doubt it the result is true; heuristically we have first deduced that "each C_j is likely to be small", then we want to show that "the sum is likely to be small", but I can't quite get it to work formally. Thank you! :) Totenines99 (talk) 02:11, 1 April 2012 (UTC)
- You have that m < √(log n), so the expected value of the sum is μm < μ√(log n), which is less than your target bound, μ log n, by a factor of √(log n). If every Cj is less than 2μ then the whole sum will be less than 2mμ < μ log n. You have that Pr(Cj > 2μ) < 1/n^2, so the probability any of them exceed 2μ is at most m/n^2 by union bound. Since m < log n, this gives you the bound you want. Rckrone (talk) 04:36, 1 April 2012 (UTC)
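For completeness, the union-bound step in Rckrone's reply can be written in one line, using the quantities as he reads them (m < √(log n), E[C_j] = μ and Pr(C_j > 2μ) < 1/n^2):

\[
\Pr\Big(\sum_{j=1}^{m} C_j > \mu\log n\Big)
\le \Pr\big(\exists\, j:\ C_j > 2\mu\big)
\le \sum_{j=1}^{m}\Pr\big(C_j > 2\mu\big)
< \frac{m}{n^{2}}
\le \frac{\sqrt{\log n}}{n^{2}},
\]

where the first inequality uses 2μm ≤ μ log n, which holds once log n ≥ 4.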
Game Theory Textbook
I'm a mathematics student with no exposure to game theory. I would like to remedy this. Can anyone suggest an introductory textbook on the subject which has been developed with mathematicians as the intended audience? Thanks in advance! Korokke (talk) 03:01, 1 April 2012 (UTC)
- I found "Game Theory, A Very Short Introduction" by Ken Binmore, Oxford ISBN 978-0-19-921846-2 to be a very readable introduction to the subject.--Salix (talk): 07:25, 1 April 2012 (UTC)
- Binmore is a great read but not a textbook. "Game Theory" by Webb in the SUMS series is supposed to be good and is aimed at undergraduates. Tinfoilcat (talk) 08:55, 2 April 2012 (UTC)
- It's a while since I read it, but I remember liking "Games and Information" by Eric Rasmusen (2nd edition, 1994) - Amazon link. AndrewWTaylor (talk) 11:43, 2 April 2012 (UTC)
- Thank you, I'll investigate those! Korokke (talk) 09:55, 4 April 2012 (UTC)
<V/Vmax> Test
(This may well require general mathematical intuition as opposed to any understanding of statistical tests or physical phenomena - I can't say for sure because I don't know what to do - so could anyone who reads this please not be put off by the possibility that it's outside their field of expertise?)
I am attempting to perform a calculation that involves the <V/Vmax> test 1 and am unsure of how to proceed. Some (minor) background first though. I wish to test whether a sample of objects has a uniform comoving density that doesn't change with (cosmic) time, and I have been told to make use of the <V/Vmax> test. We assume that we detect each object with some observed flux, and we determine the observed flux and the redshift for each object. Corresponding to each luminosity L is a maximum redshift z_max, corresponding to which there is a maximum volume V_max. I have a formula for computing the comoving volume in terms of the comoving distance D_C, which is itself a function of z.
I am now meant to determine V and V_max, having been given pairs of data z and f/f_0. I cannot see how to perform this calculation given the information I have. If someone could point me in the correct direction, I would greatly appreciate it (and if there is more information needed that I haven't provided, please just ask). Thanks. meromorphic [talk to me] 10:03, 1 April 2012 (UTC)
- If I understand correctly, it involves a list of all the galaxies above a certain minimum luminosity, each with its redshift, which basically gives the distance of the galaxy. Then you calculate how much further the galaxy would have to be to exactly match the minimum luminosity, and that gives you V/Vmax for that galaxy (that's (d/dmax)^3, no?), and if the average of all those values is 0.5 then your sample is complete (assuming a constant space density). So it seems to me you do have enough info, z gives you the distance, the redshift gives you the dmax...
- what isn't clear to me is why one would need the distance at all; it would seem logical to me that the luminosity on its own determines the value of d/dmax and therefore the value of V/Vmax .... maybe I'm missing something... Ssscienccce (84.197.178.75) (talk) 13:10, 1 April 2012 (UTC)
- link I found useful: http://www.astro.virginia.edu/class/whittle/astr553/Topic04/Lecture_4.html Ssscienccce (84.197.178.75) (talk) 13:16, 1 April 2012 (UTC)
- I'm not actually provided with the minimum luminosity. Earlier on in the project that I'm doing, I am provided with a formula (which I may or may not be meant to use for this section, I'm unsure) relating observed flux to luminosity but since I only have the ratio f/f_0 and no absolute values, I can only compute the ratio of two luminosities. Also, at the risk of asking a stupid question, will the minimum luminosity be the same for all galaxies in the sample or will it be different according to the galaxy I'm considering? Thanks. meromorphic [talk to me] 14:02, 1 April 2012 (UTC)
- Actually scratch that. On rearranging the formulae I have, I can derive an expression for V/V_max in terms of f/f_0, z and z_max. It's now z_max that is causing the problem. Any suggestions? meromorphic [talk to me] 14:10, 1 April 2012 (UTC)
April 2
Closure of the union of sets
Is this true: cl(A_1 ∪ … ∪ A_n) = cl(A_1) ∪ … ∪ cl(A_n)? The inclusion ⊇ is trivially obvious, so all there is to show now is ⊆. I believe the answer is "yes". If x is an interior point of some A_i then it is clearly in that set. If x is a boundary point of some A_i then it will be in cl(A_i) (although not necessarily as a boundary point). Widener (talk) 11:31, 2 April 2012 (UTC)
- cl(A_i) is the minimal closed set containing A_i. Since cl(A_1 ∪ … ∪ A_n) is also a closed set containing A_i, it must contain cl(A_i). Therefore cl(A_1 ∪ … ∪ A_n) ⊇ cl(A_1) ∪ … ∪ cl(A_n). Since cl(A_1) ∪ … ∪ cl(A_n) is a finite union of closed sets and hence closed, this implies cl(A_1 ∪ … ∪ A_n) ⊆ cl(A_1) ∪ … ∪ cl(A_n). Rckrone (talk) 17:31, 2 April 2012 (UTC)
Group theory on Cayley graphs
I came across something in a paper the other day which said "obviously, we may suppose that...", and I can't see why this thing is obvious. I asked this question elsewhere and got back a few proofs of the statement, but none of them seemed as "obvious" as the paper suggested, so I thought I'd ask here just in case anyone was able to come up with a 1 or 2-line proof (most of the proofs I saw were around 10 lines.) I know that's not actually a lot shorter, I'm just curious if the author of the paper thought of something even easier no-one else has suggested yet.
I have a finite connected Cayley graph with n vertices (or elements), with generating set S (so we assume S spans the group), and I want to write a general element g in terms of the generators. The paper says it's "obvious" that we can always write g as a product of generators, g = s_1 s_2 ... s_k with each s_i in S, such that k ≤ n/2; i.e. we can always write an element as a product of generators of length at most half the order of the group.
For the theorem I am using this to prove, I also assume S is closed under conjugation, i.e. S contains full conjugacy classes, but I don't think this is needed here. As I said above, the shorter the proof, the better. I have been shown a nice proof which works for any connected finite vertex-transitive graph, showing that one neighbourhood is equal to another - essentially a graph-theoretic proof, while I feel that this should really be provable with group theory. Still, maybe I'm wrong, it just looks like such an elementary statement.
Thanks for your help, and I'd equally be happy to hear if you don't believe there is a "trivial" proof too. Estrenostre (talk) 14:02, 2 April 2012 (UTC)
- I think the length of the proof (similarly the obviousness) depends on what things you're assumed to know, and also what statements have been made earlier in the paper. One route is to use the fact that the Cayley graph of a non-trivial finite group is 2-connected. Therefore any vertex has 2 disjoint paths to the identity (assuming n > 2). They can't both have length > n/2. Rckrone (talk) 17:49, 2 April 2012 (UTC)
- There weren't any statements earlier in the paper which were related to group theory or Cayley graphs, maybe it's just meant to be common knowledge. Your proof is largely what I've seen elsewhere but presented much more concisely, so I'm happy enough with that as seeming "obvious"; thanks Rckrone. Estrenostre (talk) 21:35, 2 April 2012 (UTC)
- To be honest, the fact that a finite Cayley graph is 2-connected seemed right to me but I had to look it up to be sure, and proving it took me some effort. I don't know if that's a well known fact. If there was nothing leading up to this claim then it seems like what the authors really meant by "obvious" was that they trust that the reader can probably work it out themselves and they don't want to bother providing a proof. Rckrone (talk) 22:42, 2 April 2012 (UTC)
- Since a Cayley graph is vertex-transitive, if any vertex is a cutpoint then every vertex is a cutpoint. There is no connected graph where every vertex is a cutpoint (consider a leaf of a spanning tree). So the graph is 2-connected. Watkins showed in 1970 that a connected transitive graph of degree d has connectivity at least 2(d+1)/3. McKay (talk) 03:11, 3 April 2012 (UTC)
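- As a brute-force sanity check (not a proof) of the n/2 bound, assuming the generating set is closed under inverses so that the Cayley graph is undirected (as in Rckrone's argument), one can run a breadth-first search from the identity and read off the longest word needed. The two example groups below are just illustrations.

    from collections import deque
    from itertools import permutations

    # Brute-force check (not a proof) that every element has word length <= n/2,
    # assuming an inverse-closed generating set so the Cayley graph is undirected.
    def max_word_length(elements, mult, identity, gens):
        dist = {identity: 0}                     # breadth-first search from the identity
        queue = deque([identity])
        while queue:
            g = queue.popleft()
            for s in gens:
                h = mult(g, s)
                if h not in dist:
                    dist[h] = dist[g] + 1
                    queue.append(h)
        assert len(dist) == len(elements)        # the chosen set really generates the group
        return max(dist.values())

    # Cyclic group Z_12 with generators {+1, -1}: prints 6, so the n/2 bound is tight.
    n = 12
    print(max_word_length(list(range(n)), lambda a, b: (a + b) % n, 0, [1, n - 1]))

    # S_4 (n = 24) generated by the transposition (0 1) and the 4-cycle plus its inverse.
    comp = lambda p, q: tuple(p[q[i]] for i in range(4))
    gens = [(1, 0, 2, 3), (1, 2, 3, 0), (3, 0, 1, 2)]
    print(max_word_length(list(permutations(range(4))), comp, (0, 1, 2, 3), gens))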
Morley triangle
- Is there a formula for the side of the Morley triangle, preferably solely in terms of the sides of the reference triangle?
- Is there a formula for the area of the Morley triangle, preferably solely in terms of the area of the reference triangle? Duoduoduo (talk) 15:41, 2 April 2012 (UTC)
- Naively, isn't the second question impossible? An affirmative answer is tantamount to saying that all triangles of a given area have Morley triangles of the same area, and vice versa. Aren't there two different-area triangles with a same-area Morley triangle, or two same-area triangles with different-area Morley triangles? (apologies if I've missed something obvious, but a positive answer to the second question seems to imply a much stronger claim than it would for the first. I'd also be interested to see proofs contraindicating my claims...) SemanticMantis (talk) 02:10, 3 April 2012 (UTC)
- According to the MathWorld article First Morley Triangle,
- It has side lengths a' = b' = c' = 8R sin(A/3) sin(B/3) sin(C/3),
- where R is the circumradius of the reference triangle and A, B, C are the angles of the reference triangle. By extension, the area could be expressed using Heron's formula in terms of the sides. From this, can it be concluded that both questions are impossible? Duoduoduo (talk) 14:27, 3 April 2012 (UTC)
- I'm not sure I follow your logic. Can you describe the reasoning a little more? Actually, I think this formula might lead to an answer to your first question. We can find R in terms of known sides of the reference triangle, and also find all angles in terms of the sides, right? SemanticMantis (talk) 21:56, 3 April 2012 (UTC)
- You're right. I think what I had in mind when I wrote that was that there must be no algebraic formula involving only roots of real numbers -- algebraically trisecting an arbitrary angle requires solving a cubic with three real roots, which in general can be done algebraically only by taking cube roots of non-real numbers. Duoduoduo (talk) 15:06, 4 April 2012 (UTC)
- Ok, I think we've learned 1. That the second question cannot have an affirmative answer. 2. That there is an answer to the first question as written, but it is not a closed-form, "by radicals" type formula. Anyone disagree? (not looking to pick a fight, I just haven't worked this all out rigorously enough to thoroughly trust my reasoning) SemanticMantis (talk) 15:16, 4 April 2012 (UTC)
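- Numerically, the first question has a straightforward answer even without a formula by radicals: recover the angles from the sides with the law of cosines, get R from the law of sines, and plug into the MathWorld formula; since the Morley triangle is equilateral, its area is (sqrt(3)/4)s^2. A small sketch (the example side lengths are arbitrary):

    from math import acos, sin, sqrt, pi

    # Side and area of the first Morley triangle from the reference triangle's sides.
    def morley(a, b, c):
        A = acos((b*b + c*c - a*a) / (2*b*c))   # law of cosines
        B = acos((a*a + c*c - b*b) / (2*a*c))
        C = pi - A - B
        R = a / (2*sin(A))                      # circumradius, from a = 2R sin A
        s = 8*R*sin(A/3)*sin(B/3)*sin(C/3)      # MathWorld's side-length formula
        return s, sqrt(3)/4 * s*s               # the Morley triangle is equilateral

    print(morley(5.0, 6.0, 7.0))                # arbitrary example sides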
April 3
Car accident
I was sitting at a traffic light (in neutral, handbrake off) and was hit from behind. My vehicle travelled 7m and the vehicle that struck me 4m. If I know the weights of both vehicles (and the time elapsed, if required), how can I calculate the speed of the vehicle that struck me at impact? — Preceding unsigned comment added by 2.26.160.145 (talk) 11:00, 3 April 2012 (UTC)
- I don't think this is solvable, because it depends on the friction that both vehicles experienced. Without friction, the vehicles would never stop moving (or at least one of them). The vehicle behind you was likely using the brakes. The only way you could solve this is if you knew the velocity of both cars immediately after the collision. -- Lindert (talk) 15:32, 3 April 2012 (UTC)
- Agreed, and you would also need to know if the ground was level, and, if not, the angle. StuRat (talk) 18:28, 3 April 2012 (UTC)
- You may experiment on a quiet road to find the acceleration as your car slows down. Draw a line on the road with chalk. Drive slowly and put your car in neutral when reaching the line. Read the speed v when crossing the line (m/s). Measure the distance s before the car stops (m). Then compute the average acceleration a (m/s²) from the equation as = v²/2. Assuming constant acceleration, set s = 7m − 4m in the equation and compute the speed of your car as the push from the other car ceased. Bo Jacoby (talk) 06:51, 4 April 2012 (UTC).
- In all, with certain plausible assumptions and measurements, you may be able to find a rough estimate of the car's velocity when it hit you. From the description, it sounds as though the other car was not braking significantly (it travelled more than 50% of the distance your car did, with minimal friction on yours); one could work from this hypothesis to produce a plausible scenario. Using Bo Jacoby's suggested measurement, the slope, the law of conservation of momentum and these assumptions, you should be able to deduce a plausible figure. — Quondum☏✎ 07:42, 4 April 2012 (UTC)
- The car behind would not have shunted for 4m, but it wouldn't have been like a billiard ball either, because of the crumple zone, which would absorb some of the energy. So momentum is conserved except for friction, but energy is not. There might be some accident site on the web which would allow one to estimate the speed from what happens with actual cars. If you consider it as probably more like an inelastic collision, and the cars were roughly comparable in weight, then they were probably going at something like twice the speed you'd calculate using 7m, I think. Dmcq (talk) 08:36, 4 April 2012 (UTC)
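- Pulling the above together into a rough sketch, under strong assumptions: level road, no braking by either driver, friction during the brief collision ignored, and the same constant post-impact deceleration a for both cars (measured as Bo Jacoby suggests). Each car's speed just after impact is then sqrt(2as), and conservation of momentum gives the striking car's speed just before impact. Apart from the stated 7m and 4m, every number below is a placeholder:

    from math import sqrt

    # Rough estimate only: level road, no braking, friction during the impact ignored,
    # and the same constant post-impact deceleration "a" for both cars. The deceleration
    # and the two masses below are placeholders to be replaced by measured values.
    a = 0.8                    # m/s^2, rolling deceleration measured as suggested above
    m1, m2 = 1200.0, 1400.0    # kg: struck car, striking car
    s1, s2 = 7.0, 4.0          # m: distances rolled after the impact

    v1 = sqrt(2 * a * s1)      # struck car's speed just after impact, from a*s = v^2/2
    v2 = sqrt(2 * a * s2)      # striking car's speed just after impact
    u2 = (m1 * v1 + m2 * v2) / m2   # momentum conservation: m2*u2 = m1*v1 + m2*v2
    print(f"estimated impact speed: {u2:.1f} m/s = {u2 * 3.6:.1f} km/h")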
A level differentiation problem
f(x) = (1-x)^2.ln(1-x). I got f'(x) = -2(1-x).ln(1-x) - (1-x)^2 / (1-x), cancelling the second term to 1+x. The question actually asked for the second derivative at a particular point, which I got wrong according to the book's answers, but would have got right if the term was +(1-x)^2 / (1-x). Sketching a graph, ln(1-x) has a negative gradient for all x, so the book is right, but where is my use of the chain rule (as I think I've done it) wrong? Many thanks in advance for the replies. — Preceding unsigned comment added by 109.150.16.91 (talk) 14:36, 3 April 2012 (UTC)
- The (1-x)² / (1-x) ratio does NOT cancel to (1+x) — (1-x²) / (1-x) does, but (1-x)² / (1-x) cancels to (1-x). --CiaPan (talk) 14:59, 3 April 2012 (UTC)
- Doh! What a tit (OP here). Many thanks. — Preceding unsigned comment added by 109.150.16.91 (talk) 15:50, 3 April 2012 (UTC)
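- For checking this kind of algebra, a computer algebra system helps; a quick sketch with sympy (assumed available; the evaluation point x = 0 below is just an example, not necessarily the one in the book):

    import sympy as sp

    x = sp.symbols('x')
    f = (1 - x)**2 * sp.log(1 - x)
    fprime = sp.simplify(sp.diff(f, x))
    print(fprime)                       # equivalent to -2*(1-x)*log(1-x) - (1-x)
    print(sp.diff(f, x, 2).subs(x, 0))  # second derivative at x = 0 (example point); gives 3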
Hull
Noting the entries at Hull, I wonder if someone could supply a definition for Hull (mathematics) (and, given my low level of knowledge, perhaps even create a stub?). Thanks. Oranjblud (talk) 20:18, 3 April 2012 (UTC)
- Are you looking for convex hull? Or do you mean something else? SemanticMantis (talk) 21:51, 3 April 2012 (UTC)
- The page Hull lists several forms - I assumed these are not all sub-types of convex hull, e.g. Skolem hull. Is there an accepted definition of the term "hull" in maths? 213.249.187.63 (talk) 22:41, 3 April 2012 (UTC)
- To your last question: I think not, though I welcome correction :) There may be relations among a few of the various hulls, but as you say, they are not all subclasses of one or the other. I guessed convex hull because I think of that as the most commonly used concept, but that is of course subjective. Like ring or field, I think hull is just an English word that has been co-opted into special service in math, perhaps to serve as an analogy or source of intuition. But usually (always?) in math, "hull" appears with an adjective to specify which concept is meant. I've never heard "hull" used in math talks or texts without an antecedent adjective (perhaps implicit after introduction). SemanticMantis (talk) 01:59, 4 April 2012 (UTC)
- I think that's a good exposition, and I would further add that, while there may be commonalities among different sorts of hulls (both the convex hull and the Skolem hull, for example, are closures of sets under a certain collection of operations), it would be a very bad idea to abstract those commonalities and write a Wikipedia article about them. It would be a form of original research, in the Wikipedia-term-of-art meaning of that phrase. If hull, as a term expressing those commonalities, ever becomes an accepted part of mathematical terminology outside of WP, then we can write an article about it. --Trovatore (talk) 02:33, 4 April 2012 (UTC)
- Thanks. 213.249.187.63 (talk) 15:41, 4 April 2012 (UTC)
April 4
Calculating Probabilities while playing "Let's Make a Deal"
While playing Let's Make a Deal, you are shown 3 doors. Behind one door is a car, but there are goats behind the other two doors. You pick door #1. Then the host shows you that there is a goat behind door #3. He then asks you if you want to keep door #1 or change to door #2. Should you stay with your first pick, or pick the other door? Which has a higher probability of winning the car? — Preceding unsigned comment added by Psychoshag (talk • contribs) 03:29, 4 April 2012 (UTC)
- We have an article on the Monty Hall problem, which will likely tell you more than you want to know. --Trovatore (talk) 03:30, 4 April 2012 (UTC)
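- The standard answer (switching wins with probability 2/3, staying with 1/3) is also easy to check by simulation; a minimal sketch:

    import random

    # Monte Carlo check of the stay/switch probabilities (standard Monty Hall rules:
    # the host always opens a goat door that is not the contestant's pick).
    def play(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)
            pick = random.randrange(3)
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print("stay:  ", play(switch=False))   # about 1/3
    print("switch:", play(switch=True))    # about 2/3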
There are 4 topologies on the set {1,2} and 29 on the set {1,2,3}; is there a formula for a set {1,2,3,4,5,6,...,n}?
There are 4 topologies on the set {1,2} and 29 on the set {1,2,3}. Does there exist a formula for the number of topologies on a set {1,2,3,4,5,6,...,n}? — Preceding unsigned comment added by Cjsh716 (talk • contribs) 06:26, 4 April 2012 (UTC)
- The number of topologies on a set with n labelled elements is sequence A000798 at OEIS. But I can't see anything there that suggests there is a known formula. Gandalf61 (talk) 09:26, 4 April 2012 (UTC)
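- No closed formula is known, but the small values are easy to reproduce by brute force, encoding subsets as bitmasks and testing every family of subsets for closure under union and intersection. The sketch below reproduces 1, 4, 29 for n = 1, 2, 3 and, per the OEIS entry, 355 for n = 4; it is hopeless for larger n.

    from itertools import combinations

    # Count topologies on {0, ..., n-1} by brute force, encoding subsets as bitmasks.
    def count_topologies(n):
        full = (1 << n) - 1
        middle = [s for s in range(1 << n) if s not in (0, full)]
        count = 0
        for r in range(len(middle) + 1):
            for extra in combinations(middle, r):
                fam = set(extra) | {0, full}     # must contain the empty set and the whole set
                if all((a | b) in fam and (a & b) in fam for a in fam for b in fam):
                    count += 1                   # closed under unions and intersections
        return count

    for n in range(1, 5):
        print(n, count_topologies(n))            # expected: 1, 4, 29, 355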
Evaluation of Residue using Analytic Continuation
Starting from the Euler integral representation of the Gamma Function, I have derived the expression Γ(z) = Σ_{n=0}^∞ (-1)^n/(n!(n+z)) + ∫_1^∞ t^(z-1) e^(-t) dt and have to use this to find the residue of the Euler integral at z = -m, m an integer. From the way the question is worded, I don't think this should be a difficult task but I haven't evaluated residues in this manner before and need some help in finding the correct approach. Thanks. meromorphic [talk to me] 09:52, 4 April 2012 (UTC)
- The integral is holomorphic at z = -m, so only the n = m term in the summation contributes to the residue. Sławomir Biały (talk) 12:21, 4 April 2012 (UTC)
- (At the risk of asking a potentially obvious question...) So the residue is just ? meromorphic [talk to me] 14:24, 4 April 2012 (UTC)
- No, it's just (-1)^m/m! Sławomir Biały (talk) 14:39, 4 April 2012 (UTC)
- Ah, I'm with you now. Many thanks. meromorphic [talk to me] 14:44, 4 April 2012 (UTC)
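- As a numerical sanity check of those residues (assuming mpmath is available): near z = -m, (z + m)Γ(z) should approach (-1)^m/m!. A small sketch:

    import mpmath as mp

    # Numerical sanity check: the residue of Gamma at z = -m should be (-1)^m/m!,
    # so (z + m)*Gamma(z) evaluated just off the pole should be close to that value.
    mp.mp.dps = 30
    eps = mp.mpf('1e-12')
    for m in range(5):
        approx = eps * mp.gamma(-m + eps)      # ~ residue at z = -m
        exact = (-1)**m / mp.factorial(m)
        print(m, mp.nstr(approx, 10), mp.nstr(exact, 10))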
Bounding a tricky series
I'm trying to show the following series is O(log(log(x))) for all x > e.
So far, I have tried, separately: (1) Euler-Maclauren summation, (2) expressing the summand using Vieta's formula [3], (3) writing the sine factor as an exponential and looking for telescoping terms in the series, and (4) trying to relate the sum to an entire function whose order necessarily contains a loglog term (see [4]) . I get the feeling I'm closest with the first two, but I must be missing something.
After that, I'd like to show the existence of a strictly increasing sequence (converging to zero) such that , where c is a positive constant independent of n. I wasn't sure how to approach this, but it reminded me of the construction of an entire function with prescribed zeros.
But now I am stumped. Any pointers would be greatly appreciated. Korokke (talk) 10:53, 4 April 2012 (UTC)
- I'm not sure if it helps, but writing transforms the summation into
- and the problem is now to show that this is O(y) for . Sławomir Biały (talk) 12:27, 4 April 2012 (UTC)
Evaluation of an Integral using Special Functions
I have to evaluate the integral using the substitution x=2t-1, which transforms the integral to and any standard properties of special functions that I like. Now, given the limits, I expect that the special function I want to use is the hypergeometric function with a=0, b=1/3 and c=1 but I cannot see the direct link. I'm tempted to say that I want to use the relation but I am confused, not least by the lack of the variable z in the integral I want to compute. Any ideas? Thanks. meromorphic [talk to me] 11:15, 4 April 2012 (UTC)
- I've no idea if what you are doing is the best approach, but surely you can just use that relation with z=1? 130.88.73.65 (talk) 13:27, 4 April 2012 (UTC)
- Thanks. I realised that using the relation , which in this case leads to , and then making the appropriate rearrangements and using the reflection formula for the gamma function leads to the answer. meromorphic [talk to me] 14:21, 4 April 2012 (UTC)
Billiard ball accident
Instead of a car being rear-ended, with its complicating factors like friction and crumple zones, suppose there is a stationary billiard ball A with mass m_a in a frictionless vacuum, and suppose it receives a direct hit at time t=0 from billiard ball B with mass m_b and pre-collision velocity v>0. Suppose both balls are perfectly hard, totally incompressible.
(1) Am I right that regardless of the relative masses and regardless of B's prior velocity, A will move forward and henceforth have a positive velocity?
(2) For any t>0, am I right that the post-collision velocity of A is constant? If so, then must there be a discontinuity in its velocity at time t=0 -- that is, a moment of infinite acceleration?
(3) If the mass of B is much greater than the mass of A, then it seems to me that the post-collision velocity of B remains positive; but if the mass of B is sufficiently small then the post-collision velocity of B must be negative. Is this right? If so, then what relationship between the relative masses and the original velocity of B characterizes the in-between situation in which B ends up with zero velocity? Duoduoduo (talk) 15:37, 4 April 2012 (UTC)
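- If "perfectly hard" is taken to mean a perfectly elastic head-on collision (the usual idealization; true rigidity forces exactly the impulsive, discontinuous velocity change asked about in (2)), then the standard one-dimensional formulas answer (1) and (3): A always moves forward with velocity 2m_b v/(m_a+m_b), and B's post-collision velocity (m_b - m_a)v/(m_a+m_b) is positive, zero or negative according as m_b is greater than, equal to or less than m_a. A small sketch:

    # 1-D perfectly elastic collision (idealization of "perfectly hard" balls):
    # ball A initially at rest, ball B approaching with velocity v > 0.
    def post_collision(m_a, m_b, v):
        v_a = 2 * m_b * v / (m_a + m_b)        # always positive, so A moves forward (1)
        v_b = (m_b - m_a) * v / (m_a + m_b)    # sign of (m_b - m_a) decides (3)
        return v_a, v_b

    for m_b in (0.5, 1.0, 2.0):                # illustrative mass ratios with m_a = 1
        print(m_b, post_collision(1.0, m_b, 1.0))  # B ends at rest exactly when m_a = m_b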