Wikipedia:Reference desk/Mathematics
Revision as of 12:04, 24 November 2010
Welcome to the mathematics section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
November 17
Function
Is it correct (or rigorous, I should say) to say that functions are not in fact rules (defined by elementary functions such as log, power, multiplication, addition, etc., for a random example f(x)=3x^2+ln(x)) but infinite collections of points, which on certain intervals can sometimes be generated by a rule? I'm supposed to be doing a presentation project and this would help clear a certain point up. 24.92.78.167 (talk) 01:04, 17 November 2010 (UTC)
- No, I would say that neither statement is rigorous. A function need not have a rule that we can easily write down; when there is such a rule, we would say it is an elementary function. (The idea of elementary functions is not actually that useful for most mathematicians; and when it is used, different authors may give different lists for the building blocks used to construct elementary functions. The article gives a reasonable definition, but not the only possible one.) Your second idea would more often be called piecewise elementary functions. (The idea is that if we take pieces of elementary functions and glue them together, we get a new function.)
- The rigorous definition of a function is fairly simple, but abstract (which means it won't seem simple to anyone not comfortable with abstract language). You seem to be thinking about functions from (some subset of) the set of real numbers to the set of real numbers, which is the kind of function most people think about. (But you can define functions from any set to any other set.) This Math.Stackexchange discussion gives the rigorous definition, and several nice ways to explain it. I really like the "function monkey" idea. 140.114.81.55 (talk) 02:18, 17 November 2010 (UTC)
- The reason I say neither statement is rigorous is that there are many functions that don't fit either definition. There are functions so messy that no finite description with the language we have can give the "rule" for the function. Suppose you find a black box lying on the street some day. You can give it any real number, and it gives you back a real number. It doesn't matter if anyone ever understands the pattern of how the black box works: as long as it is consistent with itself, it is a function. (I.e., it always gives you the same result any time you give it 1, or pi, or -24.32325.)
- One other comment: when you say "an infinite sequence of points," it's not quite clear to me what you mean. "Sequence" usually means a countable list, like {1,2,3,4,5,...}. If you are thinking of something like all real numbers from 0 to 1, I wouldn't call that a sequence because it's not a countable set.140.114.81.55 (talk) 02:32, 17 November 2010 (UTC)
oops, sorry I didn't mean all functions. I fixed it to say certain intervals.—Preceding unsigned comment added by 24.92.78.167 (talk) 02:41, 17 November 2010 (UTC)
- The rigorous definition of function that I learned is, IIRC, this: A function from set A to set B is defined as a subset F of A × B satisfying that for each a in A there's precisely one element of F whose first term is a. Not a rule, not a monkey, not a black box, just a subset of a product. (Then you need to know the definition of A × B. IIRC that can most readily be defined as the set of {a, {a, b}} where a ranges over A and b over B; the "first term" referred to above is then the a in {a, {a, b}}.)—msh210℠ 06:29, 17 November 2010 (UTC)
- I think that should be {{a}, {a, b}}, so that it contains exactly one singleton set and one doubleton (unless a = b). See Ordered pair. AndrewWTaylor (talk) 14:34, 17 November 2010 (UTC)
- Not necessarily. See Ordered pair#Variants.—Emil J. 14:41, 17 November 2010 (UTC)
- Right, AndrewWTaylor, sorry. The "first term" is then the a in the singleton anyway.—msh210℠ 16:46, 17 November 2010 (UTC)
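To make the set-of-pairs definition above concrete, here is a minimal Python sketch (my own illustration, not part of the thread): a finite function is stored as a set of ordered pairs, and we check the "exactly one pair per first coordinate" condition.

```python
# A finite function from A to B, viewed as a set of ordered pairs (a, b).
# This only illustrates the definition discussed above.

def is_function(pairs, domain):
    """Check that 'pairs' defines a function on 'domain':
    every element of the domain appears as a first coordinate exactly once."""
    firsts = [a for (a, b) in pairs]
    return all(firsts.count(a) == 1 for a in domain)

f = {(1, 'x'), (2, 'y'), (3, 'x')}   # a legitimate function on {1, 2, 3}
g = {(1, 'x'), (1, 'y'), (2, 'z')}   # not a function: 1 has two images, 3 has none

print(is_function(f, {1, 2, 3}))  # True
print(is_function(g, {1, 2, 3}))  # False
```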
ring problem
Hey guys. Just a quick question with a problem on rings. Here's the problem I need to solve/prove:
Every element x of a given ring <R, +, •> satisfies x•x = x. Prove that for any arbitrary element x, x+x = 0.
A simple enough problem, yet I'm stuck and can't go any further. The problem is that all I know about rings is from the little I could find on wikipedia. Sure, I've learnt about operators and identities and inverses at school (I'm currently in 11th grade), but that's about all I know in this field. Anyways, despite my laughable amount of knowledge on rings, I managed to reach this conclusion: if we denote e as the additive identity of this ring, then x+x=e for any arbitrary element x.
(Here's how I reached this conclusion: Since x•x=x, we can say that (x+x)•(x+x)=(x+x). But since the distributive law holds, (x+x)•(x+x)= x•x+x•x+x•x+x•x = x+x+x+x. Therefore, x+x=x+x+x+x. Hence, x+x=e.)
After that, I tried to prove that e=0, but to no avail. So my question is this: in this particular problem (or generally speaking), is 0 denoted as the additive identity, or the actual real number 0?
And if 0 isn't the additive identity, how can I go about finishing off this proof?Johnnyboi7 (talk) 14:33, 17 November 2010 (UTC)
- 0 denotes the additive identity in this context.—Emil J. 14:36, 17 November 2010 (UTC)
- Use your common sense, how can 0 be the real number 0? It's not necessarily in the set R. This is called abuse of notation, and as you progress in math you'll find many authors abuse notations to the limit. Money is tight (talk) 19:36, 17 November 2010 (UTC)
- Ah, see how unfamiliar I am with the concept of rings here? I actually thought that the set R denoted the set of real numbers R, since the problem didn't specify what R was. And so naturally, my initial guess was that 0 denotes the real number 0. My bad. Now I can see that the set R is called R in reference to the "Ring" it is related to. Thanks EmilJ and Money is tight.Johnnyboi7 (talk) 03:19, 18 November 2010 (UTC)
- In general (though certainly not always) the real numbers are ℝ while an arbitrary ring is just R. 67.158.43.41 (talk) 04:10, 18 November 2010 (UTC)
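As a concrete sanity check of the x+x = 0 argument above (my own sketch, not from the thread), here is a small Boolean ring, the subsets of a three-element set under symmetric difference and intersection, in which every element satisfies x•x = x and, as proved, x + x = 0:

```python
from itertools import combinations

# Boolean ring: subsets of {0,1,2}, with + = symmetric difference, * = intersection.
# The additive identity is the empty set.
universe = {0, 1, 2}
elements = [frozenset(c) for r in range(4) for c in combinations(universe, r)]

add = lambda x, y: x ^ y      # symmetric difference
mul = lambda x, y: x & y      # intersection
zero = frozenset()

assert all(mul(x, x) == x for x in elements)     # every element is idempotent
assert all(add(x, x) == zero for x in elements)  # hence x + x = 0, as proved above
print("checked", len(elements), "elements")
```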
Integer
Is 0 an integer? —Preceding unsigned comment added by 24.92.78.167 (talk) 22:32, 17 November 2010 (UTC)
- Yes. Algebraist 22:33, 17 November 2010 (UTC)
- Zero is always an integer and an even number. It is neither positive nor negative. The only debatable category is whether or not it is a natural number. Dbfirs 08:46, 18 November 2010 (UTC)
- Well, it depends on which zero you mean. Arguably the zero of the real numbers is a distinct object from the zero of the integers or natural numbers, even though in most cases it's expedient to elide the distinction. --Trovatore (talk) 09:58, 18 November 2010 (UTC)
- I've also seen the convention that zero is both positive and negative, rather than neither. I don't think that's very common, though. Algebraist 12:09, 18 November 2010 (UTC)
- I think that's the usual rule in French. I suppose there's nothing inherently wrong with it. I hope it doesn't diffuse into English, though; I wouldn't like to have to start doubly disambiguating, saying strictly positive on the one hand or positive or zero on the other. --Trovatore (talk) 19:45, 18 November 2010 (UTC)
- There are already common words for "positive or zero" and "negative or zero": nonnegative and nonpositive. So I don't think there need be much fear of such diffusion.—msh210℠ 20:53, 18 November 2010 (UTC)
- It can happen. Mathematicians taught in French publish in English. People translate from Bourbaki. --Trovatore (talk) 22:06, 18 November 2010 (UTC)
- Not the only debatable category, Dbfirs. Also debatable is whether it's a whole number.—msh210℠ 20:53, 18 November 2010 (UTC)
- Well, whole number is not really a mathematical term. In the United States (and for all I know maybe elsewhere) it's used in mathematics education, but not in mathematics itself. --Trovatore (talk) 22:05, 18 November 2010 (UTC)
- Right, not a term of professional math (AFAIK), but Dbfirs, to whom I was responding, didn't specify he'd been restricting himself to that domain. I guess whether it's "used in math" or not depends on whether you're a member of the AMS or the MAA. :-) —msh210℠ 06:46, 19 November 2010 (UTC)
- Points taken, though I would have thought that there could be little debate about whether zero is a whole number. If your definition of "whole number" is "integer" or "non-negative integer" then zero is included, and if your definition is "positive integer" then it isn't. I've always taken the term to be a synonym of "integer", but perhaps some American educationalists disagree? The debate would be about what you mean by the term, not about whether zero is included. (I suppose the same could be said about "natural number", but there is much more debate about where is the natural place to start the counting numbers.) Dbfirs 09:29, 19 November 2010 (UTC)
- In the Houghton–Mifflin books used in my junior high and high schools, a whole number was what I now call a natural number (that is, including zero) whereas the natural numbers excluded zero. They also had some weird notations that are hardly ever seen in research mathematics. I think the whole numbers were W, which is natural enough, and maybe the naturals (without zero) were N, but the integers were J, I think? Hard to say how that came about. Maybe they wanted to use I, but worried that it would be confused with the numeral 1? --Trovatore (talk) 09:58, 19 November 2010 (UTC)
- The textbook I use for high school now adopts J for the integers. This practice is quite mystifying, given the prevalence of Z and the fact that J is sometimes used to represent the irrational numbers. —Anonymous DissidentTalk 12:12, 19 November 2010 (UTC)
- Table of mathematical symbols uses Z for integers.—Wavelength (talk) 04:44, 20 November 2010 (UTC)
- My high school algebra class (and book?) used whole numbers to mean either positive integers or nonnegative integers, I can't remember which. I think they went with "whole" for "counting" and so started them at 1, with naturals starting at 0. For my money, I go with ℤ much of the time anymore to avoid possible confusion, though I haven't found a simple-to-write, unambiguous symbol for the nonnegative integers (ℤ≥0 is gross). 67.158.43.41 (talk) 03:14, 20 November 2010 (UTC)
- I use ω. Unambiguously includes zero. Unfortunately it might not be understood outside of a set-theory context. --Trovatore (talk) 00:56, 21 November 2010 (UTC)
- [1] uses ℕ for the natural numbers without zero and ℕ₀ for the natural numbers with zero. I personally think this is a very clean convention. JamesMazur22 (talk) 14:30, 20 November 2010 (UTC)
- Except that there isn't really all that much use for the natural numbers without zero. The modern trend is to assume that the natural numbers, represented by N (or the bbb equivalent) do include zero. Of course both conventions still exist, so in the (fairly rare) case that it actually matters, you do need to clarify. --Trovatore (talk) 00:19, 21 November 2010 (UTC)
- Except that automated proof checker programs I've looked at seem to have the natural numbers starting with 1 rather than 0. For some reason that seems to be more useful for them. Dmcq (talk) 10:29, 21 November 2010 (UTC)
Flavor
Is chocolate a flavor? —Preceding unsigned comment added by 128.62.81.128 (talk) 23:08, 17 November 2010 (UTC)
- This is the mathematics reference desk, and the question is not a mathematical one. Bo Jacoby (talk) 23:42, 17 November 2010 (UTC).
- No.—msh210℠ 01:59, 18 November 2010 (UTC)
November 18
A financial math problem
Dear Wikipedians:
I've worked out part (a) of the following question, but feel that part (b) is more difficult and I'm not sure how to proceed:
A manufacturer finds that when 8 units are produced, the average cost per unit is $64, and the marginal cost is $18. (a) Calculate the marginal average cost to produce 8 units (b) Find the cost function, assuming it is a quadratic function and the fixed cost is $400.
So my solution for (a) is
marginal average cost = d/dq [C(q)/q] = (q·C′(q) − C(q))/q² = (8·18 − 8·64)/8² = −5.75 when q = 8,
for part (b), I know that the final form of the function looks something like
C(q) = a2·q² + a1·q + a0, where a0 is equal to 400. But I don't know how to get the other coefficients from the given information. So I need your help.
Thanks,
70.29.24.19 (talk) 01:33, 18 November 2010 (UTC)
- You have 16·a2 + a1 = 18, since you have C′(q) = 2·a2·q + a1 and C′(8) = 18. You also have C(8) = 64·a2 + 8·a1 + 400 = 8 × 64 = 512. 67.158.43.41 (talk) 04:22, 18 November 2010 (UTC)
- Thanks! Got it! 70.29.24.19 (talk) 04:38, 18 November 2010 (UTC)
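For what it's worth, here is a rough numeric sketch of the system this leads to (my own check, reusing the names a2, a1, a0 from the question):

```python
# Solve the 2x2 linear system for the quadratic cost function C(q) = a2*q**2 + a1*q + 400.
# C'(8) = 16*a2 + a1 = 18           (marginal cost at q = 8)
# C(8)  = 64*a2 + 8*a1 + 400 = 512  (average cost 64 at q = 8)
# The second equation simplifies to 8*a2 + a1 = 14; subtracting it from the first gives 8*a2 = 4.

a2 = (18 - 14) / 8
a1 = 18 - 16 * a2
C = lambda q: a2 * q**2 + a1 * q + 400

print(a2, a1)           # 0.5 10.0
print(C(8), C(8) / 8)   # 512.0 64.0 -> average cost checks out
print(2 * a2 * 8 + a1)  # 18.0       -> marginal cost checks out
```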
Simple extremal subgraph query
Hi, is there an explicit function in (n,s) for the maximal number of edges a graph on n vertices may have such that the degree of every vertex is less than s? WLOG assuming n ≥ s - I thought perhaps it was to do with the floor function; certainly you can obtain ⌊n/s⌋·s(s−1)/2 + (n mod s)((n mod s)−1)/2 edges, by just splitting the vertices up into classes of size at most s. However, is this necessarily an upper bound? What is the form of such a function? Estrenostre (talk) 04:39, 18 November 2010 (UTC)
- The upper bound is n(s−1)/2: this is attained if every vertex has degree exactly s-1, in which case you've got n(s-1)/2 edges. Obviously if n and s-1 are both odd, that value is not possible since it's not an integer, so it needs to be ⌊n(s−1)/2⌋. I'm pretty sure this value is obtainable. Rckrone (talk) 11:01, 18 November 2010 (UTC)
- It is indeed. Let d = s − 1. If d is even, define a graph on {0,...,n − 1} by connecting each vertex a by an edge with (a + i) mod n, where i = −d/2,...,−1,1,...,d/2. If d is odd, use the same vertex set, connect a with (a + i) mod n for i = −(d + 1)/2,...,−2,2,...,(d + 1)/2, and moreover, connect each odd a with a − 1 (and therefore each even a < n − 1 with a + 1).—Emil J. 13:09, 18 November 2010 (UTC)
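Here is a rough computational check of EmilJ's construction (my own code; it assumes n is comfortably larger than s so the circulant offsets stay distinct), confirming that the maximum degree stays below s and that the edge count reaches ⌊n(s−1)/2⌋:

```python
def extremal_graph(n, s):
    """Build EmilJ's graph on vertices 0..n-1 with max degree d = s-1
    and floor(n*d/2) edges. Assumes n > d + 1 so the offsets don't collide."""
    d = s - 1
    edges = set()
    if d % 2 == 0:
        offsets = range(1, d // 2 + 1)
    else:
        offsets = range(2, (d + 1) // 2 + 1)
        # near-perfect matching: each odd a joined to a - 1
        for a in range(1, n, 2):
            edges.add(frozenset((a, a - 1)))
    for a in range(n):
        for i in offsets:
            edges.add(frozenset((a, (a + i) % n)))
    return edges

for n, s in [(10, 4), (11, 4), (9, 6), (13, 5)]:
    edges = extremal_graph(n, s)
    deg = {v: 0 for v in range(n)}
    for e in edges:
        for v in e:
            deg[v] += 1
    assert max(deg.values()) <= s - 1
    assert len(edges) == n * (s - 1) // 2
print("edge count matches floor(n(s-1)/2) in all test cases")
```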
November 19
Question about polynomial rings
I'm kind of suck on a question where I have to prove that a common divisor of the highest degree of two arbitrary elements f(x) and g(x) in the polynomial ring, lets call it h(x). I have to prove that the greatest common divisor, d(x) (which is defined as a monic polynomial) is an associate of h(x). So far I'm trying to argue that since h(x) is a common divisor of the highest degree. What property should I use to show that they have the same degree? —Preceding unsigned comment added by 142.244.143.26 (talk) 02:44, 19 November 2010 (UTC)
- To show that d and h are associates is equivalent to showing that each divides the other (or to showing that one divides the other, and they are the same degree). How exactly you do that depends on what definitions you are using (since apparently you mean "common divisor of the highest degree" and "greatest common divisor" to mean different things). If you tell us what definitions you are using, and where you are getting stuck on the proof, we may be able to help you better. (I'll be away from the internet for a while so it'll have to be someone else, though.) Also please be more careful with your grammar, it can be hard to understand you. Good luck. Eric. 82.139.80.197 (talk) 23:43, 19 November 2010 (UTC)
poles problem
I saw one of those puzzles online today, I'll try to describe it. There are two poles of equal height with pegs at the same position on each of them. A rope of length 10 m is hung on these pegs such that the lowest point on the rope is 5 m below the pegs in the y-direction. How far apart are the poles? Intuitively I thought it would be 5 m because 10-5=5. I'd like to know how to do this with the arc length integrals and stuff to check my answer. How would I? Is my answer right? Thanks. 24.92.78.167 (talk) 04:09, 19 November 2010 (UTC)
- You don't need any arc lengths. Just draw a picture. 69.111.192.233 (talk) 05:32, 19 November 2010 (UTC)
- The rope should take the shape of a catenary if left hanging under uniform gravity, if I've understood your setup. Take the rope as y(x), the center of the system at x=0, and the poles at x=+r or -r. Say that y(0)=0 and y(r)=y(-r)=5. Translating the catenary vertically so its lowest point is at x=0, we have y(x) = a cosh(x/a) + b and y(0) = a + b = 0 so a = -b. Also, y(r) = 5 = a (cosh(r/a) - 1). The length of y from -r to r is given by the usual arc length integral, ∫ from -r to r of √(1 + y′(x)²) dx. Since 1 + sinh²(x/a) = cosh²(x/a), the integrand is just cosh(x/a) for our purposes, so the arc length is 2a sinh(r/a). Combining this with the above and asking Wolfram Alpha to solve the system of two equations in two unknowns gives no solutions. Either I've made a mistake or these constants don't work out. Slightly different constants--replacing 10 with 15, for instance, gives a = 3.125 and r ~= 5.02949, so that the poles must have been 2r ~= 10.05898 m apart. 67.158.43.41 (talk) 06:08, 19 November 2010 (UTC)
- No solutions with the given constants makes sense to me. The rope has to go down 5m from the left peg to the lowest point and up 5m from there to the right peg, only considering vertical distance. Assuming it wastes no distance moving left or right, it's at least 10m long. At precisely 10m of length, this forces r=0, so any solution would satisfy y(0)=0=5=y(r), a contradiction. 67.158.43.41 (talk) 06:18, 19 November 2010 (UTC)
- There's not exactly a contradiction, r=0 is the obvious and necessary answer. The pegs are touching each other. 69.111.192.233 (talk) 09:03, 19 November 2010 (UTC)
- I disagree. r=0 would force the "left" half and "right" half of the rope to occupy the same space, even in the infinitely thin idealization the question seems to be asking about. In that case its length would be 5. By my "contradiction" above, I mostly meant the catenary equation I used would produce a contradiction if it were solvable with the given constants. Sorry I wasn't clearer. 67.158.43.41 (talk) 09:10, 19 November 2010 (UTC)
- If you consider it as a set, yes, but not if you consider it as a curve. -- Meni Rosenfeld (talk) 10:02, 21 November 2010 (UTC)
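Following 67.158's setup, a short script of my own (same assumptions, with h the sag and L the rope length) that solves a(cosh(r/a) − 1) = h and 2a·sinh(r/a) = L in closed form; it reproduces a = 3.125 and r ≈ 5.029 for L = 15 and shows why L = 10, h = 5 degenerates:

```python
import math

def catenary_poles(L, h):
    """Rope of length L hung on two pegs at equal height, sagging h below them.
    Solve a*(cosh(u) - 1) = h and 2*a*sinh(u) = L with u = r/a.
    Dividing the two equations gives tanh(u/2) = 2*h/L, which needs 2*h < L."""
    if 2 * h >= L:
        return None                      # degenerate: the rope has no slack
    u = 2 * math.atanh(2 * h / L)
    a = L / (2 * math.sinh(u))
    r = a * u
    return a, r

print(catenary_poles(10, 5))             # None -- the original puzzle's data
a, r = catenary_poles(15, 5)
print(a, 2 * r)                          # 3.125, poles about 10.059 m apart
```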
differentiate x^x
Hello. I am trying to differentiate y = x^x. I get literally within ±1 of the correct answer, which WolframAlpha says is x^x(ln x + 1). I used the derivative formula, and when I get to lim (Δx→0) of ((x+Δx)^(x+Δx) − x^x)/Δx I use binomial expansion. This reduces it to lim (Δx→0) of (x^(x+Δx) − x^x)/Δx and by L'Hopital's rule I find x^x ln x. Where will the +1 come from? 24.92.78.167 (talk) 23:15, 19 November 2010 (UTC)
- Rewrite x^x as what it is, namely e^(x ln x). Then take the derivative. — Carl (CBM · talk) 23:24, 19 November 2010 (UTC)
- Doesn't use of l'Hopital's rule to evaluate that limit operate under the assumption that you already know the derivative of x^x? --Kinu t/c 23:42, 19 November 2010 (UTC)
- No, because x is constant in that limit. The derivative is taken wrt Δx. -- Meni Rosenfeld (talk) 15:59, 20 November 2010 (UTC)
- *facepalm*... ah, but of course. Thank you. --Kinu t/c 23:06, 20 November 2010 (UTC)
- The error was in dropping the second term in the binomial expansion, so the limit should be lim (Δx→0) of (x^(x+Δx) − x^x)/Δx plus lim (Δx→0) of (x+Δx)·x^(x+Δx−1).
- The limit of the second term is easy to work out and you can use l'Hopital's rule on the first term since it's the derivative of a different expression. Not the easiest way to do it but it does work.--RDBury (talk) 07:04, 20 November 2010 (UTC)
The binomial expansion was incorrectly done here. The following is correct:
but that's not the binomial expansion—you would still have to expand after that.
However, you get the answer more easily by logarithmic differentiation, thus:
ln y = x ln x, so y′/y = ln x + 1, and so on..... Michael Hardy (talk) 03:10, 21 November 2010 (UTC)
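A quick numerical sanity check of the answer (my own sketch, not from the thread): compare a central difference quotient of x^x with x^x(ln x + 1) at a few points.

```python
import math

f = lambda x: x ** x
derivative = lambda x: x ** x * (math.log(x) + 1)   # the claimed answer

h = 1e-6
for x in (0.5, 1.0, 2.0, 3.7):
    numeric = (f(x + h) - f(x - h)) / (2 * h)        # central difference quotient
    print(x, numeric, derivative(x))                 # the two columns agree to ~6 digits
```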
November 20
November 21
Control systems: A conceptual question
Researching the bio of Harold L. Hazen, an associate of Vannevar Bush and later the MIT EE department head, I ran into the history of what now seems to be a fairly pedestrian math subject - control systems science. What could be simpler than a Bode plot or the Nyquist stability criterion? Disclaimer: I'm a former electronic engineer, indoctrinated in things like phase margin - these were perceived as a rock-solid, centuries-old foundation when I studied them ...
As I understand it, the need for a clear, unified theory of control arose in the early 1920s, when former small regional power grids were being consolidated into larger systems (which turned out unstable and unmanageable, which called for research in control systems ...). Then, in the 1930s, re-militarization of the navies and the bomber threat demanded new fire control systems (ship-to-ship and flak). Then there was the war; a unified theory did not emerge until after WW2. Military jobs were classified, but civilian applications were public. Throughout the 1930s Bode, Nyquist, Bush, Hazen, Gordon Brown et al. developed and shared competing theories but no one grasped the whole subject yet (and it's only a snapshot of the American situation - the Germans and Soviets followed their own, parallel and different routes to the same objectives). Some operated in time domain, others in frequency domain; some, like Hazen, developed control systems without ever using the concept of feedback, etc. Today, it seems almost like blind men and an elephant.
So, beat me if it's a stupid question: how come such a straightforward, mathematically simple subject took so long to develop? Mathematicians of the 18th and the 19th century put forward mind-boggling, highly abstract concepts that were far ahead of engineering needs (like the contribution of Riemann). Fourier analysis was known well before it became widely applied in engineering. But when it came to applying already existing math apparatus to a real-world problem (with all the GE and AT&T money behind it) - it took a quarter of a century to formulate properly. Why?
And how did it happen that a subject (correct me if I'm wrong) was left to engineers alone (and most of them corporate engineers, not academics)? Where were the mathematicians? East of Borschov 21:00, 21 November 2010 (UTC)
- I'm not a mathematician but ..... didn't Norbert Wiener "discover" feedback when studying gun control systems during wartime, and, when it was no longer secret, publish it in a popular account in his book Cybernetics? A lot of ego-led organisations have still not discovered the idea of feedback. 92.15.6.86 (talk) 13:01, 23 November 2010 (UTC)
- Wiener followed the path already beaten by the Bell Labs group (Nyquist, Bode et al.). It is true that wartime secrecy prevented publication of what was deemed sensitive, but then (a) a lot of knowledge was published before the war, and my question was really about the 1930s (b) during the war, American experts worked in a coordinated fashion and shared knowledge among themselves. For example, censorship banned publication of a 1940 paper by Gordon S. Brown but then the classified paper was distributed to all experts in the field. East of Borschov 09:21, 24 November 2010 (UTC)
- Probably because there were not many or any of the kinds of automatic machines where control system theory was needed. 92.15.13.42 (talk) 20:32, 23 November 2010 (UTC)
- Correct, but the kinds where it was needed (military, power utilities, telecoms) were among the wealthiest clients of the time, even in the worst of the Depression. There was real money and real willingness to invest it into working knowledge. East of Borschov 09:15, 24 November 2010 (UTC)
- Two reasons I can think of: 1) electronics (such as radio) was not widespread before at least the 1930s, so feedback did not have that embodiment; 2) the management theory of that time and before was about a dominant authoritarian strength of will forcing others to submit (and still is for a lot of unenlightened bosses and organisations). The influential French management theorist Henri Fayol had part of his text mistranslated into English, so that the 'controlling' part omitted the idea of feedback. I thought there was mention of this in one of the articles, but that seems to have gone. 92.15.15.224 (talk) 12:04, 24 November 2010 (UTC)
Fourier Transform
Hi. I'm trying to find the Fourier Transform of , given by but am having trouble with the integration. I am not meant to do this using any techniques from Complex Analysis, which I haven't studied yet, and so the only tool I have available to me is real integration. There is clearly some trick I'm missing on how to do it but I just can't spot it. Can someone give me a hint? Thanks asyndeton talk 23:35, 21 November 2010 (UTC)
- 1 + w^2 = (1 − iw)(1 + iw) by the difference of two squares. This will allow you to cancel the numerator and save yourself a lot of work. —Anonymous DissidentTalk 09:37, 22 November 2010 (UTC)
- ....and next, think about integrating by parts..... Michael Hardy (talk) 23:51, 22 November 2010 (UTC)
November 22
1/3 Anglo-Saxon
At the pub the other day the allegation was made that someone was 1/3rd Anglo-Saxon. This led to a discussion of whether this was possible or not. Is there a proof (possibly using mathematical induction) that it is possible or that it is impossible? Joaq99 (talk) 00:30, 22 November 2010 (UTC)
- This depends, of course, on how one is defining fractions of ethnicity. If one takes the fairly standard approach of going back to ancestors of clear and unambiguous ethnicity, and assigning fractions based on that, then the only fractions possible are the dyadic rationals. 1/3 is not dyadic, so it's not possible. Algebraist 00:33, 22 November 2010 (UTC)
- Assuming that ancestry can be traced as far back as one likes and that the Anglo-Saxons existed as long ago as one looks (neither of which assumptions is true, of course), one can have a 1/3 Anglo-Saxon ancestry in a limit. Otherwise, the best "1/3" can be is an approximation.—msh210℠ 19:57, 22 November 2010 (UTC)
- Added a title heading for the question. – b_jonas 11:39, 22 November 2010 (UTC)
- Gilgamesh is 1/3 part mortal and 2/3 part divine. – b_jonas 11:44, 22 November 2010 (UTC)
But what if one has only three grandparents? Suppose you're the product of an incestuous mating of half-siblings. Michael Hardy (talk) 23:53, 22 November 2010 (UTC)
- Then one of those grandparents is your grandparent twice-over, so what I take to be the standard approach would make that grandparent responsible for half of your ethnicity and your other grandparents responsible for a quarter each. Algebraist 00:00, 23 November 2010 (UTC)
- OK..... Michael Hardy (talk) 00:04, 23 November 2010 (UTC)
Someone at the pub spoke 1/3rd Anglo-Saxon (see Anglo-Saxon#Language). Perhaps he or she spoke English? Bo Jacoby (talk) 00:15, 23 November 2010 (UTC).
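To put a number on msh210's limit remark above (my own illustration): one fully Anglo-Saxon ancestor two generations back, another four generations back, and so on, gives dyadic fractions 1/4, 5/16, 21/64, ... that converge to 1/3 without ever reaching it.

```python
from fractions import Fraction

total = Fraction(0)
for k in range(1, 8):
    total += Fraction(1, 4 ** k)     # one full-blooded ancestor 2k generations back
    print(total, float(total))       # 1/4, 5/16, 21/64, ... approaching 1/3
print(Fraction(1, 3) - total)        # the gap never closes at any finite stage
```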
IID Random variables with uniformly distributed difference
Hello everyone,
What would be the best way to show there do not exist independent identically distributed random variables X, Y such that X-Y ~ U([-1,1])? I am comfortable with what I believe would be all the relevant probability/measure theoretical concepts, and I expect there's probably a very clever concise way of doing this problem which takes almost no time at all - is there such a solution? I can't think of an easy way to get at it, despite the fact it certainly looks like there should be one.
Thankyou,
Otherlobby17 (talk) 06:52, 22 November 2010 (UTC)
- Compute explicitly the distribution of X-Y. That depends on four parameters: the upper and lower bounds of X and Y. – b_jonas 11:38, 22 November 2010 (UTC)
- ... and the actual distribution of X and Y, which the OP didn't say should be uniform. -- Meni Rosenfeld (talk) 12:38, 22 November 2010 (UTC)
- Sorry, you're right. I answered the wrong question. – b_jonas 20:23, 23 November 2010 (UTC)
- Consider the cumulants of X and of −Y. Compute the cumulants of X−Y. Compare to the cumulants of the continuous uniform distribution. Bo Jacoby (talk) 12:20, 22 November 2010 (UTC).
- You don't know what the distributions of X and Y are so how do you get their upper and lower bounds or the cumulants? It's easy to see that if X-Y ~ U([-1,1]) then the distributions of X and Y must be contained in an interval of length 1. If the distributions are continuous and the densities bounded then it's not hard to do by playing with inequalities. What happens if, for example, X and Y are distributed like the x-coordinate of a random point on a circle of radius 1/2 (or sphere, etc.)?--RDBury (talk) 18:21, 22 November 2010 (UTC)
- Bo's point was to show that for any choice of cumulants of X (and Y), you won't get the uniform cumulants for their difference. If X and Y were required to be uniform, b_jonas' method would be easy - for any choice of bounds, the difference won't be uniform.
- Taking the distribution of the x-coordinate on a circle, which is 1/(π·√(1/4 − x²)) on (−1/2, 1/2), doesn't work. -- Meni Rosenfeld (talk) 19:30, 22 November 2010 (UTC)
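As a quick Monte Carlo illustration of Meni's remark about the uniform special case (my own sketch): for X and Y i.i.d. uniform on an interval of length 1, X − Y is triangular on [−1, 1], not uniform; for instance P(|X − Y| > 1/2) comes out near 1/4 rather than the 1/2 a uniform difference would give.

```python
import random

random.seed(0)
n = 200_000
count = sum(abs(random.random() - random.random()) > 0.5 for _ in range(n))
print(count / n)   # about 0.25: triangular density 1 - |t| gives P(|X-Y| > 1/2) = 1/4
```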
log
today I participated in a math competition and one of the problems was to express log 2 and log 5 in terms of k if k = log 5 · log 2 (log here denoting the common log). I put the obvious answer k/log 5 and k/log 2 but I got marked wrong! What is the correct answer? 24.92.78.167 (talk) 21:38, 22 November 2010 (UTC)
- I suspect the problem was so phrased that you're not allowed to use log 2 and log 5 explicitly in the answer. Michael Hardy (talk) 23:56, 22 November 2010 (UTC)
OK, so log 5 + log 2 = log 10 = 1, and hence
k = log 5 · log 2 = (1 − u)·u,
where
u = log 2.
Then
u² − u + k = 0.
This is a quadratic equation in u. So
u = (1 ± √(1 − 4k))/2.
The smaller of the two solutions is log 2 and the larger is log 5. Michael Hardy (talk) 00:02, 23 November 2010 (UTC)
- Should be u = (1 ± √(1 − 4k))/2. -- Meni Rosenfeld (talk) 09:12, 23 November 2010 (UTC)
- I've changed that now---thanks. Michael Hardy (talk) 21:31, 23 November 2010 (UTC)
- Putting Michael Hardy's method another way, if we set p = log 2 and q = log 5, then p + q = 1, and pq = k (by definition), so p and q are the roots of x² − x + k = 0, and the rest follows from the quadratic formula. (Is this "theory of equations" stuff still taught these days?) AndrewWTaylor (talk) 10:03, 23 November 2010 (UTC)
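A quick numerical check of the quadratic-formula answer (my own sketch):

```python
import math

k = math.log10(5) * math.log10(2)
disc = math.sqrt(1 - 4 * k)
roots = ((1 - disc) / 2, (1 + disc) / 2)
print(roots)                            # smaller root = log 2, larger root = log 5
print(math.log10(2), math.log10(5))     # matches the roots above
```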
November 23
Area of a surface of revolution
I have looked in several calculus books and they all give formulas for the area of a surface of revolution (if x and y are defined parametrically on the interval [a, b]) as:
S = ∫ from a to b of 2π·y(t)·√(x′(t)² + y′(t)²) dt if it is rotated around the x-axis
and
S = ∫ from a to b of 2π·x(t)·√(x′(t)² + y′(t)²) dt if it is rotated around the y-axis
What about when the curve is rotated about some other line (including things such as y = x)? I realize the formula only holds if the plane curve does not cross the axis of rotation. Is it just
S = ∫ from a to b of 2π·r(t)·√(x′(t)² + y′(t)²) dt, where r(t) measures the distance from the axis of rotation to the plane curve (along lines perpendicular to the axis of rotation)? Thanks. StatisticsMan (talk) 01:57, 23 November 2010 (UTC)
- Yes, the formula is S = ∫ 2π·r(s) ds, where the curve is parameterized by the arc length s. You'd get your formulas after the change of variable ds = √(x′(t)² + y′(t)²) dt. (Igny (talk) 03:40, 23 November 2010 (UTC))
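As a sanity check of that general formula (my own example): rotate the segment from (0,0) to (1,0) about the line y = x. The distance from (t, 0) to the line is t/√2, and the resulting surface is a cone with slant height 1 and base radius 1/√2, so the area should be π·(1/√2)·1.

```python
import math

# Numerically integrate S = integral of 2*pi*r(t) * sqrt(x'(t)^2 + y'(t)^2) dt
# for x(t) = t, y(t) = 0, t in [0,1], rotated about the line y = x.
# Here r(t) = distance from (t, 0) to the line y = x, i.e. t / sqrt(2), and the speed is 1.

n = 100_000
dt = 1.0 / n
integral = sum(2 * math.pi * ((i + 0.5) * dt / math.sqrt(2)) * 1.0 * dt for i in range(n))

print(integral)                          # midpoint rule
print(math.pi * (1 / math.sqrt(2)) * 1)  # cone lateral area pi * r * slant = pi / sqrt(2)
```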
dot product
Your article says a dot product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number obtained by multiplying corresponding entries and adding up those products. But how are vectors sequences of numbers? I thought vectors were just defined as a segment with a magnitude and direction, or x and y hat components? (I'm mostly familiar with them from a vector standpoint.) —Preceding unsigned comment added by 24.92.78.167 (talk) 04:22, 23 November 2010 (UTC)
- The article Cartesian coordinate system explains how a vector is described by coordinates. Bo Jacoby (talk) 07:39, 23 November 2010 (UTC).
- And for a concise answer, a vector with x and y components can be described by the sequence of two numbers (x, y). A 3D vector can be described as (x, y, z). -- Meni Rosenfeld (talk) 09:06, 23 November 2010 (UTC)
- There are two concepts that need to be distinguished here. There is the algebraic operation
- a·b = a1·b1 + a2·b2 + ... + an·bn
- and then there is the geometric operation
- a·b = |a|·|b|·cos θ, where θ is the angle between a and b.
- If the components of a and b are co-ordinates relative to an orthonormal basis then these two operations give the same result. Otherwise (if, for example, a and b are represented in polar co-ordinates) then the operations are not the same. Gandalf61 (talk) 16:42, 23 November 2010 (UTC)
- If you know how to compute the square and the sum and the difference and the half, then you can always compute the product by the formula
- ab = ((a + b)² − (a − b)²)/4.
- If a and b are vectors, then you know how to compute the sum and the difference and the half, and the square is
- a·a = |a|²,
- so the product is
- a·b = (|a + b|² − |a − b|²)/4.
- Thus the dot product definition is motivated. If two vectors a and b are at right angle to one another, then the two diagonals a + b and a − b are equal in length, and so the dot product is zero:
- a·b = (|a + b|² − |a − b|²)/4 = 0.
- Bo Jacoby (talk) 01:53, 24 November 2010 (UTC).
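A small numeric illustration of the equivalent computations above (my own sketch; Cartesian coordinates, i.e. an orthonormal basis, are assumed):

```python
import math

a = (3.0, 4.0)
b = (2.0, -1.0)

algebraic = sum(ai * bi for ai, bi in zip(a, b))                 # a1*b1 + a2*b2

norm = lambda v: math.hypot(*v)
cos_theta = math.cos(math.atan2(a[1], a[0]) - math.atan2(b[1], b[0]))
geometric = norm(a) * norm(b) * cos_theta                        # |a||b|cos(theta)

# Bo Jacoby's diagonal formula: (|a+b|^2 - |a-b|^2) / 4
plus = (a[0] + b[0], a[1] + b[1])
minus = (a[0] - b[0], a[1] - b[1])
diagonals = (norm(plus) ** 2 - norm(minus) ** 2) / 4

print(algebraic, geometric, diagonals)   # all three agree (here: 2.0)
```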
Groups presentations
Say I have a group with presentation ⟨x, y | x^11 = 1, y^−1xy = x^−1⟩. I think it's pretty obvious that in this group, x has order 11, but I'm having a hard time showing this with any sort of rigour. It seems so clear that the second relation can't possibly affect the order of x (and that, in fact, if we had a presentation with just the second relation, x and y would have infinite order), but I just can't see how to prove it. I want to use von Dyck's theorem to say that there exists a group such that x is order 11 and the second relation is satisfied, and inject my group in there, forcing the order of x to be 11, but again, I've not really been rigorous about showing that such a group does definitely exist. I suppose I'm also just wondering in general how to show that relations in a presentation don't interact with each other in an 'interesting' way (or if they do...). Any assistance would be appreciated, Icthyos (talk) 17:01, 23 November 2010 (UTC)
- Rewrite the second relation as x^y = x^−1. Then it is clear that it is a presentation of the semidirect product C11 ⋊ Z, where the generator of Z acts on C11 by inversion,
- with x being the generator of the C11, and y the generator of the Z. In particular, the order of x is indeed 11. (Of course, you don't need to know that the semidirect product has exactly this presentation, you only need to know that it satisfies the given relations. For this purpose, the simpler product C11 ⋊ C2 would also work, giving the dihedral group D11.)—Emil J. 17:14, 23 November 2010 (UTC)
- I don't think there is any method which would work in general. It is even algorithmically undecidable whether a given finite presentation defines a nontrivial group. However, looking for semidirect products often helps.—Emil J. 17:23, 23 November 2010 (UTC)
- I see, thank you. I tend to forget about semidirect products, as it seems like quite a mysterious construction to me (I've not really met them that much). I'm always quite surprised when I learn that a group can be expressed as a semidirect product, it's a bit like a black box to me, for now at least. Is it fair to say that as soon as we add another relation on top of x^8 = 1 into our presentation (apart from something like y^4 = 1), we can no longer immediately read off what the order of x is...even if it seems intuitively obvious? We need to search for something like a semidirect product, as above, to determine what the group actually is? Thanks again, Icthyos (talk) 20:24, 23 November 2010 (UTC)
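To illustrate the von Dyck-style argument the original poster had in mind (my own sketch; the concrete target group, the dihedral group of order 22 realised as permutations, is my choice): the assignment below satisfies the two relations as written above, and the image of x has order 11, so x has order exactly 11 in the presented group.

```python
# Permutations of {0,...,10} as tuples: p[i] is the image of i.
n = 11
compose = lambda p, q: tuple(p[q[i]] for i in range(n))    # (p*q)(i) = p(q(i))
inverse = lambda p: tuple(p.index(i) for i in range(n))
identity = tuple(range(n))

x = tuple((i + 1) % n for i in range(n))    # an 11-cycle
y = tuple((-i) % n for i in range(n))       # a reflection

# Check the relations: x^11 = 1 and y^-1 * x * y = x^-1.
power = identity
for _ in range(11):
    power = compose(power, x)
assert power == identity
assert compose(inverse(y), compose(x, y)) == inverse(x)

# Order of the image of x:
order, p = 1, x
while p != identity:
    p = compose(p, x)
    order += 1
print(order)   # 11: so the order of x in the presented group is a multiple of 11,
               # and since x^11 = 1 holds there, it is exactly 11.
```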
A somewhat ugly equation
Can someone solve this assuming n and p are known and I'm looking for x (I'd appreciate even a solution for n=1 in case the above equation is too impractical to solve). It's effectively an extension of Kelly criterion and expected utility, I'd just like to know how far I can go before the bets become -EU. Thanks 78.0.251.166 (talk) 17:51, 23 November 2010 (UTC)
- There's no closed-form elementary solution to this. You need to solve it numerically. -- Meni Rosenfeld (talk) 19:04, 23 November 2010 (UTC)
- x=0 seems to give 1. 166.137.10.83 (talk) 20:28, 23 November 2010 (UTC)
- Of course, I was talking about the root the OP was interested in. -- Meni Rosenfeld (talk) 07:23, 24 November 2010 (UTC)
- Yes, x=0 is one solution. If p is a positive integer then the other p−1 solutions satisfy the polynomial equation
- Bo Jacoby (talk) 02:35, 24 November 2010 (UTC).
- For there are no real solutions (other than 0), so I think the OP wants which is not an integer. -- Meni Rosenfeld (talk) 07:23, 24 November 2010 (UTC)
Line integral
I'm no mathematician - and I'm useless when it comes to calculus. I have a 3D volume (let's say it's a unit sphere, centered on the origin) containing some fluid of variable opacity. Let's say that the density varies at each point according to some simple function like maybe:
density = 1 - sqrt(x^2 + y^2 + z^2)
...a spherical blob that's utterly opaque in the middle and utterly transparent at the edges...kinda like an idealized raincloud. I want to know - for an arbitrary line passing through the unit sphere (like a laser beam), how much of the light would be occluded by the fluid...(forgetting anything messy and real-worldish like scattering and re-radiation). Let's suppose the line is described by a point somewhere on the surface of the unit sphere - and a vector describing its direction.
I think I need a path integral - integrating the density function along the line...but I have absolutely no idea how to do that.
Part two of the question (assuming there is a relatively easy answer to part 1) is: If I wanted to approximate the density of a real fluid - what kinds of density functions over the unit sphere produce relatively, simple functions for the line integral?
If I happen to have picked a particularly difficult function for my example - ignore it and pick something easier.
216.136.51.242 (talk) 18:02, 23 November 2010 (UTC)
- To the best of my knowledge, opacity doesn't work that way. There's no "utterly opaque" - no matter how dense the substance is, light will still go through if the layer it has to pass is thin enough. The opaqueness at a point can take any nonnegative value, and the portion of the light that passes is exp(−A), where A is the line integral of the density along the path taken by the light.
- So to answer your question about computing line integrals along straight lines, you first need to find the points where the light enters and leaves, p1 and p2, and the length of the segment between them, L = |p2 − p1|. Then the line integral will be A = L·∫ from 0 to 1 of f(p1 + t·(p2 − p1)) dt. Putting in your particular f will give you a normal one-dimensional integral which may or may not be easy to solve. If it can't be solved analytically you can always find the integral numerically. With your example density function, taking endpoints p1 and p2 on the sphere, you get A = L·∫ from 0 to 1 of (1 − |p1 + t·(p2 − p1)|) dt. -- Meni Rosenfeld (talk) 19:22, 23 November 2010 (UTC)
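Here is a small numerical sketch along these lines (my own code, using the question's example density; clamping the density to zero outside the unit sphere is my reading of "utterly transparent at the edges"):

```python
import math

def density(p):
    r = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
    return max(0.0, 1.0 - r)            # the example density, clamped outside the sphere

def transmitted(p1, p2, steps=100_000):
    """Fraction of light surviving a straight path from p1 to p2:
    exp(-A), with A the line integral of the density (midpoint rule)."""
    L = math.dist(p1, p2)
    A = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        q = tuple(a + t * (b - a) for a, b in zip(p1, p2))
        A += density(q)
    A *= L / steps
    return math.exp(-A)

# A diameter through the centre vs. a chord grazing the edge:
print(transmitted((-1, 0, 0), (1, 0, 0)))      # densest path: A = 1, so about exp(-1) ~ 0.37
print(transmitted((-1, 0.9, 0), (1, 0.9, 0)))  # near the rim: much closer to 1
```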
Research Papers by James Stewart of Textbook Fame
I saw the unsourced statement at James Stewart: "Stewart's research focuses on harmonic analysis and functional analysis." which I don't doubt at all, but I can't find links to actual research or articles in refereed journals actually done by him. The bio at stewartcalculus.com likewise states his prior activity in research without any links (which is understandable, if not helpful- its purpose being PR and not reference). He seems to be a guest professor at best because U. of Toronto doesn't have a faculty page (that I can find) for him. I just want to see a detailed CV and publication history of this man who has made millions from his textbooks. 20.137.18.50 (talk) 18:52, 23 November 2010 (UTC)
- I did some sleuthing; his advisor, Lionel Cooper (mathematician), worked on operator-theory for quantum mechanics. I found a few papers in computational chemistry by a James J. P. Stewart, published in the late 1980s, such as Calculation of the nonlinear optical properties of molecules, Optimization of parameters for semiempirical methods. These sound like applied mathematics - numerical optimization and function analysis applied to computational chemistry. It's possible they're the same Stewart (though one article shows an affiliation with USAF Academy, seemingly incongruous with the textbook Stewart). Clearly, the man's most famous work is Calculus - which is an excellent calculus textbook; but I'm surprised that he'd have published his work in applied computational-chemistry. It doesn't seem like the standard career-track for a renowned mathematics education expert. Nimur (talk) 22:42, 23 November 2010 (UTC)
In search of a function
For some time, I've been trying to come up with a function from [0, ∞) to [0, 1) that satisfies these conditions:
- The function is strictly increasing, in other words, f(x) < f(y) whenever x < y
- The function must asymptotically approach 1, in other words,
(Note: the domain [0, ∞) includes 0.) The only function I've come up with is f(x) = x/(x + 1) or trivial variations thereof. What are other such functions? JIP | Talk 20:48, 23 November 2010 (UTC)
- How about ln([1 + 1/x]^x)? —Anonymous DissidentTalk 20:55, 23 November 2010 (UTC)
- You may want to consider 2*arctan(x)/pi.--Eliokim (talk) 21:24, 23 November 2010 (UTC)
- You could do well with variations on sigmoid functions. Simply shift and scale to make them meet your requirements. e.g. the logistic function can be modified to, e.g., 2/(1 + e^(−ax)) − 1, which, for most a, meets your requirements. -- 140.142.20.229 (talk) 21:36, 23 November 2010 (UTC)
- In fact, I should mention the shift-and-scale approach can be applied to almost any function that monotonically approaches a finite limit from below as x->inf: Pick a point (a,b) on the function f(x) for which the curve is monotonic for all x>a. Find h, the limit as x->inf for g(x) = f(x-a) - b. Your origin-including, inf+ asymptote=1 function is k(x) = 1/h * ( f(x-a) - b ). You can even use functions which approach the limit from above - just use -f(x) in the procedure above. Functions with finite limits as x->-inf? Use f(-x). If you're clever, even finite limits as x->0 can be used by employing 1/f(x) as above. (All this providing you can calculate the limits of the shifted function, of course. For this computer algebra systems like WolframAlpha are quite handy.) -- 140.142.20.229 (talk) 21:54, 23 November 2010 (UTC)
- Another fundamentally different function is 1 − 2^(−x). --Tardis (talk) 22:14, 23 November 2010 (UTC)
- This can be generalized, of course: 1 − 1/f(x) for any strictly increasing unbounded f with f(0) = 1. I used x ↦ 2^x and you used the successor function S, but you can use x ↦ 1 + x², etc., etc. Then you can compose with any strictly increasing unbounded g with g(0) = 0: 1 − 1/f(g(x)), etc., etc. --Tardis (talk) 22:36, 23 November 2010 (UTC)
- "" are not other words for "asymptotically approaches 1". For example, . -- Meni Rosenfeld (talk) 07:19, 24 November 2010 (UTC)
- What do you consider "trivial variations"? If f is a function like you describe, then take g = f/(1 − f) and you get f = g/(1 + g). So if a monotonous transformation of the variable is considered trivial, then all such functions are trivial variations of x/(x + 1). Also, all such functions are 1 − 1/h(x) for some monotonous h. -- Meni Rosenfeld (talk) 07:39, 24 November 2010 (UTC)
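For concreteness, a quick check of a few functions of the kinds suggested above (my own sketch): each is 0 at 0, strictly increasing on the sampled range, and tends to 1.

```python
import math

candidates = {
    "x/(x+1)":           lambda x: x / (x + 1),
    "1 - 2**(-x)":       lambda x: 1 - 2 ** (-x),
    "2*atan(x)/pi":      lambda x: 2 * math.atan(x) / math.pi,
    "2/(1+exp(-x)) - 1": lambda x: 2 / (1 + math.exp(-x)) - 1,
    "tanh(x)":           lambda x: math.tanh(x),
}

xs = [i / 100 for i in range(0, 1001)]       # sample points in [0, 10]
for name, f in candidates.items():
    values = [f(x) for x in xs]
    increasing = all(a < b for a, b in zip(values, values[1:]))
    print(name, increasing, f(0.0), f(1e6))  # strictly increasing, f(0) = 0, f near 1
```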
November 24
[cis (x)]^n = cis (nx)?
I've recently proved that [cis (x)]^n = cis (nx) (where cis(x) = cos(x)+isin(x)) in class. Does the equation [f(x)]^n = f(nx) hold for any other trigonometric function? What is the point of doing the proof anyway? How is this helpful? 24.92.78.167 (talk) 03:56, 24 November 2010 (UTC)
It is helpful in that exponential identities are algebraically much simpler than trigonometric identities. You'll see this if you study Fourier series in any depth. The equation holds for (for example) f(x) = cis(6x), etc. But whether there are any others besides these trivial modifications is a subtler question. I think one can show that there are no others that are continuous, and the ones that are not continuous are freaks of no particular interest. Michael Hardy (talk) 04:12, 24 November 2010 (UTC)
- You need the axiom of choice to exhibit a noncontinuous linear function. Robinh (talk) 08:24, 24 November 2010 (UTC)
- The fact that cis(x)^n = cis(nx) is de Moivre's theorem, and is of use when considering powers and roots of complex numbers. For example, computing (1 + i√3)^21 is simple with de Moivre's theorem, but a nightmare with the binomial theorem. (In answer to your question "How is this helpful?") —Anonymous DissidentTalk 08:44, 24 November 2010 (UTC)
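A quick check of that example with Python's complex arithmetic (my own sketch): 1 + i√3 = 2·cis(π/3), so by de Moivre (1 + i√3)^21 = 2^21·cis(7π) = −2^21 = −2097152.

```python
import math

z = 1 + 1j * math.sqrt(3)
print(z ** 21)         # approximately -2097152 + 0j
print(-(2 ** 21))      # de Moivre: 2**21 * cis(7*pi) = 2**21 * cis(pi) = -2**21

# cis(x)**n == cis(n*x) for a few sample values:
cis = lambda x: complex(math.cos(x), math.sin(x))
for x, n in [(0.7, 5), (2.3, 11), (-1.1, 8)]:
    print(abs(cis(x) ** n - cis(n * x)))   # ~0 up to rounding error
```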
algebra
If a^2 + 2b = 7, b^2 + 4c = -7, and c^2 + 6a = -14, then what is the value of a^2 + b^2 + c^2? —Preceding unsigned comment added by Shubham81095 (talk • contribs) 05:23, 24 November 2010 (UTC)
- Can you tell us what you've tried so far, or where you're stuck? Also, I'm curious what the source of the problem is, I've seen it online but with no attribution. Eric. 82.139.80.252 (talk) 08:22, 24 November 2010 (UTC)
- I get a trial and error solution of a=-3, b=-1, c=-2 giving a^2+b^2+c^2=14. I don't know if that's the only solution though.--RDBury (talk) 09:43, 24 November 2010 (UTC)
- Wolfram Alpha thinks it is (assuming we're staying within the real numbers). —Anonymous DissidentTalk 09:54, 24 November 2010 (UTC)
- The above is the only solution in real numbers, there should be up to 6 complex solutions though.--RDBury (talk) 09:58, 24 November 2010 (UTC)
- Never mind, I see Alpha gives the complex roots as well. Looks like a^2+b^2+c^2 is different in each case, three pairs of conjugate values. So the answer appears to be 14 but only if you assume real numbers.--RDBury (talk) 10:18, 24 November 2010 (UTC)
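One way to make the trial-and-error answer airtight (my own addendum): adding the three equations gives a^2 + 6a + b^2 + 2b + c^2 + 4c + 14 = 0, i.e. (a+3)^2 + (b+1)^2 + (c+2)^2 = 0, which over the reals forces a = -3, b = -1, c = -2 and hence a^2 + b^2 + c^2 = 14. A quick sympy check of that algebra:

```python
from sympy import symbols, expand

a, b, c = symbols('a b c')

# Sum of the three given equations, moved to one side:
lhs_sum = (a**2 + 2*b - 7) + (b**2 + 4*c + 7) + (c**2 + 6*a + 14)

# It is exactly a sum of three squares:
print(expand(lhs_sum - ((a + 3)**2 + (b + 1)**2 + (c + 2)**2)))   # 0

# So each square must vanish over the reals, and then:
print((-3)**2 + (-1)**2 + (-2)**2)                                # 14
```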