
Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia
:<math>\int_a^b \frac{\cos (\theta t) - \cos(t)}{t} dt</math>
is uniformly bounded as well in terms of <math>\theta</math>, and converges to an expression involving <math>\log (\theta)</math>, which is also mysterious to me. Can anybody shed light on this? Thanks, [[User:RayAYang|Ray]] ([[User talk:RayAYang|talk]]) 12:55, 30 December 2008 (UTC)
:The first equation is usually called the sinc integral or the sine integral. This webpage should be able to help you. [http://mathworld.wolfram.com/SineIntegral.html Mathworld Link] As you can see the integration is somewhat complex, which is probably why they did not give you the proof of this. You can also look at [[Trigonometric integral]] for assistance. From the trigonometric integral page we can see that the cosine part of the integrals will also be bounded, however I am unsure why they converge to <math>\log (\theta)</math>. I hope this helps. --[[User:Damelch|Damelch]] ([[User talk:Damelch|talk]]) 17:56, 30 December 2008 (UTC)
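:For completeness, here is where the <math>\log (\theta)</math> comes from: it is a Frullani-type cosine integral (a sketch, not from the original thread; take <math>p, q > 0</math>):
:<math>\int_0^\infty \frac{\cos(pt) - \cos(qt)}{t}\, dt = \log\frac{q}{p}.</math>
:Substituting <math>u = pt</math> and <math>u = qt</math> in the two pieces over <math>[\varepsilon, T]</math> gives
:<math>\int_\varepsilon^T \frac{\cos(pt) - \cos(qt)}{t}\, dt = \int_{p\varepsilon}^{q\varepsilon} \frac{\cos u}{u}\, du - \int_{pT}^{qT} \frac{\cos u}{u}\, du \longrightarrow \log\frac{q}{p} - 0,</math>
:since <math>\cos u \to 1</math> as <math>u \to 0</math> and the tail integral vanishes as <math>T \to \infty</math>. With <math>p = \theta</math> and <math>q = 1</math>, the integral in the question converges to <math>-\log(\theta)</math> as <math>a \to 0</math> and <math>b \to \infty</math>.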


== Equations for Π<sup>n</sup><sub>i=a</sub>f(x) and Σ<sup>n</sup><sub>i=a</sub>f(x) ==

Revision as of 17:56, 30 December 2008

Welcome to the mathematics section of the Wikipedia reference desk.

December 20

Boubaker polynomials (2)

To user Eric

The hint you gave is wonderful!

The trouble you evoked can be avoided by calculating the sum

So that :

By the way, if anyone can give the Chebyshev or Dickson polynomials' exponential generating functions, it would help a lot.


Duvvuri.kapur (talk) 09:40, 20 December 2008 (UTC)[reply]

You would like to pass from the "ordinary generating function" of, say, the Chebyshev polynomials of the first kind, <math>\sum_n T_n(x)t^n</math>, to an "exponential generating function", that is, one of the form <math>\sum_n T_n(x)t^n/n!</math>. This involves applying to the OGF a suitable linear operator that takes <math>t^n</math> to <math>t^n/n!</math>, for all n, and that is continuous enough to transform power series term by term. There are several functional settings where you can do that; here the simplest for your needs is the one of formal series in t (say with coefficients that are functions of x). You can then observe that your transformation also sends <math>\tfrac{1}{1-at}</math> to <math>e^{at}</math>; indeed, the OGF to transform can be written as a sum of such terms (in case, have a look at the examples of partial fraction decomposition). The same method works as well with the <math>U_n</math>, with the Dickson polynomials and with the Boubaker polynomials, for all of them have OGFs that are rational functions of t. Is it clear? PMajer(talk) 11:59, 20 December 2008 (UTC)[reply]


Yes, it is clear. But one cannot find the already established Chebyshev and Dickson polynomial exponential generating functions in the existing literature; this is strange ..Duvvuri.kapur (talk) 12:50, 20 December 2008 (UTC)[reply]


Well, the existing mathematics literature is so huuuuge that it is often hard to find something. I have no references for the EGF of your polynomials, but I am as sure that it's there as I'm sure that I'm alive (say, in practice, 100%). At least in your case you know exactly where to look for it, and it shouldn't be that difficult. But there are situations in which, to get a clue for where to look for a result, you first have to give a proof of it: only then will you understand what its AMS classification is and which keyword to put into Google. Research in mathematics is becoming a search in the library of Babel, whose catalog is just a copy of it. Mathematicians are mostly Platonists, so in principle it should not make a great difference to do research in a real or in an ideal library :) PMajer(talk) 13:49, 20 December 2008 (UTC)[reply]
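As a numerical sanity check of the transform PMajer describes (a sketch, not from the thread): applying the <math>\tfrac{1}{1-at}\mapsto e^{at}</math> rule termwise to <math>T_n(x)=\tfrac12(z^n+z^{-n})</math> with <math>z=x+\sqrt{x^2-1}</math> gives the closed form <math>e^{xt}\cosh(t\sqrt{x^2-1})</math> for the EGF of the <math>T_n</math>, which can be checked against the three-term recurrence:

```python
import math

def chebyshev_T(n, x):
    """T_n(x) by the recurrence T_0 = 1, T_1 = x, T_{n+1} = 2x*T_n - T_{n-1}."""
    t_prev, t_cur = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

x, t = 1.5, 0.7
# Partial sum of the EGF, sum_n T_n(x) t^n / n!  (terms decay factorially)
egf_sum = sum(chebyshev_T(n, x) * t**n / math.factorial(n) for n in range(40))
# Closed form from transforming T_n(x) = (z^n + z^{-n})/2 termwise,
# with z = x + sqrt(x^2 - 1)  (real here since x > 1)
s = math.sqrt(x * x - 1)
closed = math.exp(x * t) * math.cosh(t * s)
assert abs(egf_sum - closed) < 1e-9
```

The same termwise transform applies to any OGF that is a rational function of t, which is PMajer's point about the Dickson and Boubaker cases.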

List of exact values of trig functions

It's possible to find exact (albeit irrational) values for e.g. the sine of 3 degrees using identities, as indicated on Exact trigonometric constants. It's also possible to find values for the sine of any multiple of 5 degrees, e.g. (Sorry if I messed up the math.) Is there some place where many of these are listed? Most webpages I've seen only have a few, like these: [1] [2] [3] I'm looking for a big collection of these. If one doesn't exist, I'd like to make it. :) Pointers would be appreciated. Thanks. 4.242.147.223 (talk) 05:27, 20 December 2008 (UTC)[reply]

If you allow nth roots of complex numbers, then <math>\zeta = e^{2\pi i/n}</math> is one of the n complex nth roots of 1. Then you can get the cosine, for example, as (ζ + ζ<sup>−1</sup>)/2. So really, you're going to have to specify what kinds of nth roots of complex numbers you're allowing. Otherwise any sine or cosine of a rational angle can be expressed easily in terms of an nth root of 1, for some n and some choice of root. Joeldl (talk) 11:28, 20 December 2008 (UTC)[reply]
This thread from a couple of years ago may be of interest to you. --NorwegianBlue talk 15:46, 20 December 2008 (UTC)[reply]
OK, good. Thanks for the quick and excellent responses (that's basically what I was looking for). Am I correct in saying that it's impossible to get an exact value of sines of irrational angles? 4.242.108.230 (talk) 16:12, 20 December 2008 (UTC)[reply]
No. There's an angle θ, probably irrational, with sin θ = 0.123456, for example. Joeldl (talk) 16:30, 20 December 2008 (UTC)[reply]
Good point. However, is there an exact value for e.g. sine of (square root of 5)? 4.242.108.230 (talk) 16:59, 20 December 2008 (UTC)[reply]
Well, if your angle "square root of 5" is in radians, then the sine is a transcendental number by the Hermite-Lindemann theorem. Joeldl (talk) 17:10, 20 December 2008 (UTC)[reply]
Thanks. 4.242.108.187 (talk) 17:50, 20 December 2008 (UTC)[reply]
On the flip side of the coin, if a triangle has integer side lengths, the angles are irrational. So it is possible to find the sine of many irrational angles using Pythagorean triples. Thenub314 (talk) 20:28, 20 December 2008 (UTC)[reply]
Yes, irrational in radians. I'm making this point because in an earlier part of the discussion, I used "rational angle" to mean a rational multiple of π. Joeldl (talk) 05:01, 21 December 2008 (UTC)[reply]

On/Off Notation

In proving the solution to a problem algebraically, is there a standard way of showing whether something is on or off. The problem involves lights being on or off and the turning of them on or off, I solved it, but I'm stumped as to how to write on/off algebraically. Any ideas appreciated! Thanks. Harland1 (t/c) 22:24, 20 December 2008 (UTC)[reply]

There isn't a standard mathematical notation for "on" and "off". Many people are comfortable with using "1" and "0" or "true" and "false" as place-holders for "on" and "off", but if you choose to do that, you should clearly explain the meaning of your words so that the reader is not confused.
However, I'm not sure what you mean by "algebraically". Can you explain? If you are planning on performing some kind of algebraic manipulations with the states of lights, then you should use the notation from the appropriate algebra. For example, if you want to represent the expression "Light C is on exactly when either light A or light B is on", then in the notation of boolean algebra you might write <math>c = a \lor b</math>, where c is true or false according to whether light C is on or off, etc. As another example, if each light had a brightness (either 0 or 1, or maybe any positive integer, etc.), and you wanted to represent the expression "The brightness of light C is the sum of the brightnesses of lights A and B", then the notation <math>c = a + b</math> would be appropriate, where c is an integer denoting the brightness of light C, etc. As one last example, if you are dealing with statements like "Light C is on if Fred enters the room first", then you probably aren't performing algebraic manipulations at all, and there won't be a standard notation for the situation. Eric. 68.18.63.75 (talk) 22:47, 20 December 2008 (UTC)[reply]
If you were tackling the lights out puzzle, then it would be reasonable to use integers modulo 2 to represent whether each light is on or off. Then, adding "1" to the light toggles the state of that light. Pressing a button is represented by adding 1 to certain lights (which lights depends on which button). The usefulness of this representation, is that it makes two facts about the puzzle clear: it doesn't matter what order buttons are pressed (since addition is commutative), and pressing any one button twice has no effect (since 1 + 1 = 0 in modulo 2 arithmetic). Eric. 68.18.63.75 (talk) 22:59, 20 December 2008 (UTC)[reply]
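Eric's mod-2 representation is easy to sketch; here is a minimal illustration (the three-light "wiring" of the buttons is hypothetical, chosen only to exhibit the two facts):

```python
# State of each light as an integer mod 2: 0 = off, 1 = on.
# Pressing a button adds 1 (mod 2) to each light the button affects.

def press(lights, affected):
    """Return a new state with every index in `affected` toggled."""
    new = list(lights)
    for i in affected:
        new[i] = (new[i] + 1) % 2
    return new

buttons = {"A": [0, 1], "B": [1, 2]}  # hypothetical wiring

# Order of presses doesn't matter, because addition is commutative:
s1 = press(press([0, 0, 0], buttons["A"]), buttons["B"])
s2 = press(press([0, 0, 0], buttons["B"]), buttons["A"])
assert s1 == s2 == [1, 0, 1]

# Pressing the same button twice is a no-op, because 1 + 1 = 0 (mod 2):
assert press(press([0, 0, 0], buttons["A"]), buttons["A"]) == [0, 0, 0]
```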


December 21

Feynman Restaurant Problem

(This problem and its solution were invented by Michael Gottlieb based on a story told to him by Ralph Leighton. It can be found posted at www.feynmanlectures.info.)

Assume that a restaurant has N dishes on its menu that are rated from worst to best, 1 to N (according to your personal preferences). You, however, don't know the ratings of the dishes, and when you try a new dish, all you learn is whether it is the best (highest rated) dish you have tried so far, or not. Each time you eat a meal at the restaurant you either order a new dish or you order the best dish you have tried so far. Your goal is to maximize the average total ratings of the dishes you eat in M meals (where M is less than or equal to N).

The average total ratings in a sequence of meals that includes n "new" dishes and b "best so far" dishes can be no higher than the average total ratings in the sequence having all n "new" dishes followed by all b "best so far" dishes. Thus a successful strategy requires you to order some number of new dishes and thereafter only order the best dish so far. The problem then reduces to the following:

Given N (dishes on the menu) and M <= N (meals to be eaten at the restaurant), how many new dishes D should you try before switching to ordering the best of them for all the remaining (M–D) meals, in order to maximize the average total ratings of the dishes consumed?

Honestly I have no idea how you're supposed to answer this...any ideas? —Preceding unsigned comment added by 65.92.236.87 (talk) 03:51, 21 December 2008 (UTC)[reply]

A very similar, though simpler, problem is the secretary problem. Perhaps you will find something helpful there. Eric. 68.18.63.75 (talk) 05:03, 21 December 2008 (UTC)[reply]
A google search for "Feynman's restaurant problem" turns up a page with an expression for D in terms of M (I assume the OP has already seen this, as they state the problem with exactly the same words and notation). However, it doesn't say how this expression is derived. Also, as the given expression does not involve N, I am wondering if it is only an asymptotic solution for M << N. Gandalf61 (talk) 12:48, 21 December 2008 (UTC)[reply]
Wasn't getting very far with the discrete version of the problem as originally posed, so I thought let's assume M<<N and transition to a continuous version. So assume each dish has a "score" which is a real number normalised to be in the range [0,1], and when you choose a new dish, the score of this dish is uniformly distributed in [0,1]. You choose a new dish on the first D visits, then choose the best dish eaten so far on the remaining M-D visits. You don't know the scores of the dishes - you only know which dish out of those eaten so far has scored highest. Objective is to choose D so as to maximise the expected total score across all M visits.
After choosing new dishes on the first D visits, the expected total score so far is D/2. Suppose the expected score of the best dish after sampling k different dishes is b<sub>k</sub>. Then you want to choose D so as to maximise (D/2) + (M − D)b<sub>D</sub>.
I couldn't find a closed form for b<sub>k</sub>, but I did find that
which is sufficient to set up a spreadsheet model. For M = 10, 50, 100, 200 I get D = 4, 11, 16, 24. The expression in the link I referred to above gives D = 3.7, 9.1, 13.2, 19.0. So either I am over-estimating D or the link solution is under-estimating D. Gandalf61 (talk) 18:12, 21 December 2008 (UTC)[reply]
Okay, I think I am over-estimating D, as a numerical simulation shows better agreement with the values given by the link solution. I think my recursive expression for b<sub>k</sub> must be wrong. Does anyone know what the expected value of the maximum of k values each drawn from a U(0,1) distribution is? Gandalf61 (talk) 15:19, 22 December 2008 (UTC)[reply]
I've looked at this and if you try to find an exact expression of the function of M you're optimizing, you get a sum of expressions involving factorials, and I don't really know where to go from there. Namely, unless I've miscalculated, you need to pick M to maximize:
  • [This was a mistake - see below.]
Anybody know how to do this?
On the other hand, I think you can approximate the problem this way for large N: You get to eat N dishes. Each time you try a new dish, its quality is a random real number between 0 and 1. How many times should you try a new dish before settling?
Unless I've made a mistake, this problem can be solved almost exactly. By "almost" I mean you get M within 1.
The probability, after M tries, that the best dish had quality ≤ t is tM. Thus you get a probability density function by differentiating: MtM - 1. The expectancy of the best dish after M tries is the integral over [0,1] of t times this function, which gives M / (M + 1). So the thing you want to maximize is (1 / 2)M + (N - M)M / (M + 1). A little calculus (please correct me if I've miscalculated) shows that this function of a real variable M increases until it attains a maximum at the real number sqrt(2N + 2) - 1, and then decreases. Therefore the maximum is one of the two integers nearest sqrt(2N + 2) - 1.
If I haven't made a mistake, I imagine that with a lot of extra work, this could be turned into a proof that in the actual problem the optimal M is asymptotically equivalent to sqrt(2N). Joeldl (talk) 16:01, 22 December 2008 (UTC)[reply]
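Joeldl's continuous model is easy to check numerically (a sketch; the function maximized is the one derived above, with the optimal integer compared against sqrt(2M + 2) − 1):

```python
import math

def expected_total(D, M):
    """Expected total score in the continuous model: D new dishes
    (mean score 1/2 each), then M - D meals at the best dish found,
    whose expected score after D uniform draws is D / (D + 1)."""
    return D / 2 + (M - D) * D / (D + 1)

best = {}
for M in (10, 50, 100, 200):
    best[M] = max(range(M + 1), key=lambda D: expected_total(D, M))
    # the optimum should sit within 1 of sqrt(2M + 2) - 1
    assert abs(best[M] - (math.sqrt(2 * M + 2) - 1)) < 1

print(best)  # {10: 4, 50: 9, 100: 13, 200: 19}
```

These integer optima agree with the values D = 3.7, 9.1, 13.2, 19.0 quoted from the link earlier in the thread, once rounded.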

It seems I misinterpreted the problem. What was called D above, I called M, and for some reason I remembered the problem as having M and N identical. So in what I wrote, replace N with M and M with D, and N is sort of infinite. Also, the exact formula I gave first is for a special case. Sorry - I shouldn't have worked from memory. Joeldl (talk) 16:08, 22 December 2008 (UTC)[reply]

Okay, this might be a correct formula to maximize in terms of D:

  • [Correction: The formula should have been ]
The sum can be simplified
Bo Jacoby (talk) 23:41, 22 December 2008 (UTC).[reply]
Uh, except that that's not correct. Take N = 4 and D = 2 for an example. Joeldl (talk) 05:37, 23 December 2008 (UTC)[reply]

Uh, how embarrassing. Thank you. I meant to write

Your function becomes

Bo Jacoby (talk) 09:41, 23 December 2008 (UTC).[reply]

Joeldl, can you explain how you derived your expression
Is it exact, or is it an approximation for M<<N ? Gandalf61 (talk) 09:21, 23 December 2008 (UTC)[reply]
It's exact. I'll get back to you in a bit, Gandalf.
Bo Jacoby, I think it's still wrong, but thanks for the idea, because I think the sum is
  • .
I get it: you count the number of D + 1 element subsets of {1,...,N + 1} by counting for each k how many of those subsets have k + 1 as their maximum. Joeldl (talk) 10:07, 23 December 2008 (UTC) . (Joeldl, you are right again. Bo Jacoby (talk) 22:47, 23 December 2008 (UTC).)[reply]


Actually, I screwed up the formula too. What I did was say that the thing we're trying to maximize is

where E<sub>N,D</sub> is the expectancy of the maximum of a randomly chosen D-element subset of {1,...,N}. The number of sets with maximum equal to k is <math>\tbinom{k-1}{D-1}</math>, so

and this can be further simplified, as we now know. So my formula was off. Joeldl (talk) 10:35, 23 December 2008 (UTC)[reply]

Actually, E<sub>N,D</sub> simplifies to (N + 1)D / (D + 1). There must be a simpler argument for this! Joeldl (talk) 10:43, 23 December 2008 (UTC)[reply]

This means that the function to be maximized is

  • .

This shows that the optimal D depends only on M and not on N, and is one of the two integers closest to sqrt(2M + 2) - 1.

Now we need to figure out a simple argument for why this didn't depend on N. Joeldl (talk) 10:49, 23 December 2008 (UTC)[reply]

The preceding calculations may be intimidating, so let's reformulate the question. Basically, the question is this: Choose a D-element subset of {1,...,N} at random. Prove in a simple way that the maximum of the subset will have as its average value (N + 1)D / (D + 1). Joeldl (talk) 21:12, 23 December 2008 (UTC)[reply]

Okay, I think I've got it. Take a D-element subset T with maximum k. The average value of an element of a subset T of this kind, other than k itself, is the average of 1,...,k - 1, namely k / 2. So for a subset T with maximum k, the sum of all the elements of T will, on average, be (D - 1)k / 2 + k = (D + 1)k / 2. Thus, for an arbitrary random D-element subset T, the average sum of its elements will be (D + 1) / 2 times its average maximum. But the average sum of the elements is D times the average value of an element, which is (N + 1)/2. So the average maximum is D(N + 1) / 2 ÷ (D + 1) / 2 = (N + 1)D / (D + 1). This computes E<sub>N,D</sub> above and leads to a solution. Joeldl (talk) 21:44, 23 December 2008 (UTC)[reply]
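Joeldl's formula for the average maximum, (N + 1)D / (D + 1), can be verified by exhaustive enumeration for small N and D (a sketch using exact rational arithmetic; the helper name is mine):

```python
from fractions import Fraction
from itertools import combinations

def exact_mean_max(N, D):
    """Average of max(T) over all D-element subsets T of {1,...,N}, exactly."""
    subsets = list(combinations(range(1, N + 1), D))
    return Fraction(sum(max(T) for T in subsets), len(subsets))

N, D = 10, 3
assert exact_mean_max(N, D) == Fraction((N + 1) * D, D + 1)  # = 33/4
```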

Btw, the answer is D = sqrt(2(M+1)) - 1... —Preceding unsigned comment added by 65.92.236.87 (talk) 06:45, 25 December 2008 (UTC)[reply]

Not exactly. sqrt(2M + 2) - 1 isn't always an integer, so that can't be the answer. As mentioned above, D will be one of the two integers closest to sqrt(2M + 2) - 1 if this last number is < N (and it would take some extra work to find out which one it would be for a given M). Otherwise D = N. Joeldl (talk) 20:38, 25 December 2008 (UTC)[reply]

Socks in drawer problem

I'm working through some probability problems and would appreciate help with this one: "There are 10 red socks and 20 blue socks in a drawer. If socks are pulled out at random, how many must be drawn for it to be more likely than not that all of the red ones have been chosen?" I reasoned that if n (≥ 10) socks in all were to be chosen and 10 of them were to be red, then the remaining ones had to be blue, which could be done in 20 C n-10 ways. The number of ways of choosing n socks from 30 is 30 C n, so the probability of picking all 10 red ones in n draws is p(n) = (20 C n-10)/(30 C n). Is this right so far?

Putting the expression another way, I got that p(n) = (20·19·18·…)/(30·29·28·…), with 30−n terms in numerator and denominator. Is this right? To get this to > 0.5 required n=29, as I calculated that p(28)=38/87 and p(29)=2/3. Is this right, too, and is there a better way of getting the required value of n than arithmetic evaluation?→86.132.165.135 (talk) 17:16, 21 December 2008 (UTC)[reply]
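The poster's arithmetic checks out mechanically (a sketch, using the formula p(n) = C(20, n−10)/C(30, n) from the question):

```python
from fractions import Fraction
from math import comb

def p(n):
    """Probability that all 10 red socks are among the first n socks drawn."""
    return Fraction(comb(20, n - 10), comb(30, n))

assert p(28) == Fraction(38, 87)        # as computed in the question
assert p(29) == Fraction(2, 3)
# smallest n for which drawing all reds is more likely than not:
assert min(n for n in range(10, 31) if p(n) > Fraction(1, 2)) == 29
```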

Sooner or later arithmetic evaluation would be necessary, but there is a slightly easier approach for this particular problem. Consider: what is the most number of socks that you can draw out of the drawer, so that there is more than a 1/2 chance of drawing only blue socks? Eric. 68.18.63.75 (talk) 18:33, 21 December 2008 (UTC)[reply]

Using the binomial coefficient notation: The probability to draw red socks out of socks in the sample is . So the probability to draw red socks out of socks in the sample is

Due to cancellation of factors it is easy to evaluate

,

Bo Jacoby (talk) 03:16, 22 December 2008 (UTC).[reply]

PS. The approximation makes it possible to solve analytically: This is perhaps "a better way of getting the required value of n than arithmetic evaluation". Bo Jacoby (talk) 10:59, 22 December 2008 (UTC).[reply]

Thanks for the replies - it appears that my analysis of the problem was OK. →86.160.104.208 (talk) 20:16, 23 December 2008 (UTC)

Surely your result is correct, but somehow your analysis didn't quite convince you! Note that the function p(n) is characterized by the following properties

  1. p(n) = 0 for n = 0,1,2,3,4,5,6,7,8,9 and p(30) = 1.
  2. p is a polynomial of degree 10.

The first property is shown without algebra, while the second property is perhaps less evident. The solution to the 10th degree equation p(n) = 1/2 cannot be expressed in closed form, while the solution to (3/2)<sup>n−30</sup> = 1/2 is elementary. Bo Jacoby (talk) 23:29, 23 December 2008 (UTC).[reply]
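A reconstruction of the elided approximation (my reading, not Bo Jacoby's original wording): the exact probability factors as
:<math>p(n) = \prod_{j=n+1}^{30} \frac{j-10}{j},</math>
and for n close to 30 each factor is close to 2/3, so
:<math>p(n) \approx \left(\tfrac{2}{3}\right)^{30-n} = \left(\tfrac{3}{2}\right)^{n-30}.</math>
Solving <math>(3/2)^{n-30} = 1/2</math> gives <math>n = 30 - \log 2/\log(3/2) \approx 28.3</math>, consistent with the exact answer n = 29.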

Elegant... as to the solution of p(n) = 1/2, I trust the explicit arithmetic evaluation more. As far as I understand, the symbol <math>\approx</math> here has only a qualitative meaning and does not tell us how close n is to the solution of the approximated equation. Making the computation rigorous would take more work than the arithmetic evaluation...--PMajer (talk) 14:43, 24 December 2008 (UTC)[reply]

Thanks for showing me the \scriptstyle feature! The problem is solved, and it seems pointless to keep beating a dead horse. However it may be nice to be able to find the solution to the equation . The equation of degree 10 in

can due to symmetry be reduced to an equation of degree 5 in ,

where . Note that

The derivation of the simplification is

Bo Jacoby (talk) 15:02, 25 December 2008 (UTC).[reply]

Nice... well, in my experience no dead horse is so dead that it won't, sooner or later, rise and give me back everything. So I will be thinking a little more about your equation p(x)=y, staying ready to run ;) --PMajer (talk) 21:33, 26 December 2008 (UTC)[reply]

I simplified (and corrected!) my simplification above a little. (The horse was not quite dead!) Note that if you can solve the degree 10 equation right away, then the simplification is not necessary, and if you cannot solve the degree 5 equation anyway, then the simplification is not sufficient. However, the precision of standard floating point computation may sometimes be sufficient for evaluating a degree 5 polynomial but not for evaluating a degree 10 polynomial. So simplification is a good strategy. Bo Jacoby (talk) 08:58, 27 December 2008 (UTC).[reply]

Assume that you want the solution expressed by formulas that can be evaluated with pre-computer tools such as pencil and slide rule; then an approximation is called for. The degree 5 polynomial satisfies and . That means that approximately where is chosen between and , say . The solution to is giving the fine result . Bo Jacoby (talk) 14:25, 31 December 2008 (UTC).[reply]

Boubaker polynomials (3): Mr. Boubaker, please come and give your personal definition

I'm not an expert in special functions and orthogonal polynomials, and I confess that I had never heard of "Boubaker polynomials" before. However, I suspect that there is something to rectify in the article. It states that the are linear combinations of the Chebyshev polynomials; precisely, . If so, the recursive relation should be the same as for the and , thus ; but it's reported as . The generating function also does not match: if the denominator is really then the stable linear recursive relation should be . Finally, an explicit formula is quoted, giving as a linear combination of m-th powers of and ; in this case we would have . Discouraged, I made no further experiments... Well, I've just had a glance at the list of colors; there are such beautiful ones that I am tempted to revise the whole article. But let us put an end to this carnival... so, which one is the right formula? PMajer (talk) 23:06, 21 December 2008 (UTC)[reply]

You may want to ask User:Luoguozhang, who appears to be largely responsible for this fairly new article. Nil Einne (talk) 18:26, 22 December 2008 (UTC)[reply]

You may also want to mention this at talk:Boubaker polynomials. Michael Hardy (talk) 23:42, 22 December 2008 (UTC)[reply]

Done. Personally, I think that definitions in mathematics non sint multiplicanda praeter necessitatem, but it is right that people be free to give names to any object if they wish.--PMajer (talk) 12:57, 23 December 2008 (UTC)[reply]


December 22

What is the factorial of a fraction and a fractionth triangular number?

What is the factorial of a fraction and a fractionth triangular number? In other words, I mean: what are x and y when x = Π<sup>n</sup><sub>i=1</sub>i and y = Σ<sup>n</sup><sub>i=1</sub>i, where n is not an integer, or n≠xINT1. Please help me!----The Successor of Physics 08:25, 22 December 2008 (UTC) —Preceding unsigned comment added by Superwj5 (talkcontribs)

Neither of these functions is defined for fractional arguments, but there are fairly natural ways of extending them if for some reason you want to. The nth triangular number is n(n+1)/2, which gives you a definition that works on fractions, while the gamma function is the natural extension of the factorial. Algebraist 08:31, 22 December 2008 (UTC)[reply]
I'd have said neither of these is defined. (Except that of course one can define them by using the Gamma function, although usually the language of factorials is not used then.) Michael Hardy (talk) 23:41, 22 December 2008 (UTC)[reply]
Yes, and also note that some people define a factorial function as <math>x! = \Gamma(x+1)</math>, for real or complex x. PMajer --78.13.139.66 (talk) 08:51, 22 December 2008 (UTC)[reply]
What does n≠xINT1 mean? —Tamfang (talk) 17:27, 27 December 2008 (UTC)[reply]
I think it's a compsci way of saying n is not an integer. Algebraist 14:02, 29 December 2008 (UTC)[reply]
Weird if true. I might say n != int(n). —Tamfang (talk) 00:38, 30 December 2008 (UTC)[reply]
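Both extensions mentioned above can be exercised directly (a sketch; Python's math.gamma is the gamma function, so x! = gamma(x + 1)):

```python
import math

# Factorial of a fraction via the gamma function: x! = Gamma(x + 1).
assert math.isclose(math.gamma(5), math.factorial(4))        # Gamma(n+1) = n!
half_factorial = math.gamma(0.5 + 1)                         # (1/2)!
assert math.isclose(half_factorial, math.sqrt(math.pi) / 2)  # = sqrt(pi)/2

# "Fractionth" triangular number via the closed form n(n+1)/2.
tri = lambda n: n * (n + 1) / 2
assert tri(2.5) == 4.375
```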

Refs for group theory: realizability as derived or frattini subgroup

I remember there being a fair number of results about which groups can and cannot be the X-subgroup of a Y-group for various values of X and Y, but I don't remember really where all the results are proven or even just gathered. Values of X might include "derived", "frattini", "nilpotent residual", "solvable residual", and values of Y might include "any", "finitely generated", "finite", "solvable", "nilpotent".

As an example it is easy to show that the nonabelian group of order six is not the derived subgroup of any group (complete group), nor can a nonabelian group of order eight be the Frattini subgroup of any finite group (contradiction when looking in the automorphism group).

Can someone find a reasonable survey of these types of results? I am happy if X and Y are fixed in the survey, or if the survey covers multiple values of X and Y.

I am particularly interested in complete classifications for X=derived, Y=finite, and X=frattini, Y=finite. JackSchmidt (talk) 15:47, 22 December 2008 (UTC)[reply]

One place to start would be B. Eick's "The converse of a theorem of W. Gaschutz on Frattini subgroups" in Math Z. #224. I don't have access to a copy right now, but it and the references it cites might be helpful. 86.15.139.246 (talk) 23:13, 22 December 2008 (UTC)[reply]
Excellent, thanks! This is probably the main source I was remembering. I read this one a few years ago, but not recently. It shows that a finite group is isomorphic to a subgroup of the Frattini group of a Y-group if and only if it is a nilpotent Y-group (where Y is "any" or "finite" or "p" etc.). The condition of Gaschütz itself is a little hard to use abstractly, but for a specific example is probably reasonable enough, and it (now) characterizes the finite groups that are isomorphic to the Frattini subgroup of a Y-group (where Y is "any" or "finite" or "solvable" or "nilpotent"). Section 4 works out a similar idea for the derived subgroup, but the result is only one way (if the group is isomorphic to a derived subgroup, then it satisfies this condition). JackSchmidt (talk) 01:15, 23 December 2008 (UTC)[reply]

Probability

I have a set of words, T, containing x words. The teacher picks a subset of T containing y words (I don't know which words). I must know the definitions of z of the y words the teacher picked. How would I set this up to find the least number of words whose definitions I must memorize to be sure of knowing z of the y? TIA, Ζρς ι'β' ¡hábleme! 21:56, 22 December 2008 (UTC)[reply]

The number of words you needn't know is y − z. So the number of words you do need to know is x − (y − z). —Preceding unsigned comment added by Philc 0780 (talkcontribs) 22:13, 22 December 2008
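The count x − (y − z) can be sanity-checked by brute force on a small instance (a sketch; the helper name is mine):

```python
from itertools import combinations

def min_words_to_learn(x, y, z):
    """Smallest k such that knowing any k-word subset guarantees at least
    z known words in every y-word subset the teacher might pick."""
    words = range(x)
    for k in range(x + 1):
        known = set(range(k))  # by symmetry, which k words you learn doesn't matter
        if all(len(known & set(pick)) >= z
               for pick in combinations(words, y)):
            return k

# With x = 6 words, y = 4 picked, z = 2 required: x - (y - z) = 4.
assert min_words_to_learn(6, 4, 2) == 4
```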

December 23

1 + 1 = 0?

Long ago when I was in middle school I remember jotting down the following "proof" to my mild amusement:

Of course this cannot be right, but can someone please point out the problem? 124.214.131.55 (talk) 01:21, 23 December 2008 (UTC)[reply]

Well, look at the lines, and see which is the first false one, and that ought to tell you where the error is. Is your first line true? Yes. What about the second? --Trovatore (talk) 01:26, 23 December 2008 (UTC)[reply]
I would assume that the premise (first line) is fine. The second line should logically be fine too as taking the log (or whatever) of both sides retains the equality. 124.214.131.55 (talk) 01:31, 23 December 2008 (UTC)[reply]
You're not taking Trovatore's advice. It's easier to determine whether a statement is true than whether reasoning is valid. Which line is the first false statement? Algebraist 01:40, 23 December 2008 (UTC)[reply]
Mathematical truth should be determined by valid, logical reasoning, not by preconceptions of what should or should not be. Thus I am hesitant to evaluate truth in the absence of logic. However, I will give it a try. Expanding the first line results in the identity 1 = 1. If I remember correctly, the second line becomes . The real parts match, but the imaginary parts are not equal. Knowing that this is false still does not answer why. If the premise is valid, then doing an operation on an identity should maintain the identity. What is it about the logarithm that breaks this, and why? 124.214.131.55 (talk) 02:32, 23 December 2008 (UTC)[reply]
Michael Hardy answers below, but we can get to the root of your trouble in a simpler way. OK, you agree that the second line is false. We certainly agree that logarithms of equal things must be equal. So what's left that could break? What manipulation have you done, in getting from line 1 to line 2, besides taking the logarithm of both sides? --Trovatore (talk) 02:51, 23 December 2008 (UTC)[reply]
See List of logarithmic identities#Powers for a version of <math>\log(a^b) = b \log a</math> that works when the involved logarithms have no real values, and that explains the extra term. -- Jao (talk) 16:36, 23 December 2008 (UTC)[reply]
Thanks Jao! That was exactly what I was looking for. I had been operating under the premise that <math>\log(a^b) = b \log a</math> regardless of real vs. complex, which is apparently not correct. 124.214.131.55 (talk) 23:37, 26 December 2008 (UTC)[reply]

Taking the log of both sides preserves equality if there is a log to take. But your second line presumes there is a logarithm of −1. That's where it first gets problematic.

Now if we construe "log" to mean an extended version of the log function, for which "logarithms" of negative numbers are defined, then we hit another problem: Is it true that if log a = log b, then a = b? In other words, is this extended logarithm function a one-to-one function? Consider, for example, the fact you mentioned, that 12 = (−1)2. Why not go straight from there to the conclusion that 1 = −1? The answer is that the squaring function is not one-to-one: two different numbers can both have the same square. It is true that two different numbers cannot have the same logarithm in the conventional sense of "logarithm", but that conventional sense says there is no logarithm of −1. For "logarithms" in this somewhat more unconventional sense, the logarithm function is not one-to-one and you have the same problem as with the squaring function. Michael Hardy (talk) 02:12, 23 December 2008 (UTC)[reply]
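Michael Hardy's point can be checked concretely with Python's `cmath`, whose `log` is the principal branch of the complex logarithm (so log(−1) = iπ): the two sides of line 1 really are equal, but "bringing the exponent down" is exactly where things break, since log(z²) = 2 log(z) can fail for complex z.

```python
import cmath

lhs = (1 + 0j) ** 2                 # 1^2
rhs = (-1 + 0j) ** 2                # (-1)^2, equal to lhs
log_of_square = cmath.log(rhs)      # log((-1)^2) = log(1) = 0
two_log = 2 * cmath.log(-1 + 0j)    # 2*log(-1) = 2*i*pi, NOT 0
```

So the step log((−1)²) → 2 log(−1) silently adds a nonzero multiple of 2πi, which is where the "proof" first asserts something false.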

We have a pretty good article on invalid proofs if you want to see other examples. --MZMcBride (talk) 02:15, 23 December 2008 (UTC)[reply]

Latex with WinShell, regular expressions

I could not find any instructions in their table to replace \xy@ with \abc@ where @ is any non-letter. I would be grateful if somebody can help. Thank you in advance. twma 08:41, 23 December 2008 (UTC)[reply]
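I don't know WinShell's replace dialog well enough to say whether it supports this directly, but the pattern is easy to state in any regex-capable tool; here is the idea in Python's `re` (the lookahead `(?![A-Za-z])` matches `\xy` only when the next character is not a letter, or when nothing follows at all):

```python
import re

tex = r"\xy{1} + \xyz{2} + \xy\alpha"
# Replace \xy only when NOT followed by a letter, so \xyz is untouched:
out = re.sub(r"\\xy(?![A-Za-z])", r"\\abc", tex)
```

The same pattern works in most editors with Perl-style regular expression support.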

Determinants of a Non-Square Matrix!!!

One day I was thinking about determinants. Determinants are defined as the magnitude of a matrix, and the magnitude of a vector (of course) is defined as its length. Suddenly, I thought that (if determinant of matrix = magnitude of vector) if not only vectors of 1x1 dimension (that is, ones with the same number of rows and columns) can have magnitudes, then singular matrices can also have determinants!!! I worked out that you can get the determinant of a singular matrix like this. First see whether there are more columns or rows. Treat the extra rows or columns as vectors and take the magnitudes of each of those vectors (columns if there are more columns, rows if there are more rows). Then the singular matrix will turn into a non-singular matrix and you can take the determinant of the square matrix!!! How can this be??? Is this possible???----The Successor of Physics 14:43, 23 December 2008 (UTC) —Preceding unsigned comment added by Superwj5 (talkcontribs)

I suggest that you check your definitions of determinant and of singular matrix. It seems that what you mean by "Determinant of a Singular Matrix" has to be translated into: "norm of a non-square matrix", which is something definitely not worth an exclamation mark. I quote from the wikipedia article: "..determinant is also sometimes denoted by |A|. This notation can be ambiguous since it is also used for certain matrix norms and for the absolute value"!--PMajer (talk) 17:09, 23 December 2008 (UTC)[reply]


The determinant of a matrix is not its "absolute magnitude" if that term is taken to imply something that cannot be negative.

Sorry, I was mixed up! I corrected it.----The Successor of Physics 10:49, 24 December 2008 (UTC)

And "singular matrix" is usually defined to be a square matrix whose determinant is zero. So singular matrices certainly do have determinants by conventional definitions.

Sorry, I was mixed up(again)! I corrected it(again).----The Successor of Physics 10:49, 24 December 2008 (UTC)

You're really vague about how you're computing these determinants. You say "treat the extra rows as vectors", which of course is what they are, and take their magnitudes, which is no problem, and then you assert that somehow you get a determinant out of this. Specifically how you do that, you make no attempt to say. Michael Hardy (talk) 20:47, 23 December 2008 (UTC)[reply]

If you take away the extra rows/columns you get a square matrix, and there are bunches of methods saying how to get the determinants so I have no need to specify how you need to get a determinant out of this.----The Successor of Physics 10:49, 24 December 2008 (UTC)
The other key point is that not only do you need to say how you calculate them but also how they are useful. Determinants are very useful numbers (they are invariant under lots of very useful transformations, they are multiplicative, etc.); what properties would your "determinant" have that would make it useful? --Tango (talk) 01:13, 24 December 2008 (UTC)[reply]

Superwj5, you seem to be saying that you can define a sort of determinant of a non-square matrix (this is different from "singular," by the way) by eliminating rows and columns from it so that you get a square one, and then taking the determinant of that. Is that right? Joeldl (talk) 11:14, 24 December 2008 (UTC)[reply]

Correct! That's what I meant!----The Successor of Physics 12:47, 25 December 2008 (UTC) —Preceding unsigned comment added by Superwj5 (talkcontribs)
Then you will get many different possibilities, depending on which rows or columns you eliminate. The determinants that you get this way are called minors of your matrix. There is nothing particularly special, really, about choosing to eliminate the extra rows all the way at the bottom or the extra columns all the way to the right, rather than a different choice of rows or columns. The remaining rows/columns don't even need to be consecutive in order for this idea to make sense. (For example, a 5 × 7 matrix has 21 5 × 5 minors.) You can also get minors corresponding to smaller square submatrices if you eliminate both rows and columns. (For example, 3 × 5 matrix has 30 2 × 2 minors.) See Minor (linear algebra) for more on this. Joeldl (talk) 20:52, 25 December 2008 (UTC)[reply]
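Joeldl's counts are just products of binomial coefficients: choosing which k rows and which k columns to keep gives C(m,k)·C(n,k) minors of size k × k. A quick check in Python (`math.comb`):

```python
from math import comb
from itertools import combinations

def minor_count(m, n, k):
    """Number of k x k minors of an m x n matrix: C(m,k) * C(n,k)."""
    return comb(m, k) * comb(n, k)

# Enumerating the row/column choices explicitly for the 3 x 5 example:
choices = [(rows, cols)
           for rows in combinations(range(3), 2)
           for cols in combinations(range(5), 2)]
```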

Angles of points on surface of sphere

Let us consider the points on the surface of a sphere centered at the origin. There are three axes, and each point has an angle of rotation about each axis (-pi to +pi). If we plot the points on the surface of the sphere in 3-d, with each axis being one of the angles of rotation, we get a surface bound by the cube centered at the origin with side 2*pi. This surface remains the same even if any two of the axes are interchanged. Could someone describe this surface, provide an equation, provide a plot, etc.? Thanks! --Masatran (talk) 16:10, 23 December 2008 (UTC)[reply]

Well, let's choose the angles α, β, γ to be the angular coordinate of the projections of the point onto the yz, zx and xy planes respectively. Note that this is not well-defined for the six "poles." If we first exclude any points where one of the coordinates is zero (and deal with those separately later), then we get z/y = tan α, etc. Your surface will be contained in the surface S with equation (tan α)(tan β)(tan γ) = 1. I don't think you get all of S. The octant x, y, z > 0 corresponds to that portion of S with 0 < α,β,γ < π/2. You'll have to play around with it to see where the other seven pieces are. Joeldl (talk) 17:07, 23 December 2008 (UTC)[reply]
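With Joeldl's convention (α, β, γ the angular coordinates of the projections onto the yz, zx and xy planes, so tan α = z/y, tan β = x/z, tan γ = y/x), the relation (tan α)(tan β)(tan γ) = 1 is easy to verify numerically for a point with no zero coordinate:

```python
import math

x, y, z = 0.36, 0.48, 0.80      # a point on the unit sphere (sum of squares is 1)
alpha = math.atan2(z, y)        # angle of the projection onto the yz plane
beta = math.atan2(x, z)         # angle of the projection onto the zx plane
gamma = math.atan2(y, x)        # angle of the projection onto the xy plane
product = math.tan(alpha) * math.tan(beta) * math.tan(gamma)
```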

See also directional statistics --Point-set topologist (talk) 11:59, 24 December 2008 (UTC)[reply]

December 24

Derivation details

How was the equation

from Orbit#Analysis_of_orbital_motion arrived at? I understand that all terms of the very first equation in that section were divided by , but why does yield ? --Bowlhover (talk) 00:13, 24 December 2008 (UTC)[reply]

See Kepler's law#Deriving Kepler's first law. Bo Jacoby (talk) 09:30, 24 December 2008 (UTC).[reply]

Möbius strip

Hi. I added more info to the article at Möbius strip#Properties. Please make sure it is correct. Also, I could not figure out an additional property of strips with an odd number of half-twists when cut down the centre lengthwise. If the strip has been twisted once, the resulting strip will have four half-twists, or two full twists, and if that strip is cut in half again, it will be two separate strips. Here's the problem, though. When I cut a strip with three half-twists in half lengthwise, it produced a single strip, but when I cut that strip in half again, it produced yet another single strip, indicating that the strip formed when I cut the original three-half-twisted strip in half had also had an odd number of half-twists (actually, it was supposed to have had three half-twists plus an overhand knot when unravelled, which makes sense, and I wrote it in, but it also doesn't make sense)! So, did I do something wrong, or is there no rule on this, or does it alternate, or depend on the number of half-twists, or something else altogether? Thanks. ~AH1(TCU) 01:04, 24 December 2008 (UTC)[reply]

Your edits appear to be original research. -hydnjo talk 02:31, 24 December 2008 (UTC)[reply]
I would not say it's original research, in that it is a well-known topic in popular mathematics (for instance GooglNo(moebius,scissors)>600,000 ). Maybe one could find some classic reference (Martin Gardner for sure). As to your problem (how many strips after a number of cuttings), it is nothing more than asking what happens to the discrete set when you identify each with (and of course we may observe that cutting a strip in half twice, say, is the same as cutting it in four). So, paper strips and scissors make a nice scenography, but the underlying fact is just very simple finite combinatorics. If you then ask how the resulting strips are linked together, that requires a bit of formalism from knot theory.--PMajer (talk) 08:16, 24 December 2008 (UTC)[reply]

Did you know that the Moebius strip is a quotient space? —Preceding unsigned comment added by Point-set topologist (talkcontribs) 12:00, 24 December 2008 (UTC)[reply]

Annuity

Season's Greetings! What is the formula for the accumulated amount and the present value of a savings account annuity if I contribute more often than it compounds and if I earn interest on the deposits made between compounding periods? Thanks in advance. --Mayfare (talk) 19:07, 24 December 2008 (UTC)[reply]

I recommend Stephen G. Kellison's book called The Theory of Interest. Chapters 3 and 4 are especially relevant, but the whole book is enjoyable and helps put all of these different models into perspective.
For your specific question, I'll try to help here, but probably the book is better. I myself have been very surprised at the number of methods actually in use for calculating interest, and so would be nervous giving a specific formula lest I did not communicate the subtleties correctly. For instance, do you deposit regularly? If so, then when does the pay-out begin? Does the lending agency know you will be depositing regularly, and if so how will they be calculating the partial period interest (both simple and compound interest are commonly used within a single period)? You may also be interested in what are called sinking funds, which are savings accounts where you deposit regularly and then expect a single pay-out at the end. The amount of the single pay-out may in fact be exactly the "accumulated amount" you are looking for. For the present value, I suspect you just want to discount the ending accumulated amount over whatever the time period is.
To be honest though, I may have misunderstood the situation you are trying to describe. It seems strange to me that one would set up an annuity to pay out money while also paying in money. I am assuming that the situation goes through two distinct stages: the initial stage where you slowly build up the fund (the sinking fund stage), and then the final stage where the fund becomes an annuity that pays out regularly, but is never increased except through interest on the remaining amount. If this two-stage idea is what you are talking about, then probably you just need formulas for sinking funds during the first stage, and annuities in the second. Of course, you'll need to know exactly how the savings institution plans on paying out interest for partial periods (and, probably very importantly, whether the interest rates are fixed or not; I would assume not during the building stage). JackSchmidt (talk) 22:09, 24 December 2008 (UTC)[reply]

I never knew that there are so many types of annuities. Sorry for the misunderstanding. For simplicity, assume a fixed interest rate sinking fund in which I deposit cash at regular intervals, both between and on compounding periods, and withdraw the entire investment as a lump sum. Thank you for recommending the book. I will read it once I get hold of it. --Mayfare (talk) 20:56, 25 December 2008 (UTC)[reply]

If the contribution between time t and t+dt is called x(t)dt, and the rate of interest is r(t,y), the present value y(t) satisfies the ordinary differential equation dy = (r y + x) dt. Bo Jacoby (talk) 10:01, 26 December 2008 (UTC).[reply]
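For constant r and x, Bo Jacoby's ODE dy = (r y + x) dt integrates to the closed form y(t) = (y(0) + x/r)·e^(rt) − x/r. A numerical sketch (Python, with made-up rate and deposit figures) comparing a small-step Euler integration of the ODE against that closed form:

```python
import math

r = 0.05       # assumed constant interest rate (per year)
x = 1200.0     # assumed constant deposit rate (dollars per year)
y0, T = 0.0, 10.0

# Closed-form solution of dy = (r*y + x) dt with constant r and x:
closed_form = (y0 + x / r) * math.exp(r * T) - x / r

# Small-step Euler integration of the same ODE:
y, dt = y0, 1e-4
for _ in range(int(T / dt)):
    y += (r * y + x) * dt
```

The gap between `closed_form` and the bare deposits (x·T = 12000 here) is the compounded interest.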

Stirling's approximation and asymptotics

I am studying a few particular positive integer valued sequences, and am trying to write down nice "asymptotic expressions" for them. I suspect understanding positive integer valued sequences is utterly impossible for the human mind, but I would expect to have gotten a little farther than I currently have.

I would like something written in a textbook form which describes the various approximations of a nice function like n! with some emphasis given to why the previous approximations have been improved by the new ones. Does anyone know of such a textbook?

Trying to construct such a thing has led me along this path:

  • (n-1)*log(2) < log(n!) < n*log(n) is the naive estimate
  • n*log(n) - n + log(n)/2 + O(1) < log(n!) < n*log(n) - n + log(n)/2 + O(1) is the DeMoivre result
  • n*log(n) - n + log(n)/2 + log(2π)/2 + o(1) < log(n!) < n*log(n) - n + log(n)/2 + log(2π)/2 + o(1) is the Stirling result
  • n*log(n) - n + log(n)/2 + log(2π)/2 + 1/(12n+1) < log(n!) < n*log(n) - n + log(n)/2 + log(2π)/2 + 1/(12n) is a more explicit Stirling result
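The two-sided bound in the last item (Robbins' form of Stirling's result) is sharp enough to check numerically; `math.lgamma(n+1)` gives log(n!) accurately enough for the purpose:

```python
import math

def core(n):
    # The common part of both bounds: n*log(n) - n + log(n)/2 + log(2*pi)/2
    return n * math.log(n) - n + math.log(n) / 2 + math.log(2 * math.pi) / 2

robbins_ok = all(
    core(n) + 1 / (12 * n + 1) < math.lgamma(n + 1) < core(n) + 1 / (12 * n)
    for n in range(1, 201)
)
```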

Now up to here it seems ok: there is not much pattern to the madness, but it is not too hard to see an improvement at each stage. However, then Stirling's approximation takes two infinite journeys and suddenly nothing is clear any more.

  • There is some sequence (the Stirling sequence) a_k such that n*log(n) - n + log(n)/2 + log(2π)/2 + Sum( a_k/n^k,k=1..K ) + o(1/n^K) < log(n!) < n*log(n) - n + log(n)/2 + log(2π)/2 + Sum( a_k/n^k,k=1..K ) + o(1/n^K), but the little oh term has some fairly pathological behavior that is only hinted at in an image in the article.
  • There is some other sequence of approximations that is not as pathological

In particular, in the first case in many ways the more "precise" estimates are actually worse, and somehow the second sequence tries to fix this.

So I am a little worried that there are two "right answers", and I tried to consider how unique an answer could be, and how to tell if an answer was any good. I spent a little time trying to figure out if the complex logarithm was defined and/or analytic at infinity, and if so whether using its Taylor expansion would allow me to remove the n*log(n) term and instead write the estimates as Laurent polynomials. Using just basic facts about the logarithm made it look somewhat hopeless, and so I wondered what sort of terms *should* be allowed. There is an article on asymptotic scales, but it doesn't exactly address the basic question of why they exist in human discourse.

It seems possible to me that asymptotics were just omitted from the famous "lies, damned lies, and asymptotics" quote, so that perhaps one just chooses an asymptotic scale to suit one's nefarious purposes, but I was hoping for something a little more definitive since I don't actually have any such nefarious things (or at least they were satisfied long ago). In other words, I have this function f(n), and I personally am happy with poly1(n) < log(f(n)) < poly2(n) where poly1 and poly2 are more or less unspecified, but I suspect people want something more precise, like poly1(n) + o(1) < log(f(n)) < poly1(n) + o(1). Maybe I can get to that point and maybe I can't. How do I tell how close I am, or whether my estimate is "better"? What if, like Stirling's approximation, people want "better" ones; what is the shape of the next better one?

Is there some standard sequence of expressions that are the asymptotic approximations of a function? I've read through asymptotic expansion, but didn't get anything out of it. I would have liked it to be "Laurent polynomials of lower degree k as k goes to infinity", but I think log(n!) has no such approximation, since it needs that pesky n*log(n) term. What if it needed a 1/n^13*log(n) term, or a log(n)/(n^5*log(log(n))) term? This isn't completely outrageous, as one of my "other" estimates has log(n)/log(log(n))*n^3 in it, and I just have no idea how it compares to my polynomial bounds. Have I improved the bound, or just made the expression longer and harder to use?

At any rate, I basically just need some standard introduction to this stuff. I hope there is some mathematical answer, but if the answer is just a cultural tradition of choices of asymptotic scale, then I at least would like that tradition spelled out somewhat clearly in a citable source. JackSchmidt (talk) 21:55, 24 December 2008 (UTC)[reply]

Textbooks: I'm not a specialist, but if I understand your needs, Concrete Mathematics by Graham, Knuth & Patashnik may be a very interesting and enjoyable read, for it describes by examples various techniques and tools; the recent Analytic Combinatorics by Flajolet & Sedgewick has a more foundational setting, and there is a whole part on deriving asymptotics via complex analysis (by the way, you can download this book for free from the authors' homepage until the end of this year, if I remember correctly). One powerful technique for deriving asymptotics for an integer sequence is working directly on its generating function: the idea is that you can get asymptotics on the coefficients of a power series from very little information about it, e.g. its functional or differential equations. Come to think of it, even the radius of convergence alone may be seen as a first asymptotic statement about the coefficients; one beautiful classical example is Schur's theorem in combinatorics.
As to the Stirling approximation, or more generally asymptotic expansions, the idea is to get f(x) ~ sum_k a_k φ_k(x), meaning e.g. that for any n one has f(x) − sum_{k<=n} a_k φ_k(x) = o(φ_n(x)) as x → ∞; therefore for fixed n the approximation is better and better as x becomes larger. Nevertheless, for a fixed range of x there is possibly an optimal n giving the best approximation there, and as a matter of fact the series is possibly divergent as n → ∞ (this is indeed the case for the Stirling series), so larger n just give a worse approximation in that range. However I would not call this phenomenon pathological; it is similar in nature to the non-convergence of the Taylor expansion of a smooth, non-analytic function. The asymptotic expansion at a finite point or at infinity just involves a notion of higher-order contact or tangency; the convergence of the series is an additional fact that holds under additional hypotheses. So the choice of the approximation really depends on the use we want to make of it. I have the impression I have not answered most of your questions however. (I have in mind some other more specific texts, where the general problem of existence and uniqueness of the asymptotic expansion of a function by means of a given asymptotic sequence φ_n is also treated, but at the moment I remember none precisely :) ah yes, the ones quoted in the article of course.)--PMajer (talk) 11:56, 25 December 2008 (UTC)[reply]

See also here Count Iblis (talk) 00:39, 29 December 2008 (UTC)[reply]

Thanks for the replies. They are definitely helpful.
Unfortunately, I think they do not answer my basic question of "how does one compare two asymptotic approximations." In particular, I don't really need help deriving asymptotics for a sequence whose terms are known, but for judging various asymptotics for a sequence whose terms will never be known exactly by humans.
The book of Flajolet and Sedgewick is interesting, and somewhat helpful. I am not counting pretty things, and there will not be a generating function. What I am doing however is finding pretty things that give over- or under-counts, and so bounding the true thing between them. Being able to more quickly estimate some of the pretty things will be helpful, though I have not had any trouble so far just looking them up.
Which references discuss existence and uniqueness of asymptotic expansions? There are lots of wikipedia articles and lots of references in them, virtually none of which were annotated or used for relevant inline citations. I would be happy to be able to answer clearly the following two questions:
Is there exactly one sequence of real numbers c_k (k>0) such that for every positive integer K, there are two real numbers d_K,e_K > 0 such that for all n
d_K/n^(K+1) < log(n!) - n*log(n) - n + log(n)/2 + log(2π)/2 + sum( c_k/n^k, k=1..K) < e_K/n^(K+1)
Is there exactly one sequence of real numbers c_k (k>-2) such that for every integer K > -2, there are two real numbers d_K,e_K > 0 such that for all n
d_K/n^(K+1) < log(n!) - n*log(n) - sum( c_k/n^k, k=-1..K) < e_K/n^(K+1)
The hoped for answers are yes for the first and no for the second.
The article of Boyd had my hopes up because it asks many of the same questions I ask in much the same language, but unfortunately doesn't exactly answer them. It did however explain some fairly reasonable situations where my Laurent series (his power series) could not possibly answer reasonable questions. It wasn't clear to me if this was suggesting there cannot be one true asymptotic scale, or whether no asymptotic scale would work. I think it does partially answer one of the bolded questions: power series are definitely the standard asymptotic scale in engineering and physics. Its heuristic treatment of error terms for the "optimally truncated" series was also helpful in explaining why people might find infinitely many approximations to the same function useful (I would summarize it as "so we can accelerate convergence"), though it did not particularly explain the more fundamental question of why they might find two approximations useful.
Thanks again to both for the references. The Boyd article was an enjoyable read, and I am still having a good time reading the F and S book, mostly from the standpoint of learning basic combinatorics. Thanks in advance for existence/uniqueness/comparison references. JackSchmidt (talk) 03:14, 29 December 2008 (UTC)[reply]
As to the uniqueness issue, I would say yes. For if (e.g. in the first case) you had another such sequence c'_k, together with its d'_K and e'_K, consider the least k such that c_k ≠ c'_k. Subtracting the two chains of inequalities at K = k (the partial sums agree below index k), you get:
|c_k − c'_k| / n^k < (d_k + e_k + d'_k + e'_k) / n^(k+1),
which cannot be true for all n if c_k ≠ c'_k. Notice that this is in fact the argument for uniqueness of the asymptotic expansion for any given asymptotic scale. By linearity everything reduces to uniqueness for the expansion of 0; one just looks at the first nonzero coefficient, if any. I hope this is what you needed. Another two trivial remarks: in general, even if for a fixed asymptotic sequence you do have uniqueness of the coefficients c_k, there are other possible variations of the sequence that may lose uniqueness in a wider context (but as an example of non-uniqueness of asymptotic expansion that's cheating). Also: it is true that power sequences are the most common asymptotic scales; in fact very often one has forms like f(x) = g(x)(a_0 + a_1/x + a_2/x^2 + ...), and the asymptotic for f is then written dividing everything by the common factor g(x), thus taking the form of an asymptotic expansion of f/g by means of plain powers. --PMajer (talk) 08:06, 29 December 2008 (UTC)[reply]

December 25

No questions were asked.

December 26

inequality involving a sequence

Hello. Consider the following sequence of natural numbers given by ordering the elements in the following set: . Here c is some fixed natural number. Then prove that for any two elements and of B, . I want to prove this mathematically. I'll be grateful for any help--Shahab (talk) 09:05, 26 December 2008 (UTC)[reply]

Suppose
Now show that
and so
and then
so
I'll let you take it from there. Gandalf61 (talk) 11:03, 26 December 2008 (UTC)[reply]
Thanks for the help--Shahab (talk) 04:48, 28 December 2008 (UTC)[reply]

Relation between trigonometric functions and exponential / multiplicative reciprocal functions

Hi. A while ago, I stumbled upon an interesting problem. Consider a circle of radius r, and a chord c = 1 unit of length. Let a be the (smaller) arc marked off by the chord. The goal of the problem is to develop a function f(n) = r that calculates the minimum value of r for which a - c <= 10^-n. In other words, find a function that tells you the smallest radius for which the arc and the chord get so close to each other in length that, within an error margin of 10^-n, they are practically equal.

Using the Law of Cosines for an isosceles triangle and the base-10 logarithm, I came to the equation:

with the angle expressed in radians. The only problem now is solving it for r, but since r is both inside and outside of the arccosine function, it's tricky. Not even a CAS will solve it algebraically. I noticed, though, that when graphed, it is similar to a reciprocal function with an even exponent (like 1/x^2) (axis symmetry, two asymptotes, etc...). Can an arccosine function with that type of fraction as an argument be re-written as a reciprocal function? Also, the whole formula above yields, when graphed, a square-root-function-like shape (exponential with negative exponent). Or it could be logarithmic... Any clues as to how to solve the above equation for r algebraically would be greatly appreciated. (I've tried regression, and came up with weird numbers that I'd like to mean something...) Thanks, sfaefaol 12:30, 26 December 2008 (UTC)[reply]

The chord is 2r·sin(a/(2r)) = 1. The condition is a - 1 = 10^-n, or a = 1 + 10^-n. Substituting gives 2r·sin((1 + 10^-n)/(2r)) = 1, where r is the unknown. Define a new variable x by x = (1 + 10^-n)/(2r) and get the equation 2r·sin(x) = 1. Use the Taylor series approximation sin(x) ≈ x - x^3/6, together with x ≈ 1/(2r), to get the quadratic equation 24·r^2 = 10^n. Solve: r = 10^(n/2)/√24. So your result is f(n) ≈ 10^(n/2)/(2√6). Bo Jacoby (talk) 15:55, 26 December 2008 (UTC).[reply]
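A numerical cross-check of the small-angle estimate (Python sketch): with chord 1 and half-angle θ, sin θ = 1/(2r) and the arc is 2rθ, so arc − chord = 2r(θ − sin θ) ≈ 2r·θ³/6 ≈ 1/(24r²); setting this equal to 10^-n gives r ≈ 10^(n/2)/√24. A bisection search on the exact expression agrees to well under a percent for moderate n:

```python
import math

def arc_minus_chord(r):
    theta = math.asin(1.0 / (2.0 * r))   # half the central angle
    return 2.0 * r * theta - 1.0         # arc length minus the unit chord

def min_radius(n, iters=200):
    """Smallest r with arc - chord <= 10^-n (arc_minus_chord decreases in r)."""
    target, lo, hi = 10.0 ** -n, 0.5 + 1e-9, 10.0 ** (n / 2 + 2)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if arc_minus_chord(mid) > target:
            lo = mid          # gap still too big: need a larger radius
        else:
            hi = mid
    return hi

def estimate(n):
    return 10.0 ** (n / 2) / math.sqrt(24.0)
```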

acromegaly

What are the chances that 2 brothers-in-law who did not grow up in the same city or live in the same city both develop acromegaly/pituitary tumors? —Preceding unsigned comment added by CAElick (talkcontribs) 18:44, 26 December 2008 (UTC)[reply]

Maybe you had better redirect the question to the Reference desk/Science for more precise information. Anyway, since there is no kinship between them and they come from different places, with no other information I would say the events are independent. That is, the information that one of them develops a tumor should not affect the probability that the other one will, which remains the same as for any other person.--PMajer (talk) 21:21, 26 December 2008 (UTC)[reply]
It sounds like they are independent events, unless there is some commonality which wasn't mentioned, like being exposed to the same chemical while visiting each other. If they are independent events, then just multiply the probabilities of each event to find the probability of both happening simultaneously. However, if you consider the probability that the two would both develop the same disease from the rather large list of rare diseases, not just this specific one, then the chances are much higher. StuRat (talk) 21:29, 26 December 2008 (UTC)[reply]
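To make StuRat's multiplication rule concrete (Python, with a deliberately made-up prevalence p; the real figure is a question for the Science desk):

```python
p = 1e-4              # HYPOTHETICAL lifetime probability for one person
both = p * p          # two specific, independent people both affected

# StuRat's second point: across many acquaintance pairs (and many rare
# diseases), *some* coincidence is far likelier than any one specific pairing.
k = 100               # hypothetical number of acquaintance pairs considered
some_pair = 1 - (1 - both) ** k
```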

maths quiz

Determine all 3-digit numbers N which are divisible by 11 and where N/11 is equal to the sum of the squares of the digits of N. Don deepan (talk) 19:04, 26 December 2008 (UTC)[reply]

OK, I've done that. FYI, there are exactly two solutions. Algebraist 19:19, 26 December 2008 (UTC)[reply]
This only requires a couple of lines in Mathematica. 122.107.203.230 (talk) 22:18, 28 December 2008 (UTC)[reply]
For[n=110,n<1000,n+=11,
  a=Mod[Quotient[n,100],10];
  b=Mod[Quotient[n,10],10];
  c=Mod[Quotient[n,1],10];
  If[n/11==a^2+b^2+c^2,Print[n]]
  ]
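For readers without Mathematica, the same brute force is a one-liner in Python, and it confirms Algebraist's count of exactly two solutions:

```python
# All 3-digit multiples of 11 whose quotient by 11 equals the sum of
# the squares of their digits:
solutions = [n for n in range(110, 1000, 11)
             if n // 11 == sum(int(d) ** 2 for d in str(n))]
```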

maths quiz

Given positive real numbers a, b, and c such that a + b + c = 1, show that a^a·b^b·c^c + a^b·b^c·c^a + a^c·b^a·c^b <= 1. Don deepan (talk) 19:06, 26 December 2008 (UTC)[reply]

AM-GM inequality. Algebraist 19:08, 26 December 2008 (UTC)[reply]
have you attempted to solve this problem yet? Deathgleaner 00:37, 27 December 2008 (UTC)[reply]
Please define what you mean by aabbcc etc. If you mean a*a*b*b*c*c then all 3 of your terms are identical, so you are asking if aabbcc <= 1/3. If it means something else what? -- SGBailey (talk) 00:20, 28 December 2008 (UTC)[reply]
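Reading the flattened exponents as superscripts, the claim is a^a·b^b·c^c + a^b·b^c·c^a + a^c·b^a·c^b <= 1. Algebraist's AM-GM hint works because each product is a weighted geometric mean with weights summing to 1 (e.g. a^b·b^c·c^a <= ab + bc + ca, with weights b, c, a), and the three upper bounds add up to (a + b + c)^2 = 1. A random numerical sanity check (Python):

```python
import random

def lhs(a, b, c):
    # Reading the three terms with the exponents as superscripts:
    return a**a * b**b * c**c + a**b * b**c * c**a + a**c * b**a * c**b

random.seed(1)
holds = True
for _ in range(10_000):
    a, b = random.random(), random.random()
    if a + b >= 1.0:
        continue
    c = 1.0 - a - b
    holds = holds and lhs(a, b, c) <= 1.0 + 1e-12
```

Equality is approached at a = b = c = 1/3, where the sum is exactly 1.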

December 27

Square numbers

I got a 202-digit square number:

9....9 z 0....0 9

First there are 100 nines, then an unknown digit called z, then 100 zeros, and then at last again a nine. So the 102nd digit from the right is not known. How can you find this number? (Sorry for my somewhat broken English.) --85.178.8.133 (talk) 22:27, 27 December 2008 (UTC)[reply]

Take a look at 97^2, 997^2, 9997^2, 99997^2 ... Spot the pattern. Prove it continues. (Hint: expand (10^n - 3)^2.) Gandalf61 (talk) 23:28, 27 December 2008 (UTC)[reply]
(and just to make sure that no other z would work, one may observe: in order to produce a 202-digit square, 10^101 is too large; in order to get also the 100 initial nines, 10^101 - 6 is too small; and between these two the only number that produces the last nine when squared is the one written by Gandalf61. So the solution is unique even hiding all zeros...) PMajer (talk) 09:35, 28 December 2008 (UTC)[reply]
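Since Python integers are exact at this size, the whole claim, including the hidden digit z = 4, can be confirmed directly:

```python
n = 10 ** 101 - 3
square = str(n * n)
# 100 nines, the hidden digit z = 4, 100 zeros, a final 9: 202 digits in all.
expected = "9" * 100 + "4" + "0" * 100 + "9"
```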

December 28

math question

If I gave 114 million families 114 million dollars per family, how much money would I have spent???? —Preceding unsigned comment added by 71.31.249.200 (talk) 13:27, 28 December 2008 (UTC)[reply]

Well, 114 000 000*114 000 000=114*114*1000000*1000000. Or just use a calculator. Taemyr (talk) 13:31, 28 December 2008 (UTC)[reply]
I would suggest you to try with this--PMajer (talk) 14:13, 28 December 2008 (UTC)[reply]
I don't really think a calculator is necessary for this, but it makes everything a lot faster. If I didn't have a calculator, I would split the number into parts. First, I would break the product into 114^2 × (1,000,000^2). I know that the square of 1 million, or 1×10^6, is one "North American" trillion, or 1×10^12. Next, I would deal with the 114^2: first estimate it as 100×100, then 114×100, then finally work out 114×114. After I get the result, I would then multiply that number by 1,000,000,000,000. If you are allowed to use a calculator, remember that many calculators don't fit that many digits, and most scientific calculators display the result as a number multiplied by ten to the power of another number. Hope this helps. Thanks. ~AH1(TCU) 17:41, 28 December 2008 (UTC)[reply]
Dealing with 114^2 can be done even more easily. I know 11^2 = 121, so I can take 114^2 = (110 + 4)^2, and we know then that this is equal to 110^2 + 2·110·4 + 4^2. Taemyr (talk) 19:27, 28 December 2008 (UTC)[reply]
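Putting the hints together (Python integers are exact, so there are no calculator-overflow worries):

```python
square_114 = 110 ** 2 + 2 * 110 * 4 + 4 ** 2    # Taemyr's (110 + 4)^2 expansion
total = 114_000_000 * 114_000_000               # = square_114 * 10**12
```

That is 12,996 followed by twelve zeros: about 13 quadrillion dollars.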
You can easily calculate how many dollars have been spent, but money is supposed to be something that can be exchanged for goods and services. The dollar would be drastically devalued by such extravagance so you'd have spent the money but they wouldn't have received most of it. Dmcq (talk) 19:25, 28 December 2008 (UTC)[reply]
Exactly; what's the point of giving me the 114 million dollars if you also give them to my neighbours? --PMajer (talk) 22:21, 28 December 2008 (UTC)[reply]
Incidentally, the answer is more money than there is on Earth. StuRat (talk) 10:29, 29 December 2008 (UTC)[reply]

May I ask what is the point of this question, apart from finding the square of a number? 122.107.203.230 (talk) 22:28, 28 December 2008 (UTC)[reply]

Jet interpolation, or what?

Consider the following interpolation problem: find the minimal degree polynomial having prescribed derivatives up to order k_i at r prescribed complex points x_i, for i = 1, ..., r. In other words one looks for the polynomial with prescribed k_i-jets at each point x_i. Thus it generalizes both the Taylor expansion and the Lagrange interpolation. What is the name of this interpolation problem and the corresponding interpolation polynomial? I thought it was Hermite's, but it seems that the Hermite interpolation problem is limited to prescribing only the derivatives up to the first order. By the way, the problem can be equivalently posed as a system of r simultaneous congruences P ≡ P_i mod (x - x_i)^(k_i + 1), and in fact it has a unique solution with deg P < Σ_i (k_i + 1) by the Chinese remainder theorem in C[x], an application that could be mentioned in that article --I would do it myself, if only I knew what I'm talking about :) --PMajer (talk) 17:18, 28 December 2008 (UTC)[reply]

Are you sure you are not talking about general Hermite interpolation? Nodal values are given for all derivatives up to order k, giving a polynomial of degree r(k+1) − 1--Shahab (talk) 18:06, 28 December 2008 (UTC)[reply]
Thank you Shahab! I was missing the "general", then. We don't need to ask for the same number of derivatives at each point, of course. What is the basis of polynomials, analogous to Lagrange's ℓ_j(x) = Π_{k≠j} (x − z_k)/(z_j − z_k)? Do you have an on-line reference? (I'm isolated at the moment!)--PMajer (talk) 18:37, 28 December 2008 (UTC)[reply]
Surprisingly I couldn't find an online reference. This is the best I got. Maybe it would be best if you looked it up in some good numerical analysis book. Note that it is possible to set up specialized Hermite interpolation polynomials which do not include all functional and/or derivative values at all nodes. There may be some missing functional or derivative values at certain nodes, which I think is your case. This lowers the degree of the interpolating polynomial. Cheers--Shahab (talk) 15:28, 29 December 2008 (UTC)[reply]
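For what it's worth, general Hermite interpolation can be computed with Newton divided differences extended to repeated nodes. A sketch in Python (the sample data P(0)=1, P'(0)=2, P(1)=5 is made up purely for illustration; the resulting polynomial is 2x² + 2x + 1):

```python
from math import factorial

def hermite_coeffs(points):
    # points: list of (x, [f(x), f'(x), ...]); returns Newton-form nodes and coefficients
    z, q0 = [], []
    for x, ders in points:
        for _ in ders:        # repeat each node once per prescribed derivative
            z.append(x)
            q0.append(ders[0])
    n = len(z)
    ders_at = dict(points)
    table = [q0[:]]
    for j in range(1, n):
        col = []
        for i in range(n - j):
            if z[i + j] == z[i]:
                # all j+1 nodes coincide: divided difference = f^(j)(x) / j!
                col.append(ders_at[z[i]][j] / factorial(j))
            else:
                col.append((table[j-1][i+1] - table[j-1][i]) / (z[i+j] - z[i]))
        table.append(col)
    return z, [table[j][0] for j in range(n)]

def newton_eval(z, coeffs, x):
    # Horner evaluation of the Newton form
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - z[k]) + coeffs[k]
    return result

# P(0) = 1, P'(0) = 2, P(1) = 5  ->  P(x) = 2x^2 + 2x + 1
z, c = hermite_coeffs([(0, [1, 2]), (1, [5])])
assert all(abs(newton_eval(z, c, x) - (2*x*x + 2*x + 1)) < 1e-12
           for x in (0, 0.5, 1, 3))
```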

December 29

MY DOUBTS

1.

There are three brokers (X, Y, Z) involved in stock market trading. X follows the tactic of buying shares at the opening and selling them at the closing. Y follows the tactic of getting an equal number of shares every hour, and Z follows the tactic of dividing the total sum into equal amounts and buying shares every hour (EXAMPLE: at 11am he will divide his current amount and buy the shares). The trading starts at 10am and closes at 3pm. (Y and Z get shares every hour.) All the shares bought are sold at the close of the day, i.e. 3pm. NOTE: All three start the day with an equal amount.


My questions: a. On a day where the prices of the shares are increasing linearly, who will have the maximum return and who will have the minimum return (maximum loss)? Please give me the explanation.

b. On a day the prices of the shares are fluctuating (going up, coming down), who is on the safer side, i.e. who has the chance of getting maximum returns, and why?

c. Of the above mentioned tactics, which involves minimum risk?


2.

In a problem a function f(x) = ax^2 + bx + c is given. There is a relation given as f(2) = f(5) and f(0) = 5. It's mentioned that the constant a is not equal to 0 (a != 0).

a. Is it possible to find the values of all three constants (a, b, c) using the above details?

please help me.

Thank you... —Preceding unsigned comment added by 220.227.68.5 (talk) 13:06, 29 December 2008 (UTC)[reply]

1) I didn't really follow your descriptions of methods Y and Z, but you may be talking about dollar cost averaging, which is a good strategy in the long run.
2) Based on f(0) = 0 we get that c = 0. From f(2) = f(5) we get a(2)^2 + b(2) + 0 = a(5)^2 + b(5) + 0. This reduces to 4a + 2b = 25a + 5b. This becomes -21a = 3b, which simplifies to -7a = b. So, it looks like any equation where the b value is -7 times the a value, and c = 0, will satisfy those conditions. StuRat (talk) 13:27, 29 December 2008 (UTC)[reply]
Correction: I used f(0) = 0, when you said f(0) = 5. That changes c from 0 to 5, but otherwise my answer is still correct. StuRat (talk) 21:17, 29 December 2008 (UTC)[reply]
Your questions look very much like homework - have you actually tried to solve these yourself? -mattbuck (Talk) 13:33, 29 December 2008 (UTC)[reply]
It didn't look like homework to me, especially the first part, because it would have been more clearly stated if it was. StuRat (talk) 13:39, 29 December 2008 (UTC)[reply]

If ƒ(2) = ƒ(5) then the axis of the parabola is half-way between 2 and 5, i.e. at 7/2 = 3.5. And since ƒ(0) = c, you need c = 5. So ƒ(x) = a(x − 7/2)^2 + 5 − (49/4)a, i.e. ƒ(x) = ax^2 − 7ax + 5.

The coefficient a can be any number except 0 and it will satisfy the conditions you gave. All pretty routine. Michael Hardy (talk) 18:24, 29 December 2008 (UTC)[reply]
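The conclusion that any a ≠ 0 works with b = −7a and c = 5 is easy to sanity-check. A small sketch (the sample values of a are arbitrary):

```python
# verify that f(x) = a x^2 - 7a x + 5 satisfies f(2) = f(5) and f(0) = 5
def check(a):
    f = lambda x: a * x**2 - 7 * a * x + 5
    return f(2) == f(5) and f(0) == 5

assert all(check(a) for a in (1, -3, 2.5))
```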

Summing 1's and 2's

In how many ways can I sum 1's and 2's to a given number, while only using an amount of 1's and 2's divisible by two? I see that without the divisibility restriction I would have f(n) = f(n−1) + f(n−2) ways of doing it, i.e. some Fibonacci number as the series begins with f(1)=1 and f(2)=2, but now I have no idea. --88.194.224.35 (talk) 13:35, 29 December 2008 (UTC)[reply]

A useful strategy for this sort of thing is to work out the first few values (up to 10, say) by hand and (either notice that the pattern is now obvious or) look it up on Sloane's. Algebraist 13:57, 29 December 2008 (UTC)[reply]

Nice link you have. Assuming my code is correct:

#include <stdio.h>

/* g is the target sum, read from the command line. */
int g;

/* f(n, s): number of ways to extend a partial sum s built from n
   terms (each 1 or 2) into a full sum equal to g, counting only
   sequences with an even total number of terms. */
int f(int n, int s)
{
    if (s > g) {
        return 0;               /* overshot the target */
    } else if (s == g) {
        return n % 2 ? 0 : 1;   /* count only even-length sums */
    } else {
        return f(n+1, s+1) + f(n+1, s+2);   /* append a 1 or a 2 */
    }
}

int main(int argc, char **argv)
{
    if (argc < 2 || sscanf(argv[1], "%d", &g) != 1) {
        fprintf(stderr, "usage: %s target\n", argv[0]);
        return 1;
    }
    printf("%d\n", f(0, 0));
    return 0;
}

...the relevant entry seems to be A094686. Thanks! --88.194.224.35 (talk) 14:55, 29 December 2008 (UTC)[reply]
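The brute-force recursion above re-explores every branch, so it blows up for larger targets. An equivalent dynamic-programming sketch, tracking the parity of the number of parts and cross-checked against exhaustive enumeration (the values 0, 1, 2, 2, 4, 7 for n = 1…6 are computed here by hand, not quoted from the OEIS entry):

```python
from itertools import product

def even_compositions(g):
    # e[n]: compositions of n into 1s and 2s with an even number of parts
    # o[n]: same, with an odd number of parts
    e, o = [1, 0], [0, 1]          # n = 0, 1
    for n in range(2, g + 1):
        e.append(o[n-1] + o[n-2])  # appending a final 1 or 2 flips parity
        o.append(e[n-1] + e[n-2])
    return e[g]

def brute(g):
    # exhaustive check: all tuples of 1s and 2s with even length summing to g
    return sum(1 for k in range(0, g + 1, 2)
                 for parts in product((1, 2), repeat=k)
                 if sum(parts) == g)

assert [even_compositions(n) for n in range(1, 7)] == [0, 1, 2, 2, 4, 7]
assert all(even_compositions(n) == brute(n) for n in range(1, 11))
```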

acromagaly chances

what is the chance of 2 ex brother's-in-law developing acromegaly or the same type of pituitary tumor at the same time. These men live in different cities & were married to twin sisters, who are very close. The chances of developing acromegaly is 1 in 20,000+. Could foul play be involved? —Preceding unsigned comment added by CAElick (talkcontribs) 14:50, 29 December 2008 (UTC)[reply]

How many different ways do you have to spell the word in question? 3 so far, in this question and in the near-identical one you asked on 26 December. Did you trouble to read the answers given then? →86.132.165.199 (talk) 16:42, 29 December 2008 (UTC)[reply]

on analytical geometry

what's the difference (in both the formulae and the concept) between: perimeter of a rectangle in 2D geometry & surface area of a rectangular box in 3D geometry —Preceding unsigned comment added by 61.2.228.116 (talk) 14:55, 29 December 2008 (UTC)[reply]

The difference between 2(a+b) and 2(ab+bc+ca) is fairly obvious. As concepts, one gives a length and is linear; the other gives an area and is quadratic.→86.132.165.199 (talk) 16:49, 29 December 2008 (UTC)[reply]
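A quick numerical check of the two formulae (sample dimensions chosen arbitrarily):

```python
# perimeter of an a x b rectangle vs surface area of an a x b x c box
def perimeter(a, b):
    return 2 * (a + b)

def surface_area(a, b, c):
    return 2 * (a*b + b*c + c*a)

assert perimeter(3, 4) == 14          # 2(3 + 4)
assert surface_area(3, 4, 5) == 94    # 2(12 + 20 + 15)
```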

inverting pixels with convolution matrix

I'm a math dummy trying to use PHP's imageconvolution function to modify some images on the fly (it is much, much faster than trying to manipulate the pixels manually in PHP). I'd like to invert the image (if a pixel has a brightness of 255, then it should be 0; if it has 0, it should be 255, and so on)—what convolution matrix/offset/etc. should I apply? This seems like it should be fairly obvious but I'm struggling, and Google has really been of no help. I thought I'd ask here since this seems like a mathy question more than a computery question (it doesn't require knowledge of the particular language, or any language, to answer—it requires knowing which matrix to apply to get the desired results). --98.217.8.46 (talk) 21:49, 29 December 2008 (UTC)[reply]

Nevermind, figured it out on my own... do a matrix of all 0s with -1 in the center, have a divisor of 1 and an offset of 255... sigh... --98.217.8.46 (talk) 22:20, 29 December 2008 (UTC)[reply]
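That kernel can be simulated outside PHP. This Python sketch applies the 3×3 kernel, divisor and offset by hand (imageconvolution itself is not used; the clamping to [0, 255] mirrors typical 8-bit image behaviour) and confirms each pixel p maps to 255 − p:

```python
# a 3x3 convolution matrix of zeros with -1 at the centre, divisor 1, offset 255
kernel = [[0, 0, 0], [0, -1, 0], [0, 0, 0]]
divisor, offset = 1, 255

def convolve_pixel(img, x, y):
    # weight the 3x3 neighbourhood (edges clamped), then divide and add the offset
    h, w = len(img), len(img[0])
    acc = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            px = img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
            acc += kernel[dy + 1][dx + 1] * px
    return max(0, min(255, acc // divisor + offset))

img = [[0, 64, 128], [192, 255, 30], [7, 100, 200]]
inverted = [[convolve_pixel(img, x, y) for x in range(3)] for y in range(3)]
assert inverted == [[255 - p for p in row] for row in img]
```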

Composition of polynomials is associative

This seemed as if it should be a triviality, but it appears there's actually a little more going on than I thought at first. Taking the usual definition for polynomials in algebra, as sequences with finitely many nonzero terms and addition and multiplication defined as usual, one can form composite polynomials: if

f = Σ_i a_i X^i

and g is another polynomial, then

f ∘ g = Σ_i a_i g^i

is well-defined. How then to show that for polynomials one has

(f ∘ g) ∘ h = f ∘ (g ∘ h) ?

If one attempts to verify this relation directly in terms of coefficients one quickly runs into a combinatorial explosion. On the other hand, composition of functions in general is associative, so can't we just get away with saying

(f ∘ g) ∘ h = f ∘ (g ∘ h)

and that's that? The problem seems to be that our 'functions' are actually sequences and one can't compose sequences directly.

If, however, we choose a nice infinite field then we can conclude that two polynomials are equal iff their evaluation maps are equal, and these can be composed. Then we do get the result from the general associativity of functions.

Clearly my algebra is lacking, but can it really be the case that this is what's needed? It seems very strange!  — merge 22:47, 29 December 2008 (UTC)[reply]

If polynomials can be put into one-to-one correspondence with polynomial functions in such a way that composition of either corresponds to composition of the other, then the fact that composition of functions is associative does the job. That certainly works if the coefficients are real, or if they're complex. I never thought about this one if the coefficients are in rings in general. It seems to me that maybe failure of the "one-to-one"ness mentioned above might be the difficulty that prevents it from being that simple in general. Just thinking out loud—no actual answer yet........ Michael Hardy (talk) 00:52, 30 December 2008 (UTC)[reply]
Consider the polynomials as maps from R[X] into itself, and use the associativity of these maps at X. 24.8.62.102 (talk) 01:45, 30 December 2008 (UTC)[reply]
I'd begun to think along these lines. It seems almost too easy, but I guess this is the right way! If so this question has given me a new appreciation for the non-obvious relationships that can hide behind the seemingly simple property of associativity, and the power of the apparently trivial generic result for composition of functions. I also find it a bit odd that none of the algebra books I have to hand discuss this.  — merge 03:41, 30 December 2008 (UTC)[reply]
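The coefficient-level statement can at least be tested concretely: represent polynomials as coefficient lists (lowest degree first) and compose via Horner's scheme. Associativity then holds on the nose in exact integer arithmetic — a sketch, with arbitrary sample polynomials f, g, h:

```python
def padd(p, q):
    # coefficient-wise sum
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def pmul(p, q):
    # convolution of coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def pcompose(p, q):
    # Horner: p(q) = a0 + q*(a1 + q*(a2 + ...))
    r = [p[-1]]
    for a in reversed(p[:-1]):
        r = padd(pmul(r, q), [a])
    return r

f, g, h = [1, 0, 2], [3, -1, 1], [0, 2, 0, 5]   # 1+2x^2, 3-x+x^2, 2x+5x^3
lhs = pcompose(pcompose(f, g), h)
rhs = pcompose(f, pcompose(g, h))
assert lhs == rhs
```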

I agree with you, merge, I had the same feelings in very similar circumstances. There are operations and related properties that are clear and obvious for polynomials when you can see them as functions; but as soon as you leave the realm of functions, they require quite a long work of technical and somewhat tedious verifications; and if you are good and do it, you still remain with the doubt that maybe the work was unnecessary for some reason: that's not fair... For instance, take the ring of finite order formal series; you have there a sequence of operations: sum, product, composition, inverse, reciprocal, derivative, residue; to define each of them and to prove the whole list of relations between them is a pain. Of course, one can reduce himself in some ways to polynomials, but this sounds somehow artificial, doesn't it? I think it is a useful exercise to write the formal proofs, for you are forced to invent a clean method of treating indices in sums and products. In your case, also, one thing is to use the linearity of the composition in the first argument, which reduces some of the polynomials to treat to monomials. --PMajer (talk) 11:04, 30 December 2008 (UTC)[reply]

Interesting thoughts! I'm of two minds about this. On the one hand I can see the point of view that it's a good exercise to develop the properties of polynomials purely. On the other, the lazy bastard and the pragmatist in me says that if we can get the results more simply, why not do it? More subtly, if we can do this, doesn't this tell us something important about our mathematics? For instance, since polynomials-as-sequences are something we've abstracted from polynomials-as-functions to imitate them, isn't it a bit silly in some sense to force ourselves to prove functional properties using the sequence representation instead of falling back on the function representation whenever that's possible? And where does 24.8.62.102's method fit into this picture? It seems like a perfectly marvelous bit of trickery to me. Does it work for other things?
As it turns out I ran into this situation not through algebra but via complex analysis, in trying to work out the properties of formal power series just as you mention, which happen to be developed better here than in any algebra book I've seen. Lang's main point in that section is that operations on formal power series can be reduced to operations on polynomials, so he at least seems to view that particular reduction as the right way to go (and I have a healthy respect for his algebra cred). In the bit there where he treats composition of power series he reduces it to the polynomial case and then dismisses it, saying that it then follows from 'the ordinary theory of polynomials'—and thinking about that was what led to this question.  — merge 12:58, 30 December 2008 (UTC)[reply]

Plain computing:

What is the problem, gentlemen? Bo Jacoby (talk) 14:54, 30 December 2008 (UTC).[reply]

That was a lot of TeX just to write (f ∘ g) ∘ h = f ∘ (g ∘ h).  ;)  — merge 15:35, 30 December 2008 (UTC)[reply]

I used to expand everything, writing the formula for the coefficient of x^n, which is in any case important and has a combinatorial interpretation; then it is not that difficult to check the associativity; but this left me staring at the screen for a good while! --PMajer (talk) 15:52, 30 December 2008 (UTC)[reply]

December 30

Factorization

Can someone factor this number?

549756338177

--Melab±1 01:02, 30 December 2008 (UTC)[reply]

73369 × 7493033. You can use http://www.alpertron.com.ar/ECM.HTM another time. PrimeHunter (talk) 01:12, 30 December 2008 (UTC)[reply]
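Trial division is enough for a 12-digit number whose smallest factor has only 5 digits — a minimal sketch:

```python
def factorize(n):
    # naive trial division; fine when the smallest prime factor is small
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # leftover factor is prime
    return factors

assert factorize(549756338177) == [73369, 7493033]
```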

Hypersudoku question

I read that with ordinary Sudoku at least 17 numbers had to be given to get a viable puzzle, although that conjecture has not been formally proved. Now I am interested in Hyper Sudoku, and wondering what the minimum number of givens would be. I imagine it would be less than 17. Any ideas? Myles325a (talk) 01:09, 30 December 2008 (UTC)[reply]

Polynomial

Can a polynomial have modulus operator, floor function, or ceiling function in it? --Melab±1 01:31, 30 December 2008 (UTC)[reply]

I am unsure if I understand your question, but I will try to answer it as best I can. In a polynomial we require the coefficients and exponents to be fixed, that is, they will not be a function of your input x. If your function is f(x), and a modulus operator or ceiling/floor function operates on x — that is to say, you take x modulo a number, or similar — then your function is not a polynomial. Please see polynomial for more information. -Damelch —Preceding unsigned comment added by Damelch (talkcontribs) 02:21, 30 December 2008 (UTC)[reply]

Haven't you asked this before? The answer is that although they are not commonly used, such operators can be defined if you wish, and they will correspond intuitively to our understanding of the floor and ceiling functions. Cheers--Shahab (talk) 05:08, 30 December 2008 (UTC)[reply]

Question concerning two integrals

A textbook I was reading said that

∫_a^b (sin t)/t dt

is uniformly bounded for positive numbers a,b. This was presented as an elementary fact, and I'm not seeing it. Am I missing something obvious? A similar statement was made that

∫_a^b (cos(θt) − cos t)/t dt

is uniformly bounded as well in terms of θ, and converges to an expression involving log(θ), which is also mysterious to me. Can anybody shed light on this? Thanks, Ray (talk) 12:55, 30 December 2008 (UTC)[reply]

The first equation is usually called the sinc integral or the sine integral. This webpage should be able to help you: Mathworld Link. As you can see, the integration is somewhat complex, which is probably why they did not give you the proof of this. You can also look at Trigonometric integral for assistance. From the trigonometric integral page we can see that the cosine part of the integrals will also be bounded; however, I am unsure why they converge to an expression involving log(θ). I hope this helps. --Damelch (talk) 17:56, 30 December 2008 (UTC)[reply]
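Both claims can at least be probed numerically. A rough sketch using composite Simpson's rule (the cutoffs and step counts are ad hoc choices). The sine integral stays below Si(π) ≈ 1.85 for every window [a, b]; the cosine integral behaves like a Frullani integral, tending to −log θ as a → 0 and b → ∞:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# claim 1: int_a^b sin(t)/t dt is bounded uniformly in a, b
def sinc(t):
    return math.sin(t) / t if t else 1.0

vals = [simpson(sinc, a, b, 4000)
        for a in (0.001, 0.1, 1.0, 2.0) for b in (3.0, 10.0, 30.0, 60.0)]
assert max(abs(v) for v in vals) < 2.0

# claim 2: int_0^B (cos(theta t) - cos t)/t dt -> -log(theta) for large B
def frullani(theta, B=800.0, n=400000):
    f = lambda t: (math.cos(theta * t) - math.cos(t)) / t if t else 0.0
    return simpson(f, 0.0, B, n)

for theta in (0.5, 2.0, 3.0):
    assert abs(frullani(theta) + math.log(theta)) < 0.01
```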

Equations for Π_{i=a}^{n} f(x) and Σ_{i=a}^{n} f(x)

How do you change an equation of Π_{i=a}^{n} f(x) into an equation without the Π, and how do you change an equation of Σ_{i=a}^{n} f(x) into an equation without the Σ?----The Successor of Physics 14:42, 30 December 2008 (UTC) —Preceding unsigned comment added by Superwj5 (talkcontribs)

With difficulty, in general. If it's really important, one might invent a new notation, such as factorial. Did you have any particular f in mind? Algebraist 16:59, 30 December 2008 (UTC)[reply]
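As Algebraist hints, closed forms exist only for particular f. For f(i) = i, say, the Σ telescopes to a difference of triangular numbers and the Π to a ratio of factorials — a quick check (the example f is my choice, not from the question):

```python
from math import factorial, prod

a, n = 3, 10
# sum_{i=a}^{n} i = n(n+1)/2 - (a-1)a/2
assert sum(range(a, n + 1)) == n*(n+1)//2 - (a-1)*a//2
# prod_{i=a}^{n} i = n! / (a-1)!
assert prod(range(a, n + 1)) == factorial(n) // factorial(a - 1)
```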

Extension of Σ_{i=a}^{n} f(x) and Π_{i=a}^{n} f(x) for n≠xINT1 or n≠an integer

How do you do Σ_{i=a}^{n} f(x) and Π_{i=a}^{n} f(x) for n≠xINT1 or n≠an integer?----The Successor of Physics 14:48, 30 December 2008 (UTC) —Preceding unsigned comment added by Superwj5 (talkcontribs)

You simply do not do it this way, unless you are trying to give some mystical sense to it. There are nevertheless situations where S(n) = Σ_{i=0}^{n−1} f(i) has a natural extension to non-integer n. For instance: if f(x) is defined for all real x ≥ 0, and f(x) = o(1) as x → +∞, then a natural extension of S(n) is the unique increasing solution of the functional equation S(0) = 0, S(x+1) = S(x) + f(x). I suggest that you try and prove existence and uniqueness for it and your spirit will be placated! --PMajer (talk) 16:55, 30 December 2008 (UTC)[reply]
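For a decreasing f tending to 0, the extension PMajer describes can be written as a telescoping series, S(x) = Σ_{i≥0} (f(i) − f(i+x)). A numerical sketch with f(t) = 1/(t+1) (my choice, just for illustration), where S interpolates the harmonic numbers and S(1/2) = 2 − 2 log 2:

```python
import math

def S(x, f, terms=500_000):
    # S(0) = 0 and S(x+1) = S(x) + f(x); valid when f decreases to 0
    return sum(f(i) - f(i + x) for i in range(terms))

f = lambda t: 1.0 / (t + 1.0)   # so S(n) = H_n, the n-th harmonic number

assert abs(S(1, f) - 1.0) < 1e-4
assert abs(S(0.5, f) - (2 - 2 * math.log(2))) < 1e-4
assert abs(S(1.5, f) - S(0.5, f) - f(0.5)) < 1e-4   # functional equation
```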

In Collatz_conjecture#Experimental_evidence, it states that: "It is also known that {4,2,1} is the only cycle with fewer than 35400 terms". I don't understand this. If you start with the number 2, you only have two terms {2,1}, right ? Do they mean something else by "terms" ? StuRat (talk) 14:52, 30 December 2008 (UTC)[reply]

If you start with the number 2, you get the cycle {2,1,4} – the same as {4,2,1}, just shifted a little. -- Jao (talk) 15:15, 30 December 2008 (UTC)[reply]
No, because, under the Collatz conjecture, you stop when you get to 1. StuRat (talk) 17:06, 30 December 2008 (UTC)[reply]
That doesn't seem to be stated in our article (which I have not read carefully). Certainly for the purposes of that sentence it is not true. Algebraist 17:09, 30 December 2008 (UTC)[reply]
If you don't stop at 1, then you always get an infinite number of terms, as once you get to 1, you then get {1,4,2,1,4,2,1,4,2,1,...}. StuRat (talk) 17:45, 30 December 2008 (UTC)[reply]
Sure, but you're stuck in a cycle, which has a length of 3. If the Collatz conjecture is false, then either there is some starting value for which the resulting sequence grows without bound, or else there is some cycle other than 1, 2, 4, 1, 2, 4, …. In the second case it is known that such a cycle cannot have a length less than 35400. —Bkell (talk) 17:51, 30 December 2008 (UTC)[reply]
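The point about the length-3 cycle is easy to see by experiment — a small sketch (the 10000 bound is arbitrary; it only illustrates that no other cycle shows up among small starting values):

```python
def collatz_steps(n):
    # iterate the Collatz map until the sequence first reaches 1
    seq = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        seq.append(n)
    return seq

assert collatz_steps(2) == [2, 1]

# continuing past 1 enters the cycle 1 -> 4 -> 2 -> 1, of length 3
n, cycle = 1, []
for _ in range(3):
    n = 3 * n + 1 if n % 2 else n // 2
    cycle.append(n)
assert cycle == [4, 2, 1]

# every start below 10000 eventually reaches 1
assert all(collatz_steps(k)[-1] == 1 for k in range(1, 10000))
```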