Talk:Stirling's approximation

Gosper's approx

I saw this on the OEIS wiki, and it appears superior to Stirling's; see the Bill Gosper and OEIS articles. Its form is nearly identical. --Billymac00 (talk) 15:44, 15 August 2014 (UTC)

That just moves (an approximation of) the factor $e^{1/(12n)}$ into the square root, doesn't it? See Stirling's approximation#Speed of convergence and error estimates. — Chrisahn (talk) 11:50, 2 August 2021 (UTC)
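
A minimal numerical comparison of the two forms (a sketch, assuming Gosper's variant is $\sqrt{(2n+1/3)\pi}\,(n/e)^n$ as stated on the OEIS wiki):

```python
import math

def stirling(n):
    # Classic Stirling approximation: sqrt(2*pi*n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def gosper(n):
    # Gosper's variant: sqrt((2n + 1/3) * pi) * (n/e)^n
    return math.sqrt((2 * n + 1 / 3) * math.pi) * (n / math.e) ** n

for n in (1, 2, 5, 10, 20):
    exact = math.factorial(n)
    print(n, abs(stirling(n) - exact) / exact, abs(gosper(n) - exact) / exact)
```

Even at n = 1 the relative error drops from about 8% to under 1%, consistent with the $e^{1/(12n)}$ factor being absorbed into the square root.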

Deriving n! from log(n!)

Could someone please clarify why the "next term in the $O(\ln(n))$ is $(1/2)\ln(2\pi n)$"? If that is the case, then, trivially, the stated equivalence can be obtained by taking the exponential of both sides of the equation. But why do we choose this particular term, $(1/2)\ln(2\pi n)$, out of the $O(\ln(n))$ class? — DmitriyZotikov (talk) 15:18, 22 January 2015 (UTC)

The line you quoted is in the lead section of the article, which is supposed to be a concise summary of things explained in more detail later in the article. Your question is answered in the "Derivation" section which immediately follows the lead. —David Eppstein (talk) 17:41, 22 January 2015 (UTC)
Oh, I see, thank you. The derivation looked a little dense so I didn't read it at first. — DmitriyZotikov (talk) 18:40, 22 January 2015 (UTC)
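
A quick numerical illustration of why $(1/2)\ln(2\pi n)$ is the right term to keep (a sketch; math.lgamma supplies the exact $\ln(n!)$):

```python
import math

def approx_log_factorial(n):
    # n ln n - n plus the next term, (1/2) ln(2 pi n)
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

for n in (5, 10, 50, 100):
    exact = math.lgamma(n + 1)                  # exact ln(n!)
    print(n, exact - approx_log_factorial(n))   # residual shrinks like 1/(12n)
```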

\sim vs. =

An editor has recently replaced two copies of asymptotic equality with actual equality in non-convergent asymptotic series. Since the series don't converge, surely = is wrong; but somehow \sim misses the point, too (it is a much weaker statement). What are the usual notations used for asymptotic series like this? --JBL (talk) 00:46, 29 April 2015 (UTC)
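
One way to see concretely why = is wrong: for fixed $n$, the partial sums of the series $\ln n! \sim n\ln n - n + \tfrac{1}{2}\ln(2\pi n) + \sum_{k\ge1} \frac{B_{2k}}{2k(2k-1)n^{2k-1}}$ first approach $\ln n!$ and then drift away again. A sketch (Bernoulli numbers hard-coded; n = 1 makes the divergence visible early):

```python
import math
from fractions import Fraction

# Bernoulli numbers B_2, B_4, ..., B_20
BERN = [Fraction(1, 6), Fraction(-1, 30), Fraction(1, 42), Fraction(-1, 30),
        Fraction(5, 66), Fraction(-691, 2730), Fraction(7, 6),
        Fraction(-3617, 510), Fraction(43867, 798), Fraction(-174611, 330)]

def partial_sum(n, k_max):
    # Truncation of the (divergent) Stirling series for ln(n!)
    s = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
    for k in range(1, k_max + 1):
        s += float(BERN[k - 1]) / (2 * k * (2 * k - 1) * n ** (2 * k - 1))
    return s

n = 1
for k_max in range(11):
    # The error first decreases, then grows: the hallmark of an asymptotic series.
    print(k_max, abs(partial_sum(n, k_max) - math.lgamma(n + 1)))
```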

Graphic quality

A number of the images on this page are very "grainy" while others are quite clear. Any particular reason for this? 2620:117:C080:520:5E26:AFF:FEFE:8DBC (talk) 19:31, 16 May 2015 (UTC)

Reference for Convergent Version

Can we add a reference or a proof for the convergence of the convergent version of Stirling's series given using inverted rising factorials? -Chinju (talk) 17:39, 30 July 2015 (UTC)

Faster convergence for 'integral' Derivation of ln(n!) by switching bounds

The "Derivation" section uses the integral:

Would it be useful noting that changing the bounds to gives an even tighter fit? In other words

(See plot below)

I realize this makes no difference for larger n, but it makes a difference for smaller n.

[Plot: Approximation of the Stirling integral]

Prheenan (talk) 15:41, 14 January 2016 (UTC)
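
A sketch checking the suggestion above numerically (assuming the proposed bounds are $1/2$ and $n+1/2$, i.e. the midpoint reading of the comment; math.lgamma gives the exact $\ln(n!)$):

```python
import math

def F(x):
    # Antiderivative of ln x
    return x * math.log(x) - x

for n in range(1, 11):
    exact = math.lgamma(n + 1)        # ln(n!)
    original = F(n) - F(1)            # integral of ln x from 1 to n
    shifted = F(n + 0.5) - F(0.5)     # integral of ln x from 1/2 to n + 1/2
    print(n, abs(original - exact), abs(shifted - exact))
```

At n = 5, for instance, the error drops from roughly 0.74 to 0.07, while for large n both errors are dwarfed by $\ln(n!)$ itself, matching the comment.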

New figure for relative error

Glosser.ca recently added this figure showing the relative error of Stirling's approximation, replacing this older figure. Obviously the new figure is more attractive, except for one issue: there are "kinks" in two of the curves. This suggests, falsely, that these two approximations get abruptly better, then worse again, and this is not the case. I want to suggest that the figure should be changed to avoid this issue. Glosser.ca, are you willing to do this?

(The cause of the kinks is that the figure actually shows relative error as an approximation of Gamma(n + 1). Some of the approximating series begin as under-estimates and later become over-estimates, and so by continuity they actually are equal to the gamma function at some point. Thus, the relative error at those points is 0, and the log relative error goes to negative infinity, and we see kinks. I think the "right" solution is to interpolate the relative error, rather than interpolating the factorial function and then taking the relative error of that. I am open to other opinions.)

--JBL (talk) 16:24, 25 April 2016 (UTC)

Joel B. Lewis is exactly right, and I had considered this when I constructed the new figure (though not your interpolation solution). Because the (continuous) approximation is largely an approximation *to* the (continuous) gamma function, which conveniently also fits factorials, I'm inclined to say we keep the kinks, comment on them in the figure caption, and re-work the corresponding section to use the gamma function instead of factorials. Alternatively, interpolating the relative error should be really straightforward, or I can just make the plot discrete due to the discreteness of factorials (though this may not look as nice as the interpolation thing). Glosser.ca (talk) 17:51, 26 April 2016 (UTC)
Thanks for your response. I am happy with your suggestion (implemented by David Eppstein) to keep the kinks, with a short explanation. (I should have thought to do this myself.) I do not think it is necessary to rewrite the section, as the relationship with the gamma function is treated in the immediately following section. --JBL (talk) 22:31, 26 April 2016 (UTC)
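
For reference, a sketch of the "interpolate the relative error" route discussed above (hypothetical helper names; the approximation shown, Stirling with the $1/(12x)$ correction, merely stands in for whichever series the figure uses). Sampling |relative error| only at the integers and interpolating those samples cannot produce the spurious dips to zero between data points:

```python
import math

def approx(x):
    # A continuous approximation: Stirling with the 1/(12x) correction term
    return math.sqrt(2 * math.pi * x) * (x / math.e) ** x * (1 + 1 / (12 * x))

# Sample |relative error| at the integers only, where the approximation
# actually targets a factorial...
ns = range(1, 11)
errs = [abs(approx(n) / math.factorial(n) - 1) for n in ns]

def interp_err(x):
    # ...then interpolate those samples linearly for plotting (1 <= x <= 10).
    i = min(int(x), 9)
    t = x - i
    return errs[i - 1] * (1 - t) + errs[i] * t

print([f"{e:.2e}" for e in errs])
print(interp_err(2.5))
```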

Big-oh to big-theta in intro

The intro starts with the most common form used in many applications, $\ln n! = n \ln n - n + O(\ln n)$.

It would be more precise to use $\Theta(\ln n)$ here instead of $O$, although both are correct. Any objections before I go ahead and change this? — Preceding unsigned comment added by Cstanford.math (talk · contribs) 17:21, 5 November 2020 (UTC)

Seeing as there was no objection, done now. Caleb Stanford (talk) 00:52, 8 November 2021 (UTC)
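
A quick sanity check of the $\Theta$ claim (a sketch; the ratio below tends to $1/2$, so the error term is bounded above and below by positive multiples of $\ln n$):

```python
import math

for n in (10, 100, 1000, 10**6):
    err = math.lgamma(n + 1) - (n * math.log(n) - n)  # the Theta(ln n) term
    print(n, err / math.log(n))  # approaches 1/2, since err ~ (1/2) ln(2 pi n)
```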

valid-for-all-n bounds

@2a02:ab88:370b:4d80:aac3:4ef7:a57f:df51 and Chrisahn: Bringing this discussion here. To whoever made the recent edit, can you please make an account and not edit anonymously?

I happen to agree with including this, actually. I think it is useful to comment on the fact that you can have a bound valid for all n and not just asymptotically. The included bound is indeed valid for all $n \geq 1$ (including real $n$). We could, however, add a comment that this is slightly less precise, particularly the upper bound, as it has $e$ instead of $\sqrt{2\pi}$. We could also instead include the Robbins bound and place the discussion of derived/simpler bounds there. Caleb Stanford (talk) 16:33, 2 November 2021 (UTC)

Why can't we get a valid-for-all-n bound that at least has the correct leading coefficient? This one is too inaccurate. —David Eppstein (talk) 17:02, 2 November 2021 (UTC)
User:David Eppstein Mentioning the full bound with the correct leading coefficient sounds good to me, but that $e^{1/(12n)}$ factor is definitely annoying if I want something simpler. Is there a bound valid for all n that doesn't have the additional factor? $n! \leq \sqrt{2\pi n}\,(n/e)^n$ is not valid for $n \geq 1$, so the simplest solution I can think of is the one proposed, with $\sqrt{2\pi}$ replaced by $e$. Caleb Stanford (talk) 20:49, 2 November 2021 (UTC)
@Caleb Stanford and David Eppstein: My reasons for removing the formula: For numerical applications, it's probably too imprecise. It may be useful in some proofs, but there are many other formulas that may be just as useful (depending on the proof). Which ones should we include in the article? If this particular formula is actually used in the literature, we could probably include it, but if it's only used occasionally, it shouldn't be in the lead, and if it's rarely found in the literature, it shouldn't be in the article at all. — Chrisahn (talk) 19:33, 7 November 2021 (UTC)
@Chrisahn: Thanks! I fully agree with those criteria for whether or not to include a bound. Based on the discussion, I included the Robbins bound in the lead as an example of a valid-for-all-n bound, and added a small note to the "Speed of convergence" section; thoughts on this? I would also be happy with not mentioning the bound in the intro, but only mentioning that valid-for-all-n bounds do exist. Caleb Stanford (talk) 22:18, 7 November 2021 (UTC)
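
For concreteness, a sketch verifying the Robbins (1955) bounds, which hold for every $n \geq 1$, on small cases:

```python
import math

def robbins_bounds(n):
    # Robbins (1955):
    # sqrt(2 pi n) (n/e)^n e^(1/(12n+1)) < n! < sqrt(2 pi n) (n/e)^n e^(1/(12n))
    base = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    return base * math.exp(1 / (12 * n + 1)), base * math.exp(1 / (12 * n))

for n in range(1, 11):
    lower, upper = robbins_bounds(n)
    assert lower < math.factorial(n) < upper
    print(n, lower, math.factorial(n), upper)
```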

Does the log-base-2 version really belong in the lead?

As long as we're discussing the lead: does the log-base-2 version really belong here? It follows completely trivially from the previous formula and doesn't add any new information. I would be in favor of moving this note + the comment about comparison sort somewhere else in the article. Caleb Stanford (talk) 22:21, 7 November 2021 (UTC)

The log-base-2 version is the one that all computer science students learn, because it is the version that is relevant for comparison sorting. For them, at least, it is the natural log that is much less relevant, and you could make the same argument that the natural-log version follows trivially from the log-base-2 version. —David Eppstein (talk) 22:37, 7 November 2021 (UTC)
I'm not sure about "all computer science students". The natural log version does happen to be slightly cleaner in this case. My main point is that the two formulas contain the same information, so they dilute the information density of the lead. What about just stating: "The change-of-base formula can be used to give a version using the base-2 log, for instance in the worst-case lower bound for comparison sorting." "Version using the base-2 log" could link to a section on variations if that would help reach consensus. Caleb Stanford (talk) 23:10, 7 November 2021 (UTC)
Cleaner, maybe, because of the nicer constant on the linear term, but is there a significant application where a base-specific logarithm formula is necessary and the base is not 2? —David Eppstein (talk) 23:24, 7 November 2021 (UTC)
To that question, my guess is the answer is no. But IMO it does not justify including essentially the same formula twice. Caleb Stanford (talk) 23:43, 7 November 2021 (UTC)
Yes, but to my mind, if we are to keep only one and the choice is aesthetics versus an important application, the application wins. You have not made any argument beyond aesthetics for keeping the natural log in the lead. —David Eppstein (talk) 00:04, 8 November 2021 (UTC)
I agree with that assessment; aesthetics is my main argument. Given the choice, I would prefer having only log-base-2 to having both. Somehow I expect most mathematicians would be upset with only including the log-base-2 one, though. Waiting for a weigh-in from a third party. Caleb Stanford (talk) 00:36, 8 November 2021 (UTC)
The use of "in mathematics" in the lead clearly demarcates this article as an article about mathematics. Pure mathematics is a subject driven by aesthetics, and computer science may be driven by applications. Perhaps there should be a separate article for computer science. Lost-n-translation (talk) 12:59, 3 December 2022 (UTC)
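
On the log-base-2 discussion above, a concrete version of the comparison-sorting connection (a sketch; $\lceil \log_2 n! \rceil$ is the information-theoretic lower bound on comparisons):

```python
import math

# Change of base: log2(n!) = ln(n!) / ln 2 = n log2(n) - n log2(e) + O(log n)
for n in (10, 100, 1000):
    exact = math.lgamma(n + 1) / math.log(2)           # log2(n!)
    approx = n * math.log2(n) - n * math.log2(math.e)  # Stirling, base 2
    print(n, math.ceil(exact), round(approx))  # lower bound on comparisons to sort n items
```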