Lehmann–Scheffé theorem


In statistics, the Lehmann–Scheffé theorem is a prominent statement tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation.[1] The theorem states that any estimator which is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The theorem is named after Erich Leo Lehmann and Henry Scheffé, who established it in two early papers.[2][3]

If T is a complete sufficient statistic for θ and E(g(T)) = τ(θ), then g(T) is the uniformly minimum-variance unbiased estimator (UMVUE) of τ(θ).

Statement

Let <math>\vec{X} = X_1, X_2, \dots, X_n</math> be a random sample from a distribution that has p.d.f. (or p.m.f. in the discrete case) <math>f(x:\theta)</math>, where <math>\theta \in \Omega</math> is a parameter in the parameter space. Suppose <math>Y = u(\vec{X})</math> is a sufficient statistic for <math>\theta</math>, and let <math>\{ f_Y(y:\theta) : \theta \in \Omega \}</math> be a complete family. If <math>\varphi</math> is such that <math>\operatorname{E}[\varphi(Y)] = \theta</math>, then <math>\varphi(Y)</math> is the unique MVUE of <math>\theta</math>.
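
As a concrete illustration, consider an i.i.d. Bernoulli(<math>p</math>) sample: <math>T = \sum_{i=1}^n X_i</math> is a complete sufficient statistic, and <math>\operatorname{E}[T/n] = p</math>, so by the theorem the sample mean is the UMVUE of <math>p</math>. The following minimal Python sketch (the values of <math>p</math> and <math>n</math> are chosen arbitrarily for illustration) compares it numerically with the crude unbiased estimator <math>X_1</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 0.3, 10, 200_000

# Draw `reps` Bernoulli(p) samples of size n.
samples = rng.binomial(1, p, size=(reps, n))

# T = sum(X_i) is complete and sufficient for p; g(T) = T/n is unbiased,
# so by the Lehmann-Scheffe theorem it is the UMVUE of p.
umvue = samples.mean(axis=1)

# X_1 alone is also unbiased for p, but it is not a function of T.
crude = samples[:, 0].astype(float)

print("T/n : mean %.4f, var %.5f" % (umvue.mean(), umvue.var()))  # ~ p, p(1-p)/n
print("X_1 : mean %.4f, var %.5f" % (crude.mean(), crude.var()))  # ~ p, p(1-p)
</syntaxhighlight>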

Proof

By the Rao–Blackwell theorem, if <math>Z</math> is an unbiased estimator of <math>\theta</math>, then <math>\varphi(Y) := \operatorname{E}[Z \mid Y]</math> defines an unbiased estimator of <math>\theta</math> with the property that its variance is not greater than that of <math>Z</math>.

Now we show that this function is unique. Suppose <math>W</math> is another candidate MVUE of <math>\theta</math>. Then again <math>\psi(Y) := \operatorname{E}[W \mid Y]</math> defines an unbiased estimator of <math>\theta</math> with the property that its variance is not greater than that of <math>W</math>. Then

:<math>\operatorname{E}[\varphi(Y) - \psi(Y)] = 0, \qquad \theta \in \Omega.</math>

Since <math>\{ f_Y(y:\theta) : \theta \in \Omega \}</math> is a complete family,

:<math>\operatorname{E}[\varphi(Y) - \psi(Y)] = 0 \implies \varphi(y) - \psi(y) = 0, \qquad \theta \in \Omega,</math>

and therefore the function <math>\varphi</math> is the unique function of <math>Y</math> with variance not greater than that of any other unbiased estimator. We conclude that <math>\varphi(Y)</math> is the MVUE.

Example for when using a non-complete minimal sufficient statistic

An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete, was provided by Galili and Meilijson in 2016.[4] Let <math>X_1, \ldots, X_n</math> be a random sample from a scale-uniform distribution <math>X \sim U((1-k)\theta, (1+k)\theta)</math> with unknown mean <math>\operatorname{E}[X] = \theta</math> and known design parameter <math>k \in (0,1)</math>. In the search for "best" possible unbiased estimators for <math>\theta</math>, it is natural to consider <math>X_1</math> as an initial (crude) unbiased estimator for <math>\theta</math> and then try to improve it. Since <math>X_1</math> is not a function of <math>T = \left( X_{(1)}, X_{(n)} \right)</math>, the minimal sufficient statistic for <math>\theta</math> (where <math>X_{(1)} = \min_i X_i</math> and <math>X_{(n)} = \max_i X_i</math>), it may be improved using the Rao–Blackwell theorem as follows:

:<math>\hat{\theta}_{RB} = \operatorname{E}_\theta[X_1 \mid X_{(1)}, X_{(n)}] = \frac{X_{(1)} + X_{(n)}}{2}.</math>
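
This improvement can be checked by simulation; the following Python sketch (parameter values chosen arbitrarily for illustration) estimates the mean and variance of <math>X_1</math> and <math>\hat{\theta}_{RB}</math> by Monte Carlo:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
theta, k, n, reps = 5.0, 0.5, 8, 200_000

# Sample from U((1-k)*theta, (1+k)*theta).
x = rng.uniform((1 - k) * theta, (1 + k) * theta, size=(reps, n))

crude = x[:, 0]                           # X_1: unbiased but crude
rb = (x.min(axis=1) + x.max(axis=1)) / 2  # (X_(1) + X_(n)) / 2

print("X_1      : mean %.4f, var %.5f" % (crude.mean(), crude.var()))
print("theta_RB : mean %.4f, var %.5f" % (rb.mean(), rb.var()))
</syntaxhighlight>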

However, the following unbiased estimator can be shown to have lower variance:

:<math>\hat{\theta}_{LV} = \frac{1}{k^2 \frac{n-1}{n+1} + 1} \cdot \frac{(1-k) X_{(1)} + (1+k) X_{(n)}}{2}.</math>

And in fact, it could be even further improved when using the following estimator:

:<math>\hat{\theta}_\text{BAYES} = \frac{n+1}{n} \left[ 1 - \frac{\frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} - 1}{\left( \frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} \right)^{n+1} - 1} \right] \frac{X_{(n)}}{1+k}.</math>
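
The three improved estimators can be compared by Monte Carlo. The sketch below uses the estimator formulas as given above (parameter values are arbitrary); with these settings, <math>\hat{\theta}_{LV}</math> should show a smaller estimated variance than <math>\hat{\theta}_{RB}</math>, and <math>\hat{\theta}_\text{BAYES}</math> smaller still:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
theta, k, n, reps = 5.0, 0.5, 8, 500_000

x = rng.uniform((1 - k) * theta, (1 + k) * theta, size=(reps, n))
x1, xn = x.min(axis=1), x.max(axis=1)  # X_(1), X_(n)

rb = (x1 + xn) / 2
lv = ((1 - k) * x1 + (1 + k) * xn) / (2 * (k**2 * (n - 1) / (n + 1) + 1))
r = (x1 * (1 + k)) / (xn * (1 - k))    # ratio appearing in theta_BAYES; r >= 1 a.s.
bayes = (n + 1) / n * (1 - (r - 1) / (r**(n + 1) - 1)) * xn / (1 + k)

for name, est in [("RB   ", rb), ("LV   ", lv), ("BAYES", bayes)]:
    print("theta_%s: mean %.4f, var %.6f" % (name, est.mean(), est.var()))
</syntaxhighlight>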

References

  1. ^ Casella, George (2001). Statistical Inference. Duxbury Press. p. 369. ISBN 0-534-24312-6.
  2. ^ Lehmann, E. L.; Scheffé, H. (1950). "Completeness, similar regions, and unbiased estimation. I.". Sankhyā. 10 (4): 305–340. JSTOR 25048038. MR 0039201.
  3. ^ Lehmann, E.L.; Scheffé, H. (1955). "Completeness, similar regions, and unbiased estimation. II". Sankhyā. 15 (3): 219–236. JSTOR 25048243. MR 0072410.
  4. ^ "An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator". The American Statistician. 70 (1): 108–113. 31 Mar 2016. doi:10.1080/00031305.2015.1100683. PMC 4960505. {{cite journal}}: Unknown parameter |authors= ignored (help)