{{Short description|Biased estimator for Gaussian random vectors, better than ordinary least-squared-error minimization}}{{technical|date=November 2017}}
The '''James–Stein estimator''' is a [[Bias of an estimator|biased]] [[estimator]] of the [[mean]], <math>\boldsymbol\theta</math>, of (possibly) [[Correlation and dependence|correlated]] [[Normal distribution|Gaussian distributed]] [[random variable]]s <math>Y = \{Y_1, Y_2, ..., Y_m\}</math> with unknown means <math>\{\boldsymbol\theta_1, \boldsymbol\theta_2, ..., \boldsymbol\theta_m\}</math>.


It arose sequentially in two main published papers. The earlier version of the estimator was developed in 1956,<ref name="stein-56">{{Citation|last=Stein|first=C.|title=Proc. Third Berkeley Symp. Math. Statist. Prob.|url=http://projecteuclid.org/euclid.bsmsp/1200501656|volume=1|pages=197–206|year=1956|contribution=Inadmissibility of the usual estimator for the mean of a multivariate distribution|mr=0084922|zbl=0073.35602|author-link=Charles Stein (statistician)}}</ref> when [[Charles Stein (statistician)|Charles Stein]] reached the relatively shocking conclusion that while the then-usual estimate of the mean, the [[sample mean]], is [[Admissible decision rule|admissible]] when <math>m \leq 2</math>, it is [[Admissible decision rule|inadmissible]] when <math>m \geq 3</math>. Stein proposed a possible improvement to the estimator that [[Shrinkage (statistics)|shrinks]] the sample means <math>{\boldsymbol\theta_i}</math> towards a more central mean vector <math>\boldsymbol\nu</math> (which can be chosen [[A priori and a posteriori|a priori]] or commonly as the "average of averages" of the sample means, given all samples share the same size). This observation is commonly referred to as [[Stein's example|Stein's example or paradox]]. In 1961, [[Willard D. James|Willard James]] and Charles Stein simplified the original process.<ref name="james–stein-61">{{Citation|last1=James|first1=W.|title=Proc. Fourth Berkeley Symp. Math. Statist. Prob.|url=http://projecteuclid.org/euclid.bsmsp/1200512173|volume=1|pages=361–379|year=1961|contribution=Estimation with quadratic loss|mr=0133191|last2=Stein|first2=C.|author2-link=Charles Stein (statistician)}}</ref>


It can be shown that the James–Stein estimator [[dominating decision rule|dominates]] the "ordinary" [[least squares]] approach, meaning the James–Stein estimator has a lower or equal [[mean squared error]] than the "ordinary" least squares estimator.

Similar to the [[Hodges' estimator]], the James–Stein estimator is [[superefficient]] and [[regular estimator|non-regular]] at <math>\theta=0</math>.<ref>Beran, R. (1995). "The role of Hájek's convolution theorem in statistical theory".</ref>


== Setting ==
Let <math>{\mathbf y} \sim N_m({\boldsymbol \theta}, \sigma^2 I)</math>, where the vector <math>{\boldsymbol \theta}</math> is the unknown mean of <math>{\mathbf y}</math>, which is <math>m</math>-variate normally distributed with known covariance matrix <math>\sigma^2 I</math>.

We are interested in obtaining an estimate, <math>\widehat{\boldsymbol \theta}</math>, of <math>{\boldsymbol \theta}</math>, based on a single observation <math>{\mathbf y}</math>.
In real-world applications, this is a common situation in which a set of parameters is sampled, and the samples are corrupted by independent [[Gaussian noise]]. Since this noise has mean zero, it may be reasonable to use the samples themselves as an estimate of the parameters. This approach is the [[least squares]] estimator, which is <math>\widehat{\boldsymbol \theta}_{LS} = {\mathbf y}</math>.
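For illustration only, this setting can be simulated numerically; the following sketch (in Python with NumPy, using arbitrarily chosen values of <math>\boldsymbol\theta</math> and <math>\sigma^2</math> that are not from the cited sources) draws a single noisy observation and takes it as the least squares estimate.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)

theta = np.array([1.0, -2.0, 0.5, 3.0])   # unknown parameter vector (chosen arbitrarily here)
sigma2 = 0.25                              # known noise variance
# a single observation y ~ N(theta, sigma2 * I)
y = theta + rng.normal(scale=np.sqrt(sigma2), size=theta.size)

theta_hat_ls = y   # the least squares estimator is simply the observation itself
</syntaxhighlight>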


Stein demonstrated that in terms of [[mean squared error]] <math>\operatorname{E} \left[ \left\| {\boldsymbol \theta}-\widehat {\boldsymbol \theta} \right\|^2 \right]</math>, the least squares estimator, <math>\widehat{\boldsymbol \theta}_{LS}</math>, is sub-optimal compared to shrinkage-based estimators such as the '''James–Stein estimator''', <math>\widehat{\boldsymbol \theta}_{JS}</math>.<ref name="stein-56"/> The paradoxical result, that there is a (possibly) better and never any worse estimate of <math>\boldsymbol\theta</math> in mean squared error as compared to the sample mean, became known as [[Stein's example]].


== The James–Stein estimator ==
[[Image:MSE of ML vs JS.png|thumb|right|350px|MSE (R) of least squares estimator (ML) vs. James–Stein estimator (JS). The James–Stein estimator gives its best estimate when the norm of the actual parameter vector θ is near zero.]]
If <math>\sigma^2</math> is known, the James–Stein estimator is given by
:<math>
\widehat{\boldsymbol \theta}_{JS} = \left( 1 - \frac{(m-2) \sigma^2}{\|{\mathbf y}\|^2} \right) {\mathbf y}.
</math>
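For illustration, the formula translates directly into code; the NumPy sketch below is hypothetical (the function name and interface are not from the original papers) and assumes <math>\sigma^2</math> is known and <math>m \ge 3</math>.

<syntaxhighlight lang="python">
import numpy as np

def james_stein(y, sigma2):
    """Sketch of the James-Stein estimator shrinking y toward the origin,
    assuming known noise variance sigma2 and dimension m >= 3."""
    y = np.asarray(y, dtype=float)
    m = y.size
    shrinkage = 1.0 - (m - 2) * sigma2 / np.dot(y, y)
    return shrinkage * y
</syntaxhighlight>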


James and Stein showed that the above estimator [[dominating decision rule|dominates]] <math>\widehat{\boldsymbol \theta}_{LS}</math> for any <math>m \ge 3</math>, meaning that the James–Stein estimator always achieves lower [[mean squared error]] (MSE) than the [[maximum likelihood]] estimator.<ref name="james–stein-61"/><ref name="lehmann-casella-98">{{Citation
| first1 = E. L. | last1 = Lehmann
| first2 = G. | last2 = Casella
| year = 1998
| title = Theory of Point Estimation
| edition = 2nd
| location = New York
| publisher = Springer
}}</ref> By definition, this makes the least squares estimator [[admissible decision rule|inadmissible]] when <math>m \ge 3</math>.
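The dominance can also be checked empirically. The Monte Carlo sketch below uses arbitrarily chosen illustrative values of <math>m</math>, <math>\sigma^2</math> and <math>\boldsymbol\theta</math> (assumptions made for the example, not values from the sources) to estimate the total MSE of both estimators.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
m, sigma2, trials = 10, 1.0, 100_000
theta = rng.normal(size=m)               # an arbitrary fixed "true" mean vector

sse_ls = sse_js = 0.0
for _ in range(trials):
    y = theta + rng.normal(scale=np.sqrt(sigma2), size=m)    # y ~ N(theta, sigma2 I)
    js = (1.0 - (m - 2) * sigma2 / np.dot(y, y)) * y         # James-Stein estimate
    sse_ls += np.sum((y - theta) ** 2)                        # least squares estimate is y itself
    sse_js += np.sum((js - theta) ** 2)

print("estimated total MSE, least squares:", sse_ls / trials)   # close to m * sigma2
print("estimated total MSE, James-Stein  :", sse_js / trials)   # smaller in expectation
</syntaxhighlight>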


Notice that if <math>(m-2) \sigma^2<\|{\mathbf y}\|^2 </math> then this estimator simply takes the natural estimator <math>\mathbf y</math> and shrinks it towards the origin '''0'''. In fact this is not the only direction of [[Shrinkage (statistics)|shrinkage]] that works. Let '''''ν''''' be an arbitrary fixed vector of dimension <math>m</math>. Then there exists an estimator of the James–Stein type that shrinks toward '''''ν''''', namely


:<math>
\widehat{\boldsymbol \theta}_{JS} = \left( 1 - \frac{(m-2) \sigma^2}{\|{\mathbf y} - {\boldsymbol\nu}\|^2} \right) ({\mathbf y} - {\boldsymbol\nu}) + {\boldsymbol\nu}.
</math>


The James–Stein estimator dominates the usual estimator for any '''''ν'''''. A natural question to ask is whether the improvement over the usual estimator is independent of the choice of '''''ν'''''. The answer is no. The improvement is small if <math>\|{\boldsymbol\theta - \boldsymbol\nu}\|</math> is large. Thus, to obtain a substantial improvement, some knowledge of the location of '''''θ''''' is necessary. Of course, this is the quantity we are trying to estimate, so we do not have this knowledge a priori. But we may have some guess as to what the mean vector is. This can be considered a disadvantage of the estimator: the choice is not objective, as it may depend on the beliefs of the researcher. Nonetheless, James and Stein's result is that ''any'' finite guess '''''ν''''' improves the expected MSE over the maximum-likelihood estimator, which is tantamount to using an infinite '''''ν''''', surely a poor guess.
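As a hypothetical illustration (the function name and numbers are invented for the example), shrinking toward a fixed prior guess <math>\boldsymbol\nu</math> is a small variant of the basic formula:

<syntaxhighlight lang="python">
import numpy as np

def james_stein_towards(y, sigma2, nu):
    """Sketch of the James-Stein estimator shrinking y toward a fixed
    vector nu chosen a priori (m >= 3 assumed)."""
    y, nu = np.asarray(y, dtype=float), np.asarray(nu, dtype=float)
    m = y.size
    shrinkage = 1.0 - (m - 2) * sigma2 / np.sum((y - nu) ** 2)
    return nu + shrinkage * (y - nu)

y = np.array([3.1, 2.7, 3.4, 2.9, 3.3])   # observed means
nu = np.full(5, 3.0)                       # an a priori guess that all means are near 3
# with these numbers the factor is 2/3, i.e. y is pulled one third of the way toward nu
estimate = james_stein_towards(y, sigma2=0.04, nu=nu)
</syntaxhighlight>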


== Interpretation ==
Seeing the James–Stein estimator as an [[empirical Bayes method]] gives some intuition for this result: one assumes that '''''θ''''' itself is a random variable with [[Prior probability|prior distribution]] <math>\sim N(0, A)</math>, where ''A'' is estimated from the data itself. Estimating ''A'' only gives an advantage compared to the [[Maximum likelihood|maximum-likelihood estimator]] when the dimension <math>m</math> is large enough; hence it does not work for <math>m\leq 2</math>. The James–Stein estimator is a member of a class of Bayesian estimators that dominate the maximum-likelihood estimator.<ref>{{cite journal | last1 = Efron | first1 = B. | last2 = Morris | first2 = C. | year = 1973 | title = Stein's Estimation Rule and Its Competitors—An Empirical Bayes Approach | journal = Journal of the American Statistical Association | volume = 68 | issue = 341 | pages = 117–130 | publisher = American Statistical Association | doi = 10.2307/2284155| jstor = 2284155 }}</ref>
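Concretely, a brief heuristic version of this empirical Bayes argument (a sketch assuming the independent prior <math>\theta_i \sim N(0, A)</math> and known <math>\sigma^2</math>, not a rigorous derivation) runs as follows. Conditionally on <math>\boldsymbol\theta</math> one has <math>y_i \sim N(\theta_i, \sigma^2)</math>, so marginally <math>y_i \sim N(0, A + \sigma^2)</math>, and the Bayes (posterior mean) estimate of each component is

:<math>
\operatorname{E}[\theta_i \mid y_i] = \left( 1 - \frac{\sigma^2}{A + \sigma^2} \right) y_i .
</math>

Since <math>\|{\mathbf y}\|^2 / (A + \sigma^2) \sim \chi^2_m</math> marginally, and <math>\operatorname{E}[1/\chi^2_m] = 1/(m-2)</math> for <math>m \ge 3</math>, the quantity <math>(m-2)\sigma^2 / \|{\mathbf y}\|^2</math> is an unbiased estimate of the unknown shrinkage weight <math>\sigma^2/(A+\sigma^2)</math>; substituting this estimate into the Bayes rule yields the James–Stein estimator.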


A consequence of the above discussion is the following counterintuitive result: When three or more unrelated parameters are measured, their total MSE can be reduced by using a combined estimator such as the James–Stein estimator; whereas when each parameter is estimated separately, the least squares (LS) estimator is [[Admissible decision rule|admissible]]. A quirky example would be estimating the speed of light, tea consumption in Taiwan, and hog weight in Montana, all together. The James–Stein estimator always improves upon the ''total'' MSE, i.e., the sum of the expected squared errors of each component. Therefore, the total MSE in measuring light speed, tea consumption, and hog weight would improve by using the James–Stein estimator. However, any particular component (such as the speed of light) would improve for some parameter values, and deteriorate for others. Thus, although the James–Stein estimator dominates the LS estimator when three or more parameters are estimated, any single component does not dominate the respective component of the LS estimator.
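The component-wise trade-off can be explored numerically; the sketch below (with an arbitrarily chosen <math>\boldsymbol\theta</math>, an illustration rather than a result from the sources) estimates the MSE of a single component under both estimators.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
m, sigma2, trials = 3, 1.0, 200_000
theta = np.array([10.0, 0.0, 0.0])   # one large mean ("speed of light") and two small ones

se_first_ls = se_first_js = 0.0
for _ in range(trials):
    y = theta + rng.normal(scale=np.sqrt(sigma2), size=m)
    js = (1.0 - (m - 2) * sigma2 / np.dot(y, y)) * y
    se_first_ls += (y[0] - theta[0]) ** 2
    se_first_js += (js[0] - theta[0]) ** 2

# Only the *total* MSE is guaranteed not to increase; whether an individual
# component improves or deteriorates depends on the unknown theta.
print("MSE of first component, least squares:", se_first_ls / trials)
print("MSE of first component, James-Stein  :", se_first_js / trials)
</syntaxhighlight>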


The conclusion from this hypothetical example is that measurements should be combined if one is interested in minimizing their total MSE. For example, in a [[telecommunication]] setting, it is reasonable to combine [[communication channel|channel]] tap measurements in a [[channel estimation]] scenario, as the goal is to minimize the total channel estimation error.


The James–Stein estimator has also found use in fundamental quantum theory, where the estimator has been used to improve the theoretical bounds of the [[entropic uncertainty principle]] for more than three measurements.<ref name="stander-17">{{Citation
| last = Stander | first = M.
| title = Using Stein's estimator to correct the bound on the entropic uncertainty principle for more than two measurements
| year = 2017
| arxiv = 1702.02440
| bibcode = 2017arXiv170202440S}}</ref>

An intuitive derivation and interpretation are given by the [[Francis Galton|Galtonian]] perspective.<ref>{{Cite journal|last=Stigler|first=Stephen M.|date=1990-02-01|title=The 1988 Neyman Memorial Lecture: A Galtonian Perspective on Shrinkage Estimators|journal=Statistical Science|volume=5|issue=1|doi=10.1214/ss/1177012274|issn=0883-4237|doi-access=free}}</ref> Under this interpretation, we aim to predict the population means using the [[Measurement error model|imperfectly measured sample means]]. The equation of the [[Ordinary least squares|OLS]] estimator in a hypothetical regression of the population means on the sample means gives an estimator of the form of either the James–Stein estimator (when we force the OLS intercept to equal 0) or of the Efron–Morris estimator (when we allow the intercept to vary).


== Improvements ==
Despite the intuition that the James–Stein estimator shrinks the maximum-likelihood estimate <math>{\mathbf y}</math> ''toward'' <math>\boldsymbol\nu</math>, the estimate actually moves ''away'' from <math>\boldsymbol\nu</math> for small values of <math>\|{\mathbf y} - {\boldsymbol\nu} \|,</math> as the multiplier on <math>{\mathbf y} - {\boldsymbol\nu}</math> is then negative. This can be easily remedied by replacing this multiplier by zero when it is negative. The resulting estimator is called the ''positive-part James–Stein estimator'' and is given by


:<math>
\widehat{\boldsymbol \theta}_{JS+} = \left( 1 - \frac{(m-2) \sigma^2}{\|{\mathbf y} - {\boldsymbol\nu}\|^2} \right)^+ ({\mathbf y} - {\boldsymbol\nu}) + {\boldsymbol\nu},
</math>

where <math>(\cdot)^+</math> denotes the positive part (the maximum of the argument and zero).

This estimator has a smaller risk than the basic James–Stein estimator. It follows that the basic James–Stein estimator is itself [[admissible decision rule|inadmissible]].<ref name="anderson-84">{{Citation | last = Anderson | first = T. W. | year = 1984 | title = An Introduction to Multivariate Statistical Analysis | edition = 2nd | location = New York | publisher = John Wiley & Sons}}</ref>

It turns out, however, that the positive-part estimator is also inadmissible.<ref name="lehmann-casella-98"/> This follows from a general result which requires admissible estimators to be smooth.
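In code, the positive-part modification amounts to clipping the shrinkage factor at zero; the sketch below is again purely illustrative, with a hypothetical function name.

<syntaxhighlight lang="python">
import numpy as np

def james_stein_positive_part(y, sigma2, nu):
    """Sketch of the positive-part James-Stein estimator: the shrinkage
    factor is clipped at zero, so the estimate never moves away from nu."""
    y, nu = np.asarray(y, dtype=float), np.asarray(nu, dtype=float)
    m = y.size
    factor = 1.0 - (m - 2) * sigma2 / np.sum((y - nu) ** 2)
    return nu + max(factor, 0.0) * (y - nu)
</syntaxhighlight>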
== Extensions ==
The James–Stein estimator may seem at first sight to be a result of some peculiarity of the problem setting. In fact, the estimator exemplifies a very wide-ranging effect; namely, the fact that the "ordinary" or least squares estimator is often [[admissible decision rule|inadmissible]] for simultaneous estimation of several parameters.{{Citation needed|date=February 2012}} This effect has been called [[Stein's phenomenon]], and has been demonstrated for several different problem settings, some of which are briefly outlined below.
* James and Stein demonstrated that the estimator presented above can still be used when the variance <math>\sigma^2</math> is unknown, by replacing it with the standard estimator of the variance, <math>\widehat{\sigma}^2 = \frac{1}{m}\sum ( y_i-\overline{y} )^2</math>. The dominance result still holds under the same condition, namely, <math>m > 2</math>.<ref name="james–stein-61"/>
* The results in this article are for the case when only a single observation vector '''y''' is available. For the more general case when <math>n</math> vectors are available, the results are similar:
:: <math>
\widehat{\boldsymbol \theta}_{JS} =
\left( 1 - \frac{(m-2) \frac{\sigma^2}{n}}{\|{\overline{\mathbf y}}\|^2} \right) {\overline{\mathbf y}},
</math>
:where <math>{\overline{\mathbf y}}</math> is the <math>m</math>-length average of the <math>n</math> observations, and, therefore, <math>{\overline{\mathbf y}}\sim N_m({\boldsymbol \theta}, \frac{\sigma^2}{n} I)</math>.


* The work of James and Stein has been extended to the case of a general measurement covariance matrix, i.e., where measurements may be statistically dependent and may have differing variances.<ref name="bock75"/> A similar dominating estimator can be constructed, with a suitably generalized dominance condition. This can be used to construct a [[linear regression]] technique which outperforms the standard application of the LS estimator.<ref name="bock75">{{Citation
| last = Bock | first = M. E.
| year = 1975
| title = Minimax estimators of the mean of a multivariate normal distribution
| journal = Annals of Statistics
| volume = 3 | issue = 1
| pages = 209–218
| doi = 10.1214/aos/1176343009
| mr = 0381064
| zbl = 0314.62005
}}</ref>
* Stein's result has been extended to a wide class of distributions and loss functions. However, this theory provides only an existence result, in that explicit dominating estimators were not actually exhibited.<ref name="brown-66">{{Citation | last = Brown | first = L. D. | year = 1966 | title = On the admissibility of invariant estimators of one or more location parameters | journal = Annals of Mathematical Statistics | volume = 37 | issue = 5 | pages = 1087–1136 | doi = 10.1214/aoms/1177699259 | mr = 0216647 | zbl = 0156.39401}}</ref> It is quite difficult to obtain explicit estimators improving upon the usual estimator without specific restrictions on the underlying distributions.<ref name="lehmann-casella-98"/>

== See also ==
* [[Hodges' estimator]]
* [[Shrinkage estimator]]
* [[Regular estimator]]
* [[KL divergence]]


== References ==
{{Reflist}}


== Further reading ==
* {{cite book |last1=Judge |first1=George G. |last2=Bock |first2=M. E. |title=The Statistical Implications of Pre-Test and Stein-Rule Estimators in Econometrics |location=New York |publisher=North Holland |year=1978 |isbn=0-7204-0729-X |pages=229–257 }}


{{DEFAULTSORT:James-Stein Estimator}}
