

In classical statistical decision theory, where we are faced with the problem of estimating a deterministic parameter (vector) $\theta \in \Theta$ from observations $x \in \mathcal{X}$, an estimator (estimation rule) $\delta^M$ is called minimax if its maximal risk is minimal among all estimators of $\theta$. In a sense this means that $\delta^M$ is an estimator which performs best in the worst possible case allowed in the problem.


Problem Setup

Consider the problem of estimating a deterministic (not Bayesian) parameter $\theta \in \Theta$ from noisy or corrupt data $x \in \mathcal{X}$, related through the conditional probability distribution $P(x\mid\theta)$. Our goal is to find a "good" estimator $\delta(x)$ for estimating the parameter $\theta$, one which minimizes some given risk function $R(\theta,\delta)$. Here the risk function is the expectation of some loss function $L(\theta,\delta)$ with respect to $P(x\mid\theta)$. A popular example of a loss function is the squared error loss $L(\theta,\delta) = \|\theta-\delta\|^2$, and the risk function for this loss is the mean squared error (MSE), $R(\theta,\delta) = E\{\|\theta-\delta\|^2\}$.

Unfortunately, in general the risk cannot be minimized uniformly, since it depends on the unknown parameter $\theta$ itself (if we knew the actual value of $\theta$, we would not need to estimate it). Therefore, additional criteria for finding an optimal estimator in some sense are required. One such criterion is the minimax criterion.
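To make the dependence of the risk on the unknown parameter concrete, the short Python sketch below (an added illustration, not part of the original revision; the estimators and constants are arbitrary choices) estimates the MSE risk of two simple estimators of a scalar Gaussian mean by Monte Carlo: the sample mean and a shrunken sample mean. Neither has uniformly smaller risk, which is why an extra criterion such as minimaxity is needed.

```python
import numpy as np

# Setup: x_1,...,x_n ~ N(theta, sigma^2) with theta unknown. We estimate the MSE risk
# R(theta, delta) = E[(delta(x) - theta)^2] of two estimators by Monte Carlo:
#   delta_1(x) = sample mean        (risk sigma^2/n, the same for every theta)
#   delta_2(x) = 0.5 * sample mean  (risk sigma^2/(4n) + theta^2/4, depends on theta)
rng = np.random.default_rng(0)
n, sigma, trials = 10, 1.0, 20000

def mse_risk(estimator, theta):
    """Monte Carlo estimate of the risk E[(estimator(x) - theta)^2]."""
    x = rng.normal(theta, sigma, size=(trials, n))
    return np.mean((estimator(x) - theta) ** 2)

sample_mean = lambda x: x.mean(axis=1)
shrunk_mean = lambda x: 0.5 * x.mean(axis=1)

for theta in [0.0, 0.5, 2.0]:
    print(f"theta = {theta:3.1f}   risk(sample mean) ~ {mse_risk(sample_mean, theta):.3f}"
          f"   risk(0.5 * sample mean) ~ {mse_risk(shrunk_mean, theta):.3f}")

# Near theta = 0 the shrunken mean has smaller risk; for large theta the sample mean
# wins, so no single estimator minimizes the risk for every value of theta.
```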

Definition

Definition: An estimator $\delta^M : \mathcal{X} \rightarrow \Theta$ is called minimax with respect to a risk function $R(\theta,\delta)$ if it achieves the smallest maximum risk among all estimators, meaning it satisfies

$$\sup_{\theta \in \Theta} R(\theta,\delta^M) = \inf_{\delta} \sup_{\theta \in \Theta} R(\theta,\delta) \,.$$
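As a small worked illustration of this definition (an added example, not part of the original revision): for a scalar Gaussian mean estimated from $n$ samples of variance $\sigma^2$ under squared error loss, compare the sample mean $\bar{x}$ with the shrunken rule $\tfrac{1}{2}\bar{x}$. Their worst-case risks are

$$\sup_{\theta} R(\theta,\bar{x}) = \frac{\sigma^2}{n}, \qquad \sup_{\theta} R\left(\theta,\tfrac{1}{2}\bar{x}\right) = \sup_{\theta}\left(\frac{\sigma^2}{4n} + \frac{\theta^2}{4}\right) = \infty \,,$$

so between these two rules the sample mean is the better one in the minimax sense.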


Least Favorable Distribution

Logically, an estimator is minimax when it is the best in the worst case. Continuing this logic, a minimax estimator should be a Bayes estimator with respect to a least favorable prior distribution of $\theta$. To demonstrate this notion, denote the average risk of the Bayes estimator $\delta_{\pi}$ with respect to a prior distribution $\pi$ as

$$r_{\pi} = \int R(\theta,\delta_{\pi}) \, d\pi(\theta) \,.$$

Definition: A prior distribution $\pi$ is called least favorable if for any other distribution $\pi'$ the average risk satisfies $r_{\pi} \geq r_{\pi'}$.

Theorem 1: If $r_{\pi} = \sup_{\theta} R(\theta,\delta_{\pi})$, then:

1) $\delta_{\pi}$ is minimax.

2) If $\delta_{\pi}$ is a unique Bayes estimator, it is also the unique minimax estimator.

3) $\pi$ is least favorable.
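A one-line reasoning sketch for part 1 (standard, though not spelled out in the original revision): for any estimator $\delta$, the worst-case risk is at least the average risk under $\pi$, which in turn is at least the Bayes risk, so

$$\sup_{\theta} R(\theta,\delta) \geq \int R(\theta,\delta)\, d\pi(\theta) \geq r_{\pi} = \sup_{\theta} R(\theta,\delta_{\pi}) \,,$$

which shows that no estimator achieves a smaller worst-case risk than $\delta_{\pi}$.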

Corollary: If a Bayes estimator has constant risk, it is minimax. Note that this is not a necessary condition.

Example: Consider the problem of estimating the mean of a $p$-dimensional Gaussian white random vector, $x \sim N(\theta, \sigma^2 I_p)$. The maximum likelihood (ML) estimator for $\theta$ in this case is simply $\delta^{ML} = x$, and its risk is

$$R(\theta,\delta^{ML}) = E\{\|\delta^{ML}-\theta\|^2\} = \sum_{i=1}^{p} E\{(x_i-\theta_i)^2\} = p\sigma^2 \,.$$

So the risk is constant, and the ML estimator is indeed minimax (this is justified below using a sequence of priors). Nonetheless, minimaxity does not always imply admissibility. In fact, in this example the ML estimator is known to be inadmissible (not admissible) whenever $p > 2$. The famous James–Stein estimator dominates the ML estimator whenever $p > 2$: although both estimators have the same risk $p\sigma^2$ as $\|\theta\| \rightarrow \infty$, and both are minimax, the James–Stein estimator has smaller risk for any finite $\|\theta\|$, as the simulation sketch below illustrates.
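The following Python sketch (an added illustration, not from the original revision; the values of $p$, $\sigma$ and the grid of $\|\theta\|$ values are arbitrary choices) approximates the two risk curves by Monte Carlo, using the standard James–Stein shrinkage rule for known noise variance, $\delta^{JS}(x) = \left(1 - \frac{(p-2)\sigma^2}{\|x\|^2}\right)x$.

```python
import numpy as np

# Monte Carlo comparison of the ML estimator delta(x) = x and the James-Stein
# estimator delta_JS(x) = (1 - (p-2)*sigma^2/||x||^2) x for x ~ N(theta, sigma^2 I_p).
rng = np.random.default_rng(1)
p, sigma, trials = 10, 1.0, 50000

def risks(theta_norm):
    # Both risks depend on theta only through its norm, so place it on the first axis.
    theta = np.zeros(p)
    theta[0] = theta_norm
    x = rng.normal(theta, sigma, size=(trials, p))
    shrink = 1.0 - (p - 2) * sigma**2 / np.sum(x**2, axis=1, keepdims=True)
    js = shrink * x
    risk_ml = np.mean(np.sum((x - theta) ** 2, axis=1))
    risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))
    return risk_ml, risk_js

for t in [0.0, 2.0, 5.0, 10.0]:
    r_ml, r_js = risks(t)
    print(f"||theta|| = {t:5.1f}   ML risk ~ {r_ml:6.2f}   James-Stein risk ~ {r_js:6.2f}")

# The ML risk stays near p*sigma^2 = 10 for every theta, while the James-Stein
# risk is strictly smaller for finite ||theta|| and approaches 10 as ||theta|| grows.
```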

The reason the ML estimator is minimax yet inadmissible is that it is not an actual Bayes estimator, but rather the limit of a sequence of Bayes estimators.

Definition: A sequence of prior distributions $\pi_n$ is called least favorable if for any other distribution $\pi'$,

$$\lim_{n \rightarrow \infty} r_{\pi_n} \geq r_{\pi'} \,.$$

Theorem 2: If a sequence of priors $\pi_n$ and an estimator $\delta$ satisfy $\sup_{\theta} R(\theta,\delta) = \lim_{n \rightarrow \infty} r_{\pi_n}$, then:

1) $\delta$ is minimax.

2) The sequence $\pi_n$ is least favorable.
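To connect this back to the Gaussian example (a standard argument sketched here; the specific prior sequence is an assumed illustration, not stated in the original revision): take the priors $\pi_n = N(0, n I_p)$. The corresponding Bayes estimators and their average risks are

$$\delta_{\pi_n}(x) = \frac{n}{n+\sigma^2}\, x, \qquad r_{\pi_n} = \frac{p\, \sigma^2 n}{n+\sigma^2} \,,$$

so that $\lim_{n \rightarrow \infty} r_{\pi_n} = p\sigma^2 = \sup_{\theta} R(\theta,\delta^{ML})$. By Theorem 2, the ML estimator $\delta^{ML}(x) = x$ is therefore minimax, even though it is only a limit of Bayes estimators and not a Bayes estimator itself.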