User:Slava3087

From Wikipedia, the free encyclopedia

In classical [[statistical decision theory]], where we are faced with the problem of estimating a deterministic parameter (vector) <math>\theta \in \Theta</math> from observations <math>x \in \mathcal{X}</math>, an estimator (estimation rule) <math>\delta^M \,\!</math> is called minimax if its maximal risk is minimal among all estimators of <math>\theta \,\!</math>. In a sense this means that <math>\delta^M \,\!</math> is an estimator which performs best in the worst possible case allowed in the problem.


==Problem Setup==

Consider the problem of estimating a deterministic (not [[Bayesian]]) parameter <math>\theta \in \Theta</math> from noisy or corrupt data <math>x \in \mathcal{X}</math> related through the conditional [[probability distribution]] <math>P(x|\theta)\,\!</math>. Our goal is to find a "good" estimator <math>\delta(x) \,\!</math> of the parameter <math>\theta \,\!</math>, one which minimizes some given [[risk function]] <math>R(\theta,\delta) \,\!</math>. Here the risk function is the [[expected value|expectation]] of some [[loss function]] <math>L(\theta,\delta) \,\!</math> with respect to <math>P(x|\theta)\,\!</math>. A popular example of a loss function is the squared error loss <math>L(\theta,\delta)= \|\theta-\delta\|^2 \,\!</math>, and the risk function for this loss is the [[mean squared error]] (MSE).
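
For instance, suppose a single observation <math>x \,\!</math> is drawn from a [[normal distribution]] with unknown mean <math>\theta \,\!</math> and known variance <math>\sigma^2 \,\!</math>, and consider the family of linear estimators <math>\delta_a(x)=ax \,\!</math>. Under the squared error loss, the risk (here the MSE) splits into a variance term and a squared bias term:
:<math>R(\theta,\delta_a)= E\left[(ax-\theta)^2\right]= a^2\sigma^2+(1-a)^2\theta^2. \,\!</math>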

Unfortunately, in general, the risk cannot be minimized directly, since it depends on the unknown parameter <math>\theta \,\!</math> itself (if we knew the actual value of <math>\theta \,\!</math>, we would not need to estimate it). Therefore, additional criteria for choosing an estimator that is optimal in some sense are required. One such criterion is the minimax criterion.
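
The Gaussian illustration above makes this concrete: the choice <math>a=1 \,\!</math> yields risk <math>\sigma^2 \,\!</math> for every <math>\theta \,\!</math>, while <math>a=0 \,\!</math> yields risk <math>\theta^2 \,\!</math>, which is smaller precisely when <math>|\theta|<\sigma \,\!</math>. Which estimator has the smaller risk therefore depends on the very parameter we are trying to estimate.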

==Definition==

An estimator <math>\delta^M \,\!</math> is called minimax with respect to a risk function <math>R(\theta,\delta) \,\!</math> if it achieves the smallest maximum risk among all estimators, meaning it satisfies

:<math>\sup_\theta R(\theta,\delta^M) = \inf_\delta \sup_\theta R(\theta,\delta). \,\!</math>
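
In the Gaussian illustration above, with <math>\theta \,\!</math> allowed to range over all real numbers, <math>\sup_\theta R(\theta,\delta_a)=\infty \,\!</math> for every <math>a\neq 1 \,\!</math>, whereas <math>\sup_\theta R(\theta,\delta_1)=\sigma^2 \,\!</math>. Among the linear estimators, <math>\delta_1(x)=x \,\!</math> therefore achieves the smallest maximum risk; in fact it is known to be minimax among all estimators of the normal mean under the squared error loss.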