In classical statistical [[decision theory]], where we are faced with the problem of estimating a ''deterministic'' parameter (vector) <math>{\bf x}</math> from observations <math>y</math>, an [[estimator]] (estimation rule) is called '''minimax''' if its maximal [[risk]] is minimal among all estimators of <math>{\bf x}</math>. In a sense, this is the estimator which performs best in the worst possible case allowed in the problem.
== Problem Definition ==
Consider the problem of estimating a deterministic (not Bayesian) parameter <math>\theta</math> belonging to some set <math>\Theta</math>. Suppose that <math>\delta(y)</math> is an estimator used to estimate <math>\theta</math> from the observations <math>y</math>. We also assume a risk function <math>R(\theta,\delta)</math>, usually specified as the integral of a loss function. In this framework, <math>\delta^M</math> is called '''minimax''' (or minmax) with respect to the risk function if it satisfies

:<math>\sup_\theta R(\theta,\delta^M) = \inf_\delta \sup_\theta R(\theta,\delta).</math>
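To make the definition concrete, here is a minimal numerical sketch (not part of the original text) for the classical binomial problem: estimating a success probability <math>p</math> from <math>n</math> Bernoulli trials under squared-error loss. The sample mean has worst-case risk <math>1/(4n)</math>, while the classical estimator <math>(S + \sqrt{n}/2)/(n + \sqrt{n})</math>, known to be minimax for this problem, has constant risk <math>1/\bigl(4(\sqrt{n}+1)^2\bigr)</math>. The values <code>n = 25</code> and the grid size are illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

n = 25  # number of Bernoulli trials (illustrative choice)
p_grid = np.linspace(0.0, 1.0, 1001)  # candidate parameter values

# Risk (MSE) of the sample mean S/n: unbiased, variance p(1-p)/n.
risk_mean = p_grid * (1 - p_grid) / n

# Risk of the minimax rule delta(S) = (S + sqrt(n)/2) / (n + sqrt(n)):
# its bias and variance combine into the constant 1 / (4 (sqrt(n)+1)^2).
c = np.sqrt(n)
scale = 1.0 / (n + c)
var = (scale ** 2) * n * p_grid * (1 - p_grid)       # Var of delta, S ~ Binomial(n, p)
bias = (n * p_grid + c / 2) * scale - p_grid          # E[delta] - p
risk_minimax = var + bias ** 2

print("sup risk, sample mean :", risk_mean.max())     # ~ 1/(4n) = 0.01
print("sup risk, minimax rule:", risk_minimax.max())  # ~ 1/(4(sqrt(n)+1)^2)
print("theory, 1/(4n)        :", 1 / (4 * n))
print("theory, constant risk :", 1 / (4 * (c + 1) ** 2))
</syntaxhighlight>

The worst case for the sample mean occurs at <math>p = 1/2</math>, whereas the minimax rule trades a little bias for a risk that is flat in <math>p</math>, which is exactly the worst-case behavior the definition rewards.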
An alternative criterion in the decision-theoretic framework is the [[Bayes estimator]] in the presence of a prior distribution <math>\Pi</math>. An estimator <math>\delta</math> is called Bayes if it minimizes the ''average'' risk

:<math>\int_\Theta R(\theta,\delta) \, d\Pi(\theta).</math>
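As a further illustrative sketch (again outside the original text), for the binomial problem above with a Beta(<math>\alpha</math>, <math>\beta</math>) prior on <math>p</math>, the Bayes estimator under squared-error loss is the posterior mean <math>(S + \alpha)/(n + \alpha + \beta)</math>. Choosing the "least favorable" prior <math>\alpha = \beta = \sqrt{n}/2</math> recovers exactly the minimax rule from the previous example; this is the standard route for proving minimaxity, since a Bayes estimator with constant risk is minimax. The values <code>n = 25</code> and <code>S = 13</code> below are illustrative.

<syntaxhighlight lang="python">
import numpy as np

n, S = 25, 13   # trials and observed successes (illustrative values)
c = np.sqrt(n)

def bayes_estimate(S, n, alpha, beta):
    """Posterior mean of p under a Beta(alpha, beta) prior and a
    Binomial(n, p) likelihood; the Bayes rule for squared-error loss."""
    return (S + alpha) / (n + alpha + beta)

# The least favorable prior Beta(sqrt(n)/2, sqrt(n)/2) reproduces the minimax rule.
print(bayes_estimate(S, n, c / 2, c / 2))   # Bayes estimate under this prior
print((S + c / 2) / (n + c))                # minimax estimator, same value
</syntaxhighlight>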