Likelihood-ratio test
A likelihood-ratio test is a statistical test relying on a test statistic computed by taking the ratio of the maximum probability of the data under the constraint of the null hypothesis to the maximum probability with that constraint relaxed. If that ratio is Λ and the null hypothesis holds, then for commonly occurring families of probability distributions, −2 log Λ has a particularly handy asymptotic distribution. Many common test statistics, such as the Z-test, the F-test, and Pearson's χ² test, can be phrased as log-likelihood ratios or approximations thereof.
Many of these approximations were quite useful when computers did not exist, but now that taking a logarithm is really no more vexing than multiplying two numbers, other approximations may be more useful, especially in special cases where the approximations are suspect.
A statistical model is often a parametrized family of probability density functions or probability mass functions fθ(x). A null hypothesis is often stated by saying the parameter θ is in a specified subset Θ0 of the parameter space Θ. The likelihood function is L(θ) = L(θ | x) = fθ(x), a function of the parameter θ with x held fixed at the value that was actually observed, i.e., the data. The likelihood ratio is

:<math>\Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}.</math>
This is a function of the data x, and is therefore a statistic. The likelihood ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable (a Type I error consists of rejecting a null hypothesis that is true).
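To make the definition concrete, here is a minimal sketch in Python, not taken from the article, that computes Λ for a single coin: the made-up observation is 60 heads in 100 tosses, with Θ0 = {0.5} (a fair coin) and Θ = [0, 1].

 from math import comb

 def binom_pmf(k, n, p):
     # Probability of k heads in n tosses when heads has probability p.
     return comb(n, k) * p ** k * (1 - p) ** (n - k)

 k, n = 60, 100        # hypothetical data: 60 heads in 100 tosses
 p0 = 0.5              # the only point in Theta_0 (a fair coin)
 p_hat = k / n         # unconstrained MLE of p over Theta = [0, 1]

 lam = binom_pmf(k, n, p0) / binom_pmf(k, n, p_hat)
 print(lam)            # Lambda(x) is at most 1; small values discredit H0

Both suprema are attained here: the numerator at the single point of Θ0, the denominator at the maximum-likelihood estimate k/n, so no numerical search is needed in this one-parameter case.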
If the null hypothesis is true, then −2 log Λ will be asymptotically χ² distributed with degrees of freedom equal to the difference in dimensionality of Θ and Θ0.
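Continuing the sketch above (again with illustrative numbers, and assuming SciPy is available for the χ² distribution), this asymptotic result turns −2 log Λ into an approximate p-value; Θ is one-dimensional and Θ0 is a single point, so there is one degree of freedom.

 from math import comb, log
 from scipy.stats import chi2

 def binom_pmf(k, n, p):
     return comb(n, k) * p ** k * (1 - p) ** (n - k)

 k, n = 60, 100                 # hypothetical data, as above
 lam = binom_pmf(k, n, 0.5) / binom_pmf(k, n, k / n)

 stat = -2 * log(lam)           # about 4.03 for these counts
 p_value = chi2.sf(stat, df=1)  # upper tail of chi-squared(1), about 0.045
 print(stat, p_value)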
For instance, in the case of Pearson's test, we might try to compare two coins to determine whether they have the same probability of coming up heads. Our observation can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table will be the number of times the coin for that row came up heads or tails. The contents of this table are our observation X.
             Heads   Tails
     Coin 1  k1H     k1T
     Coin 2  k2H     k2T
Here ω consists of the parameters p1H, p1T, p2H, and p2T, where pij is the probability that coin i comes up with result j. The hypothesis space H is defined by the usual constraints on a distribution: pij ≥ 0, pij ≤ 1, and piH + piT = 1. The null hypothesis H0 is the sub-space where p1j = p2j. In all of these constraints, i = 1, 2 and j = H, T.
The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the log-likelihood ratio to have the desired nice distribution. Since the constraint causes the two-dimensional H to be reduced to the one-dimensional H0, the asymptotic distribution for the test will be χ²(1), the χ² distribution with one degree of freedom.
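A hedged sketch of the two-coin test, with made-up counts (30 heads in 50 tosses for coin 1, 42 in 60 for coin 2, numbers chosen purely for illustration): the numerator of Λ maximizes the likelihood with both coins forced to share one heads-probability, while the denominator lets each coin have its own.

 from math import log
 from scipy.stats import chi2

 # Hypothetical contingency table: rows are coins, columns heads/tails.
 k1H, k1T = 30, 20
 k2H, k2T = 42, 18

 # Unconstrained MLEs: each coin gets its own heads-probability.
 p1 = k1H / (k1H + k1T)
 p2 = k2H / (k2H + k2T)

 # MLE under the null p1j = p2j: pool the two coins.
 pooled = (k1H + k2H) / (k1H + k1T + k2H + k2T)

 def loglik(p1H, p2H):
     # Log-likelihood of the table; the binomial coefficients cancel
     # in the likelihood ratio, so they are omitted.
     return (k1H * log(p1H) + k1T * log(1 - p1H)
             + k2H * log(p2H) + k2T * log(1 - p2H))

 stat = -2 * (loglik(pooled, pooled) - loglik(p1, p2))  # -2 log Lambda
 print(stat, chi2.sf(stat, df=1))  # compare against chi-squared(1)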
For the general contingency table, we can write the log-likelihood ratio test as
:<math>-2 \log \Lambda = 2\sum_{i,j} k_{ij} \log \frac{p_{ij}}{m_{ij}},</math>

where pij is the maximum-likelihood estimate of the probability of cell (i, j) without constraint and mij is the maximum-likelihood estimate of the same probability under the null hypothesis.
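As one possible reading of this formula in code (a hypothetical lr_statistic helper, reusing the made-up coin counts; pij is taken as the per-row MLE and mij as the pooled column proportion, the MLE under the null that every row shares the same column distribution):

 from math import log
 from scipy.stats import chi2

 def lr_statistic(table):
     # -2 log Lambda for the null that all rows of an R x C table
     # share one common column distribution.
     N = sum(sum(row) for row in table)
     row_totals = [sum(row) for row in table]
     col_totals = [sum(col) for col in zip(*table)]
     stat = 0.0
     for i, row in enumerate(table):
         for j, k_ij in enumerate(row):
             if k_ij:                         # treat 0 * log 0 as 0
                 p_ij = k_ij / row_totals[i]  # unconstrained MLE
                 m_ij = col_totals[j] / N     # MLE under the null
                 stat += 2 * k_ij * log(p_ij / m_ij)
     return stat

 table = [[30, 20], [42, 18]]                 # the hypothetical coin counts
 df = (len(table) - 1) * (len(table[0]) - 1)  # (R - 1)(C - 1) degrees of freedom
 stat = lr_statistic(table)
 print(stat, chi2.sf(stat, df=df))

For a 2 × 2 table this reproduces the two-coin statistic computed above.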