Confidence distribution
In [[statistical inference]], the concept of a '''confidence distribution''' ('''CD''') has often been loosely referred to as a distribution function on the parameter space that can represent confidence intervals of all levels for a parameter of interest. Historically, it has typically been constructed by inverting the upper limits of lower sided confidence intervals of all levels, and it was also commonly associated with a fiducial<ref name = "Fisher1930"/> interpretation ([[fiducial distribution]]), although it is a purely frequentist concept.<ref name="cox1958" /> A confidence distribution is NOT a probability distribution function of the parameter of interest, but may still be a function useful for making inferences.<ref name="Xie2013r" />
In recent years, there has been a surge of renewed interest in confidence distributions.<ref name="Xie2013r" /> In the more recent developments, the concept of confidence distribution has emerged as a purely [[frequentist inference|frequentist]] concept, without any fiducial interpretation or reasoning. Conceptually, a confidence distribution is no different from a [[point estimator]] or an interval estimator ([[confidence interval]]), but it uses a sample-dependent distribution function on the parameter space (instead of a point or an interval) to estimate the parameter of interest.
A simple example of a confidence distribution that has been broadly used in statistical practice is a [[Bootstrapping (statistics)|bootstrap]] distribution.<ref name="Efron1998" /> The development and interpretation of a bootstrap distribution does not involve any fiducial reasoning; the same is true for the concept of a confidence distribution. But the notion of confidence distribution is much broader than that of a bootstrap distribution. In particular, recent research suggests that it encompasses and unifies a wide range of examples, from regular parametric cases (including most examples of the classical development of Fisher's fiducial distribution) to bootstrap distributions, [[p-value]] functions,<ref name = "Fraser1991"/> normalized [[likelihood function]]s and, in some cases, Bayesian [[prior distribution|prior]]s and Bayesian [[posterior distribution|posteriors]].<ref name="Xie2011" />
Just as a Bayesian posterior distribution contains a wealth of information for any type of [[Bayesian inference]], a confidence distribution contains a wealth of information for constructing almost all types of frequentist inferences, including [[point estimate]]s, [[confidence interval]]s, critical values, [[statistical power]] and p-values,<ref>{{Cite journal|last=Fraser|first=D. A. S.|date=2019-03-29|title=The p-value Function and Statistical Inference|journal=The American Statistician|volume=73|issue=sup1|pages=135–147|doi=10.1080/00031305.2018.1556735|issn=0003-1305|doi-access=free}}</ref> among others. Some recent developments have highlighted the promising potential of the CD concept as an effective inferential tool.<ref name="Xie2013r" />
== The history of CD concept ==
[[Jerzy Neyman|Neyman]] (1937)<ref name="Neyman1937" /> introduced the idea of "confidence" in his seminal paper on confidence intervals which clarified the frequentist repetition property. According to Fraser,<ref name = "Fraser2011"/> the seed (idea) of confidence distribution can even be traced back to Bayes (1763)<ref name="Bayes1973" /> and Fisher (1930).<ref name="Fisher1930" /> The phrase itself, however, seems to have been first used in Cox (1958).<ref>{{Cite journal|last=Cox|first=D. R.|date=June 1958|title=Some Problems Connected with Statistical Inference|url=http://projecteuclid.org/euclid.aoms/1177706618|journal=The Annals of Mathematical Statistics|language=en|volume=29|issue=2|pages=357–372|doi=10.1214/aoms/1177706618|issn=0003-4851|doi-access=free}}</ref> Some researchers view the confidence distribution as "the Neymanian interpretation of Fisher's fiducial distributions",<ref name="Schweder2002" /> which was "furiously disputed by Fisher".<ref name="Zabell1992" /> It is also believed that these "unproductive disputes" and Fisher's "stubborn insistence"<ref name="Zabell1992" /> might be the reason that the concept of confidence distribution has been long misconstrued as a fiducial concept and not been fully developed under the frequentist framework.<ref name="Xie2011" /><ref name="Singh2011" /> Indeed, the confidence distribution is a purely frequentist concept with a purely frequentist interpretation, although it also has ties to Bayesian and fiducial inference concepts.
== Definition ==
Efron stated that this distribution "assigns probability 0.05 to ''θ'' lying between the upper endpoints of the 0.90 and 0.95 confidence interval, ''etc''." and "it has powerful intuitive appeal".<ref name="Efron1993" />
In the classical literature,<ref name="Xie2013r" /> the confidence distribution function is interpreted as a distribution function of the parameter ''θ'', which is impossible unless fiducial reasoning is involved since, in a frequentist setting, the parameters are fixed and nonrandom.
To interpret the CD function entirely from a frequentist viewpoint, and not as a distribution function of a (fixed/nonrandom) parameter, is one of the major departures of the recent development relative to the classical approach. The benefit of treating confidence distributions as a purely frequentist concept (similar to a point estimator) is that they are then free from those restrictive, if not controversial, constraints set forth by Fisher on fiducial distributions.<ref name = "Xie2011"/><ref name = "Singh2011" />
=== The modern definition ===
Unlike classical fiducial inference, more than one confidence distribution may be available to estimate a parameter under any specific setting. Also, unlike classical fiducial inference, optimality is not a part of the requirement. Depending on the setting and the criterion used, sometimes there is a unique "best" (in terms of optimality) confidence distribution. But sometimes there is no optimal confidence distribution available or, in some extreme cases, we may not even be able to find a meaningful confidence distribution. This is not different from the practice of point estimation.
=== A definition with measurable spaces ===
A confidence distribution<ref>{{Cite journal|last=Taraldsen|first=Gunnar|date=2021|title=Joint Confidence Distributions|url=http://rgdoi.net/10.13140/RG.2.2.33079.85920|language=en|doi=10.13140/RG.2.2.33079.85920}}</ref> <math>C</math> for a [[Statistical parameter|parameter]] <math>\gamma</math> in a [[measurable space]] is a distribution [[estimator]] with <math>C(A_p) = p</math> for a family of [[confidence region]]s <math>A_p</math> for <math>\gamma</math> with level <math>p</math> for all levels <math>0 < p < 1</math>. The family of confidence regions is not unique.<ref name="Liu 1–19">{{Cite journal|last1=Liu|first1=Dungang|last2=Liu|first2=Regina Y.|last3=Xie|first3=Min-ge|date=2021-04-30|title=Nonparametric Fusion Learning for Multiparameters: Synthesize Inferences From Diverse Sources Using Data Depth and Confidence Distribution|url=https://www.tandfonline.com/doi/full/10.1080/01621459.2021.1902817|journal=Journal of the American Statistical Association|volume=117 |issue=540 |language=en|pages=2086–2104|doi=10.1080/01621459.2021.1902817|s2cid=233657455 |issn=0162-1459}}</ref> If <math>A_p</math> only exists for <math>p \in I \subset (0,1)</math>, then <math>C</math> is a confidence distribution with level set <math>I</math>. Both <math>C</math> and all <math>A_p</math> are measurable functions of the data. This implies that <math>C</math> is a [[random measure]] and <math>A_p</math> is a [[Random compact set|random set]]. If the defining requirement <math>P(\gamma \in A_p) \ge p</math> holds with equality, then the confidence distribution is by definition exact. If, additionally, <math>\gamma</math> is a real parameter, then the measure theoretic definition coincides with the above classical definition.
== Examples ==

=== Example 1: Normal mean and variance ===
Suppose a [[normal distribution|normal]] sample ''X''<sub>''i''</sub> ~ ''N''(''μ'', ''σ''<sup>2</sup>), ''i'' = 1, 2, ..., ''n'' is given.
'''(1) Variance ''σ''<sup>2</sup> is known'''
Let ''Φ'' be the cumulative distribution function of the standard normal distribution, and <math> F_{t_{n-1}} </math> the cumulative distribution function of the Student <math> t_{n-1} </math> distribution. Both the functions <math>H_\mathit{\Phi}(\mu)</math> and <math>H_t(\mu)</math> given by
:<math>
H_\mathit{\Phi}(\mu) = \mathit{\Phi}\left(\frac{\sqrt{n}(\mu-\bar{X})}{\sigma}\right) \quad \text{and} \quad H_t(\mu) = F_{t_{n-1}}\left(\frac{\sqrt{n}(\mu-\bar{X})}{s}\right)
</math>
satisfy the two requirements in the CD definition, and they are confidence distribution functions for ''μ''.<ref name="Xie2013r" /> Furthermore,
:<math> H_A(\mu) = \mathit{\Phi}\left(\frac{\sqrt{n}(\mu-\bar{X})}{s}\right)</math>
is an asymptotic confidence distribution (aCD) for ''μ'', where <math>\bar{X}</math> denotes the sample mean and ''s''<sup>2</sup> the sample variance.
'''(2) Variance ''σ''<sup>2</sup> is unknown'''
For the parameter ''μ'', since <math>H_\mathit{\Phi}(\mu)</math> involves the unknown parameter ''σ'', it violates the two requirements in the CD definition and is no longer a "distribution estimator" or a confidence distribution for ''μ''.<ref name="Xie2013r" /> However, <math>H_{t}(\mu)</math> is still a CD for ''μ'' and <math>H_{A}(\mu)</math> is an aCD for ''μ''.
For the parameter ''σ''<sup>2</sup>, the sample-dependent cumulative distribution function
:<math> H_{\chi^2}(\sigma^2) = 1 - F_{\chi^2_{n-1}}\left(\frac{(n-1)s^2}{\sigma^2}\right)</math>

is a confidence distribution for ''σ''<sup>2</sup>, where <math> F_{\chi^2_{n-1}} </math> is the cumulative distribution function of the [[chi-squared distribution]] with ''n'' − 1 degrees of freedom. In the case when the variance ''σ''<sup>2</sup> is known, <math> H_{\mathit{\Phi}}(\mu) = \mathit{\Phi}\left(\frac{\sqrt{n}(\mu-\bar{X})}{\sigma}\right) </math> is optimal in terms of producing the shortest confidence intervals at any given level. In the case when the variance ''σ''<sup>2</sup> is unknown, <math> H_{t}(\mu) = F_{t_{n-1}}\left(\frac{\sqrt{n}(\mu-\bar{X})}{s}\right) </math> is an optimal confidence distribution for ''μ''.
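Example 1 can be illustrated numerically. The following Python sketch (illustrative only; the simulated sample and all numeric values are made up, and the helper names are hypothetical) builds the known-variance confidence distribution H_Φ(μ) from a sample and reads a central 95% confidence interval off its quantiles, which reproduces the usual z-interval X̄ ± z<sub>0.975</sub>σ/√n:

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_mean_cd(sample, sigma):
    """H_Phi(mu) = Phi(sqrt(n) * (mu - xbar) / sigma), the CD for a
    normal mean when sigma is known."""
    n = len(sample)
    xbar = sum(sample) / n
    return lambda mu: phi(math.sqrt(n) * (mu - xbar) / sigma)

def cd_quantile(H, beta, lo=-1e6, hi=1e6, iters=200):
    """Invert the monotone CD function H by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if H(mid) < beta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

random.seed(0)
sigma = 2.0
sample = [random.gauss(5.0, sigma) for _ in range(50)]
H = normal_mean_cd(sample, sigma)

# A central 95% confidence interval is the pair of CD quantiles:
ci = (cd_quantile(H, 0.025), cd_quantile(H, 0.975))

# Compare with the textbook z-interval xbar +/- z_{0.975} * sigma / sqrt(n):
xbar = sum(sample) / len(sample)
half_width = 1.959963984540054 * sigma / math.sqrt(len(sample))
print(ci, (xbar - half_width, xbar + half_width))
```

The same quantile-inversion step works for any strictly monotone CD function; only the closure passed to `cd_quantile` changes.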
=== Example 2: Bivariate normal correlation ===
Let ''ρ'' denote the [[Pearson product-moment correlation coefficient|correlation coefficient]] of a [[multivariate normal distribution|bivariate normal]] population. It is well known that Fisher's ''z'' defined by the [[Fisher transformation]]:
:<math>z = {1 \over 2}\ln{1+r \over 1-r}</math>

has the limiting distribution <math>N\left({1 \over 2}\ln{{1+\rho}\over{1-\rho}}, {1\over n-3}\right)</math>, where ''r'' is the sample correlation coefficient. Thus, the function
:<math>H_n(\rho) = 1 - \mathit{\Phi}\left(\sqrt{n-3} \left({1 \over 2}\ln{1+r \over 1-r} -{1 \over 2}\ln{{1+\rho}\over{1-\rho}} \right)\right)</math>
is an asymptotic confidence distribution for ''ρ''.<ref name="Singh2007" />
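As a sketch of how H<sub>''n''</sub>(''ρ'') can be used in practice (illustrative code, not taken from the cited sources; the values of r and n are made up), the following Python snippet implements the asymptotic CD above and inverts its quantiles, recovering the familiar Fisher-z confidence interval for ''ρ'':

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def correlation_cd(r, n):
    """Asymptotic CD based on Fisher's z:
    H_n(rho) = 1 - Phi(sqrt(n - 3) * (atanh(r) - atanh(rho)))."""
    zr = math.atanh(r)  # (1/2) * ln((1 + r) / (1 - r))
    return lambda rho: 1.0 - phi(math.sqrt(n - 3) * (zr - math.atanh(rho)))

def cd_quantile(H, beta, lo=-0.9999999, hi=0.9999999, iters=100):
    """Invert the monotone CD function by bisection on (-1, 1)."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if H(mid) < beta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

r, n = 0.6, 50  # made-up sample correlation and sample size
H = correlation_cd(r, n)
ci = (cd_quantile(H, 0.025), cd_quantile(H, 0.975))

# The same interval via the classical Fisher-z formula:
z975 = 1.959963984540054
fisher_ci = (math.tanh(math.atanh(r) - z975 / math.sqrt(n - 3)),
             math.tanh(math.atanh(r) + z975 / math.sqrt(n - 3)))
print(ci, fisher_ci)
```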
An exact confidence density for ''ρ'' is<ref>{{Cite journal|last=Taraldsen|first=Gunnar|date=2021|title=The Confidence Density for Correlation|journal=Sankhya A|volume=85 |pages=600–616 |language=en|doi=10.1007/s13171-021-00267-y|s2cid=244594067 |issn=0976-8378|doi-access=free}}</ref><ref>{{Cite journal|last=Taraldsen|first=Gunnar|date=2020|title=Confidence in Correlation|url=http://rgdoi.net/10.13140/RG.2.2.23673.49769|language=en|doi=10.13140/RG.2.2.23673.49769}}</ref>
<math>\pi (\rho | r) =
\frac{\nu (\nu - 1)\Gamma(\nu-1)}{\sqrt{2\pi}\Gamma(\nu + \frac{1}{2})}
(1 - r^2)^{\frac{\nu - 1}{2}} \cdot
(1 - \rho^2)^{\frac{\nu - 2}{2}} \cdot
(1 - r \rho )^{\frac{1-2\nu}{2}} F\left(\frac{3}{2},-\frac{1}{2}; \nu + \frac{1}{2}; \frac{1 + r \rho}{2}\right)</math>
where <math>F</math> is the Gaussian hypergeometric function and <math>\nu = n-1 > 1</math>. This is also the posterior density of a Bayes matching prior for the five parameters in the binormal distribution.<ref>{{Cite journal|last1=Berger|first1=James O.|last2=Sun|first2=Dongchu|date=2008-04-01|title=Objective priors for the bivariate normal model|journal=The Annals of Statistics|volume=36|issue=2|doi=10.1214/07-AOS501|s2cid=14703802 |issn=0090-5364|doi-access=free|arxiv=0804.0987}}</ref>
The very last formula in the classical book by [[Ronald Fisher|Fisher]] gives
<math>\pi (\rho | r) =
\frac{(1 - r^2)^{\frac{\nu - 1}{2}} \cdot (1 - \rho^2)^{\frac{\nu - 2}{2}}}{\pi (\nu - 2)!}
\partial_{\rho r}^{\nu - 2}
\left\{ \frac{\theta - \frac{1}{2}\sin 2\theta}{\sin^3 \theta} \right\}</math>
where <math> \cos \theta = -\rho r</math> and <math>0 < \theta < \pi</math>. This formula was derived by [[C. R. Rao]].<ref>{{Cite book|last=Fisher|first=Ronald Aylmer, Sir|url=https://www.worldcat.org/oclc/785822|title=Statistical methods and scientific inference|date=1973|publisher=Hafner Press|isbn=0-02-844740-9|edition=[3d ed., rev. and enl.]|location=New York|oclc=785822}}</ref>
=== Example 3: Binormal mean ===
Let data be generated by <math>Y = \gamma + U</math> where <math>\gamma</math> is an unknown vector in the [[Plane (geometry)|plane]] and <math>U</math> has a [[Multivariate normal distribution|binormal]] and known distribution in the plane. The distribution of <math>\Gamma^y = y - U</math> defines a confidence distribution for <math>\gamma</math>. The confidence regions <math>A_p</math> can be chosen as the interior of [[ellipse]]s centered at <math>\gamma</math> and axes given by the eigenvectors of the [[Covariance matrix|covariance]] matrix of <math>\Gamma^y</math>. The confidence distribution is in this case binormal with mean <math>\gamma</math>, and the confidence regions can be chosen in many other ways.<ref name="Liu 1–19"/> The confidence distribution coincides in this case with the Bayesian posterior using the right Haar prior.<ref>{{Cite journal|last1=Eaton|first1=Morris L.|last2=Sudderth|first2=William D.|date=2012|title=Invariance, model matching and probability matching|url=https://www.jstor.org/stable/42003718|journal=Sankhyā: The Indian Journal of Statistics, Series A (2008-)|volume=74|issue=2|pages=170–193|doi=10.1007/s13171-012-0018-4 |jstor=42003718 |s2cid=120705955 |issn=0976-836X}}</ref> The argument generalizes to the case of an unknown mean <math>\gamma</math> in an infinite-dimensional [[Hilbert space]], but in this case the confidence distribution is not a Bayesian posterior.<ref name="Taraldsen">{{Cite journal|last1=Taraldsen|first1=Gunnar|last2=Lindqvist|first2=Bo Henry|date=2013-02-01|title=Fiducial theory and optimal inference|journal=The Annals of Statistics|volume=41|issue=1|doi=10.1214/13-AOS1083|s2cid=88520957 |issn=0090-5364|doi-access=free|arxiv=1301.1717}}</ref>
== Using confidence distributions for inference ==
=== Confidence interval ===
[[File:CDinference1.png|right|thumb|400px]]
From the CD definition, it is evident that the intervals <math>(-\infty, H_n^{-1}(1-\alpha)]</math>, <math>[H_n^{-1}(\alpha), \infty)</math> and <math>[H_n^{-1}(\alpha/2), H_n^{-1}(1-\alpha/2)]</math> provide 100(1 − ''α'')%-level confidence intervals of different kinds, for ''θ'', for any ''α'' ∈ (0, 1). Also <math>[H_n^{-1}(\alpha_1), H_n^{-1}(1-\alpha_2)]</math> is a level 100(1 − ''α''<sub>1</sub> − ''α''<sub>2</sub>)% confidence interval for the parameter ''θ'' for any ''α''<sub>1</sub> > 0, ''α''<sub>2</sub> > 0 and ''α''<sub>1</sub> + ''α''<sub>2</sub> < 1. Here, <math> H_n^{-1}(\beta) </math> is the 100''β''% quantile of <math> H_n(\theta) </math> or, equivalently, it solves the equation <math> H_n(\theta)=\beta </math> for ''θ''. The same holds for an aCD, where the confidence level is achieved in the limit. Some authors have proposed using them not for coverage or performance purposes, but for graphically viewing which parameter values are consistent with the data.<ref>{{Cite book|last1=Cox|first1=D. R.|url=https://www.taylorfrancis.com/books/9780429170218|title=Theoretical Statistics|last2=Hinkley|first2=D. V.|date=1979-09-06|publisher=Chapman and Hall/CRC|isbn=978-0-429-17021-8|language=en|doi=10.1201/b14832}}</ref><ref>{{Cite journal|last1=Rafi|first1=Zad|last2=Greenland|first2=Sander|date=2020-09-30|title=Semantic and cognitive tools to aid statistical science: replace confidence and significance by compatibility and surprise|url= |journal=BMC Medical Research Methodology|volume=20|issue=1|pages=244|doi=10.1186/s12874-020-01105-9|arxiv=1909.08579|issn=1471-2288|pmc=7528258|pmid=32998683 |doi-access=free }}</ref>
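The coverage claim above can be checked by simulation. This Python sketch (illustrative; all parameter values are made up) repeatedly draws normal samples, forms the asymmetric interval [''H''<sup>−1</sup>(''α''<sub>1</sub>), ''H''<sup>−1</sup>(1 − ''α''<sub>2</sub>)] from the known-variance CD of Example 1, and confirms that the empirical coverage is close to the nominal 1 − ''α''<sub>1</sub> − ''α''<sub>2</sub>:

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(beta, lo=-10.0, hi=10.0, iters=100):
    """Standard normal quantile via bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if phi(mid) < beta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def cd_interval(sample, sigma, alpha1, alpha2):
    """[H^{-1}(alpha1), H^{-1}(1 - alpha2)] for the known-sigma CD of a
    normal mean, H(mu) = Phi(sqrt(n) * (mu - xbar) / sigma)."""
    n = len(sample)
    xbar = sum(sample) / n
    se = sigma / math.sqrt(n)
    return (xbar + se * phi_inv(alpha1), xbar + se * phi_inv(1.0 - alpha2))

random.seed(1)
mu_true, sigma, n = 0.0, 1.0, 20
alpha1, alpha2 = 0.03, 0.07   # asymmetric tails: nominal level 0.90
reps = 2000
hits = 0
for _ in range(reps):
    s = [random.gauss(mu_true, sigma) for _ in range(n)]
    lo, hi = cd_interval(s, sigma, alpha1, alpha2)
    hits += lo <= mu_true <= hi
coverage = hits / reps
print(coverage)  # close to 0.90, up to simulation noise
```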
=== Point estimation ===
:<math>\widehat{\theta}_n=\arg\max_\theta h_n(\theta), \quad h_n(\theta)=H'_n(\theta).</math>
Under some modest conditions, among other properties, one can prove that these point estimators are all consistent.<ref name = "Xie2011" /><ref name = "Singh2007" /> Certain confidence distributions can give optimal frequentist estimators.<ref name="Taraldsen"/>
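As an illustration (a minimal sketch with made-up summary statistics, not taken from the cited papers), the median, mode and mean of the known-variance CD from Example 1 can all be computed numerically; for that symmetric CD they coincide with the sample mean X̄:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Known-sigma CD for a normal mean (Example 1), with made-up summary stats
n, xbar, sigma = 25, 3.2, 2.0
H = lambda mu: phi(math.sqrt(n) * (mu - xbar) / sigma)

def cd_quantile(H, beta, lo=-1e6, hi=1e6, iters=200):
    """Invert the monotone CD function by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if H(mid) < beta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Median of the CD:
median = cd_quantile(H, 0.5)

# Mode of the confidence density h = H', by grid search on a central
# difference approximation of the derivative:
eps = 1e-6
grid = [xbar - 2.0 + i * 1e-3 for i in range(4001)]
mode = max(grid, key=lambda mu: H(mu + eps) - H(mu - eps))

# Mean of the CD, via a Riemann-Stieltjes sum of mu dH(mu):
a, b, m = xbar - 10.0, xbar + 10.0, 20000
step = (b - a) / m
mean = sum((a + (i + 0.5) * step) * (H(a + (i + 1) * step) - H(a + i * step))
           for i in range(m))

print(median, mode, mean)  # all approximately equal to xbar
```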
=== Hypothesis testing ===
See Figure 1 from Xie and Singh (2011)<ref name = "Xie2011"/> for a graphical illustration of the CD inference.
== Implementations ==
A few statistical programs have implemented the ability to construct and graph confidence distributions.
[[R (programming language)|R]], via the <code>concurve</code>,<ref name="cran.r-project.org">{{Citation|last1=Rafi|first1=Zad|last2=Vigotsky|first2=Andrew D.|title=concurve: Computes and Plots Compatibility (Confidence) Intervals, P-Values, S-Values, & Likelihood Intervals to Form Consonance, Surprisal, & Likelihood Functions|date=2020-04-20|url=https://cran.r-project.org/package=concurve|access-date=2020-05-05}}</ref><ref>{{Cite web|url=https://statmodeling.stat.columbia.edu/2019/05/29/concurve-plots-consonance-curves-p-value-functions-and-s-value-functions/|title=Concurve plots consonance curves, p-value functions, and S-value functions « Statistical Modeling, Causal Inference, and Social Science|website=statmodeling.stat.columbia.edu|language=en-US|access-date=2020-04-15}}</ref> <code>pvaluefunctions</code>,<ref>{{Citation|last=Infanger|first=Denis|title=pvaluefunctions: Creates and Plots P-Value Functions, S-Value Functions, Confidence Distributions and Confidence Densities|date=2019-11-29|url=https://cran.r-project.org/package=pvaluefunctions|access-date=2020-04-15}}</ref> and <code>episheet</code><ref>{{Citation|last1=Black|first1=James|title=episheet: Rothman's Episheet|date=2019-01-23|url=https://cran.r-project.org/package=episheet|access-date=2020-04-15|last2=Rothman|first2=Ken|last3=Thelwall|first3=Simon}}</ref> packages
[[Microsoft Excel|Excel]], via <code>episheet</code><ref>{{Cite web|url=http://www.krothman.org/|title=Modern Epidemiology, 2nd Edition|website=www.krothman.org|access-date=2020-04-15|archive-date=2020-01-29|archive-url=https://web.archive.org/web/20200129153412/http://www.krothman.org/|url-status=dead}}</ref>
[[Stata]], via <code>concurve</code><ref name="cran.r-project.org"/>
== See also ==
== References ==
{{reflist| refs=
<ref name = "cox1958">Cox, D.R. (1958). "Some Problems Connected with Statistical Inference", ''[[The Annals of Mathematical Statistics]]'', '''29''', 357–372 (Section 4, Page 363) {{doi|10.1214/aoms/1177706618}}</ref>
<ref name="Cox2006a">[[David R. Cox|Cox, D. R.]] (2006). ''Principles of Statistical Inference'', CUP. {{ISBN|0-521-68567-2}}. (page 66)</ref>
<ref name="Bayes1973">Bayes, T. (1763). "[[An Essay Towards Solving a Problem in the Doctrine of Chances]]." ''Phil. Trans. Roy. Soc'', London '''53''' 370–418 '''54''' 296–325. Reprinted in ''[[Biometrika]]'' '''45''' (1958) 293–315.</ref>
<ref name="Efron1993">Efron, B. (1993). "Bayes and likelihood calculations from confidence intervals." ''[[Biometrika]]'', '''80''' 3–26.</ref>
<ref name="Efron1998">Efron, B. (1998). "R. A. Fisher in the 21st Century." ''Statistical Science'', '''13''' 95–122.</ref>
<ref name="Fisher1930">[[Ronald Fisher|Fisher, R.A.]] (1930). "Inverse probability." ''Proc. Cambridge Philos. Soc.'' '''26''', 528–535.</ref>
<ref name="Fraser1991">Fraser, D.A.S. (1991). "Statistical inference: Likelihood to significance." ''[[Journal of the American Statistical Association]]'', '''86''', 258–265. {{JSTOR|2290557}}</ref>
<ref name="Fraser2011">Fraser, D.A.S. (2011). "Is Bayes posterior just quick and dirty confidence?" ''Statistical Science'', '''26''', 299–316. {{JSTOR|23059129}}</ref>
<ref name="Kendall1974">Kendall, M., & Stuart, A. (1974). ''The Advanced Theory of Statistics'', Volume ?. (Chapter 21). Wiley.</ref>
<ref name="Neyman1937">Neyman, J. (1937). "Outline of a theory of statistical estimation based on the classical theory of probability." ''Phil. Trans. Roy. Soc'' '''A237''' 333–380.</ref>
<ref name="Schweder2002">Schweder, T. and Hjort, N.L. (2002). "Confidence and likelihood", ''Scandinavian Journal of Statistics'', '''29''' 309–332. {{doi|10.1111/1467-9469.00285}}</ref>
<ref name="Singh2001">Singh, K., Xie, M. and Strawderman, W.E. (2001). "Confidence distributions—concept, theory and applications". Technical report, Dept. Statistics, Rutgers Univ. Revised 2004.</ref>
<ref name="Singh2005">Singh, K., Xie, M. and Strawderman, W.E. (2005). "Combining Information from Independent Sources Through Confidence Distributions". ''[[Annals of Statistics]]'', '''33''', 159–183. {{JSTOR|3448660}}</ref>
<ref name="Singh2007">Singh, K., Xie, M. and Strawderman, W.E. (2007). "Confidence Distribution (CD)-Distribution Estimator of a Parameter", in ''Complex Datasets and Inverse Problems'', ''IMS Lecture Notes—Monograph Series'', '''54''' (R. Liu, et al. eds) 132–150. {{JSTOR|20461464}}</ref>
<ref name="Singh2011">Singh, K. and Xie, M. (2011). "Discussions of “Is Bayes posterior just quick and dirty confidence?” by D.A.S. Fraser." ''Statistical Science'', '''26''', 319–321. {{JSTOR|23059131}}</ref>
<ref name="Xie2009">Xie, M., Liu, R., Daramuju, C.V., Olsan, W. (2012). [http://www.stat.rutgers.edu/home/mxie/RCPapers/expertopinions-final.pdf "Incorporating expert opinions with information from binomial clinical trials."] ''Annals of Applied Statistics''. In press.</ref>
<ref name = "Xie2011">Xie, M. and Singh, K. (2013). "Confidence Distribution, the Frequentist Distribution Estimator of a Parameter – a Review (with discussion)". ''International Statistical Review'', '''81''', 3–39. {{doi|10.1111/insr.12000}}</ref>
<ref name = "Xie2013r">Xie, M. (2013). "Rejoinder of Confidence Distribution, the Frequentist Distribution Estimator of a Parameter – a Review". ''International Statistical Review'', '''81''', 68–77. {{doi|10.1111/insr.12001}}</ref>
||
<ref name="Zabell1992">Zabell, S.L. (1992). "R.A.Fisher and fiducial argument", ''Stat. Sci.'', '''7''', 369–387</ref> |
<ref name="Zabell1992">Zabell, S.L. (1992). "R.A.Fisher and fiducial argument", ''Stat. Sci.'', '''7''', 369–387</ref> |
||
}} |
}} |
||
Line 153: | Line 193: | ||
==Bibliography== |
==Bibliography== |
||
{{refbegin}} |
{{refbegin}} |
||
* Xie, M. and Singh, K. (2013). [https://onlinelibrary.wiley.com/doi/epdf/10.1111/insr.12000] "Confidence Distribution, the Frequentist Distribution Estimator of a Parameter: A Review". ''International Statistical Review'', '''81''', 3–39. |
|||
* Schweder, T and Hjort, N L (2016). [https://doi.org/10.1017/CBO9781139046671]''Confidence, Likelihood, Probability: Statistical Inference with Confidence Distributions''. London: Cambridge University Press. {{ISBN|9781139046671}} |
|||
* Fisher, R A (1956). ''Statistical Methods and Scientific Inference''. New York: Hafner. {{ISBN|0-02-844740-9}}. |
* Fisher, R A (1956). ''Statistical Methods and Scientific Inference''. New York: Hafner. {{ISBN|0-02-844740-9}}. |
||
* Fisher, R. A. (1955). "Statistical methods and scientific induction" ''[[Journal of the Royal Statistical Society|J. Roy. Statist. Soc.]]'' Ser. B. 17, 69—78. (criticism of statistical theories of Jerzy Neyman and Abraham Wald from a fiducial perspective) |
* Fisher, R. A. (1955). "Statistical methods and scientific induction" ''[[Journal of the Royal Statistical Society|J. Roy. Statist. Soc.]]'' Ser. B. 17, 69—78. (criticism of statistical theories of Jerzy Neyman and Abraham Wald from a fiducial perspective) |
||
* Hannig, J. (2009). "On generalized fiducial inference". ''Statistica Sinica'', '''19''', 491–544. |
* Hannig, J. (2009). "[https://www.researchgate.net/profile/Jan_Hannig/publication/228369297_On_Generalized_Fiducial_Inference/links/00b4951cad797c1ccd000000.pdf On generalized fiducial inference]". ''Statistica Sinica'', '''19''', 491–544. |
||
*Lawless, F. and Fredette, M. (2005). "Frequentist prediction intervals and predictive distributions." ''Biometrika.'' '''92(3)''' 529–542. |
*Lawless, F. and Fredette, M. (2005). "[https://academic.oup.com/biomet/article-abstract/92/3/529/218911 Frequentist prediction intervals and predictive distributions]." ''Biometrika.'' '''92(3)''' 529–542. |
||
* Lehmann, E.L. (1993). "The Fisher, Neyman–Pearson theories of testing hypotheses: one theory or two?" ''Journal of the American Statistical Association'' '''88''' 1242–1249. |
* Lehmann, E.L. (1993). "[https://link.springer.com/content/pdf/10.1007/978-1-4614-1412-4_19.pdf The Fisher, Neyman–Pearson theories of testing hypotheses: one theory or two?]" ''Journal of the American Statistical Association'' '''88''' 1242–1249. |
||
* Neyman, Jerzy (1956). "Note on an Article by Sir Ronald Fisher". ''Journal of the Royal Statistical Society''. Series B (Methodological) 18 (2): 288–294. {{ |
* Neyman, Jerzy (1956). "Note on an Article by Sir Ronald Fisher". ''Journal of the Royal Statistical Society''. Series B (Methodological) 18 (2): 288–294. {{JSTOR|2983716}}. (reply to Fisher 1955, which diagnoses a fallacy of "fiducial inference") |
||
* Schweder T., Sadykova D., Rugh D. and Koski W. (2010) "Population Estimates From Aerial Photographic Surveys of Naturally and Variably Marked Bowhead Whales" '' Journal of Agricultural Biological and Environmental Statistics'' 2010 15: 1–19 |
* Schweder T., Sadykova D., Rugh D. and Koski W. (2010) "[https://pubag.nal.usda.gov/?page=17938&per_page=100&search_field=all_fields Population Estimates From Aerial Photographic Surveys of Naturally and Variably Marked Bowhead Whales]" '' Journal of Agricultural Biological and Environmental Statistics'' 2010 15: 1–19 |
||
* Bityukov S., Krasnikov N., Nadarajah S. and Smirnova V. (2010) "Confidence distributions in statistical inference". AIP Conference Proceedings, '''1305''', 346-353. |
* Bityukov S., Krasnikov N., Nadarajah S. and Smirnova V. (2010) "[http://djafari.free.fr/MaxEnt2010/slide/011.pdf Confidence distributions in statistical inference]". AIP Conference Proceedings, '''1305''', 346-353. |
||
* Singh, K. and Xie, M. (2012). [http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.imsc/1331731621 "CD-posterior --- combining prior and data through confidence distributions."] Contemporary Developments in Bayesian Analysis and Statistical Decision Theory: A Festschrift for William E. Strawderman. (D. Fourdrinier, et al., Eds.). IMS Collection, Volume 8, 200 -214. |
* Singh, K. and Xie, M. (2012). [http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.imsc/1331731621 "CD-posterior --- combining prior and data through confidence distributions."] Contemporary Developments in Bayesian Analysis and Statistical Decision Theory: A Festschrift for William E. Strawderman. (D. Fourdrinier, et al., Eds.). IMS Collection, Volume 8, 200 -214. |
||
{{refend}} |
{{refend}} |
Latest revision as of 19:43, 11 November 2024
In statistical inference, the concept of a confidence distribution (CD) has often been loosely referred to as a distribution function on the parameter space that can represent confidence intervals of all levels for a parameter of interest. Historically, it has typically been constructed by inverting the upper limits of lower sided confidence intervals of all levels, and it was also commonly associated with a fiducial[1] interpretation (fiducial distribution), although it is a purely frequentist concept.[2] A confidence distribution is NOT a probability distribution function of the parameter of interest, but may still be a function useful for making inferences.[3]
In recent years, there has been a surge of renewed interest in confidence distributions.[3] In the more recent developments, the concept of confidence distribution has emerged as a purely frequentist concept, without any fiducial interpretation or reasoning. Conceptually, a confidence distribution is no different from a point estimator or an interval estimator (confidence interval), but it uses a sample-dependent distribution function on the parameter space (instead of a point or an interval) to estimate the parameter of interest.
A simple example of a confidence distribution, that has been broadly used in statistical practice, is a bootstrap distribution.[4] The development and interpretation of a bootstrap distribution does not involve any fiducial reasoning; the same is true for the concept of a confidence distribution. But the notion of confidence distribution is much broader than that of a bootstrap distribution. In particular, recent research suggests that it encompasses and unifies a wide range of examples, from regular parametric cases (including most examples of the classical development of Fisher's fiducial distribution) to bootstrap distributions, p-value functions,[5] normalized likelihood functions and, in some cases, Bayesian priors and Bayesian posteriors.[6]
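As a concrete sketch of this idea, the empirical distribution function of bootstrap sample means can serve as a sample-dependent distribution estimator for a population mean. The data, seed, and function names below are illustrative assumptions, not taken from the literature:

```python
import bisect
import random

# Illustrative data: 50 draws from N(5, 1); in practice x is the observed sample.
random.seed(2)
x = [random.gauss(5.0, 1.0) for _ in range(50)]

# Bootstrap distribution: resample x with replacement and record the mean.
boot_means = sorted(
    sum(random.choices(x, k=len(x))) / len(x) for _ in range(2000)
)

def H_boot(theta):
    """Empirical CDF of the bootstrap means, used as a distribution estimator."""
    return bisect.bisect_right(boot_means, theta) / len(boot_means)

xbar = sum(x) / len(x)
# The bootstrap distribution is centred near the observed sample mean.
assert 0.3 < H_boot(xbar) < 0.7
```

No fiducial reasoning enters anywhere above: `H_boot` is just a resampling-based function of the data.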
Just as a Bayesian posterior distribution contains a wealth of information for any type of Bayesian inference, a confidence distribution contains a wealth of information for constructing almost all types of frequentist inferences, including point estimates, confidence intervals, critical values, statistical power and p-values,[7] among others. Some recent developments have highlighted the promising potentials of the CD concept, as an effective inferential tool.[3]
History
Neyman (1937)[8] introduced the idea of "confidence" in his seminal paper on confidence intervals, which clarified the frequentist repetition property. According to Fraser,[9] the seed (idea) of confidence distribution can even be traced back to Bayes (1763)[10] and Fisher (1930),[1] although the phrase itself seems to have first been used in Cox (1958).[11] Some researchers view the confidence distribution as "the Neymanian interpretation of Fisher's fiducial distributions",[12] which was "furiously disputed by Fisher".[13] It is also believed that these "unproductive disputes" and Fisher's "stubborn insistence"[13] might be the reason that the concept of confidence distribution was long misconstrued as a fiducial concept and not fully developed under the frequentist framework.[6][14] Indeed, the confidence distribution is a purely frequentist concept with a purely frequentist interpretation, although it also has ties to Bayesian and fiducial inference concepts.
Definition
[edit]Classical definition
Classically, a confidence distribution is defined by inverting the upper limits of a series of lower-sided confidence intervals.[15][16][page needed] In particular,
- For every α in (0, 1), let (−∞, ξn(α)] be a 100α% lower-side confidence interval for θ, where ξn(α) = ξn(Xn,α) is continuous and increasing in α for each sample Xn. Then, Hn(•) = ξn−1(•) is a confidence distribution for θ.
Efron stated that this distribution "assigns probability 0.05 to θ lying between the upper endpoints of the 0.90 and 0.95 confidence interval, etc." and "it has powerful intuitive appeal".[16] In the classical literature,[3] the confidence distribution function is interpreted as a distribution function of the parameter θ, which is impossible unless fiducial reasoning is involved since, in a frequentist setting, the parameters are fixed and nonrandom.
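The classical construction can be checked numerically for a normal mean with known σ; the numbers and function names here are illustrative assumptions:

```python
from statistics import NormalDist

# Illustrative summary statistics: mean of n observations, known sigma.
n, xbar, sigma = 25, 10.0, 2.0
z = NormalDist()

def xi(alpha):
    """Upper limit of the 100*alpha% lower-sided interval (-inf, xi(alpha)]."""
    return xbar + sigma / n ** 0.5 * z.inv_cdf(alpha)

def H(theta):
    """Confidence distribution obtained by inverting alpha -> xi(alpha)."""
    return z.cdf((theta - xbar) / (sigma / n ** 0.5))

# Inverting the CI upper limits recovers H: H(xi(alpha)) == alpha, and the
# mass between the 0.90 and 0.95 upper endpoints is 0.05, as Efron describes.
for alpha in (0.05, 0.5, 0.90, 0.95):
    assert abs(H(xi(alpha)) - alpha) < 1e-9
```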
Interpreting the CD function entirely from a frequentist viewpoint, rather than as a distribution function of a (fixed/nonrandom) parameter, is one of the major departures of the recent development relative to the classical approach. An advantage of treating the confidence distribution as a purely frequentist concept (similar to a point estimator) is that it is then free from those restrictive, if not controversial, constraints set forth by Fisher on fiducial distributions.[6][14]
The modern definition
The following definition applies:[12][17][18] Θ is the parameter space of the unknown parameter of interest θ, and χ is the sample space corresponding to data Xn = {X1, ..., Xn}:
- A function Hn(•) = Hn(Xn, •) on χ × Θ → [0, 1] is called a confidence distribution (CD) for a parameter θ, if it follows two requirements:
- (R1) For each given Xn ∈ χ, Hn(•) = Hn(Xn, •) is a continuous cumulative distribution function on Θ;
- (R2) At the true parameter value θ = θ0, Hn(θ0) ≡ Hn(Xn, θ0), as a function of the sample Xn, follows the uniform distribution U[0, 1].
Also, the function H is an asymptotic CD (aCD), if the U[0, 1] requirement is true only asymptotically and the continuity requirement on Hn(•) is dropped.
In nontechnical terms, a confidence distribution is a function of both the parameter and the random sample, with two requirements. The first requirement (R1) simply requires that a CD should be a distribution on the parameter space. The second requirement (R2) sets a restriction on the function so that inferences (point estimators, confidence intervals and hypothesis testing, etc.) based on the confidence distribution have desired frequentist properties. This is similar to the restrictions in point estimation to ensure certain desired properties, such as unbiasedness, consistency, efficiency, etc.[6][19]
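Requirement (R2) can be checked by simulation: under an assumed normal model with known σ, the CD evaluated at the true parameter value should be uniform across repeated samples. The model and constants below are our own illustrative choices:

```python
import random
from statistics import NormalDist

random.seed(0)
theta0, sigma, n = 3.0, 1.0, 20  # true mean, known sigma, sample size
z = NormalDist()

def H(sample, theta):
    """CD for a normal mean with known sigma: Phi(sqrt(n)(theta - xbar)/sigma)."""
    xbar = sum(sample) / len(sample)
    return z.cdf((theta - xbar) / (sigma / len(sample) ** 0.5))

# Evaluate the CD at the true theta0 over many repeated samples.
u = [H([random.gauss(theta0, sigma) for _ in range(n)], theta0)
     for _ in range(4000)]

# Uniform(0, 1) has mean 1/2 and variance 1/12; the simulation should agree.
mean_u = sum(u) / len(u)
var_u = sum((v - mean_u) ** 2 for v in u) / len(u)
assert abs(mean_u - 0.5) < 0.03 and abs(var_u - 1 / 12) < 0.01
```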
A confidence distribution derived by inverting the upper limits of confidence intervals (the classical definition) also satisfies the requirements above, so this version of the definition is consistent with the classical one.[18]
Unlike classical fiducial inference, more than one confidence distribution may be available to estimate a parameter under any specific setting. Also, unlike classical fiducial inference, optimality is not part of the requirements. Depending on the setting and the criterion used, sometimes there is a unique "best" (in terms of optimality) confidence distribution. But sometimes there is no optimal confidence distribution available or, in some extreme cases, we may not even be able to find a meaningful confidence distribution. This is no different from the practice of point estimation.
A definition with measurable spaces
A confidence distribution[20] H for a parameter θ in a measurable space is a distribution estimator with H(C(α)) = α for a family of confidence regions C(α) for θ with level Pθ(θ ∈ C(α)) ≥ α for all levels α in (0, 1). The family of confidence regions is not unique.[21] If C(α) only exists for α in a set A ⊂ (0, 1), then H is a confidence distribution with level set A. Both H and all the regions C(α) are measurable functions of the data. This implies that H is a random measure and C(α) is a random set. If the level requirement holds with equality, then the confidence distribution is by definition exact. If, additionally, θ is a real parameter, then the measure-theoretic definition coincides with the above classical definition.
Examples
Example 1: Normal mean and variance
Suppose a normal sample Xi ~ N(μ, σ2), i = 1, 2, ..., n, is given.
(1) Variance σ2 is known
Let Φ be the cumulative distribution function of the standard normal distribution and Ftn−1 the cumulative distribution function of the Student tn−1 distribution. With X̄ the sample mean and s the sample standard deviation, both the functions

HΦ(μ) = Φ(√n(μ − X̄)/σ)  and  Ht(μ) = Ftn−1(√n(μ − X̄)/s)

satisfy the two requirements in the CD definition, and they are confidence distribution functions for μ.[3] Furthermore,

HA(μ) = Φ(√n(μ − X̄)/s)

satisfies the definition of an asymptotic confidence distribution when n → ∞, and it is an asymptotic confidence distribution for μ. The uses of HΦ(•) and Ht(•) are equivalent to stating that we use N(X̄, σ2/n) and X̄ + (s/√n) tn−1, respectively, to estimate μ.
(2) Variance σ2 is unknown
For the parameter μ, since HΦ(μ) = Φ(√n(μ − X̄)/σ) involves the unknown parameter σ, it violates the two requirements in the CD definition and is no longer a "distribution estimator" or a confidence distribution for μ.[3] However, Ht(μ) = Ftn−1(√n(μ − X̄)/s) is still a CD for μ and HA(μ) = Φ(√n(μ − X̄)/s) is an aCD for μ.
For the parameter σ2, the sample-dependent cumulative distribution function

Hχ2(θ) = 1 − Fχ2n−1((n − 1)s2/θ), for θ > 0,

is a confidence distribution function for σ2.[6] Here, Fχ2n−1 is the cumulative distribution function of the chi-squared distribution χ2n−1.
In the case when the variance σ2 is known, HΦ(μ) is optimal in terms of producing the shortest confidence intervals at any given level. In the case when the variance σ2 is unknown, Ht(μ) is an optimal confidence distribution for μ.
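These confidence distribution functions can be evaluated numerically, for instance with SciPy (assumed available); the sample values below are made up for illustration:

```python
import numpy as np
from scipy import stats

# Illustrative sample (made-up numbers).
x = np.array([4.2, 5.1, 3.8, 4.9, 5.3, 4.4, 4.7, 5.0])
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

def H_t(mu):
    """CD for mu when sigma^2 is unknown, via the Student-t pivot."""
    return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

def H_chi2(theta):
    """CD for sigma^2, via the chi-squared pivot (n - 1)s^2 / theta."""
    return 1.0 - stats.chi2.cdf((n - 1) * s ** 2 / theta, df=n - 1)

# The CD median for mu is the sample mean, and H_chi2 increases in theta.
assert abs(H_t(xbar) - 0.5) < 1e-9
assert H_chi2(s ** 2) < H_chi2(4 * s ** 2)
```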
Example 2: Bivariate normal correlation
Let ρ denote the correlation coefficient of a bivariate normal population. It is well known that Fisher's z, defined by the Fisher transformation

z = (1/2) ln((1 + r)/(1 − r)),

has the limiting distribution N((1/2) ln((1 + ρ)/(1 − ρ)), 1/(n − 3)) with a fast rate of convergence, where r is the sample correlation and n is the sample size.
The function

Hn(ρ) = 1 − Φ(√(n − 3) (z − (1/2) ln((1 + ρ)/(1 − ρ))))

is an asymptotic confidence distribution for ρ.[22]
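A numerical sketch of this asymptotic CD, using SciPy and illustrative values of r and n (our own choices):

```python
import numpy as np
from scipy import stats

r, n = 0.6, 50  # illustrative sample correlation and sample size

def H(rho):
    """Asymptotic CD for rho based on the Fisher z-transform."""
    z = np.arctanh(r)        # observed Fisher z = (1/2) log((1+r)/(1-r))
    zeta = np.arctanh(rho)   # Fisher transform of the candidate rho
    return 1.0 - stats.norm.cdf(np.sqrt(n - 3) * (z - zeta))

# H is increasing in rho with H(r) = 1/2; its 5% and 95% quantiles give a
# 90% interval for rho, available in closed form through tanh.
q = stats.norm.ppf(0.95) / np.sqrt(n - 3)
lo, hi = np.tanh(np.arctanh(r) - q), np.tanh(np.arctanh(r) + q)
assert abs(H(r) - 0.5) < 1e-9
assert abs(H(lo) - 0.05) < 1e-6 and abs(H(hi) - 0.95) < 1e-6
```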
An exact confidence density for ρ is[23][24]

π(ρ | r) = (ν(ν − 1)Γ(ν − 1)) / (√(2π) Γ(ν + 1/2)) · (1 − r2)^((ν − 1)/2) (1 − ρ2)^((ν − 2)/2) (1 − rρ)^((1 − 2ν)/2) F(3/2, −1/2; ν + 1/2; (1 + rρ)/2),

where F is the Gaussian hypergeometric function and ν = n − 1 > 1. This is also the posterior density of a Bayes matching prior for the five parameters in the binormal distribution.[25]
The very last formula in the classical book by Fisher gives an alternative form of this confidence density; that formula was derived by C. R. Rao.[26]
Example 3: Binormal mean
Let data be generated by Y = γ + ε, where γ is an unknown vector in the plane and ε has a binormal and known distribution in the plane. The distribution of γ† = Y − ε, with Y held fixed at its observed value, defines a confidence distribution for γ. The confidence regions can be chosen as the interiors of ellipses centered at Y with axes given by the eigenvectors of the covariance matrix of ε. The confidence distribution is in this case binormal with mean Y, and the confidence regions can be chosen in many other ways.[21] The confidence distribution coincides in this case with the Bayesian posterior using the right Haar prior.[27] The argument generalizes to the case of an unknown mean in an infinite-dimensional Hilbert space, but in this case the confidence distribution is not a Bayesian posterior.[28]
Using confidence distributions for inference
Confidence interval
From the CD definition, it is evident that the intervals (−∞, Hn−1(1 − α)] and [Hn−1(α), ∞) provide 100(1 − α)%-level confidence intervals of different kinds for θ, for any α ∈ (0, 1). Also, [Hn−1(α1), Hn−1(1 − α2)] is a level 100(1 − α1 − α2)% confidence interval for the parameter θ for any α1 > 0, α2 > 0 and α1 + α2 < 1. Here, Hn−1(β) is the 100β% quantile of Hn(θ); equivalently, it solves Hn(θ) = β for θ. The same holds for an aCD, where the confidence level is achieved in the limit. Some authors have proposed using confidence distributions for graphically viewing what parameter values are consistent with the data, instead of for coverage or performance purposes.[29][30]
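For a concrete case, the quantiles of the normal-mean CD with known σ reproduce the familiar two-sided interval; the numbers below are illustrative assumptions:

```python
from statistics import NormalDist

n, xbar, sigma = 16, 2.0, 1.0  # illustrative summary statistics
z = NormalDist()

def Hn_inv(beta):
    """100*beta% quantile of the CD Hn(theta) = Phi(sqrt(n)(theta - xbar)/sigma)."""
    return xbar + sigma / n ** 0.5 * z.inv_cdf(beta)

# [Hn_inv(a1), Hn_inv(1 - a2)] has level 100(1 - a1 - a2)%; with
# a1 = a2 = 0.025 it is the usual two-sided 95% CI xbar +/- 1.96*sigma/sqrt(n).
lo, hi = Hn_inv(0.025), Hn_inv(0.975)
assert abs(lo - (xbar - 1.95996 * sigma / n ** 0.5)) < 1e-4
assert abs(hi - (xbar + 1.95996 * sigma / n ** 0.5)) < 1e-4
```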
Point estimation
Point estimators can also be constructed given a confidence distribution estimator for the parameter of interest. For example, given Hn(θ), the CD for a parameter θ, natural choices of point estimators include the median Mn = Hn−1(1/2), the mean θ̄n = ∫ θ dHn(θ), and the maximum point of the CD density, θ̂n = arg maxθ hn(θ), where hn(θ) = dHn(θ)/dθ.
Under some modest conditions, among other properties, one can prove that these point estimators are all consistent.[6][22] Certain confidence distributions can give optimal frequentist estimators.[28]
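For the symmetric normal-mean CD, the three point estimators coincide, which can be verified numerically (illustrative numbers, our notation):

```python
from statistics import NormalDist

n, xbar, sigma = 9, 1.5, 2.0  # illustrative summary statistics
z = NormalDist()

def Hn_inv(beta):
    """Quantile function of Hn(theta) = Phi(sqrt(n)(theta - xbar)/sigma)."""
    return xbar + sigma / n ** 0.5 * z.inv_cdf(beta)

# CD median, and CD mean via the quantile representation
# E[theta] = integral over (0,1) of Hn_inv(beta) dbeta (midpoint rule).
median = Hn_inv(0.5)
betas = [(k + 0.5) / 1000 for k in range(1000)]
mean = sum(Hn_inv(b) for b in betas) / len(betas)

# The CD density here is N(xbar, sigma^2/n), so median = mean = mode = xbar.
assert abs(median - xbar) < 1e-9
assert abs(mean - xbar) < 1e-6
```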
Hypothesis testing
One can derive a p-value for a test, either one-sided or two-sided, concerning the parameter θ, from its confidence distribution Hn(θ).[6][22] Denote by ps(C) = Hn(C) the probability mass of a set C under the confidence distribution function Hn(•). This ps(C) is called "support" in the CD inference and is also known as "belief" in the fiducial literature.[31] We have
(1) For the one-sided test K0: θ ∈ C vs. K1: θ ∈ Cc, where C is of the type (−∞, b] or [b, ∞), one can show from the CD definition that supθ ∈ C Pθ(ps(C) ≤ α) = α. Thus, ps(C) = Hn(C) is the corresponding p-value of the test.
(2) For the singleton test K0: θ = b vs. K1: θ ≠ b, one can show from the CD definition that P{θ = b}(2 min{ps(Clo), ps(Cup)} ≤ α) = α. Thus, 2 min{ps(Clo), ps(Cup)} = 2 min{Hn(b), 1 − Hn(b)} is the corresponding p-value of the test. Here, Clo = (−∞, b] and Cup = [b, ∞).
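Both p-values can be read off a CD directly; for the normal-mean CD with known σ they reduce to the classical z-test p-values. The numbers are illustrative assumptions:

```python
from statistics import NormalDist

n, xbar, sigma, b = 25, 0.4, 1.0, 0.0  # illustrative data summary and null value
z = NormalDist()

def Hn(theta):
    """CD for a normal mean with known sigma."""
    return z.cdf((theta - xbar) / (sigma / n ** 0.5))

p_one = Hn(b)                        # support of C = (-inf, b], one-sided test
p_two = 2 * min(Hn(b), 1 - Hn(b))    # singleton test theta = b

# Here sqrt(n)(xbar - b)/sigma = 2, so these equal Phi(-2) and 2*Phi(-2).
assert abs(p_one - z.cdf(-2.0)) < 1e-12
assert abs(p_two - 2 * z.cdf(-2.0)) < 1e-12
```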
See Figure 1 from Xie and Singh (2011)[6] for a graphical illustration of the CD inference.
Implementations
A few statistical programs have implemented the ability to construct and graph confidence distributions: R, via the concurve,[32][33] pvaluefunctions,[34] and episheet[35] packages.
See also
References
- ^ a b Fisher, R.A. (1930). "Inverse probability." Proc. Cambridge Philos. Soc. 26, 528–535.
- ^ Cox, D.R. (1958). "Some Problems Connected with Statistical Inference", The Annals of Mathematical Statistics, 29, 357–372 (Section 4, Page 363) doi:10.1214/aoms/1177706618
- ^ a b c d e f Xie, M. (2013). "Rejoinder of Confidence Distribution, the Frequentist Distribution Estimator of a Parameter – a Review". International Statistical Review, 81, 68-77. doi:10.1111/insr.12001
- ^ Efron, B. (1998). "R.A.Fisher in the 21st Century" Statistical Science. 13 95–122. JSTOR 2290557
- ^ Fraser, D.A.S. (1991). "Statistical inference: Likelihood to significance." Journal of the American Statistical Association, 86, 258–265. JSTOR 2290557
- ^ a b c d e f g h Xie, M. and Singh, K. (2013). "Confidence Distribution, the Frequentist Distribution Estimator of a Parameter – a Review (with discussion)". International Statistical Review, 81, 3-39. doi:10.1111/insr.12000
- ^ Fraser, D. A. S. (2019-03-29). "The p-value Function and Statistical Inference". The American Statistician. 73 (sup1): 135–147. doi:10.1080/00031305.2018.1556735. ISSN 0003-1305.
- ^ Neyman, J. (1937). "Outline of a theory of statistical estimation based on the classical theory of probability." Phil. Trans. Roy. Soc A237 333–380
- ^ Fraser, D.A.S. (2011). "Is Bayes posterior just quick and dirty confidence?" Statistical Science 26, 299-316. JSTOR 23059129
- ^ Bayes, T. (1763). "An Essay Towards Solving a Problem in the Doctrine of Chances." Phil. Trans. Roy. Soc, London 53 370–418 54 296–325. Reprinted in Biometrika 45 (1958) 293–315.
- ^ Cox, D. R. (June 1958). "Some Problems Connected with Statistical Inference". The Annals of Mathematical Statistics. 29 (2): 357–372. doi:10.1214/aoms/1177706618. ISSN 0003-4851.
- ^ a b Schweder, T. and Hjort, N.L. (2002). "Confidence and likelihood", Scandinavian Journal of Statistics. 29 309–332. doi:10.1111/1467-9469.00285
- ^ a b Zabell, S.L. (1992). "R.A.Fisher and fiducial argument", Stat. Sci., 7, 369–387
- ^ a b Singh, K. and Xie, M. (2011). "Discussions of “Is Bayes posterior just quick and dirty confidence?” by D.A.S. Fraser." Statistical Science. Vol. 26, 319-321. JSTOR 23059131
- ^ Cox, D. R. (2006). Principles of Statistical Inference, CUP. ISBN 0-521-68567-2. (page 66)
- ^ a b Efron, B. (1993). "Bayes and likelihood calculations from confidence intervals." Biometrika, 80, 3–26.
- ^ Singh, K. Xie, M. and Strawderman, W.E. (2001). "Confidence distributions—concept, theory and applications". Technical report, Dept. Statistics, Rutgers Univ. Revised 2004.
- ^ a b Singh, K. Xie, M. and Strawderman, W.E. (2005). "Combining Information from Independent Sources Through Confidence Distribution" Annals of Statistics, 33, 159–183. JSTOR 3448660
- ^ Xie, M., Liu, R., Daramuju, C.V., Olsan, W. (2012). "Incorporating expert opinions with information from binomial clinical trials." Annals of Applied Statistics. In press.
- ^ Taraldsen, Gunnar (2021). "Joint Confidence Distributions". doi:10.13140/RG.2.2.33079.85920.
- ^ a b Liu, Dungang; Liu, Regina Y.; Xie, Min-ge (2021-04-30). "Nonparametric Fusion Learning for Multiparameters: Synthesize Inferences From Diverse Sources Using Data Depth and Confidence Distribution". Journal of the American Statistical Association. 117 (540): 2086–2104. doi:10.1080/01621459.2021.1902817. ISSN 0162-1459. S2CID 233657455.
- ^ a b c Singh, K. Xie, M. and Strawderman, W.E. (2007). "Confidence Distribution (CD)-Distribution Estimator of a Parameter", in Complex Datasets and Inverse Problems IMS Lecture Notes—Monograph Series, 54, (R. Liu, et al. Eds) 132–150. JSTOR 20461464
- ^ Taraldsen, Gunnar (2021). "The Confidence Density for Correlation". Sankhya A. 85: 600–616. doi:10.1007/s13171-021-00267-y. ISSN 0976-8378. S2CID 244594067.
- ^ Taraldsen, Gunnar (2020). "Confidence in Correlation". doi:10.13140/RG.2.2.23673.49769.
- ^ Berger, James O.; Sun, Dongchu (2008-04-01). "Objective priors for the bivariate normal model". The Annals of Statistics. 36 (2). arXiv:0804.0987. doi:10.1214/07-AOS501. ISSN 0090-5364. S2CID 14703802.
- ^ Fisher, Ronald Aylmer, Sir (1973). Statistical methods and scientific inference ([3d ed., rev. and enl.] ed.). New York: Hafner Press. ISBN 0-02-844740-9. OCLC 785822.
- ^ Eaton, Morris L.; Sudderth, William D. (2012). "Invariance, model matching and probability matching". Sankhyā: The Indian Journal of Statistics, Series A (2008-). 74 (2): 170–193. doi:10.1007/s13171-012-0018-4. ISSN 0976-836X. JSTOR 42003718. S2CID 120705955.
- ^ a b Taraldsen, Gunnar; Lindqvist, Bo Henry (2013-02-01). "Fiducial theory and optimal inference". The Annals of Statistics. 41 (1). arXiv:1301.1717. doi:10.1214/13-AOS1083. ISSN 0090-5364. S2CID 88520957.
- ^ Cox, D. R.; Hinkley, D. V. (1979-09-06). Theoretical Statistics. Chapman and Hall/CRC. doi:10.1201/b14832. ISBN 978-0-429-17021-8.
- ^ Rafi, Zad; Greenland, Sander (2020-09-30). "Semantic and cognitive tools to aid statistical science: replace confidence and significance by compatibility and surprise". BMC Medical Research Methodology. 20 (1): 244. arXiv:1909.08579. doi:10.1186/s12874-020-01105-9. ISSN 1471-2288. PMC 7528258. PMID 32998683.
- ^ Kendall, M., & Stuart, A. (1974). The Advanced Theory of Statistics, Volume ?. (Chapter 21). Wiley.
- ^ a b Rafi, Zad; Vigotsky, Andrew D. (2020-04-20), concurve: Computes and Plots Compatibility (Confidence) Intervals, P-Values, S-Values, & Likelihood Intervals to Form Consonance, Surprisal, & Likelihood Functions, retrieved 2020-05-05
- ^ "Concurve plots consonance curves, p-value functions, and S-value functions « Statistical Modeling, Causal Inference, and Social Science". statmodeling.stat.columbia.edu. Retrieved 2020-04-15.
- ^ Infanger, Denis (2019-11-29), pvaluefunctions: Creates and Plots P-Value Functions, S-Value Functions, Confidence Distributions and Confidence Densities, retrieved 2020-04-15
- ^ Black, James; Rothman, Ken; Thelwall, Simon (2019-01-23), episheet: Rothman's Episheet, retrieved 2020-04-15
- ^ "Modern Epidemiology, 2nd Edition". www.krothman.org. Archived from the original on 2020-01-29. Retrieved 2020-04-15.
Bibliography
- Xie, M. and Singh, K. (2013). "Confidence Distribution, the Frequentist Distribution Estimator of a Parameter: A Review". International Statistical Review, 81, 3–39. doi:10.1111/insr.12000
- Schweder, T. and Hjort, N. L. (2016). Confidence, Likelihood, Probability: Statistical Inference with Confidence Distributions. London: Cambridge University Press. ISBN 9781139046671
- Fisher, R A (1956). Statistical Methods and Scientific Inference. New York: Hafner. ISBN 0-02-844740-9.
- Fisher, R. A. (1955). "Statistical methods and scientific induction" J. Roy. Statist. Soc. Ser. B. 17, 69–78. (criticism of statistical theories of Jerzy Neyman and Abraham Wald from a fiducial perspective)
- Hannig, J. (2009). "On generalized fiducial inference". Statistica Sinica, 19, 491–544.
- Lawless, F. and Fredette, M. (2005). "Frequentist prediction intervals and predictive distributions." Biometrika. 92(3) 529–542.
- Lehmann, E.L. (1993). "The Fisher, Neyman–Pearson theories of testing hypotheses: one theory or two?" Journal of the American Statistical Association 88 1242–1249.
- Neyman, Jerzy (1956). "Note on an Article by Sir Ronald Fisher". Journal of the Royal Statistical Society. Series B (Methodological) 18 (2): 288–294. JSTOR 2983716. (reply to Fisher 1955, which diagnoses a fallacy of "fiducial inference")
- Schweder T., Sadykova D., Rugh D. and Koski W. (2010) "Population Estimates From Aerial Photographic Surveys of Naturally and Variably Marked Bowhead Whales" Journal of Agricultural Biological and Environmental Statistics 2010 15: 1–19
- Bityukov S., Krasnikov N., Nadarajah S. and Smirnova V. (2010) "Confidence distributions in statistical inference". AIP Conference Proceedings, 1305, 346–353.
- Singh, K. and Xie, M. (2012). "CD-posterior --- combining prior and data through confidence distributions." Contemporary Developments in Bayesian Analysis and Statistical Decision Theory: A Festschrift for William E. Strawderman. (D. Fourdrinier, et al., Eds.). IMS Collection, Volume 8, 200–214.