Scheffé's method

From Wikipedia, the free encyclopedia
Revision as of 03:46, 9 January 2017
In statistics, Scheffé's method, named after the American statistician Henry Scheffé, is a method for adjusting significance levels in a linear regression analysis to account for multiple comparisons. It is particularly useful in analysis of variance (a special case of regression analysis), and in constructing simultaneous confidence bands for regressions involving basis functions.

Scheffé's method is a single-step multiple comparison procedure which applies to the set of estimates of all possible contrasts among the factor level means, not just the pairwise differences considered by the Tukey–Kramer method. It works on principles similar to those of the Working–Hotelling procedure for estimating mean responses in regression, which applies to the set of all possible factor levels.

The method

Let μ1, ..., μr be the means of some variable in r disjoint populations.

An arbitrary contrast is defined by

  C = c1μ1 + c2μ2 + ... + crμr,

where the coefficients satisfy

  c1 + c2 + ... + cr = 0.

If μ1, ..., μr are all equal to each other, then all contrasts among them are 0. Otherwise, some contrasts differ from 0.
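The definition above can be sketched in a few lines of Python. The group means and contrast weights below are hypothetical, chosen only to illustrate the sum-to-zero constraint and the weighted sum.

```python
# Illustrative sketch: a contrast is a weighted sum of the population
# means whose weights sum to zero. All values below are hypothetical.
mu = [5.0, 7.0, 9.0]        # means mu_1, ..., mu_r of r = 3 populations
c = [1.0, -0.5, -0.5]       # contrast weights: group 1 vs. average of groups 2 and 3

assert abs(sum(c)) < 1e-12  # a valid contrast's weights sum to 0

C = sum(ci * mi for ci, mi in zip(c, mu))
print(C)                    # -3.0: group 1's mean minus the average of the others
```

A pairwise difference such as μ1 − μ2 is the special case with weights (1, −1, 0, ..., 0).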

Technically there are infinitely many contrasts. The simultaneous confidence coefficient is exactly 1 − α, whether the factor level sample sizes are equal or unequal. (Usually only a finite number of comparisons are of interest. In this case, Scheffé's method is typically quite conservative,[1] and the family-wise error rate (experimentwise error rate) will generally be much smaller than α.)[2][3]

We estimate C by

  Ĉ = c1Ȳ1 + c2Ȳ2 + ... + crȲr,

for which the estimated variance is

  s²Ĉ = s² (c1²/n1 + c2²/n2 + ... + cr²/nr),

where

  • ni is the size of the sample taken from the ith population (the one whose mean is μi),
  • Ȳi is the mean of that sample, and
  • s² is the estimated variance of the errors (the mean squared error from the analysis of variance).
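The point estimate and its estimated variance s² Σ ci²/ni can be sketched as follows. The sample means, group sizes, and error variance below are hypothetical.

```python
# Sketch (hypothetical data): estimate a contrast from sample means and
# compute its estimated variance  s^2 * sum(c_i^2 / n_i).
ybar = [5.1, 7.3, 8.8]   # sample means Ybar_i of the r groups
n = [10, 12, 11]         # group sample sizes n_i
s2 = 2.5                 # estimated error variance (mean squared error)
c = [1.0, -0.5, -0.5]    # contrast weights, summing to zero

C_hat = sum(ci * yi for ci, yi in zip(c, ybar))
var_C_hat = s2 * sum(ci**2 / ni for ci, ni in zip(c, n))
print(C_hat, var_C_hat)
```

Note that unequal group sizes enter only through the ci²/ni terms, which is why the method needs no equal-sample-size assumption.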

It can be shown that the probability is 1 − α that all confidence limits of the type

  Ĉ ± sĈ √((r − 1) F(α; r − 1, N − r))

are simultaneously correct, where N = n1 + ... + nr is the total sample size and F(α; r − 1, N − r) is the upper α quantile of the F distribution with r − 1 and N − r degrees of freedom. The factor r − 1 is appropriate for contrasts, whose coefficients are constrained to sum to zero and therefore span an (r − 1)-dimensional space. For simultaneous bounds over all linear combinations of r parameters, as in a confidence band for a regression that includes a constant term, the factor is r rather than r − 1 (Draper and Smith, Applied Regression Analysis, 2nd ed., p. 93): with r = 2, for example, the contrast formula reduces to an ordinary t interval, which is appropriate for a single comparison but not for a band over a range of values of the independent variable. Note also that these limits apply to mean responses, not to individual observations.
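The simultaneous limits Ĉ ± sĈ √((r − 1) F(α; r − 1, N − r)) can be computed numerically. This sketch reuses hypothetical data and assumes scipy is available for the F quantile.

```python
# Sketch of Scheffe simultaneous confidence limits for one contrast
# (hypothetical data; any number of contrasts could use the same multiplier).
from math import sqrt

from scipy.stats import f  # F distribution quantiles

alpha = 0.05
ybar = [5.1, 7.3, 8.8]     # sample means
n = [10, 12, 11]           # group sample sizes
s2 = 2.5                   # mean squared error
c = [1.0, -0.5, -0.5]      # contrast weights
r, N = len(ybar), sum(n)

C_hat = sum(ci * yi for ci, yi in zip(c, ybar))
se = sqrt(s2 * sum(ci**2 / ni for ci, ni in zip(c, n)))
half_width = se * sqrt((r - 1) * f.ppf(1 - alpha, r - 1, N - r))
print(C_hat - half_width, C_hat + half_width)
```

Because the multiplier does not depend on the particular contrast, the same half-width formula covers every contrast simultaneously at level 1 − α.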

Denoting Scheffé significance in a table

Frequently, superscript letters are used to indicate which values are significantly different using the Scheffé method. For example, when mean values of variables that have been analyzed using an ANOVA are presented in a table, they are assigned a different letter superscript based on a Scheffé contrast. Values that are not significantly different under the post-hoc Scheffé contrast share the same superscript, and values that are significantly different carry different superscripts (e.g. 15a, 17a, 34b would mean that the first two values do not differ significantly from each other, since both are assigned the superscript "a", while each differs from the third value).[citation needed]
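A simplified sketch of this labeling convention follows. The pairwise significance matrix is hypothetical, mirroring the 15a, 17a, 34b example; real software uses more elaborate compact-letter-display algorithms.

```python
# Sketch of the superscript-letter convention: means that are not
# significantly different share a letter. The pairwise matrix below is
# hypothetical; this greedy assignment handles simple cases only.
means = [15, 17, 34]
# different[i][j] is True when means i and j differ significantly (Scheffe test)
different = [[False, False, True],
             [False, False, True],
             [True,  True,  False]]

letters = {}
next_letter = iter("abcdefgh")
for i in range(len(means)):
    # reuse the letter of the first earlier mean this one does not differ from
    for j in range(i):
        if not different[i][j]:
            letters[i] = letters[j]
            break
    else:
        letters[i] = next(next_letter)

print([f"{m}{letters[i]}" for i, m in enumerate(means)])  # ['15a', '17a', '34b']
```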

Comparison with the Tukey–Kramer method

If only a fixed number of pairwise comparisons are to be made, the Tukey–Kramer method will result in a more precise confidence interval. In the general case when many or all contrasts might be of interest, the Scheffé method is more appropriate and will give narrower confidence intervals in the case of a large number of comparisons.
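The comparison can be made concrete by computing the two critical multipliers for a single pairwise difference: Tukey's q/√2 versus Scheffé's √((r − 1)F). This sketch assumes scipy ≥ 1.7 (for studentized_range); the group count and error degrees of freedom are hypothetical.

```python
# Sketch comparing critical half-width multipliers for one pairwise
# comparison: Tukey's q/sqrt(2) vs. Scheffe's sqrt((r-1) F).
# Hypothetical design: r groups, df_error error degrees of freedom.
from math import sqrt

from scipy.stats import f, studentized_range

alpha, r, df_error = 0.05, 5, 45

scheffe = sqrt((r - 1) * f.ppf(1 - alpha, r - 1, df_error))
tukey = studentized_range.ppf(1 - alpha, r, df_error) / sqrt(2)
print(tukey < scheffe)  # Tukey's multiplier is smaller for pairwise differences
```

The smaller Tukey multiplier is why Tukey–Kramer is preferred when only pairwise differences matter, while Scheffé's single multiplier covers arbitrarily many contrasts.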

References

  1. ^ Ijsmi, Editor (2016-11-14). "Post-hoc and multiple comparison test – An overview with SAS and R Statistical Package". International Journal of Statistics and Medical Informatics. 1 (1): 1–9.
  2. ^ Maxwell, Scott E.; Delaney, Harold D. (2004). Designing Experiments and Analyzing Data: A Model Comparison. Lawrence Erlbaum Associates. pp. 217–218. ISBN 0-8058-3718-3.
  3. ^ Milliken, George A.; Johnson, Dallas E. (1993). Analysis of Messy Data. CRC Press. pp. 35–36. ISBN 0-412-99081-4.

Public Domain This article incorporates public domain material from the National Institute of Standards and Technology