Equivalence test

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by GrowCFO Limited Dan Wells (talk | contribs) at 11:25, 24 November 2022 (Grammar).

Equivalence tests are a variety of hypothesis tests used to draw statistical inferences from observed data. In these tests, the null hypothesis is defined as an effect large enough to be deemed interesting, specified by an equivalence bound. The alternative hypothesis is any effect that is less extreme than said equivalence bound. The observed data are statistically compared against the equivalence bounds. If the statistical test indicates the observed data are surprising, assuming that true effects are at least as extreme as the equivalence bounds, a Neyman–Pearson approach to statistical inferences can be used to reject effect sizes larger than the equivalence bounds with a pre-specified Type I error rate.

Equivalence testing originates from the field of clinical trials.[1] One application, known as a non-inferiority trial, is used to show that a new drug that is cheaper than available alternatives works as well as an existing drug. In essence, equivalence tests consist of calculating a confidence interval around an observed effect size and rejecting effects more extreme than the equivalence bound when the confidence interval does not overlap with the equivalence bound. In two-sided tests, both upper and lower equivalence bounds are specified. In non-inferiority trials, where the goal is to test the hypothesis that a new treatment is not worse than existing treatments, only a lower equivalence bound is specified.   
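The confidence-interval formulation above can be sketched in a few lines of code. The sketch below assumes summary statistics (an observed mean difference and its standard error) and uses a normal approximation in place of the t distribution; the function names and parameter values are illustrative, not part of any standard library.

```python
from statistics import NormalDist

def confidence_interval(mean_diff, se, level=0.90):
    """Two-sided confidence interval for a mean difference,
    using a normal approximation to the sampling distribution."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return mean_diff - z * se, mean_diff + z * se

def is_equivalent(mean_diff, se, bound, level=0.90):
    """Two-sided equivalence: the confidence interval must lie
    entirely inside (-bound, +bound)."""
    lo, hi = confidence_interval(mean_diff, se, level)
    return -bound < lo and hi < bound

def is_noninferior(mean_diff, se, lower_bound, level=0.90):
    """Non-inferiority: only the lower equivalence bound is checked."""
    lo, _ = confidence_interval(mean_diff, se, level)
    return lo > -lower_bound

# An observed difference of 0.1 with standard error 0.15 gives a
# 90% CI of roughly (-0.147, 0.347), which lies inside bounds of
# +/- 0.5, so equivalence would be concluded.
```

Note that a non-inferiority decision ignores the upper limit of the interval entirely: a new treatment may be better than the reference without failing the test.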

Mean differences (black squares) and 90% confidence intervals (horizontal lines) with equivalence bounds ΔL = −0.5 and ΔU = 0.5, for four combinations of test results that are statistically equivalent or not and statistically different from zero or not. Pattern A is statistically equivalent, pattern B is statistically different from 0, pattern C is practically insignificant, and pattern D is inconclusive (neither statistically different from 0 nor equivalent).

Equivalence tests can be performed in addition to null-hypothesis significance tests.[1][2][3][4] This might prevent common misinterpretations of p-values larger than the alpha level as support for the absence of a true effect. Furthermore, equivalence tests can identify effects that are statistically significant but practically insignificant: effects that are statistically different from zero, but also statistically smaller than any effect size deemed worthwhile (see the first figure).[5] Equivalence tests were originally used in areas such as pharmaceutics, frequently in bioequivalence trials. However, these tests can be applied to any instance where the research question asks whether the means of two sets of scores are practically or theoretically equivalent. As such, equivalence analyses have seen increased usage in almost all medical research fields. Additionally, the field of psychology has been adopting equivalence testing, particularly in clinical trials. Equivalence analyses are not limited to clinical trials, however, and can be applied in a range of research areas; for example, they have recently been introduced in exercise physiology and sports science.[6] Several tests exist for equivalence analyses, but more recently the two one-sided t-tests (TOST) procedure has been garnering considerable attention. As outlined below, this approach is an adaptation of the widely known t-test.

TOST procedure

A very simple equivalence testing approach is the ‘two one-sided t-tests’ (TOST) procedure.[1] In the TOST procedure an upper (ΔU) and lower (–ΔL) equivalence bound are specified based on the smallest effect size of interest (e.g., a positive or negative difference of d = 0.3). Two composite null hypotheses are tested: H01: Δ ≤ –ΔL and H02: Δ ≥ ΔU. When both of these one-sided tests are statistically rejected, we can conclude that –ΔL < Δ < ΔU, i.e. that the observed effect falls within the equivalence bounds, is statistically smaller than any effect deemed worthwhile, and is considered practically equivalent.[7] Alternatives to the TOST procedure have been developed as well.[3] A recent modification to TOST makes the approach feasible in cases of repeated measures and assessing multiple variables.[4]
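The TOST logic can be sketched as follows. For simplicity the sketch uses a normal approximation with a known standard error; an actual TOST analysis would use one-sided t-tests with the appropriate degrees of freedom. The function name and signature are illustrative.

```python
from statistics import NormalDist

def tost(mean_diff, se, lower, upper, alpha=0.05):
    """Two one-sided tests (normal approximation).

    H01: true difference <= lower   H02: true difference >= upper
    Equivalence is concluded when BOTH nulls are rejected at alpha.
    Returns (overall TOST p-value, equivalence decision).
    """
    phi = NormalDist().cdf
    p_lower = 1 - phi((mean_diff - lower) / se)  # evidence diff > lower
    p_upper = phi((mean_diff - upper) / se)      # evidence diff < upper
    p_tost = max(p_lower, p_upper)               # both must reject
    return p_tost, p_tost < alpha
```

Because both one-sided tests must reject, the TOST p-value is the larger of the two one-sided p-values, and concluding equivalence at level α corresponds to checking that a 1 − 2α (e.g. 90%) confidence interval lies within the equivalence bounds.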

Comparison between t-test and equivalence test

For comparison purposes, the equivalence test can be induced from the t-test.[1] Consider a t-test at significance level α_t-test achieving a power of 1 − β_t-test for a relevant effect size d_r. Both tests lead to the same inference whenever Δ = d_r and the error rates coincide, i.e. α_equiv-test = β_t-test and β_equiv-test = α_t-test, so that the error types (type I and type II) are interchanged between the t-test and the equivalence test. To achieve this for the t-test, either the sample size calculation needs to be carried out correctly, or the t-test significance level α_t-test needs to be adjusted, yielding the so-called revised t-test.[1] Both approaches have difficulties in practice, since sample size planning relies on unverifiable assumptions about the standard deviation, and the revised t-test leads to numerical problems.[1] By using an equivalence test instead, these limitations are removed while the test behaviour is preserved.

The figure below allows a visual comparison of the equivalence test and the t-test when the sample size calculation is affected by differences between the a priori standard deviation assumed in planning and the sample's actual standard deviation, which is a common problem. When the sample's standard deviation exceeds the planned one, using an equivalence test instead of a t-test additionally ensures that α_equiv-test is bounded, which the t-test does not do, its type II error growing arbitrarily large. On the other hand, when the sample's standard deviation falls below the planned one, the t-test becomes stricter than the d_r specified in the planning, which may randomly penalise the sample source (e.g., a device manufacturer). This makes the equivalence test safer to use.
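The pass probabilities in the figure can be sketched numerically. The sketch below assumes a known standard deviation and a normal approximation; the parameter values (n = 100, σ = 1, bound = 0.5) are illustrative.

```python
from statistics import NormalDist

N = NormalDist()

def pass_prob_ttest(mu, sigma, n, alpha=0.05):
    """Probability that a two-sided test of H0: mu = 0 'passes'
    (fails to reject) when the true mean is mu."""
    se = sigma / n ** 0.5
    z = N.inv_cdf(1 - alpha / 2)
    return N.cdf(z - mu / se) - N.cdf(-z - mu / se)

def pass_prob_equiv(mu, sigma, n, bound, alpha=0.05):
    """Probability that TOST rejects both one-sided nulls, i.e.
    concludes equivalence within (-bound, +bound)."""
    se = sigma / n ** 0.5
    z = N.inv_cdf(1 - alpha)
    lo, hi = -bound + z * se, bound - z * se  # sample mean must land here
    if lo >= hi:
        return 0.0  # sample too small to ever show equivalence
    return N.cdf((hi - mu) / se) - N.cdf((lo - mu) / se)
```

At the equivalence bound (mu = bound), pass_prob_equiv never exceeds α, whereas the t-test's chance of passing depends on how well the planning-stage standard deviation matched the data.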

Chances to pass (a) the t-test and (b) the equivalence test, depending on the actual error μ. For more details, see [8].

Literature

  • Walker, Esteban; Nowacki, Amy S. (February 2011). "Understanding Equivalence and Noninferiority Testing". Journal of General Internal Medicine. 26 (2): 192–6. doi:10.1007/s11606-010-1513-8. PMC 3019319. PMID 20857339.

References

  1. ^ a b c d e f Schuirmann, Donald J. (December 1987). "A comparison of the Two One-Sided Tests Procedure and the Power Approach for assessing the equivalence of average bioavailability". Journal of Pharmacokinetics and Biopharmaceutics. 15 (6): 657–680. doi:10.1007/BF01068419. ISSN 0090-466X.
  2. ^ Seaman, Michael A.; Serlin, Ronald C. (1998). "Equivalence confidence intervals for two-group comparisons of means". Psychological Methods. 3 (4): 403–411. doi:10.1037/1082-989X.3.4.403. ISSN 1939-1463.
  3. ^ a b Wellek, Stefan (2010). Testing statistical hypotheses of equivalence and noninferiority (2nd ed.). Boca Raton: CRC Press. ISBN 978-1-4398-0818-4. OCLC 457164048.
  4. ^ a b Rose, Evangeline M.; Mathew, Thomas; Coss, Derek A.; Lohr, Bernard; Omland, Kevin E. (November 2018). "A new statistical method to test equivalence: an application in male and female eastern bluebird song". Animal Behaviour. 145: 77–85. doi:10.1016/j.anbehav.2018.09.004.
  5. ^ Siebert, Michael; Ellenberger, David (December 2020). "Validation of automatic passenger counting: introducing the t-test-induced equivalence test". Transportation. 47 (6): 3031–3045. doi:10.1007/s11116-019-09991-9. ISSN 0049-4488.
  6. ^ Mazzolari, Raffaele; Porcelli, Simone; Bishop, David J.; Lakens, Daniël (March 2022). "Myths and methodologies: The use of equivalence and non‐inferiority tests for interventional studies in exercise physiology and sport science". Experimental Physiology. 107 (3): 201–212. doi:10.1113/EP090171. ISSN 0958-0670.
  7. ^ Lakens, Daniël (May 2017). "Equivalence Tests: A Practical Primer for t Tests, Correlations, and Meta-Analyses". Social Psychological and Personality Science. 8 (4): 355–362. doi:10.1177/1948550617697177. ISSN 1948-5506. PMC 5502906. PMID 28736600.
  8. ^ Siebert, Michael; Ellenberger, David (2019-04-10). "Validation of automatic passenger counting: introducing the t-test-induced equivalence test". Transportation. 47 (6): 3031–3045. doi:10.1007/s11116-019-09991-9. ISSN 0049-4488.