Draft:Clinical Versus Statistical Prediction

Clinical and statistical prediction are two different ways of combining information to make a decision.[1] Many important decisions require the integration of multiple pieces of information. Doctors combine symptom information, demographic information (e.g., gender, age, and work history), and the results of medical tests to predict a patient's ailment, resulting in a diagnosis. Judges combine eyewitness testimony, DNA evidence, and criminal history to make a prediction about a person's guilt, resulting in a verdict. Selection professionals combine CV impressions, interview impressions, and questionnaire results to predict the future behavior of an applicant, resulting in a selection decision. When the information is combined in the decision maker's mind, by thinking about it, this is called clinical prediction; it is also referred to as holistic, subjective, impressionistic, or informal prediction (or combination).[2] When the information is instead quantified and mechanically combined with a formula, this is called statistical prediction, also referred to as actuarial, algorithmic, formal, or mechanical prediction (or combination). Statistical prediction most commonly involves attaching weights to various sources of quantitative information and combining them mathematically. The weights can be obtained in various ways: as optimal weights via multiple regression, as bootstrapped weights using the so-called 'model of man' approach (which is based on Brunswik's lens model),[3] or as unit weights, by simply giving every piece of information equal weight.
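
The following is a minimal illustrative sketch in Python, not drawn from the cited sources, of how quantified predictor information can be combined mechanically, once with regression-derived weights and once with unit weights. The predictor names, effect sizes, and data are invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 500
    # Three hypothetical standardized predictors (e.g., a test score,
    # a structured-interview rating, and a work-sample score).
    X = rng.normal(size=(n, 3))
    # Simulated criterion (e.g., later job performance) plus noise;
    # the 'true' weights below are arbitrary illustration values.
    y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)

    # Optimal weights estimated by multiple regression (ordinary least squares).
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    regression_composite = X @ beta

    # Unit weights: every standardized predictor counts equally.
    unit_composite = X.sum(axis=1)

    def validity(composite, criterion):
        """Correlation between a composite and the criterion."""
        return np.corrcoef(composite, criterion)[0, 1]

    print("regression-weighted composite validity:", round(validity(regression_composite, y), 3))
    print("unit-weighted composite validity:      ", round(validity(unit_composite, y), 3))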

Statistical prediction is generally superior to clinical prediction in terms of decision accuracy.[4] This finding has since been extensively replicated and is supported by meta-analyses in the areas of human health and behaviour,[5] mental health,[6] and admissions and hiring.[7] Although the name 'clinical prediction' may imply a medical context, research on clinical versus statistical prediction has been carried out in a wide range of domains, including prediction of criminal recidivism,[8] assessment of marital satisfaction, lie detection, prediction of business failure, and prediction of magazine advertising sales.[9]

Paul Meehl introduced the issue of clinical versus statistical prediction to a broader audience of social scientists with his seminal 1954 book Clinical versus statistical prediction: A theoretical analysis and a review of the evidence.[10]

The Robustness of the Linear Model in Decision Making

One of the key findings of clinical versus statistical prediction research is that the exact weighting of the information is far less important than the act of mechanically combining it. A landmark 1974 study by Robyn Dawes and Bernard Corrigan found that even statistical prediction based on randomly determined weights can outperform clinical prediction, as long as positive predictors receive positive weights and negative predictors receive negative weights.[11] An example of a positive predictor of job performance is intelligence.[12]
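
A minimal simulation sketch in Python of this point, with data and effect sizes invented for illustration rather than taken from Dawes and Corrigan's study: as long as every predictor receives a weight of the correct sign, randomly chosen weights tend to produce composites whose validity is close to that of the regression-weighted composite.

    import numpy as np

    rng = np.random.default_rng(1)

    n = 1000
    X = rng.normal(size=(n, 4))                                    # four standardized positive predictors
    y = X @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(size=n)    # simulated criterion

    # Validity of the regression-weighted (optimal linear) composite.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    optimal_validity = np.corrcoef(X @ beta, y)[0, 1]

    # Validities of composites built from random positive weights.
    random_validities = [
        np.corrcoef(X @ rng.uniform(0.0, 1.0, size=4), y)[0, 1]
        for _ in range(1000)
    ]

    print("regression-weighted validity:", round(optimal_validity, 3))
    print("mean random-weight validity: ", round(float(np.mean(random_validities)), 3))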

This finding was replicated by Martin Yu and Nathan Kuncel in 2020. In their study, doctoral-level psychologists working at an international management consulting firm, who were trained in conducting managerial hiring assessments, were outperformed in evaluating candidates for management positions by mechanically combined composites based on randomly chosen weights.[13]

Findings also show that, when all predictor information is on the same scale (e.g., a 5-point scale), unit weighting, i.e., simply adding up the predictor information, is more accurate than experts' clinical judgement.[14]
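
For instance, in a minimal sketch in Python with hypothetical candidates rated on a common 1-to-5 scale across three invented dimensions, unit weighting simply adds the ratings and ranks candidates on the total.

    # Hypothetical ratings, all on the same 1-5 scale (illustration only).
    ratings = {
        "candidate_a": {"cognitive_test": 4, "interview": 3, "work_sample": 5},
        "candidate_b": {"cognitive_test": 5, "interview": 4, "work_sample": 4},
        "candidate_c": {"cognitive_test": 3, "interview": 5, "work_sample": 3},
    }

    # Unit-weighted composite: every piece of information counts equally.
    totals = {name: sum(scores.values()) for name, scores in ratings.items()}

    # Rank candidates by their unit-weighted total.
    for name, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
        print(name, total)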

Poor Reception

Despite strong evidence that statistical prediction is more accurate than clinical prediction, statistical prediction is rarely used in practice (see also algorithm aversion).[15] Reasons for this include unawareness of the superiority of statistical prediction, not knowing how or not wanting to quantify qualitative data, the perceived threat to decision makers' psychological need for autonomy, and disbelief in the evidence on clinical versus statistical prediction.[16]

References

  1. ^ Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence (pp. 3, 149). University of Minnesota Press. https://doi.org/10.1037/11281-000
  2. ^ Grove, W. M., & Meehl, P. E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction. Psychology, Public Policy, and Law, 2(2), 293–323. https://doi.org/10.1037/1076-8971.2.2.293
  3. ^ Goldberg, L. R. (1970). Man versus model of man: A rationale, plus some evidence, for a method of improving on clinical inferences. Psychological Bulletin, 73(6), 422–432. https://doi.org/10.1037/h0029230
  4. ^ Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. University of Minnesota Press. https://doi.org/10.1037/11281-000
  5. ^ Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12(1), 19–30. https://doi.org/10.1037/1040-3590.12.1.19
  6. ^ Ægisdóttir, S., White, M. J., Spengler, P. M., Maugherman, A. S., Anderson, L. A., Cook, R. S., Nichols, C. N., Lampropoulos, G. K., Walker, B. S., Cohen, G., & Rush, J. D. (2006). The Meta-Analysis of Clinical Judgment Project: Fifty-Six Years of Accumulated Research on Clinical Versus Statistical Prediction. The Counseling Psychologist, 34(3), 341–382. https://doi.org/10.1177/0011000005285875
  7. ^ Kuncel, N. R., Klieger, D. M., Connelly, B. S., & Ones, D. S. (2013). Mechanical versus clinical data combination in selection and admissions decisions: A meta-analysis. Journal of Applied Psychology, 98(6), 1060–1072. https://doi.org/10.1037/a0034156
  8. ^ Wormith, J. S., Hogg, S., & Guzzo, L. (2012). The Predictive Validity of a General Risk/Needs Assessment Inventory on Sexual Offender Recidivism and an Exploration of the Professional Override. Criminal Justice and Behavior, 39(12), 1511–1538. https://doi.org/10.1177/0093854812455741
  9. ^ Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: A meta-analysis. Psychological Assessment, 12(1), 19–30. https://doi.org/10.1037/1040-3590.12.1.19
  10. ^ Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668–1674. https://doi.org/10.1126/science.2648573
  11. ^ Dawes, R. M., & Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81(2), 95–106. https://doi.org/10.1037/h0037613
  12. ^ Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2022). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology, 107(11), 2040–2068. https://doi.org/10.1037/apl0000994
  13. ^ Yu, M., & Kuncel, N. (2020). Pushing the Limits for Judgmental Consistency: Comparing Random Weighting Schemes with Expert Judgments. Personnel Assessment and Decisions, 6(2). https://doi.org/10.25035/pad.2020.02.002
  14. ^ Dawes, R. M., & Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81(2), 95–106. https://doi.org/10.1037/h0037613
  15. ^ Meehl, P. E. (1986). Causes and Effects of My Disturbing Little Book. Journal of Personality Assessment, 50(3), 370–375. https://doi.org/10.1207/s15327752jpa5003_6
  16. ^ Neumann, M., Niessen, A. S. M., Hurks, P. P. M., & Meijer, R. R. (2023). Holistic and mechanical combination in psychological assessment: Why algorithms are underutilized and what is needed to increase their use. International Journal of Selection and Assessment. https://doi.org/10.1111/ijsa.12416