Cohen's kappa
Cohen's kappa coefficient (κ) is a statistic that is used to measure inter-rater reliability (and also Intra-rater reliability) for qualitative (categorical) items.[1] It is generally thought to be a more robust measure than simple percent agreement calculation, as κ takes into account the possibility of the agreement occurring by chance. There is controversy surrounding Cohen's kappa due to the difficulty in interpreting indices of agreement. Some researchers have suggested that it is conceptually simpler to evaluate disagreement between items.[2] See the Limitations section for more detail.
History
The first mention of a kappa-like statistic is attributed to Galton (1892);[3] see Smeeton (1985).[4]
The seminal paper introducing kappa as a new technique was published by Jacob Cohen in the journal Educational and Psychological Measurement in 1960.[5]
Definition
Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories. The definition of $\kappa$ is:

$$\kappa \equiv \frac{p_o - p_e}{1 - p_e} = 1 - \frac{1 - p_o}{1 - p_e},$$

where $p_o$ is the relative observed agreement among raters (identical to accuracy), and $p_e$ is the hypothetical probability of chance agreement, using the observed data to calculate the probabilities of each observer randomly seeing each category. If the raters are in complete agreement then $\kappa = 1$. If there is no agreement among the raters other than what would be expected by chance (as given by $p_e$), $\kappa = 0$. It is possible for the statistic to be negative,[6] which implies that there is no effective agreement between the two raters or the agreement is worse than random.
For k categories, N observations to categorize and $n_{ki}$ the number of times rater i predicted category k:

$$p_e = \frac{1}{N^2} \sum_k n_{k1} n_{k2}$$

This is derived from the following construction:

$$p_e = \sum_k \widehat{p_{k12}} = \sum_k \widehat{p_{k1}}\,\widehat{p_{k2}} = \sum_k \frac{n_{k1}}{N} \cdot \frac{n_{k2}}{N} = \frac{1}{N^2} \sum_k n_{k1} n_{k2}$$

where $\widehat{p_{k12}}$ is the estimated probability that both rater 1 and rater 2 will classify the same item as k, while $\widehat{p_{k1}}$ is the estimated probability that rater 1 will classify an item as k (and similarly for rater 2). The relation $\widehat{p_{k12}} = \widehat{p_{k1}}\,\widehat{p_{k2}}$ is based on the assumption that the ratings of the two raters are independent. The term $\widehat{p_{k1}}$ is estimated using the number of items classified as k by rater 1 ($n_{k1}$) divided by the total number of items to classify ($N$): $\widehat{p_{k1}} = \tfrac{n_{k1}}{N}$ (and similarly for rater 2).
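To make the definition concrete, the following is a minimal sketch (in Python, not part of the article) that computes κ from a square confusion matrix whose rows are rater 1's categories and whose columns are rater 2's; the function name cohen_kappa and the example counts are illustrative only.

```python
def cohen_kappa(confusion):
    """Cohen's kappa from a k-by-k confusion matrix of counts.

    confusion[i][j] = number of items that rater 1 put in category i
    and rater 2 put in category j.
    """
    N = sum(sum(row) for row in confusion)
    k = len(confusion)
    # Observed agreement: proportion of items on the diagonal.
    p_o = sum(confusion[i][i] for i in range(k)) / N
    # Chance agreement: p_e = (1 / N^2) * sum_k n_k1 * n_k2.
    row_totals = [sum(row) for row in confusion]        # rater 1 marginals (n_k1)
    col_totals = [sum(col) for col in zip(*confusion)]  # rater 2 marginals (n_k2)
    p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / N ** 2
    return (p_o - p_e) / (1 - p_e)

# The 50-proposal table from the next section gives kappa = 0.4.
print(cohen_kappa([[20, 5], [10, 15]]))  # ≈ 0.4
```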
Examples
Simple example
Suppose that you were analyzing data related to a group of 50 people applying for a grant. Each grant proposal was read by two readers and each reader either said "Yes" or "No" to the proposal. Suppose the disagreement count data were as follows, where A and B are readers, data on the main diagonal of the matrix (a and d) count the number of agreements and off-diagonal data (b and c) count the number of disagreements:
|        | B: Yes | B: No |
|--------|--------|-------|
| A: Yes | a      | b     |
| A: No  | c      | d     |
e.g.
|        | B: Yes | B: No |
|--------|--------|-------|
| A: Yes | 20     | 5     |
| A: No  | 10     | 15    |
The observed proportionate agreement is:

$$p_o = \frac{a + d}{a + b + c + d} = \frac{20 + 15}{50} = 0.7$$

To calculate $p_e$ (the probability of random agreement) we note that:
- Reader A said "Yes" to 25 applicants and "No" to 25 applicants. Thus reader A said "Yes" 50% of the time.
- Reader B said "Yes" to 30 applicants and "No" to 20 applicants. Thus reader B said "Yes" 60% of the time.
So the expected probability that both would say yes at random is:

$$p_\text{Yes} = 0.50 \times 0.60 = 0.30$$

Similarly:

$$p_\text{No} = 0.50 \times 0.40 = 0.20$$

Overall random agreement probability is the probability that they agreed on either Yes or No, i.e.:

$$p_e = p_\text{Yes} + p_\text{No} = 0.30 + 0.20 = 0.50$$

So now applying our formula for Cohen's Kappa we get:

$$\kappa = \frac{p_o - p_e}{1 - p_e} = \frac{0.7 - 0.5}{1 - 0.5} = 0.4$$
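For readers who want to check the arithmetic, here is a minimal sketch (Python, illustrative only, not part of the article) that reproduces the numbers above directly from the table counts.

```python
a, b, c, d = 20, 5, 10, 15                 # counts from the table above
N = a + b + c + d                          # 50 proposals
p_o = (a + d) / N                          # observed agreement: 35/50 = 0.7
p_yes = ((a + b) / N) * ((a + c) / N)      # both say "Yes" by chance: 0.5 * 0.6 = 0.3
p_no = ((c + d) / N) * ((b + d) / N)       # both say "No" by chance: 0.5 * 0.4 = 0.2
p_e = p_yes + p_no                         # chance agreement: 0.5
kappa = (p_o - p_e) / (1 - p_e)
print(round(p_o, 2), round(p_e, 2), round(kappa, 2))   # 0.7 0.5 0.4
```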
Same percentages but different numbers
A case sometimes considered to be a problem with Cohen's Kappa occurs when comparing the Kappa calculated for two pairs of raters with the two raters in each pair having the same percentage agreement but one pair give a similar number of ratings in each class while the other pair give a very different number of ratings in each class.[7] (In the cases below, notice B has 70 yeses and 30 nos, in the first case, but those numbers are reversed in the second.) For instance, in the following two cases there is equal agreement between A and B (60 out of 100 in both cases) in terms of agreement in each class, so we would expect the relative values of Cohen's Kappa to reflect this. However, calculating Cohen's Kappa for each:
|        | B: Yes | B: No |
|--------|--------|-------|
| A: Yes | 45     | 15    |
| A: No  | 25     | 15    |

$$\kappa = \frac{0.60 - 0.54}{1 - 0.54} = 0.1304$$
|        | B: Yes | B: No |
|--------|--------|-------|
| A: Yes | 25     | 35    |
| A: No  | 5      | 35    |

$$\kappa = \frac{0.60 - 0.46}{1 - 0.46} = 0.2593$$
we find that it shows greater similarity between A and B in the second case, compared to the first. This is because while the percentage agreement is the same, the percentage agreement that would occur 'by chance' is significantly higher in the first case (0.54 compared to 0.46).
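A brief sketch (Python, illustrative only) reproduces the two values; the helper simply restates the definition for a 2×2 table of counts with A on the rows and B on the columns.

```python
def kappa_2x2(yy, yn, ny, nn):
    """Cohen's kappa for a 2x2 table: (A=Yes,B=Yes), (A=Yes,B=No), (A=No,B=Yes), (A=No,B=No)."""
    N = yy + yn + ny + nn
    p_o = (yy + nn) / N                                   # observed agreement
    p_e = ((yy + yn) / N) * ((yy + ny) / N) \
        + ((ny + nn) / N) * ((yn + nn) / N)               # chance agreement from the marginals
    return (p_o - p_e) / (1 - p_e)

print(round(kappa_2x2(45, 15, 25, 15), 4))   # 0.1304 (first case)
print(round(kappa_2x2(25, 35, 5, 35), 4))    # 0.2593 (second case)
```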
Properties
Hypothesis testing and confidence interval
The p-value for kappa is rarely reported, probably because even relatively low values of kappa can nonetheless be significantly different from zero but not of sufficient magnitude to satisfy investigators.[8]: 66 Still, its standard error has been described[9] and is computed by various computer programs.[10]
Confidence intervals for Kappa may be constructed, for the expected Kappa values if we had an infinite number of items checked, using the following formula:[1]

$$CI: \kappa \pm Z_{1-\alpha/2} SE_\kappa$$

where $Z_{1-\alpha/2}$ is the standard normal percentile when $\alpha = 5\%$, and

$$SE_\kappa = \sqrt{\frac{p_o(1 - p_o)}{N(1 - p_e)^2}}$$

This is calculated by ignoring that $p_e$ is estimated from the data, and by treating $p_o$ as an estimated probability of a binomial distribution while using asymptotic normality (i.e. assuming that the number of items is large and that $p_o$ is not close to either 0 or 1). $SE_\kappa$ (and the CI in general) may also be estimated using bootstrap methods.
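As an illustration only, the sketch below (Python, not from the article) applies this standard-error formula to the grant-proposal example above, where $p_o = 0.7$, $p_e = 0.5$ and $N = 50$.

```python
import math

p_o, p_e, N = 0.7, 0.5, 50                       # values from the grant-proposal example
kappa = (p_o - p_e) / (1 - p_e)                  # 0.4
se_kappa = math.sqrt(p_o * (1 - p_o) / (N * (1 - p_e) ** 2))
z = 1.96                                         # standard normal percentile for alpha = 5%
lower, upper = kappa - z * se_kappa, kappa + z * se_kappa
print(round(se_kappa, 3), round(lower, 3), round(upper, 3))   # 0.13 0.146 0.654
```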
Interpreting magnitude
If statistical significance is not a useful guide, what magnitude of kappa reflects adequate agreement? Guidelines would be helpful, but factors other than agreement can influence its magnitude, which makes interpretation of a given magnitude problematic. As Sim and Wright noted, two important factors are prevalence (are the codes equiprobable or do their probabilities vary) and bias (are the marginal probabilities for the two observers similar or different). Other things being equal, kappas are higher when codes are equiprobable. On the other hand, Kappas are higher when codes are distributed asymmetrically by the two observers. In contrast to probability variations, the effect of bias is greater when Kappa is small than when it is large.[11]: 261–262
Another factor is the number of codes. As the number of codes increases, kappas become higher. Based on a simulation study, Bakeman and colleagues concluded that for fallible observers, values for kappa were lower when codes were fewer. And, in agreement with Sim & Wright's statement concerning prevalence, kappas were higher when codes were roughly equiprobable. Thus Bakeman et al. concluded that "no one value of kappa can be regarded as universally acceptable."[12]: 357 They also provide a computer program that lets users compute values for kappa specifying number of codes, their probability, and observer accuracy. For example, given equiprobable codes and observers who are 85% accurate, values of kappa are 0.49, 0.60, 0.66, and 0.69 when the number of codes is 2, 3, 5, and 10, respectively.
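Bakeman et al.'s program is not reproduced here, but the following simulation sketch (Python; the error model, in which each observer records the true code with the stated accuracy and otherwise guesses uniformly among the remaining codes, is an assumption made for illustration) recovers values close to the figures quoted above.

```python
import random
from collections import Counter

def simulated_kappa(n_codes, accuracy, n_items=200_000, seed=1):
    """Cohen's kappa for two equally fallible observers rating equiprobable codes."""
    rng = random.Random(seed)
    codes = range(n_codes)

    def observe(true_code):
        # Record the true code with probability `accuracy`,
        # otherwise pick one of the other codes uniformly at random.
        if rng.random() < accuracy:
            return true_code
        return rng.choice([c for c in codes if c != true_code])

    pairs = [(observe(t), observe(t))
             for t in (rng.randrange(n_codes) for _ in range(n_items))]
    p_o = sum(a == b for a, b in pairs) / n_items
    marg1 = Counter(a for a, _ in pairs)
    marg2 = Counter(b for _, b in pairs)
    p_e = sum((marg1[c] / n_items) * (marg2[c] / n_items) for c in codes)
    return (p_o - p_e) / (1 - p_e)

for k in (2, 3, 5, 10):
    print(k, round(simulated_kappa(k, 0.85), 2))   # roughly 0.49, 0.60, 0.66, 0.69
```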
Nonetheless, magnitude guidelines have appeared in the literature. Perhaps the first was Landis and Koch,[13] who characterized values < 0 as indicating no agreement and 0–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60 as moderate, 0.61–0.80 as substantial, and 0.81–1 as almost perfect agreement. This set of guidelines is however by no means universally accepted; Landis and Koch supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful.[14] Fleiss's[15]: 218 equally arbitrary guidelines characterize kappas over 0.75 as excellent, 0.40 to 0.75 as fair to good, and below 0.40 as poor.
Kappa maximum
Kappa assumes its theoretical maximum value of 1 only when both observers distribute codes the same, that is, when corresponding row and column sums are identical. Anything less is less than perfect agreement. Still, the maximum value kappa could achieve given unequal distributions helps interpret the value of kappa actually obtained. The equation for κ maximum is:[16]

$$\kappa_{\max} = \frac{P_{\max} - P_{\exp}}{1 - P_{\exp}}$$

where $P_{\exp} = \sum_{i=1}^{k} P_{i+} P_{+i}$, as usual, $P_{\max} = \sum_{i=1}^{k} \min(P_{i+}, P_{+i})$,

k = number of codes, $P_{i+}$ are the row probabilities, and $P_{+i}$ are the column probabilities.
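A short sketch (Python, illustrative only) of this formula, applied to the first of the two "same percentages" tables above, where A's marginal probabilities are 0.6/0.4 and B's are 0.7/0.3:

```python
def kappa_max(row_probs, col_probs):
    """Maximum attainable kappa given fixed row and column marginal probabilities."""
    p_exp = sum(r * c for r, c in zip(row_probs, col_probs))      # chance agreement, as usual
    p_max = sum(min(r, c) for r, c in zip(row_probs, col_probs))  # largest possible diagonal mass
    return (p_max - p_exp) / (1 - p_exp)

# First "same percentages" table: A says Yes 60% of the time, B says Yes 70%.
print(round(kappa_max([0.6, 0.4], [0.7, 0.3]), 4))   # 0.7826, versus the observed 0.1304
```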
Limitations
Kappa is an index that considers observed agreement with respect to a baseline agreement. However, investigators must consider carefully whether Kappa's baseline agreement is relevant for the particular research question. Kappa's baseline is frequently described as the agreement due to chance, which is only partially correct. Kappa's baseline agreement is the agreement that would be expected due to random allocation, given the quantities specified by the marginal totals of the square contingency table. Thus, Kappa = 0 when the observed allocation is apparently random, regardless of the quantity disagreement as constrained by the marginal totals. However, for many applications, investigators should be more interested in the quantity disagreement in the marginal totals than in the allocation disagreement as described by the additional information on the diagonal of the square contingency table. Thus for many applications, Kappa's baseline is more distracting than enlightening. Consider the following example:
|               | Reference: G | Reference: R |
|---------------|--------------|--------------|
| Comparison: G | 1            | 14           |
| Comparison: R | 0            | 1            |
The disagreement proportion is 14/16 or 0.875. The disagreement is due to quantity because allocation is optimal. Kappa is 0.01.
|               | Reference: G | Reference: R |
|---------------|--------------|--------------|
| Comparison: G | 0            | 1            |
| Comparison: R | 1            | 14           |
The disagreement proportion is 2/16 or 0.125. The disagreement is due to allocation because quantities are identical. Kappa is -0.07.
Here, reporting quantity and allocation disagreement is informative while Kappa obscures information. Furthermore, Kappa introduces some challenges in calculation and interpretation because Kappa is a ratio. It is possible for Kappa's ratio to return an undefined value due to zero in the denominator. Furthermore, a ratio does not reveal its numerator nor its denominator. It is more informative for researchers to report disagreement in two components, quantity and allocation. These two components describe the relationship between the categories more clearly than a single summary statistic. When predictive accuracy is the goal, researchers can more easily begin to think about ways to improve a prediction by using two components of quantity and allocation, rather than one ratio of Kappa.[2]
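As a hedged illustration of this decomposition (following Pontius and Millones' definitions, with quantity disagreement taken as half the summed absolute differences between the row and column marginal proportions, and allocation disagreement as the remainder of the total disagreement), the Python sketch below reproduces the two G/R examples; it is written for this summary, not taken from the cited paper.

```python
def disagreement_components(table):
    """Total, quantity, and allocation disagreement for a square contingency table."""
    N = sum(sum(row) for row in table)
    k = len(table)
    total = 1 - sum(table[i][i] for i in range(k)) / N     # 1 minus observed agreement
    row_p = [sum(row) / N for row in table]                # comparison marginal proportions
    col_p = [sum(col) / N for col in zip(*table)]          # reference marginal proportions
    quantity = sum(abs(r - c) for r, c in zip(row_p, col_p)) / 2
    allocation = total - quantity
    return total, quantity, allocation

print(disagreement_components([[1, 14], [0, 1]]))   # (0.875, 0.875, 0.0): all quantity
print(disagreement_components([[0, 1], [1, 14]]))   # (0.125, 0.0, 0.125): all allocation
```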
Some researchers have expressed concern over κ's tendency to take the observed categories' frequencies as givens, which can make it unreliable for measuring agreement in situations such as the diagnosis of rare diseases. In these situations, κ tends to underestimate the agreement on the rare category.[17] For this reason, κ is considered an overly conservative measure of agreement.[18] Others[19][citation needed] contest the assertion that kappa "takes into account" chance agreement. To do this effectively would require an explicit model of how chance affects rater decisions. The so-called chance adjustment of kappa statistics supposes that, when not completely certain, raters simply guess—a very unrealistic scenario.
Related statistics
Scott's Pi
A similar statistic, called pi, was proposed by Scott (1955). Cohen's kappa and Scott's pi differ in terms of how $p_e$ is calculated.
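For illustration (a sketch under the usual definitions, not text from the article): Cohen's kappa estimates chance agreement from each rater's own marginal proportions, whereas Scott's pi pools the two raters' marginals before squaring. On the 50-proposal example above the two chance terms, and hence the two coefficients, differ slightly.

```python
# 50-proposal example: A said Yes 25/50, B said Yes 30/50, observed agreement 0.7.
p_o = 0.7
a_yes, b_yes = 25 / 50, 30 / 50

# Cohen's kappa: chance agreement from the product of each rater's own marginals.
p_e_cohen = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)        # 0.5
# Scott's pi: chance agreement from the squared pooled (averaged) marginals.
pooled_yes = (a_yes + b_yes) / 2
p_e_scott = pooled_yes ** 2 + (1 - pooled_yes) ** 2          # 0.505
print(round((p_o - p_e_cohen) / (1 - p_e_cohen), 3))   # kappa = 0.4
print(round((p_o - p_e_scott) / (1 - p_e_scott), 3))   # pi ≈ 0.394
```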
Fleiss' kappa
Note that Cohen's kappa measures agreement between two raters only. For a similar measure of agreement (Fleiss' kappa) used when there are more than two raters, see Fleiss (1971). The Fleiss kappa, however, is a multi-rater generalization of Scott's pi statistic, not Cohen's kappa. Kappa is also used to compare performance in machine learning, but the directional version known as Informedness or Youden's J statistic is argued to be more appropriate for supervised learning.[20]
Weighted kappa
The weighted kappa allows disagreements to be weighted differently[21] and is especially useful when codes are ordered.[8]: 66 Three matrices are involved, the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros. Off-diagonal cells contain weights indicating the seriousness of that disagreement. Often, cells one off the diagonal are weighted 1, those two off 2, etc.
The equation for weighted κ is:

$$\kappa = 1 - \frac{\sum_{i=1}^{k} \sum_{j=1}^{k} w_{ij} x_{ij}}{\sum_{i=1}^{k} \sum_{j=1}^{k} w_{ij} m_{ij}}$$

where k = number of codes and $w_{ij}$, $x_{ij}$, and $m_{ij}$ are elements in the weight, observed, and expected matrices, respectively. When diagonal cells contain weights of 0 and all off-diagonal cells weights of 1, this formula produces the same value of kappa as the calculation given above.
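A minimal sketch (Python, illustrative only) of this formula; as a consistency check, weights of 0 on the diagonal and 1 elsewhere reproduce the unweighted kappa of 0.4 from the grant-proposal example.

```python
def weighted_kappa(observed, weights):
    """Weighted kappa from an observed count matrix and a disagreement-weight matrix."""
    k = len(observed)
    N = sum(sum(row) for row in observed)
    row_tot = [sum(row) for row in observed]
    col_tot = [sum(col) for col in zip(*observed)]
    # Expected counts under chance agreement, built from the marginals.
    expected = [[row_tot[i] * col_tot[j] / N for j in range(k)] for i in range(k)]
    num = sum(weights[i][j] * observed[i][j] for i in range(k) for j in range(k))
    den = sum(weights[i][j] * expected[i][j] for i in range(k) for j in range(k))
    return 1 - num / den

obs = [[20, 5], [10, 15]]        # the grant-proposal table again
w01 = [[0, 1], [1, 0]]           # 0 on the diagonal, 1 off it
print(weighted_kappa(obs, w01))  # 0.4, matching the unweighted value
```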
See also
References
- ^ a b McHugh, Mary L. (2012). "Interrater reliability: The kappa statistic". Biochemia Medica. 22 (3): 276–282. doi:10.11613/bm.2012.031. PMC 3900052. PMID 23092060.
- ^ a b Pontius, Robert; Millones, Marco (2011). "Death to Kappa: birth of quantity disagreement and allocation disagreement for accuracy assessment". International Journal of Remote Sensing. 32 (15): 4407–4429. Bibcode:2011IJRS...32.4407P. doi:10.1080/01431161.2011.552923. S2CID 62883674.
- ^ Galton, F. (1892) Finger Prints Macmillan, London.
- ^ Smeeton, N.C. (1985). "Early History of the Kappa Statistic". Biometrics. 41 (3): 795. JSTOR 2531300.
- ^ Cohen, Jacob (1960). "A coefficient of agreement for nominal scales". Educational and Psychological Measurement. 20 (1): 37–46. doi:10.1177/001316446002000104. hdl:1942/28116. S2CID 15926286.
- ^ Sim, Julius; Wright, Chris C. (2005). "The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements". Physical Therapy. 85 (3): 257–268. doi:10.1093/ptj/85.3.257. ISSN 1538-6724. PMID 15733050.
- ^ Kilem Gwet (May 2002). "Inter-Rater Reliability: Dependency on Trait Prevalence and Marginal Homogeneity" (PDF). Statistical Methods for Inter-Rater Reliability Assessment. 2: 1–10. Archived from the original (PDF) on 2011-07-07. Retrieved 2011-02-02.
- ^ a b Bakeman, R.; Gottman, J.M. (1997). Observing interaction: An introduction to sequential analysis (2nd ed.). Cambridge, UK: Cambridge University Press. ISBN 978-0-521-27593-4.
- ^ Fleiss, J.L.; Cohen, J.; Everitt, B.S. (1969). "Large sample standard errors of kappa and weighted kappa". Psychological Bulletin. 72 (5): 323–327. doi:10.1037/h0028106.
- ^ Robinson, B.F; Bakeman, R. (1998). "ComKappa: A Windows 95 program for calculating kappa and related statistics". Behavior Research Methods, Instruments, and Computers. 30 (4): 731–732. doi:10.3758/BF03209495.
- ^ Sim, J; Wright, C. C (2005). "The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements". Physical Therapy. 85 (3): 257–268. doi:10.1093/ptj/85.3.257. PMID 15733050.
- ^ Bakeman, R.; Quera, V.; McArthur, D.; Robinson, B. F. (1997). "Detecting sequential patterns and determining their reliability with fallible observers". Psychological Methods. 2 (4): 357–370. doi:10.1037/1082-989X.2.4.357.
- ^ Landis, J.R.; Koch, G.G. (1977). "The measurement of observer agreement for categorical data". Biometrics. 33 (1): 159–174. doi:10.2307/2529310. JSTOR 2529310. PMID 843571.
- ^ Gwet, K. (2010). "Handbook of Inter-Rater Reliability (Second Edition)" ISBN 978-0-9708062-2-2 [page needed]
- ^ Fleiss, J.L. (1981). Statistical methods for rates and proportions (2nd ed.). New York: John Wiley. ISBN 978-0-471-26370-8.
- ^ Umesh, U. N.; Peterson, R.A.; Sauber M. H. (1989). "Interjudge agreement and the maximum value of kappa". Educational and Psychological Measurement. 49 (4): 835–850. doi:10.1177/001316448904900407. S2CID 123306239.
- ^ Viera, Anthony J.; Garrett, Joanne M. (2005). "Understanding interobserver agreement: the kappa statistic". Family Medicine. 37 (5): 360–363. PMID 15883903.
- ^ Strijbos, J.; Martens, R.; Prins, F.; Jochems, W. (2006). "Content analysis: What are they talking about?". Computers & Education. 46: 29–48. CiteSeerX 10.1.1.397.5780. doi:10.1016/j.compedu.2005.04.002.
- ^ Uebersax, JS. (1987). "Diversity of decision-making models and the measurement of interrater agreement" (PDF). Psychological Bulletin. 101: 140–146. CiteSeerX 10.1.1.498.4965. doi:10.1037/0033-2909.101.1.140. Archived from the original (PDF) on 2016-03-03. Retrieved 2010-10-16.
- ^ Powers, David M. W. (2012). "The Problem with Kappa" (PDF). Conference of the European Chapter of the Association for Computational Linguistics (EACL2012) Joint ROBUS-UNSUP Workshop. Archived from the original (PDF) on 2016-05-18. Retrieved 2012-07-20.
- ^ Cohen, J. (1968). "Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit". Psychological Bulletin. 70 (4): 213–220. doi:10.1037/h0026256. PMID 19673146.
Further reading
- Banerjee, M.; Capozzoli, Michelle; McSweeney, Laura; Sinha, Debajyoti (1999). "Beyond Kappa: A Review of Interrater Agreement Measures". The Canadian Journal of Statistics. 27 (1): 3–23. doi:10.2307/3315487. JSTOR 3315487.
- Brennan, R. L.; Prediger, D. J. (1981). "Coefficient λ: Some Uses, Misuses, and Alternatives". Educational and Psychological Measurement. 41 (3): 687–699. doi:10.1177/001316448104100307. S2CID 122806628.
- Cohen, Jacob (1960). "A coefficient of agreement for nominal scales". Educational and Psychological Measurement. 20 (1): 37–46. doi:10.1177/001316446002000104. hdl:1942/28116. S2CID 15926286.
- Cohen, J. (1968). "Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit". Psychological Bulletin. 70 (4): 213–220. doi:10.1037/h0026256. PMID 19673146.
- Fleiss, J.L. (1971). "Measuring nominal scale agreement among many raters". Psychological Bulletin. 76 (5): 378–382. doi:10.1037/h0031619.
- Fleiss, J. L. (1981) Statistical methods for rates and proportions. 2nd ed. (New York: John Wiley) pp. 38–46
- Fleiss, J.L.; Cohen, J. (1973). "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability". Educational and Psychological Measurement. 33 (3): 613–619. doi:10.1177/001316447303300309. S2CID 145183399.
- Gwet, Kilem L. (2014) Handbook of Inter-Rater Reliability, Fourth Edition, (Gaithersburg : Advanced Analytics, LLC) ISBN 978-0970806284
- Gwet, K. (2008). "Computing inter-rater reliability and its variance in the presence of high agreement" (PDF). British Journal of Mathematical and Statistical Psychology. 61 (Pt 1): 29–48. doi:10.1348/000711006X126600. PMID 18482474. Archived from the original (PDF) on 2016-03-03. Retrieved 2010-06-16.
- Gwet, K. (2008). "Variance Estimation of Nominal-Scale Inter-Rater Reliability with Random Selection of Raters" (PDF). Psychometrika. 73 (3): 407–430. doi:10.1007/s11336-007-9054-8. S2CID 20827973.
- Gwet, K. (2008). "Intrarater Reliability." Wiley Encyclopedia of Clinical Trials, Copyright 2008 John Wiley & Sons, Inc.
- Scott, W. (1955). "Reliability of content analysis: The case of nominal scale coding". Public Opinion Quarterly. 17 (3): 321–325. doi:10.1086/266577.
- Sim, J.; Wright, C. C. (2005). "The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements". Physical Therapy. 85 (3): 257–268. doi:10.1093/ptj/85.3.257. PMID 15733050.
- Warrens, J. (2011). "Cohen's kappa is a weighted average". Statistical Methodology. 8 (6): 473–484. doi:10.1016/j.stamet.2011.06.002.
External links
- Kappa, its meaning, problems, and several alternatives
- Kappa Statistics: Pros and Cons
- Software implementations
  - Windows program for kappa, weighted kappa, and kappa maximum