Kappa statistic to measure agreement beyond chance in free-response assessments
Abstract
Background: The usual kappa statistic requires that all observations be enumerated. However, in free-response assessments, only positive (or abnormal) findings are reported; negative (or normal) findings are not. This situation occurs frequently in imaging and other diagnostic studies. We propose here a kappa statistic that is suitable for free-response assessments.
Method: We derived the equivalent of Cohen's kappa statistic for two raters under the assumption that the number of possible findings for any given patient is very large, as well as a formula for the sampling variance that is applicable to independent observations (for clustered observations, a bootstrap procedure is proposed). The proposed statistic was applied to a real-life dataset and compared with the common practice of collapsing observations within a finite number of regions of interest.
Results: The free-response kappa is computed from the total numbers of discordant (b and c) and concordant positive (d) observations made in all patients, as 2d/(b + c + 2d). In 84 full-body magnetic resonance imaging procedures in children evaluated by 2 independent raters, the free-response kappa statistic was 0.820. Aggregation of results within regions of interest led to overestimation of agreement beyond chance.
Conclusions: The free-response kappa provides an estimate of agreement beyond chance in situations where only positive findings are reported by raters.
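The abstract's formula for the free-response kappa, 2d/(b + c + 2d), can be sketched as a short Python function. The counts in the usage example are hypothetical, not taken from the study's dataset:

```python
def free_response_kappa(b, c, d):
    """Free-response kappa, computed as 2d / (b + c + 2d).

    b, c: discordant positive findings (reported by only one of the two raters)
    d:    concordant positive findings (reported by both raters)
    """
    denom = b + c + 2 * d
    if denom == 0:
        raise ValueError("no positive findings reported by either rater")
    return 2 * d / denom

# Hypothetical counts: rater 1 alone reports 9 findings, rater 2 alone
# reports 11, and both raters agree on 90 findings.
print(free_response_kappa(9, 11, 90))  # -> 0.9
```

Note that only positive findings enter the formula, which is exactly what makes it usable when the (very large) number of concordant negative observations is unknown.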
Main Authors: | Marc Carpentier, Christophe Combescure, Laura Merlini, Thomas V. Perneger |
---|---|
Author Affiliations: | Division of Clinical Epidemiology, Geneva University Hospitals, and Faculty of Medicine, University of Geneva (Carpentier, Combescure, Perneger); Division of Radiology, Geneva University Hospitals, and Faculty of Medicine, University of Geneva (Merlini) |
Format: | Article |
Language: | English |
Published: | BMC, 2017-04-01 |
Series: | BMC Medical Research Methodology |
ISSN: | 1471-2288 |
Subjects: | Reproducibility of results; Reliability (Epidemiology); Methodological Study; Biostatistics |
Online Access: | http://link.springer.com/article/10.1186/s12874-017-0340-6 |
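For clustered observations the abstract proposes a bootstrap procedure for the sampling variance. A minimal sketch of one plausible patient-level percentile bootstrap is below; the per-patient (b, c, d) counts and resampling details are assumptions for illustration, not the paper's exact procedure:

```python
import random

def bootstrap_kappa_ci(patient_counts, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the free-response kappa, resampling patients.

    patient_counts: list of (b, c, d) tuples, one per patient, where b and c
    are that patient's discordant positive findings and d the concordant ones.
    """
    rng = random.Random(seed)
    n = len(patient_counts)
    stats = []
    for _ in range(n_boot):
        # Resample whole patients with replacement to respect clustering.
        sample = [patient_counts[rng.randrange(n)] for _ in range(n)]
        b = sum(t[0] for t in sample)
        c = sum(t[1] for t in sample)
        d = sum(t[2] for t in sample)
        if b + c + 2 * d > 0:
            stats.append(2 * d / (b + c + 2 * d))
    stats.sort()
    lo = stats[int((alpha / 2) * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```

Resampling at the patient level, rather than at the level of individual findings, keeps within-patient correlation intact, which is why a naive per-finding variance formula would be too narrow for clustered data.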