Kappa statistic considerations in evaluating inter-rater reliability between two raters: which, when and context matters

Abstract: Background: In research designs that rely on observational ratings provided by two raters, assessing inter-rater reliability (IRR) is a frequently required task. However, some studies fall short in properly applying statistical procedures, omitting essential information necessary for interpretation…
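The abstract concerns Cohen's kappa for agreement between two raters. As context, a minimal sketch of the standard kappa computation, κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is chance agreement from the raters' marginal label frequencies (the function name `cohen_kappa` and the sample ratings are illustrative, not from the article):

```python
from collections import Counter

def cohen_kappa(ratings1, ratings2):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    # p_o: proportion of items on which the raters agree
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # p_e: agreement expected by chance, from each rater's marginal frequencies
    c1, c2 = Counter(ratings1), Counter(ratings2)
    p_e = sum(c1[label] * c2.get(label, 0) for label in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binary ratings from two raters on four items
print(cohen_kappa([1, 1, 1, 0], [1, 1, 0, 0]))  # → 0.5
```

Here p_o = 3/4 and p_e = 1/2, so κ = 0.25 / 0.5 = 0.5, illustrating how kappa discounts the agreement expected by chance alone.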


Bibliographic Details
Main Authors: Ming Li, Qian Gao, Tianfei Yu
Format: Article
Language: English
Published: BMC 2023-08-01
Series: BMC Cancer
Online Access: https://doi.org/10.1186/s12885-023-11325-z