Reliability in evaluator-based tests: using simulation-constructed models to determine contextually relevant agreement thresholds
Abstract:
Background: Indices of inter-evaluator reliability are used in many fields such as computational linguistics, psychology, and medical science; however, the interpretation of resulting values and determination of appropriate thresholds lack context and are often guided only by arbitrary “rule...
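The abstract argues for deriving agreement thresholds from simulation rather than from fixed rules of thumb. As a minimal illustrative sketch (not the authors' actual model), the Python snippet below simulates two raters who respond purely at chance, builds a null distribution of Cohen's kappa, and reads off a high percentile as a context-specific threshold; the category count, item count, simulation count, and percentile are assumed values for demonstration only.

```python
# A minimal sketch, not the model from the article: simulate chance-level
# raters to build a null distribution of Cohen's kappa, then take a high
# percentile as a context-derived agreement threshold. The design values
# (categories, items, simulations, percentile) are illustrative assumptions.
import numpy as np

def cohens_kappa(a, b, n_categories):
    """Cohen's kappa for two integer-coded rating vectors."""
    n = len(a)
    confusion = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        confusion[i, j] += 1
    p_obs = np.trace(confusion) / n                # observed agreement
    p_exp = np.dot(confusion.sum(axis=1) / n,      # chance agreement from
                   confusion.sum(axis=0) / n)      # the marginal rates
    return (p_obs - p_exp) / (1 - p_exp)

rng = np.random.default_rng(seed=42)
n_categories, n_items, n_sims = 3, 50, 10_000     # assumed design values

null_kappas = np.array([
    cohens_kappa(rng.integers(0, n_categories, n_items),
                 rng.integers(0, n_categories, n_items),
                 n_categories)
    for _ in range(n_sims)
])

# Kappa values above this cutoff occur in fewer than 5% of purely
# chance-level rating runs of this size.
threshold = np.percentile(null_kappas, 95)
print(f"95th-percentile chance-level kappa: {threshold:.3f}")
```

The resulting cutoff shifts with the number of items and categories, which mirrors the abstract's point: a single rule-of-thumb threshold ignores the study-specific context that a simulation-constructed null model captures.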
Main Authors: Dylan T. Beckler, Zachary C. Thumser, Jonathon S. Schofield, Paul D. Marasco
Format: Article
Language: English
Published: BMC, 2018-11-01
Series: BMC Medical Research Methodology
Online Access: http://link.springer.com/article/10.1186/s12874-018-0606-7
Similar Items
- K-Alpha Calculator–Krippendorff's Alpha Calculator: A user-friendly tool for computing Krippendorff's Alpha inter-rater reliability coefficient
  by: Giacomo Marzi, et al.
  Published: (2024-06-01)
- An Empirical Comparative Assessment of Inter-Rater Agreement of Binary Outcomes and Multiple Raters
  by: Menelaos Konstantinidis, et al.
  Published: (2022-01-01)
- Inter-Rater Agreement in Assessing Risk of Bias in Melanoma Prediction Studies Using the Prediction Model Risk of Bias Assessment Tool (PROBAST): Results from a Controlled Experiment on the Effect of Specific Rater Training
  by: Isabelle Kaiser, et al.
  Published: (2023-03-01)
- Measuring inter-rater reliability for nominal data – which coefficients and confidence intervals are appropriate?
  by: Antonia Zapf, et al.
  Published: (2016-08-01)
- How to assess and compare inter-rater reliability, agreement and correlation of ratings: an exemplary analysis of mother-father and parent-teacher expressive vocabulary rating pairs
  by: Margarita Stolarova, et al.
  Published: (2014-06-01)