Computer-Assisted Relevance Assessment: A Case Study of Updating Systematic Medical Reviews


Bibliographic Details
Main Authors: Noha S. Tawfik, Marco Spruit
Format: Article
Language: English
Published: MDPI AG 2020-04-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/10/8/2845
Description
Summary: It is becoming more challenging for health professionals to keep up to date with current research. To save time, many experts base their evidence syntheses on systematic reviews rather than on primary studies. Consequently, reviews must be updated to include new evidence, which requires a significant amount of effort and delays the update process. This effort can be substantially reduced by applying computer-assisted techniques to identify relevant studies. In this study, we followed a “human-in-the-loop” approach by engaging medical experts in a controlled user experiment on updating systematic reviews. The primary outcome of interest was to compare the performance levels achieved when judging full abstracts versus single sentences accompanied by Natural Language Inference labels. The experiment included post-task questionnaires to collect participants’ feedback on the usability of the computer-assisted suggestions. The findings lead us to conclude that sentence-level relevance assessment achieves higher recall.
ISSN: 2076-3417
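
The summary above describes showing reviewers single sentences accompanied by Natural Language Inference (NLI) labels instead of full abstracts. The record does not specify the authors' model or pipeline, so the following is a minimal illustrative sketch only: it assumes a generic pretrained MNLI-style model from the Hugging Face transformers library (roberta-large-mnli), a hand-written hypothesis derived from a review's inclusion criterion, and invented example sentences. None of these choices come from the paper.

# Illustrative sketch (not the authors' implementation): label candidate-study
# sentences with a generic pretrained NLI model. Model name, hypothesis wording,
# and the example sentences are assumptions for demonstration purposes.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # any MNLI-style classifier would work here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def nli_label(premise: str, hypothesis: str) -> str:
    """Return the NLI label (ENTAILMENT / NEUTRAL / CONTRADICTION) for a sentence pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax(dim=-1))]

# Hypothetical inclusion criterion for a review being updated.
hypothesis = "The study reports a randomized trial of statins for stroke prevention."

# Sentences from a candidate abstract; in practice these would come from a
# sentence splitter applied to newly retrieved records.
candidate_sentences = [
    "We randomized 1,200 patients with prior stroke to atorvastatin or placebo.",
    "Baseline characteristics were balanced between groups.",
]

for sentence in candidate_sentences:
    label = nli_label(sentence, hypothesis)
    # A reviewer would see each sentence alongside its label and judge relevance,
    # rather than reading the full abstract.
    print(f"{label:13s} | {sentence}")

Under this kind of setup, recall would be measured as the fraction of truly relevant studies that reviewers include after seeing the labelled sentences, which is the comparison the study reports between sentence-level and abstract-level judging.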