Automated vetting of radiology referrals: exploring natural language processing and traditional machine learning approaches

Abstract
Background: With a marked increase in the utilisation of computed tomography (CT), inappropriate imaging is a significant concern. Manual justification audits of radiology referrals are time-consuming and require financial resources. We aimed to retrospectively audit the justification of brain CT referrals by applying natural language processing and traditional machine learning (ML) techniques to predict their justification based on the audit outcomes.
Methods: Two human experts retrospectively analysed the justification of 375 adult brain CT referrals performed in a tertiary referral hospital during the 2019 calendar year, using a cloud-based platform for structured referring. Cohen's kappa was computed to measure inter-rater reliability. Referrals were represented as bag-of-words (BOW) and term frequency-inverse document frequency models. Text preprocessing techniques, including custom stop words (CSW) and spell correction (SC), were applied to the referral text. Logistic regression, random forest, and support vector machines (SVM) were used to predict the justification of referrals. The data were split into training and test sets (300/75), and the test set was used to compute weighted accuracy, sensitivity, specificity, and the area under the curve (AUC).
Results: In total, 253 (67.5%) examinations were deemed justified, 75 (20.0%) unjustified, and 47 (12.5%) maybe justified. The agreement between the annotators was strong (κ = 0.835). The BOW + CSW + SC + SVM model outperformed the other binary models, with a weighted accuracy of 92%, a sensitivity of 91%, a specificity of 93%, and an AUC of 0.948.
Conclusions: Traditional ML models can accurately predict the justification of unstructured brain CT referrals. This offers potential for automated justification analysis of CT referrals in clinical departments.
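
The pipeline described in the abstract can be illustrated with a short sketch. This is not the authors' code: it assumes Python with scikit-learn, the referral texts, labels, annotator ratings, and custom stop-word list are invented placeholders, and balanced accuracy is used as a stand-in for the reported weighted accuracy.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import (balanced_accuracy_score, cohen_kappa_score,
                             confusion_matrix, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical referral texts and audit outcomes (1 = justified, 0 = unjustified);
# the study itself used 375 annotated adult brain CT referrals.
referrals = [
    "sudden onset severe headache with vomiting, query subarachnoid haemorrhage",
    "chronic stable headache for years, no red flag features",
    "new focal neurology and confusion, query space occupying lesion",
    "routine follow-up, patient feels well, no new symptoms",
    "head injury on anticoagulation, GCS 14, query intracranial bleed",
    "dizziness on standing, longstanding, normal examination",
    "first seizure in adult, query structural cause",
    "mild tension-type headache, request for reassurance",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Inter-rater reliability between the two annotators (Cohen's kappa).
annotator_a = [1, 0, 1, 0, 1, 0, 1, 0]
annotator_b = [1, 0, 1, 1, 1, 0, 1, 0]
print("Cohen's kappa:", cohen_kappa_score(annotator_a, annotator_b))

# Bag-of-words (BOW) features with an illustrative custom stop-word (CSW) list;
# TfidfVectorizer could be swapped in for the TF-IDF representation.
custom_stop_words = ["query", "patient", "request"]
vectorizer = CountVectorizer(stop_words=custom_stop_words, lowercase=True)
X = vectorizer.fit_transform(referrals)

# Hold out a test set (the study used a 300/75, i.e. 80/20, split).
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, stratify=labels, random_state=42
)

# Support vector machine classifier; logistic regression and random forest
# would be trained and evaluated in the same way.
clf = SVC(probability=True, random_state=42).fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)[:, 1]

# Test-set metrics; balanced accuracy stands in for the paper's "weighted accuracy".
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("weighted (balanced) accuracy:", balanced_accuracy_score(y_test, y_pred))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(y_test, y_score))
```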

Bibliographic Details
Main Authors: Jaka Potočnik, Edel Thomas, Ronan Killeen, Shane Foley, Aonghus Lawlor, John Stowe
Affiliations: University College Dublin School of Medicine (Potočnik, Thomas, Killeen, Foley, Stowe); University College Dublin School of Computer Science (Lawlor)
Format: Article
Language: English
Published: SpringerOpen 2022-08-01
Series: Insights into Imaging
ISSN: 1869-4101
Subjects: Machine learning; Natural language processing; Justification audit; Radiology referral; Clinical decision support
Online Access: https://doi.org/10.1186/s13244-022-01267-8