Common evidence gaps in point-of-care diagnostic test evaluation: a review of horizon scan reports

Since 2008, the Oxford Diagnostic Horizon Scan Programme has been identifying and summarising evidence on new and emerging diagnostic technologies relevant to primary care. We used these reports to determine the sequence and timing of evidence for new point-of-care diagnostic tests and to identify common evidence gaps in this process.

We conducted a systematic overview of diagnostic horizon scan reports. We obtained the primary studies referenced in each horizon scan report (n=40) and extracted details of study size, clinical setting and design characteristics. In particular, we assessed whether each study evaluated test accuracy, test impact or cost-effectiveness. The evidence for each point-of-care test was mapped against the Horvath framework for diagnostic test evaluation.

We extracted data from 500 primary studies. Most diagnostic technologies underwent assessment of clinical performance (ie, the ability to detect a clinical condition) (71.2%), but very few progressed to comparative clinical effectiveness (10.0%) or cost-effectiveness evaluation (8.6%), even in the more established and frequently reported clinical domains such as cardiovascular disease. The median time to complete an evaluation cycle was 9 years (IQR 5.5-12.5 years). The sequence of evidence generation was typically haphazard, and some diagnostic tests appear to be implemented in routine care without completing essential evaluation stages such as clinical effectiveness.

Evidence generation for new point-of-care diagnostic tests is slow, tends to focus on accuracy, and overlooks other test attributes such as impact, implementation and cost-effectiveness. Evaluating this dynamic cycle, and feeding data from clinical effectiveness back to refine analytical and clinical performance, are key to improving the efficiency of point-of-care diagnostic test development and its impact on clinically relevant outcomes. While the 'road map' of steps needed to generate evidence is reasonably well delineated, we provide evidence on the complexity, length and variability of the actual process that many diagnostic technologies undergo.

Bibliographic Details
Main Authors: Verbakel, J, Turner, P, Thompson, M, Plüddemann, A, Price, C, Shinkins, B, Van den Bruel, A
Format: Journal article
Language: English
Published: BMJ Publishing Group 2017
Identifier: oxford-uuid:6cccea05-064b-48ec-a8f7-b689192a070e
Institution: University of Oxford