Comparison of the Evaluations of a Case-Based Reasoning Decision Support Tool by Expert Reviewers with those of End Users

BACKGROUND: Decision-support tools (DSTs) are typically developed by computer engineers for use by clinicians. Prototype testing of DSTs can be performed relatively easily by one or two clinical experts. The costly alternative is to test each prototype on a larger number of diverse clinicians, based on the untested assumption that their evaluations would more accurately reflect those of actual end users.
HYPOTHESIS: We hypothesized substantial or better agreement (defined as a kappa statistic greater than 0.6) between the evaluations of a case-based reasoning (CBR) DST predicting emergency department (ED) admission for bronchiolitis made by clinically diverse end users and those of two clinical experts who evaluated the same DST output.
METHODS: Three outputs from a previously described DST were evaluated by the emergency physicians (EPs) who originally saw the patients and by two pediatric EPs with an interest in bronchiolitis. The DST outputs were the predicted disposition, an example of another previously seen patient offered to explain the prediction, and explanatory dialog. Each was rated on the scale Definitely Not, No, Maybe, Yes, and Absolutely, which was converted to a Likert scale for analysis. Agreement was measured with the kappa statistic.
RESULTS: Agreement between end users and the expert reviewers was moderate for the DST-predicted disposition, but only fair or poor for the value of the explanatory case and dialog.
CONCLUSION: Agreement between expert evaluators and end users on the value of the CBR DST's predicted dispositions was moderate. For the more subjective explanatory components, agreement was fair, poor, or worse.
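The methods describe mapping the ordinal ratings (Definitely Not through Absolutely) to a Likert scale and measuring rater agreement with the kappa statistic. The sketch below is a minimal illustration of that kind of calculation: it maps the scale to integers and computes an unweighted Cohen's kappa between one end user's and one expert reviewer's ratings. The example ratings and the cohens_kappa helper are hypothetical, and the paper does not state whether a weighted or unweighted kappa was used.

# Minimal sketch (not from the paper): map the study's rating scale to Likert
# integers and compute unweighted Cohen's kappa between two raters scoring the
# same DST outputs. All ratings below are hypothetical.
from collections import Counter

LIKERT = {"Definitely Not": 1, "No": 2, "Maybe": 3, "Yes": 4, "Absolutely": 5}

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two raters scoring the same items."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: proportion of items the two raters scored identically.
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement expected from each rater's marginal rating frequencies.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    p_expected = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical ratings of ten DST-predicted dispositions by one end user
# and one expert reviewer.
end_user = [LIKERT[r] for r in ["Yes", "Maybe", "Yes", "No", "Absolutely",
                                "Yes", "Maybe", "Yes", "No", "Yes"]]
expert = [LIKERT[r] for r in ["Yes", "Yes", "Yes", "No", "Yes",
                              "Yes", "Maybe", "Maybe", "No", "Yes"]]

# Under the study's threshold, kappa > 0.6 would count as substantial agreement.
print(f"kappa = {cohens_kappa(end_user, expert):.2f}")

In the study itself, agreement was assessed across multiple end users, two expert reviewers, and three separate DST outputs, so the published kappas cannot be reproduced from a toy comparison like this one.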


Bibliographic Details
Main Authors: Walsh, Paul; Doyle, Donal; McQuillen, Kemedy K; Bigler, Joshua; Thompson, Caleb; Lin, Ed
Format: Article
Language: English
Published: eScholarship Publishing, University of California, 2008-05-01
Series: Western Journal of Emergency Medicine
Subjects: evaluation, case based reasoning, decision support, emergency department
ISSN: 1936-900X, 1936-9018
Online Access: http://escholarship.org/uc/item/5f28q4rz