The performance of kaizen tasks across three online DCE surveys: an evidence synthesis
Kaizen is a Japanese term for continuous improvement (kai ~ change, zen ~ good). In a kaizen task, a respondent makes sequential choices to improve an object’s profile, revealing a preference path. Including kaizen tasks in a discrete choice experiment (DCE) has the advantage of collecting greater preference evidence than pick-one tasks, such as paired comparisons…
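The record's description (below) reports how closely predictions from the conditional logit and ZBT models agree with the paired-comparison probabilities, using Pearson correlation and Lin's concordance. As a minimal illustrative sketch only, not the authors' code, the two statistics could be computed as follows; the probability values are hypothetical placeholders, not data from the CVP, CSOR, or Y-3L studies.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two prediction vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    sxy = np.cov(x, y, bias=True)[0, 1]  # population covariance
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical model predictions vs. paired-comparison probabilities
# (illustrative values only; not taken from any of the three surveys).
model_pred = np.array([0.62, 0.71, 0.55, 0.80, 0.47])
paired_obs = np.array([0.60, 0.75, 0.50, 0.78, 0.52])

pearson = np.corrcoef(model_pred, paired_obs)[0, 1]  # linear correlation
ccc = lin_ccc(model_pred, paired_obs)                 # agreement with the 45-degree line
print(f"Pearson: {pearson:.3f}, Lin CCC: {ccc:.3f}")
```

Unlike Pearson correlation, Lin's coefficient penalizes systematic shifts away from perfect agreement as well as scatter, which is why the abstract reports both measures side by side.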
| Main Authors: | Craig, BM; Jumamyradov, M; Rivero-Arias, O |
|---|---|
| Format: | Journal article |
| Language: | English |
| Published: | Springer, 2024 |
_version_ | 1826316079693037568 |
---|---|
author | Craig, BM Jumamyradov, M Rivero-Arias, O |
author_facet | Craig, BM Jumamyradov, M Rivero-Arias, O |
author_sort | Craig, BM |
collection | OXFORD |
description | <p>Kaizen is a Japanese term for continuous improvement (kai ~ change, zen ~ good). In a kaizen task, a respondent makes sequential choices to improve an object’s profile, revealing a preference path. Including kaizen tasks in a discrete choice experiment (DCE) has the advantage of collecting greater preference evidence than pick-one tasks, such as paired comparisons. So far, three online DCEs have included kaizen tasks: the 2020 US COVID-19 vaccination (CVP) study, the 2021 UK Children’s Surgery Outcome Reporting (CSOR) study, and the 2023 US EQ-5D-Y-3L valuation (Y-3L) study. In this evidence synthesis, we describe the performance of the kaizen tasks in terms of response behaviors, conditional logit and Zermelo-Bradley-Terry (ZBT) estimates, and their standard errors in each of the surveys. Comparing the CVP and Y-3L, including hold-outs (i.e., attributes shared by all alternatives) seems to reduce positional behavior by half. The CVP tasks excluded multilevel improvements; therefore, we could not estimate logit main effects directly. In the CSOR, only 12 of the 21 logit estimates are significantly positive (p-value < 0.05), possibly due to the fixed attribute order. All Y-3L estimates are significantly positive, and their predictions are highly correlated (Pearson: logit 0.802, ZBT 0.882) and strongly agree (Lin: logit 0.744, ZBT 0.852) with the paired-comparison probabilities. These DCEs offer important lessons for future studies: (1) include warm-up tasks, hold-outs, and multilevel improvements; (2) randomize the attribute order (i.e., up-down) at the respondent level; and (3) recruit smaller samples of respondents than traditional DCEs with only pick-one tasks.</p> |
first_indexed | 2024-09-25T04:33:39Z |
format | Journal article |
id | oxford-uuid:84e15a8c-4994-425e-ae4b-8e3598d8efdf |
institution | University of Oxford |
language | English |
last_indexed | 2024-12-09T03:39:29Z |
publishDate | 2024 |
publisher | Springer |
record_format | dspace |
spelling | oxford-uuid:84e15a8c-4994-425e-ae4b-8e3598d8efdf2024-12-06T09:04:45ZThe performance of kaizen tasks across three online DCE surveys: an evidence synthesisJournal articlehttp://purl.org/coar/resource_type/c_dcae04bcuuid:84e15a8c-4994-425e-ae4b-8e3598d8efdfEnglishSymplectic ElementsSpringer2024Craig, BMJumamyradov, MRivero-Arias, O<p>Kaizen is a Japanese term for continuous improvement (kai ~ change, zen ~ good). In a kaizen task, a respondent makes sequential choices to improve an object’s profile, revealing a preference path. Including kaizen tasks in a discrete choice experiment (DCE) has the advantage of collecting greater preference evidence than pick-one tasks, such as paired comparisons. So far, three online DCEs have included kaizen tasks: the 2020 US COVID-19 vaccination (CVP) study, the 2021 UK Children’s Surgery Outcome Reporting (CSOR) study, and the 2023 US EQ-5D-Y-3L valuation (Y-3L) study. In this evidence synthesis, we describe the performance of the kaizen tasks in terms of response behaviors, conditional logit and Zermelo-Bradley-Terry (ZBT) estimates, and their standard errors in each of the surveys. Comparing the CVP and Y-3L, including hold-outs (i.e., attributes shared by all alternatives) seems to reduce positional behavior by half. The CVP tasks excluded multilevel improvements; therefore, we could not estimate logit main effects directly. In the CSOR, only 12 of the 21 logit estimates are significantly positive (p-value < 0.05), possibly due to the fixed attribute order. All Y-3L estimates are significantly positive, and their predictions are highly correlated (Pearson: logit 0.802, ZBT 0.882) and strongly agree (Lin: logit 0.744, ZBT 0.852) with the paired-comparison probabilities. These DCEs offer important lessons for future studies: (1) include warm-up tasks, hold-outs, and multilevel improvements; (2) randomize the attribute order (i.e., up-down) at the respondent level; and (3) recruit smaller samples of respondents than traditional DCEs with only pick-one tasks.</p> |
spellingShingle | Craig, BM Jumamyradov, M Rivero-Arias, O The performance of kaizen tasks across three online DCE surveys: an evidence synthesis |
title | The performance of kaizen tasks across three online DCE surveys: an evidence synthesis |
title_full | The performance of kaizen tasks across three online DCE surveys: an evidence synthesis |
title_fullStr | The performance of kaizen tasks across three online DCE surveys: an evidence synthesis |
title_full_unstemmed | The performance of kaizen tasks across three online DCE surveys: an evidence synthesis |
title_short | The performance of kaizen tasks across three online DCE surveys: an evidence synthesis |
title_sort | performance of kaizen tasks across three online dce surveys an evidence synthesis |
work_keys_str_mv | AT craigbm theperformanceofkaizentasksacrossthreeonlinedcesurveysanevidencesynthesis AT jumamyradovm theperformanceofkaizentasksacrossthreeonlinedcesurveysanevidencesynthesis AT riveroariaso theperformanceofkaizentasksacrossthreeonlinedcesurveysanevidencesynthesis AT craigbm performanceofkaizentasksacrossthreeonlinedcesurveysanevidencesynthesis AT jumamyradovm performanceofkaizentasksacrossthreeonlinedcesurveysanevidencesynthesis AT riveroariaso performanceofkaizentasksacrossthreeonlinedcesurveysanevidencesynthesis |