Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study

Bibliographic Details
Main Authors: Megan E Gregory, Lindsey N Sova, Timothy R Huerta, Ann Scheck McAlearney
Format: Article
Language: English
Published: JMIR Publications 2023-11-01
Series: Journal of Medical Internet Research
Online Access: https://www.jmir.org/2023/1/e48236
Description:

Background: Surveys of hospitalized patients are important for research and for learning about unobservable medical issues (eg, mental health, quality of life, and symptoms), but there has been little work examining survey data quality in this population, whose capacity to respond to survey items may differ from that of the general population.

Objective: The aim of this study was to determine what factors drive response rates, survey drop-offs, and missing data in surveys of hospitalized patients.

Methods: Cross-sectional surveys were distributed on an inpatient tablet to patients in a large, midwestern US hospital. Three versions were tested: one with 174 items and two with 111 items; one of the 111-item versions included missing item reminders that prompted participants when they did not answer items. Response rate, drop-off rate (abandoning the survey before completion), and item missingness (skipping items) were examined to assess data quality. Chi-square tests, Kaplan-Meier survival curves, and distribution charts were used to compare data quality among survey versions. Response duration was computed for each version.

Results: Overall, 2981 patients responded. Response rate did not differ between the 174- and 111-item versions (81.7% vs 83%; P=.53). Drop-off was significantly reduced when the survey was shortened (65.7% vs 20.2% of participants dropped off; P<.001). Approximately one-quarter of participants had dropped off by item 120, and over half had dropped off by item 158. The percentage of participants with missing data decreased substantially when missing item reminders were added (77.2% vs 31.7% of participants; P<.001). The mean percentage of items with missing data was reduced in the shorter survey (40.7% vs 20.3% of items missing); with missing item reminders, it was reduced further (20.3% vs 11.7% of items missing). Across versions, for the median participant, each item added 24.6 seconds to the survey's duration.

Conclusions: Hospitalized patients may tolerate longer surveys than the general population, but surveys given to hospitalized patients should have a maximum of 120 items to ensure high rates of completion. Missing item prompts should be used to reduce missing data. Future research should examine generalizability to nonhospitalized individuals.
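The drop-off analysis the abstract describes can be sketched with a Kaplan-Meier estimator, treating item number as "time" and drop-off as the event, with participants who finish the survey censored at its last item. The sketch below is illustrative only: the data are hypothetical, not the study's, and the function is a minimal hand-rolled estimator rather than the authors' actual analysis code.

```python
from collections import Counter

def kaplan_meier(event_times, censor_times):
    """Minimal Kaplan-Meier survival estimate.

    event_times:  item numbers at which participants dropped off (the event).
    censor_times: item numbers at which participants were censored
                  (ie, they completed the survey).
    Returns a list of (item_number, survival_probability) pairs, one per
    item number at which at least one drop-off occurred.
    """
    d = Counter(event_times)    # drop-offs per item number
    c = Counter(censor_times)   # censored (completed) per item number
    times = sorted(set(d) | set(c))
    at_risk = len(event_times) + len(censor_times)
    surv = 1.0
    curve = []
    for t in times:
        if d[t]:
            # Standard KM step: multiply by (1 - events / at-risk).
            surv *= 1.0 - d[t] / at_risk
            curve.append((t, surv))
        # Remove both events and censored observations from the risk set.
        at_risk -= d[t] + c[t]
    return curve

# Hypothetical cohort of 10 participants taking a 174-item survey:
# four abandon it at items 30, 60, 60, and 120; six complete all 174 items.
curve = kaplan_meier([30, 60, 60, 120], [174] * 6)
```

With these made-up numbers, the estimated probability of still responding falls to 0.9 after item 30, 0.7 after item 60, and 0.6 after item 120. In the study's actual data, the analogous curve showed roughly 75% of participants still responding at item 120, which motivates the 120-item recommendation.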
ISSN: 1438-8871
DOI: 10.2196/48236
ORCID iDs: Megan E Gregory, https://orcid.org/0000-0002-6888-6886; Lindsey N Sova, https://orcid.org/0000-0001-5477-3808; Timothy R Huerta, https://orcid.org/0000-0002-9978-3564; Ann Scheck McAlearney, https://orcid.org/0000-0001-9107-5419