Validating Parallel-Forms Tests for Assessing Anesthesia Resident Knowledge

We created a serious game to teach first-year anesthesiology (CA-1) residents to perform general anesthesia for cesarean delivery. We aimed to investigate resident knowledge gains after playing the game and receiving one of 2 debriefing modalities. We report on the development and validatio...


Bibliographic Details
Main Authors: Allison J. Lee, Stephanie R. Goodman, Melissa E. B. Bauer, Rebecca D. Minehart, Shawn Banks, Yi Chen, Ruth L. Landau, Madhabi Chatterji
Format: Article
Language: English
Published: SAGE Publishing 2024-02-01
Series: Journal of Medical Education and Curricular Development
Online Access: https://doi.org/10.1177/23821205241229778
author Allison J. Lee
Stephanie R. Goodman
Melissa E. B. Bauer
Rebecca D. Minehart
Shawn Banks
Yi Chen
Ruth L. Landau
Madhabi Chatterji
collection DOAJ
description We created a serious game to teach first-year anesthesiology (CA-1) residents to perform general anesthesia for cesarean delivery. We aimed to investigate resident knowledge gains after playing the game and receiving one of 2 debriefing modalities. We report on the development and validation of scores from parallel test forms for criterion-referenced interpretations of resident knowledge. The test forms were intended for use as pre- and posttests for the experiment. Validating the instruments that measured the study's primary outcome was considered essential for adding rigor to the planned experiment and for trusting its results. Development of the parallel multiple-choice test forms included: (1) specifying the assessment purpose and population; (2) specifying the content domain and writing/selecting items; (3) expert content validation of items paired by topic and cognitive level; and (4) empirical validation of scores from the parallel test forms using Classical Test Theory (CTT) techniques. Field testing involved online administration of 52 shuffled items from both test forms to 24 CA-1s, 21 second-year anesthesiology (CA-2) residents, 2 fellows, 1 attending anesthesiologist, and 1 respondent of unknown rank at 3 US institutions. Items from each form yielded near-normal score distributions with similar medians, ranges, and standard deviations. Evaluations of CTT item difficulty (item p values) and discrimination (D) indices indicated that most items met the assumptions of criterion-referenced test design, separating experienced from novice residents. Experienced residents performed better than novices on overall domain scores (P < .05). Kuder-Richardson Formula 20 (KR-20) reliability estimates for both test forms exceeded the acceptability cutoff of .70, and the parallel-forms reliability estimate was high at .86, indicating results consistent with theoretical expectations.
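The abstract names two CTT item statistics: item difficulty (p) and a discrimination (D) index. As a minimal illustration with invented data (not the study's), these are typically computed as below; the 27% upper/lower grouping is a common convention and an assumption here, not a detail stated in the abstract.

```python
# Illustrative CTT item analysis (invented data, not the study's).
# p = item difficulty: proportion of examinees answering the item correctly.
# D = discrimination index: proportion correct among top scorers minus
#     proportion correct among bottom scorers, grouped by total score.

def item_difficulty(item_scores):
    """Proportion correct for one dichotomously (0/1) scored item."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, fraction=0.27):
    """Upper-minus-lower D index; the 27% group size is a convention."""
    n = max(1, round(fraction * len(total_scores)))
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    lower, upper = order[:n], order[-n:]
    p_upper = sum(item_scores[i] for i in upper) / n
    p_lower = sum(item_scores[i] for i in lower) / n
    return p_upper - p_lower

# Ten hypothetical examinees: one item's 0/1 scores and their test totals.
item = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
totals = [48, 45, 44, 30, 41, 22, 25, 40, 28, 21]
print(item_difficulty(item))               # 0.5
print(discrimination_index(item, totals))  # 1.0
```

Items with moderate p values and positive D, as here, separate high from low scorers, which is the property the abstract's criterion-referenced design checks for.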
Total scores from the parallel test forms demonstrated item-level validity, strong internal consistency, and parallel-forms reliability, suggesting sufficient robustness for assessing knowledge outcomes of CA-1 residents.
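The two reliability figures the abstract reports (KR-20 per form, and a parallel-forms coefficient of .86) have standard CTT formulas. A minimal sketch with invented scores (the study's data are not reproduced here): KR-20 from a persons-by-items 0/1 matrix, and parallel-forms reliability as the Pearson correlation between examinees' totals on the two forms.

```python
# Hedged sketch of the two reliability estimates named in the abstract,
# computed on invented data with only the Python standard library.
from statistics import mean, pvariance

def kr20(matrix):
    """KR-20 internal consistency for a persons-by-items 0/1 matrix."""
    k = len(matrix[0])                      # number of items
    totals = [sum(row) for row in matrix]   # each examinee's total score
    p = [mean(row[j] for row in matrix) for j in range(k)]  # item p values
    pq = sum(pi * (1 - pi) for pi in p)     # sum of item variances
    return (k / (k - 1)) * (1 - pq / pvariance(totals))

def pearson_r(x, y):
    """Pearson correlation; used here as parallel-forms reliability."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Five hypothetical examinees, four items on form A.
form_a = [[1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0]]
print(round(kr20(form_a), 2))              # 0.8
totals_a = [3, 2, 1, 4, 0]                 # form A totals for the five examinees
totals_b = [4, 2, 1, 3, 1]                 # invented form B totals, same examinees
print(round(pearson_r(totals_a, totals_b), 2))
```

Under this framing, a KR-20 above .70 on each form and a high between-form correlation are exactly the criteria the abstract reports the instruments as meeting.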
format Article
id doaj.art-7fc218c1896a411a9090b2a1bf1c62fa
institution Directory Open Access Journal
issn 2382-1205
language English
publishDate 2024-02-01
publisher SAGE Publishing
series Journal of Medical Education and Curricular Development
Author affiliations:
Allison J. Lee: Department of Anesthesiology, New York, NY, USA
Stephanie R. Goodman: Department of Anesthesiology, New York, NY, USA
Melissa E. B. Bauer: Department of Anesthesiology, Durham, NC, USA
Rebecca D. Minehart: Department of Anesthesia, Critical Care and Pain Medicine, Boston, MA, USA
Shawn Banks: Department of Anesthesiology, Perioperative Medicine and Pain Management, University of Miami, Miami, FL, USA
Yi Chen: Teachers College, New York, NY, USA
Ruth L. Landau: Department of Anesthesiology, New York, NY, USA
Madhabi Chatterji: Teachers College, New York, NY, USA
title Validating Parallel-Forms Tests for Assessing Anesthesia Resident Knowledge
url https://doi.org/10.1177/23821205241229778