Novel In-Training Evaluation Report in an Internal Medicine Residency Program: Improving the Quality of the Narrative Assessment

OBJECTIVE To determine whether incorporating our novel in-training evaluation report (ITER), which prompts each resident to list at least three self-identified learning goals, improved the quality of narrative assessments as measured by the Narrative Evaluation Quality Instrument (NEQI). METHODS A total of 1468 narrative assessments from a single institution from 2017 to 2021 were deidentified, compiled, and sorted into pre-intervention and post-intervention form arms. Due to limitations in our residency management suite, incorporating learning goals required switching from an electronic form to a hand-delivered form. Comments were graded by two research personnel using the NEQI's 0–12 scale, with 12 representing the maximum quality for a comment. The outcome of the study was the mean difference in NEQI score between the electronic pre-intervention period and the paper post-intervention period. RESULTS The mean NEQI score for the pre-intervention period was 2.43 ± 3.34, and the mean NEQI score for the post-intervention period was 3.31 ± 1.71, with a mean difference of 0.88 (p < 0.001). In the pre-intervention period, 46% of evaluations were submitted without a narrative assessment (scored as a zero), while only 1% of post-intervention evaluations had no narrative assessment. Internal consistency reliability, as measured by Ebel's intraclass correlation coefficient (ICC), showed high agreement between the two raters (ICC = 0.92). CONCLUSIONS Our findings suggest that implementing a timely, hand-delivered paper ITER that incorporates resident learning goals can lead to overall higher-quality narrative assessments.

Bibliographic Details
Main Authors: Marc Gutierrez, Kelsey Wilson, Brant Bickford, Joseph Yuhas, Ronald Markert, Kathryn M Burtson
Format: Article
Language: English
Published: SAGE Publishing, 2023-10-01
ISSN: 2382-1205
Series: Journal of Medical Education and Curricular Development
Online Access:https://doi.org/10.1177/23821205231206058
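The abstract's two headline statistics (the mean NEQI score difference between arms and the two-rater ICC) can be reproduced from raw scores. The sketch below is a minimal illustration only: the score lists in the test are hypothetical, the study's data are not included here, and a standard two-way random-effects ICC(2,1) stands in for Ebel's ICC formula, which the record does not reproduce.

```python
from statistics import mean


def mean_difference(pre, post):
    """Mean NEQI score in the post-intervention arm minus the pre-intervention arm."""
    return mean(post) - mean(pre)


def icc_2_1(pairs):
    """Two-way random-effects ICC(2,1) for two raters.

    pairs: list of (rater1_score, rater2_score), one tuple per narrative comment.
    Note: shown as a stand-in for Ebel's ICC, whose exact formula the
    source record does not give.
    """
    n, k = len(pairs), 2
    all_scores = [s for pair in pairs for s in pair]
    grand = mean(all_scores)
    subject_means = [mean(p) for p in pairs]
    rater_means = [mean(r) for r in zip(*pairs)]

    # Two-way ANOVA sums of squares.
    ss_subjects = k * sum((m - grand) ** 2 for m in subject_means)
    ss_raters = n * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((s - grand) ** 2 for s in all_scores)
    ss_error = ss_total - ss_subjects - ss_raters

    msr = ss_subjects / (n - 1)            # between-subject mean square
    msc = ss_raters / (k - 1)              # between-rater mean square
    mse = ss_error / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With identical ratings from both raters the ICC evaluates to 1.0; near-identical ratings yield values close to 1, consistent with the high agreement (ICC = 0.92) reported in the abstract.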