Voluntary vs. compulsory student evaluation of clerkships: effect on validity and potential bias

Bibliographic Details
Main Authors: Sola Aoun Bahous, Pascale Salameh, Angelique Salloum, Wael Salameh, Yoon Soo Park, Ara Tekian
Format: Article
Language: English
Published: BMC 2018-01-01
Series: BMC Medical Education
Online Access: http://link.springer.com/article/10.1186/s12909-017-1116-8
Description
Summary:
Background: Students' evaluations of their learning experiences can provide a useful source of information about clerkship effectiveness in undergraduate medical education. However, low response rates in clerkship evaluation surveys remain an important limitation. This study examined the impact on validity evidence of increasing response rates through a compulsory approach.
Methods: Data included 192 responses obtained voluntarily from 49 third-year students in 2014–2015, and 171 responses obtained compulsorily from 49 students in the first six months of the following year at one medical school in Lebanon. Evidence supporting internal structure and response process validity was compared between the two administration modalities. The authors also tested for potential bias introduced by the compulsory approach by examining students' responses to a sham item added to the last survey administration.
Results: Response rates increased from 56% in the voluntary group to 100% in the compulsory group (P < 0.001). Students in both groups provided comparable clerkship ratings, except for one clerkship that received higher ratings in the voluntary group (P = 0.02). Respondents in the voluntary group had higher academic performance than those in the compulsory group, but this difference diminished when whole-class grades were compared. Reliability of ratings was adequately high and comparable across the two consecutive years. Testing for non-response bias in the voluntary group showed that female students responded more frequently in two clerkships. Testing for authority-induced bias revealed that students might complete the evaluation randomly, without attention to content.
Conclusions: While increasing response rates is often a policy requirement aimed at improving the credibility of ratings, using authority to enforce responses may not increase reliability and can raise concerns about the meaningfulness of the evaluation. Administrators are urged to consider not only response rates but also the representativeness and quality of responses when administering evaluation surveys.
ISSN: 1472-6920