Measuring evolution learning: impacts of student participation incentives and test timing

Abstract

Background: Policy documents like Vision and Change and the Next Generation Science Standards emphasize the importance of using constructed-response assessments to measure student learning, but little work has examined the extent to which administration conditions (e.g., participation incentives, end-of-course timing) bias inferences about learning using such instruments. This study investigates potential biases in the measurement of evolution understanding (one time point) and learning (pre-post) using a constructed-response instrument.

Methods: The constructed-response ACORNS instrument (Assessment of COntextual Reasoning about Natural Selection) was administered at the beginning of the semester, during the final exam, and at the end of the semester to large samples of North American undergraduates (N = 488–1379, 68–96% participation rate). Three ACORNS scores were studied: number of evolutionary core concepts (CC), presence of evolutionary misconceptions (MIS), and presence of normative scientific reasoning across contexts (MODC). Hierarchical logistic and linear models (HLMs) were used to study the impact of participation incentives (regular credit vs. extra credit) and end-of-course timing (final exam vs. post-test) on inferences about evolution understanding (single time point) and learning (pre-post) derived from the three ACORNS scores. The analyses also explored whether results were generalizable across race/ethnicity and gender.

Results: Variation in participation incentives and end-of-course ACORNS administration timing did not meaningfully impact inferences about evolution understanding (i.e., interpretations of CC, MIS, and MODC magnitudes at a single time point); all comparisons were either insignificant or, if significant, considered to be small effect sizes. Furthermore, participation incentives and end-of-course timing did not meaningfully impact inferences about evolution learning (i.e., interpretations of CC, MIS, and MODC changes through time). These findings were consistent across race/ethnicity and gender groups.

Conclusion: Inferences about evolution understanding and learning derived from ACORNS scores were in most cases robust to variations in participation incentives and end-of-course timing, suggesting that educators may have some flexibility in terms of when and how they deploy the ACORNS instrument.
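Note on the models: the abstract reports that hierarchical logistic and linear models (HLMs) were fit with participation incentive and test timing as predictors. For readers unfamiliar with this class of model, the sketch below shows how the linear case might be specified in Python with statsmodels. It is an illustration only, not the authors' analysis or code: the input file and the column names (cc_score, incentive, timepoint, course_section) are hypothetical stand-ins.

# Illustrative sketch of a hierarchical (mixed-effects) linear model of the
# kind the abstract describes. All names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format data: one row per student per ACORNS administration,
# with a CC score, incentive condition, timepoint, and course section.
df = pd.read_csv("acorns_scores.csv")  # assumed file; not from the study

# Random intercepts for course section; fixed effects for participation
# incentive (regular vs. extra credit) and timepoint (pre vs. post), plus
# their interaction to capture differences in pre-post change.
model = smf.mixedlm(
    "cc_score ~ incentive * timepoint",
    data=df,
    groups=df["course_section"],
)
result = model.fit()
print(result.summary())

The binary outcomes mentioned in the abstract (MIS and MODC presence) would call for the logistic analogue of this model rather than the linear one shown here.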

Bibliographic Details
Main Authors: Gena C. Sbeglia, Ross H. Nehm (Department of Ecology and Evolution, Stony Brook University)
Format: Article
Language: English
Published: BMC, 2022-06-01
Series: Evolution: Education and Outreach, vol. 15, issue 1, pp. 1–15
ISSN: 1936-6426, 1936-6434
Subjects: Assessment, Learning evolution, Undergraduates, ACORNS, Testing conditions
Online Access: https://doi.org/10.1186/s12052-022-00166-2