From: Measuring evolution learning: impacts of student participation incentives and test timing
ACORNS study | Authors | Key finding |
---|---|---|
Grounding the design of the assessment in well-established cognitive principles | Opfer et al. (2012) | Following NRC (2001) recommendations, the ACORNS was designed to align with three core cognitive principles central to scientific reasoning |
Correspondence of written explanation scores to clinical oral interviews with undergraduates | Beggrow et al. (2014) | Oral interview scores from more than 100 students corresponded more closely to their ACORNS scores than to their scores on a multiple-choice evolution assessment |
Analysis of potential English Learner (EL) bias in written tasks | Ha and Nehm (2016) | EvoGrader scoring of ACORNS responses containing EL spelling errors showed no evidence of bias |
Examination of potential gender bias in written tasks | Federer et al. (2016) | DIF analyses found minimal gender bias in ACORNS written tasks |
Study of how the order of items impacts student performance | Federer et al. (2014) | Administering two ACORNS items differing in surface features was recommended as minimizing item-order and test-fatigue effects |
Analysis of ACORNS-like responses and interpretation bias for lexically ambiguous wording (e.g., “adapt”) | Rector et al. (2013) | The vast majority of scoring interpretations were corroborated after follow-up questioning, although some misinterpretation errors were documented |
Correspondence of automated scoring of ACORNS responses using machine learning models to trained raters | Moharreri et al. (2014) | The EvoGrader automated scoring tool provides accurate and consistent scoring of answers, eliminating inconsistencies among human raters and over time |
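The DIF (differential item functioning) analyses cited above (Federer et al. 2016) test whether examinees of equal ability but different groups (e.g., gender) have different probabilities of success on an item. The specific procedure those authors used is not detailed here; a common approach is the Mantel-Haenszel statistic, sketched below with purely hypothetical counts. For each total-score stratum, a 2x2 table of group by item outcome is formed, and a common odds ratio near 1.0 (delta near 0 on the ETS scale) indicates negligible DIF.

```python
import math

# Hypothetical stratified 2x2 tables, one per matched total-score stratum:
# (reference_correct, reference_incorrect, focal_correct, focal_incorrect)
strata = [
    (30, 10, 28, 12),
    (40, 20, 38, 22),
    (25, 25, 24, 26),
]

def mantel_haenszel_dif(strata):
    """Return the Mantel-Haenszel common odds ratio and ETS delta for an item."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    alpha = num / den                 # common odds ratio; 1.0 means no DIF
    delta = -2.35 * math.log(alpha)   # ETS delta scale; |delta| < 1 is negligible DIF
    return alpha, delta

alpha, delta = mantel_haenszel_dif(strata)
print(f"alpha_MH = {alpha:.3f}, delta_MH = {delta:.3f}")
```

With these illustrative counts the odds ratio is close to 1 and |delta| falls well under 1, the conventional threshold for negligible DIF, mirroring the "minimal gender bias" conclusion reported for the ACORNS written tasks.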