This project aims to transform the way faculty assess student learning in undergraduate physics courses. Quantitative assessments of student learning in physics have generally focused on whether a student chose the “right” answer on a multiple-choice exam. This kind of analysis fails to capture how close a student’s understanding is to being “right” when they select one of many incorrect responses, and it therefore cannot accurately track how students’ understanding improves or progresses. This project seeks to develop more sophisticated scoring methods for multiple-choice tests that can reveal students’ productive, but “wrong,” ideas. These methods will support better-informed and more equitable decisions regarding instructional practices. We present results from using item response theory (IRT) to analyze over 14,000 students’ responses to a commonly used research-based multiple-choice assessment in physics: the Force and Motion Conceptual Evaluation (FMCE). We use the parameters estimated by various IRT models to rank the incorrect responses to each FMCE item by how closely aligned each response is with being correct. We also explore similarities and differences between results from different student populations. The results from our analyses could be used to determine whether students who choose different incorrect responses before and after instruction have improved their understanding. Ultimately, this could lead to methods for assigning partial credit to incorrect responses on multiple-choice items, representing students’ overall understanding more accurately than current dichotomous scoring methods.
Nasrine Bendjilali, Rowan University, Glassboro, NJ
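The ranking idea described above can be sketched in code. This is a minimal illustration, not the project’s actual analysis: it assumes Bock’s nominal response model (one common IRT model for unordered multiple-choice options), and the item, option slopes, and intercepts below are invented for demonstration only.

```python
import numpy as np

def option_probs(theta, a, c):
    """Nominal response model: probability of each response option
    for a student with ability theta, given option slopes a and
    intercepts c (both length-K arrays). Illustrative sketch only."""
    z = a * theta + c
    ez = np.exp(z - z.max())          # subtract max for numerical stability
    return ez / ez.sum()

# Hypothetical parameters for one 4-option item (option 0 is correct).
a = np.array([1.2, -0.8, 0.3, -0.2])  # slopes: how strongly ability drives choice
c = np.array([0.0, 0.5, 0.2, -0.1])   # intercepts: baseline option popularity

# Rank incorrect options by slope: a larger slope means the option is
# chosen more often by higher-ability students, i.e. closer to correct.
incorrect = [1, 2, 3]
ranking = sorted(incorrect, key=lambda k: a[k], reverse=True)
print(ranking)  # [2, 3, 1]

# Sanity check: a high-ability student favors the correct option.
print(int(np.argmax(option_probs(2.0, a, c))))  # 0
```

In practice the slopes and intercepts would be estimated from the full response data with an IRT package rather than assumed, and the resulting option ordering could anchor a partial-credit scheme of the kind the abstract proposes.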