Revealing college biology student graphing practices with a digital performance-based assessment

Author(s):
Stephanie Gardner
Associate Professor
Purdue University

Need: A major barrier to improving the teaching of complex skills in science is measuring those skills at scale. Evidence-based teaching depends on evidence, and with large classes or varied populations it can be challenging to find the time and resources to gather the data needed to design and refine curricula. We have developed a digital, performance-based assessment of graph construction competence that can be (mostly) auto-scored and used at large scale. The assessment, which we call GraphSmarts, uses a storyline about trophic cascades affecting the conservation of an ecological community as the biological context in which students test predictions through graphing.

Guiding questions: What are the features of a digital performance-based assessment that can reveal undergraduate biology students' graphing practices? What are the graphing practices of undergraduate biology students from different institutions and course contexts?

Approach: We have gone through six iterations of the assessment design, the last three using the Evidence-Centered Design framework (Mislevy, 2013). We have used a combination of literature review, nearly 100 student interviews, 20 faculty interviews, several faculty focus groups, and testing in 11 classrooms in diverse settings to refine and validate the assessment. The assessment contains three graphing tasks (performance-based assessment), eight intermediate-constraint response questions (e.g., sorting relevant variables), and three free-response prompts (e.g., graph choice justification). Conceptual overlap among questions allows triangulation of findings for certain graphing practices (e.g., variable relevance). Instructors teaching biology at diverse institution types were recruited to use GraphSmarts as an activity in their classes. Course contexts include majors biology (n=152 students), non-majors biology (n=80 students), and upper-division Ecology (n=49 students). Institution types include community colleges, state research-intensive and Master's universities, and private liberal arts colleges. Students completed the assessment online, and instructors were provided with a summary report.

Outcomes: We will present the student model, which consists of seventeen practices for graph construction in biology. We have used quantitative methods to describe graphing competence on the graphing tasks and intermediate-constraint responses, and inductive and deductive coding to characterize free-response items. We will show data demonstrating the practices for which GraphSmarts appears to capture competence well, and those where it still needs improvement. We find that GraphSmarts distinguishes graphing competence between populations in the pattern expected based on graphing experience. Performance on the GraphSmarts graphing tasks correlates significantly (tau = 0.51) with responses to questions on the same practice. Order effects for the different tasks in the assessment were small and non-significant (p > 0.25 for all comparisons). We have evidence of test-retest reliability from testing multiple semesters of the same class, and of internal reliability from significant correlations between assessment tasks.

Broader Impacts: GraphSmarts can serve as a tool for wide-scale assessment of graphing competence, able to describe undergraduate graphing practices quickly and at scale, filling a gap in the literature and supporting claims that are more comprehensive and generalizable across student populations. With our existing and growing evidence base, we can make specific recommendations to instructors on how to support students' development of graphing competence.

Coauthors:

Eli Meir, SimBio; Joel Abraham, California State University-Fullerton; Elizabeth Suazo-Flores, Purdue University; Susan Maruca, SimBio; Anupriya Karippadath, Purdue University; Nouran Amin, Purdue University