Prevalence of and barriers to the adoption of high-impact teaching practices

Author(s):
Chris Moore
Professor of Physics
University of Nebraska Omaha

This project uses the Concerns-Based Adoption Model (CBAM) as the basis for acquiring data from stakeholders about the use of high-impact teaching practices (HIPs) in general education science, mathematics, and social science (STEM) courses at the University of Nebraska Omaha. Specifically, the project has been determining the current types and prevalence of research-verified HIPs used in these courses and identifying faculty concerns and barriers to their broader adoption. Key findings from this work are being used by both administrative and faculty leaders to allocate resources based on need and potential for measurable impact, and in a manner that addresses faculty concerns and attitudes. The three diagnostic dimensions of the CBAM are (1) the Model of Success, (2) Measures of Behaviors, and (3) Measures of Attitudes. The Model of Success used for this project is represented by the eight categories defined by the Teaching Practices Inventory (TPI), which serve as a gauge of the extent of use of research-based teaching practices. The Measures of Behaviors used in this project include faculty self-reports of practices, analysis of collaboration and sharing via social network analysis (SNA), and validation of self-reports using the Classroom Observation Protocol for Undergraduate STEM (COPUS) and a new student evaluations of teaching (SET) instrument under development. The Measures of Attitudes include an attitudinal survey and faculty focus groups and interviews on structural barriers to the implementation of HIPs. The central research questions for this project have been as follows: (1) What HIPs are faculty using in their general education science, math, and social science courses, and what is their prevalence? (2) What faculty-perceived barriers and concerns exist with respect to the adoption of HIPs?
(3) What is the weighted effect that the extent of HIP use, the density of the faculty network, and self-identified stages of concern have on institutional effectiveness metrics and department-reported assessments of student learning? We expect the answers to the first two questions to be specific to the institution, with generalizability possible with respect to methods and results across similar institutions. The answer to the third research question will be generalizable, since we will connect measures of teaching practice to measures of student learning. The implementation of a broad range of research-verified teaching practices has been shown to improve course metrics, such as DFW rates, and institutional metrics, such as retention. However, university administrators and faculty senates face a dearth of information about the actual teaching practices used by their faculty and therefore must attempt to provide support, resources, and interventions blindly. As a broader impact, this project provides an assessment model that is helping address local STEM learning challenges while also helping to retain undergraduate STEM majors. It also represents a vertically integrated project that can be embraced as a national model worthy of replication.

Coauthors

Julie Pelton, University of Nebraska Omaha; Sarah Edwards, University of Nebraska Omaha