Algorithmic thinking refers to breaking down a process into systematic, step-by-step instructions that can be carried out by humans or machines, and it is the foundation of algorithm-driven technologies. Although algorithms offer many advances and advantages, there is increasing concern about the harms they can cause to individuals and society. These concerns focus on the ethics of algorithmic decision-making, specifically that algorithms can be (1) biased due to the characteristics of the underlying data; (2) unaccountable due to the black-box nature of many widely used algorithms; and (3) misunderstood with respect to how algorithm-driven decisions affect social justice and other ethical issues. It is therefore vital to integrate training on ethical algorithmic decision-making into the technology curriculum so that students who will become professional engineers, data scientists, and analysts are aware of, and can mitigate, unintended algorithmic outcomes. The research supported through this NSF award aims to improve undergraduate students’ ethical decision-making skills related to the use of algorithms. Toward that aim, the specific objective of this project is to develop, implement, and test interactive case studies that include role-playing activities to engage undergraduate students in the ethical aspects of algorithmic thinking and algorithm design. Our overall guiding question is: How well do case studies and role-plays further a situated understanding of ethical decision-making related to algorithmic thinking among learners? Building on the situated cognition paradigm of learning, the case studies enable students to think through algorithmic decision-making issues from different perspectives. The case studies draw on both real-world scenarios and fictitious cases grounded in near-reality that highlight the complexities of ethical algorithmic decision-making. They are implemented in courses through role-play scenarios, an exercise in which students take on a specific role and engage in conversation from that perspective. The combination of case study and role-play allows students to navigate their own perspective on a scenario alongside others whose perspectives may be similar or vastly different. The research approach uses mixed methods, including focus groups, discussions, and student-generated artifacts such as concept maps. Thus far, we have implemented the case studies and role-plays in technology courses with over 200 students. Our work suggests that tailored case studies implemented through role-play activities promote student understanding of ethical considerations in the context of algorithmic thinking. This method also furthers students’ recognition of algorithmic ethics concepts such as data bias, systematic transparency, decision fairness, societal inclusion, and algorithmic discrimination. Students can better recognize the potential harms of algorithmic use after exploring a case study and participating in the role-play activity. The broader impact of this work includes adapting the interactive case studies for use in other courses, in standalone workshops, and at other institutions. Finally, the project improves the school-to-work transition by providing students with training on real-world dilemmas they are likely to face, thereby preparing the future workforce for ethical algorithmic decision-making.
Coauthors
Ashish Hingle, George Mason University, Fairfax, VA; Aditya Johri, George Mason University, Fairfax, VA; Huzefa Rangwala, George Mason University, Fairfax, VA; Alexander Monea, George Mason University, Fairfax, VA