A New Test to Help Driverless Cars Make ‘Moral’ Decisions? Philosophers Approve.

Researchers have validated a technique for studying how people make “moral” decisions when driving, with the goal of using the resulting data to train the artificial intelligence used in autonomous vehicles. The technique was tested on the most critical audience the researchers could think of: philosophers.
“Very few people set out to cause an accident or hurt other people on the road,” says Veljko Dubljević, corresponding author of the study and a professor in the Science, Technology & Society program at North Carolina State University. “Accidents often stem from low-stakes decisions, such as whether to go five miles over the speed limit or make a rolling stop at a stop sign. How do we make these decisions? And what constitutes a moral decision when we’re behind the wheel?
“We needed to find a way to collect quantifiable data on this, because that sort of data is necessary if we want to train autonomous vehicles to make moral decisions,” Dubljević says. “Once we found a way to collect that data, we needed to find a way to validate the technique – to demonstrate that the data is meaningful and can be used to train AI. For moral psychology, the most detail-oriented set of critics would be philosophers, so we decided to test our technique with them.”
The technique the researchers developed is based on the Agent Deed Consequence model, which posits that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that results from the deed.
Specifically, the technique tests how people judge the morality of driving decisions by sharing a variety of traffic scenarios with test subjects and then having them answer a series of questions about moral acceptability and various aspects of what took place in each scenario.
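To make the idea concrete, here is a minimal, purely illustrative sketch of how ADC-style vignette judgments could be encoded as training data for a simple acceptability model. This is not the study's actual pipeline; all field names, ratings, and numbers below are hypothetical.

```python
# Illustrative sketch (not the study's method): encode each vignette's
# agent, deed, and consequence as positive/negative valences, pair them
# with a participant's moral-acceptability rating, and fit a simple
# linear model by gradient descent. All data here is hypothetical.
from dataclasses import dataclass

@dataclass
class Vignette:
    agent: int        # +1 well-intentioned driver, -1 ill-intentioned
    deed: int         # +1 rule-following action, -1 rule-breaking
    consequence: int  # +1 good outcome, -1 bad outcome
    rating: float     # moral-acceptability rating on a 0-10 scale

# Hypothetical responses: positive valence on each factor tends to raise ratings.
data = [
    Vignette(+1, +1, +1, 9.1),
    Vignette(+1, +1, -1, 6.8),
    Vignette(+1, -1, +1, 5.9),
    Vignette(-1, +1, +1, 5.2),
    Vignette(-1, -1, -1, 1.4),
]

def fit_weights(data, lr=0.01, epochs=2000):
    """Fit rating ~ bias + wa*agent + wd*deed + wc*consequence."""
    bias, wa, wd, wc = 0.0, 0.0, 0.0, 0.0
    for _ in range(epochs):
        for v in data:
            pred = bias + wa * v.agent + wd * v.deed + wc * v.consequence
            err = pred - v.rating
            bias -= lr * err
            wa -= lr * err * v.agent
            wd -= lr * err * v.deed
            wc -= lr * err * v.consequence
    return bias, wa, wd, wc

bias, wa, wd, wc = fit_weights(data)
# With data like this, each fitted weight comes out positive: a positive
# valence on the agent, deed, or consequence raises predicted acceptability.
```

In a real system the ratings would come from many participants and the model would be far richer, but the core idea is the same: the three ADC components become quantifiable inputs that an AI system can learn from.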
For this validation study, the researchers enlisted 274 study participants with advanced degrees in philosophy. The researchers shared driving scenarios with the study participants and asked them about the morality of the decisions that drivers made in each scenario. The researchers also used a validated measure to assess the study participants’ ethical frameworks.
“Different philosophers subscribe to different schools of thought regarding what constitutes moral decision-making,” Dubljević says. “For example, utilitarians approach moral problems very differently from deontologists, who are very focused on following rules. In theory, because different schools of thought approach morality differently, results on what constituted moral behavior should have varied depending on which framework different philosophers used.
“What was exciting here is that our findings were consistent across the board,” Dubljević says. “Utilitarians, deontologists, virtue ethicists – whatever their school of thought, they all reached the same conclusions regarding moral decision-making in the context of driving.
“That means we can generalize the findings,” says Dubljević. “And that means this technique has tremendous potential for AI training. This is a significant step forward.
“The next step is to scale up testing among broader populations and in multiple languages, with the goal of determining the extent to which this approach can be generalized both within Western culture and beyond.”
The paper, “Morality on the road: the ADC model in low-stakes traffic vignettes,” is published in the journal Frontiers in Psychology. First author of the paper is Michael Pflanzer, a Ph.D. student at NC State. The paper was co-authored by Dario Cecchini, a postdoctoral researcher at NC State; and by Sam Cacace, an assistant professor of psychology at the University of North Carolina at Charlotte.
This work was done with support from the National Science Foundation under grant 2043612.
-shipman-
Note to Editors: The study abstract follows.
“Morality on the road: the ADC model in low-stakes traffic vignettes”
Authors: Michael Pflanzer, Dario Cecchini and Veljko Dubljević, North Carolina State University; Sam Cacace, University of North Carolina at Charlotte
Published: June 8, 2025, Frontiers in Psychology
DOI: 10.3389/fpsyg.2025.1508763
Abstract:
Introduction: In recent years, the ethical implications of traffic decision-making, particularly in the context of autonomous vehicles (AVs), have garnered significant attention. While much of the existing research has focused on high-stakes moral dilemmas, such as those exemplified by the trolley problem, everyday traffic situations—characterized by mundane, low-stakes decisions—remain underexplored.
Methods: This study addresses this gap by empirically investigating the applicability of the Agent-Deed-Consequences (ADC) model in the moral judgment of low-stakes traffic scenarios. Using a vignette approach, we surveyed professional philosophers to examine how their moral judgments are influenced by the character of the driver (Agent), their adherence to traffic rules (Deed), and the outcomes of their actions (Consequences).
Results: Our findings support the primary hypothesis that each component of the ADC model significantly influences moral judgment, with positive valences in agents, deeds, and consequences leading to greater moral acceptability. We additionally explored whether participants’ normative ethical leanings (classified as deontological, utilitarian, or virtue ethics) influenced how they weighted ADC components. However, no moderating effects of moral preference were observed. The results also reveal interaction effects among some components, illustrating the complexity of moral reasoning in traffic situations.
Discussion: The study’s implications are crucial for the ethical programming of AVs, suggesting that these systems should be designed to navigate not only high-stakes dilemmas but also the nuanced moral landscape of everyday driving. Our work creates a foundation for stakeholders to integrate human moral judgments into AV decision-making algorithms. Future research should build on these findings by including a more diverse range of participants and exploring the generalizability of the ADC model across different cultural contexts.
This post was originally published in NC State News.