
Automakers and technology companies continue to work on fully self-driving vehicles, but as those systems grow more sophisticated, the safety issues that remain appear increasingly thorny.
Researchers from North Carolina State University recently detailed their efforts to add more nuance to one snag known as the “trolley problem.” The issue presents a hypothetical choice to drivers and, by extension, to autonomous driving systems: would they deliberately swerve into a pedestrian in order to avoid an otherwise far more catastrophic collision?
NCSU researchers, however, suggested that the focus should not be on a single, improbable scenario, but on the many more mundane ethical decisions that drivers make every day, such as speeding, proceeding through yellow lights, or passing other vehicles.
To do so, they developed a study based on the “Agent Deed Consequence” model, which suggests that people consider the person doing an action, the action itself, and the resulting consequences when making a “moral judgment.”
Study participants went through multiple versions of seven driving scenarios in virtual reality and rated how “moral” the behavior of the driver was. NCSU researchers believe that collecting the same types of information on a larger scale could allow developers to create more effective algorithms for autonomous vehicle decision-making.
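To make the idea concrete, the sketch below shows one way ADC-style ratings might be turned into a numerical score for a driving decision. It is purely illustrative: the components, weights, and weighted-sum aggregation are assumptions for the example, not the researchers’ actual method or any published algorithm.

```python
# Illustrative sketch only: a toy scoring function inspired by the
# Agent-Deed-Consequence (ADC) model described above. The scales,
# weights, and aggregation here are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class DrivingScenario:
    agent_intent: float  # -1.0 (malicious) to 1.0 (well-intentioned)
    deed: float          # -1.0 (clearly rule-breaking) to 1.0 (rule-following)
    consequence: float   # -1.0 (harmful outcome) to 1.0 (harmless outcome)


def moral_judgment(s: DrivingScenario,
                   w_agent: float = 1.0,
                   w_deed: float = 1.0,
                   w_consequence: float = 1.0) -> float:
    """Combine the three ADC components into a single score in [-1, 1].

    A simple weighted average stands in for whatever aggregation a real
    system might learn from large-scale participant ratings.
    """
    total = (w_agent * s.agent_intent
             + w_deed * s.deed
             + w_consequence * s.consequence)
    return total / (w_agent + w_deed + w_consequence)


# Example: a well-intentioned driver rolls through a yellow light (a mild
# rule bend) with no harm done -- the toy score lands modestly positive.
scenario = DrivingScenario(agent_intent=0.8, deed=-0.3, consequence=0.9)
print(f"Toy moral score: {moral_judgment(scenario):+.2f}")
```

In practice, the weights and even the functional form would presumably be fit to the kind of large-scale human ratings the researchers describe, rather than set by hand as in this toy example.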