Are Autonomous Vehicles the Next Tool for Terrorists?

Programmers grappling with ethical questions in AI and autonomous vehicles (AVs) often overlook a key issue: people may deliberately misuse AVs to cause harm, as a study from North Carolina State University highlights.

Imagine an autonomous vehicle driving without any passengers. Suddenly, it’s about to crash into another car carrying five people. It could avoid the crash by swerving, but then it risks hitting a pedestrian instead. Most ethical discussions here focus on whether the AV should be selfish (saving itself and its cargo) or utilitarian (minimizing harm to the greatest number). However, these discussions often miss the complexities involved.

“Current methods for addressing ethics in autonomous vehicles are too simple; moral decisions are more complex than that,” says Veljko Dubljević, an assistant professor at NC State and author of a paper discussing this issue. “Consider this: What if the five people in the car are terrorists using the AI’s programming to intentionally harm a pedestrian or others? In this case, you might actually want the AV to hit the car with the five passengers. This underscores that the current ethical frameworks don’t consider malicious intent, but they need to.”

To address this gap, Dubljević suggests using the agent-deed-consequence (ADC) model to help AIs make moral decisions. This model evaluates decisions based on three aspects:

1. Is the agent’s intent good or bad?
2. Is the action itself good or bad?
3. Is the outcome good or bad?

This method offers more nuanced decisions. For instance, most people agree that running a red light is bad, but what if you do it to avoid a speeding ambulance and prevent a collision? The ADC model helps accommodate these subtleties.
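The three-part evaluation above can be sketched as a simple scoring function. This is a hypothetical illustration of the ADC idea, not an implementation from Dubljević's paper; the names, weights, and the [-1, 1] scale are all assumptions made for the example.

```python
# Hypothetical sketch of the agent-deed-consequence (ADC) model as a
# moral-judgment score. Component values and the averaging rule are
# illustrative assumptions, not figures from the NC State study.
from dataclasses import dataclass


@dataclass
class ADCAssessment:
    agent_intent: float   # -1.0 (malicious)  .. +1.0 (benevolent)
    deed: float           # -1.0 (bad action) .. +1.0 (good action)
    consequence: float    # -1.0 (harmful)    .. +1.0 (beneficial)


def moral_judgment(a: ADCAssessment) -> float:
    """Combine the three ADC components into one score in [-1, 1]."""
    return (a.agent_intent + a.deed + a.consequence) / 3.0


# Running a red light (a bad deed) to clear the way for a speeding
# ambulance (good intent, good outcome) can still score positively,
# capturing the nuance a deed-only rule would miss.
ambulance_case = ADCAssessment(agent_intent=0.9, deed=-0.6, consequence=0.8)
print(moral_judgment(ambulance_case) > 0)
```

A single weighted score is only one way to combine the three judgments; the point of the sketch is that intent, action, and outcome enter the evaluation separately, so malicious intent (a strongly negative `agent_intent`) can flip the overall judgment even when the deed looks routine.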

“The ADC model can make AI’s moral judgment more stable and flexible, similar to human judgment,” Dubljević explains. “Human judgment is stable because lying is generally seen as morally bad, but it’s also flexible enough to recognize that lying to Nazis to protect Jews was morally good.”

Dubljević also emphasizes the need for more research. “My experiments with how philosophers and everyday people approach moral judgment have been insightful, but they were based on written scenarios. We need to study human moral judgment through more immediate methods, like virtual reality, to confirm our findings and apply them to AVs. Extensive testing with driving simulators is crucial before these ‘ethical’ AVs are regularly on the roads. Given the increase in vehicle terror attacks, we must ensure AV technology isn’t misused for harmful purposes.”