The ethics of self-driving cars

It may once have seemed like something out of a science fiction film, but self-driving cars are already well on their way to becoming a widespread tool for everyday use. The trouble is, cars are dangerous enough when a human is in control. Can preprogrammed decisions really be trusted?

The result is an ethical dilemma that scientists have to work through before self-driving cars can be truly accepted for everyday use. Human beings can hardly think rationally during a car accident, so what procedures will be put in place instead? How should a car be programmed to decide what course of action to take in a collision?

As it stands right now, there is still a lot of human control involved. Regular cars are being fitted with features such as automatic braking and cruise control. In this sense, people are being eased into the idea of a self-driving car by getting the features one at a time, so they don’t have to give up control all at once.

Google is a big contender in getting this technology into the hands of the public as soon as possible. Its project, known as Waymo, uses a combination of sensors and decision-making software to navigate roads safely, and has accumulated more than 2 million miles of self-driving. The plan is to have the technology in public use by 2020, but many scientists don’t think that is realistic.

“There are many complexities involved from a regulatory, liability, and infrastructural standpoint that are only just starting to be explored,” Matt Sloustcher, head of Acura public relations, said in a Consumer Reports article published last March.

Perhaps the most urgent of these complexities is safety. While Waymo has many features in place to ensure it can avoid obstacles and navigate the roads safely, it is still only one car on roads full of human drivers, who remain reckless and fully capable of causing potentially fatal accidents. Self-driving cars need to be fitted with technology that helps them decide what course of action to take in the event of a collision.

But how to begin?

According to an article in MIT Technology Review, some scientists are leaving it up to the public to decide how these cars should be programmed. It comes down to a simple choice of what the car should prioritize: keeping total casualties low, or keeping the people inside the car alive at all costs.

Except this isn’t a simple choice at all. It’s a weighty ethical debate that can’t be decided lightly. Jean-Francois Bonnefon, a psychologist at the Toulouse School of Economics, conducted a study to gauge public opinion. In general, people were in favour of whatever choice resulted in the fewest casualties, but said they probably wouldn’t drive a car that didn’t put their own safety first.
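To make the trade-off concrete, here is a minimal, purely hypothetical sketch of how such a priority could be written down as a decision rule. The names, numbers, and policy labels are illustrative placeholders, not the logic of Waymo or any real manufacturer.

```python
# Purely illustrative sketch of the policy question; not any real vehicle's code.
# "MINIMIZE_CASUALTIES" and "PROTECT_OCCUPANTS" are hypothetical policy labels.

def choose_maneuver(maneuvers, policy="MINIMIZE_CASUALTIES"):
    """Pick one maneuver from a list of dicts like
    {"name": "swerve_left", "expected_casualties": 2, "occupant_risk": 0.1}."""
    if policy == "PROTECT_OCCUPANTS":
        # Rank by occupant risk first, breaking ties on total casualties.
        key = lambda m: (m["occupant_risk"], m["expected_casualties"])
    else:
        # Rank by total casualties first, breaking ties on occupant risk.
        key = lambda m: (m["expected_casualties"], m["occupant_risk"])
    return min(maneuvers, key=key)

options = [
    {"name": "stay_course", "expected_casualties": 3, "occupant_risk": 0.05},
    {"name": "swerve_into_barrier", "expected_casualties": 1, "occupant_risk": 0.60},
]
print(choose_maneuver(options))                               # fewest casualties overall
print(choose_maneuver(options, policy="PROTECT_OCCUPANTS"))   # occupants first
```

The same situation produces opposite answers depending on a single setting, which is exactly what makes the question an ethical one rather than a purely technical one.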

In another survey, conducted by the University of Michigan, over 96 percent of respondents said they wouldn’t drive a self-driving car unless they were given the option to take control whenever they wanted. While this would certainly make occupants feel safer in the car, it ultimately makes it harder to work towards the ideal conditions for self-driving cars: a world where no one has to drive at all. If cars didn’t have human drivers to deal with, the chances of a collision would become almost non-existent.

While Diego Burgos doesn’t have first-hand experience with self-driving cars, he is a University of New Brunswick student doing an internship in the automotive electronics division at Robert Bosch. According to him, the biggest problem for autonomous driving is the human interaction the car has to take into account.

“The only way autonomous driving could be completely safe would be in a completely interconnected system where cars can communicate among themselves and with different sensors along the road,” said Burgos. “And most importantly there would be no human input at all.”

This is where he believes the ethical issues arise: with the complete lack of control drivers would have. In general, though, he believes the problem lies less with self-driving cars and more with artificial intelligence itself.

“I think that it is most important to focus on if Artificial Intelligence (AI) and machine learning are headed towards a dangerous future and what regulations the tech community are implementing in order to keep everyone safe.”