
Autonomous Cars: Can their ethics be programmed?

This text was guest-authored by Giles Kirkland, car expert and all-round motoring enthusiast.

Autonomous cars are becoming smarter and smarter. With the newest connectivity technologies they’ve already become technically viable, but one important question still remains: can their ethics be programmed?

This is an issue many academics worry about these days: how will AI be programmed to navigate ethical and moral dilemmas? Founders of AI-driven companies say there’s nothing to be concerned about and that they’re working on it.

Teaching Machines Intelligence

Driverless cars have become a huge part of the auto industry. According to Inc., they attract an estimated $100 billion in investment globally.

AI isn’t just a matter of pre-coded if-then statements. Intelligent systems are designed to learn and adapt as humans feed them data, eventually accumulating experience in the real world. But by that same token, there’s no way to know exactly how or why a machine is making the decisions it makes. When it comes to driverless cars, which are essentially AI powered by deep learning, how do you trace the ethical trade-offs behind a particular conclusion? Can it be done?
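
To make that contrast concrete, here is a toy sketch in Python (with invented numbers, and nothing like real vehicle software) comparing a hand-coded if-then rule with a rule learned from data. With the hand-coded version you can point to exactly why the car brakes; with the learned version, the “why” is buried in fitted weights.

```python
# Toy illustration only: a hand-coded if-then rule versus a rule learned from
# made-up data. Not how any real autonomous vehicle is programmed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hand-coded "if-then" braking rule: easy to read, easy to audit.
def rule_based_brake(distance_m, speed_mps):
    # Brake if we are under two seconds away from the obstacle.
    return distance_m / max(speed_mps, 0.1) < 2.0

# Learned rule: the same kind of decision, inferred from example situations.
X = np.array([[5, 10], [40, 10], [8, 20], [60, 15]])  # [distance_m, speed_mps]
y = np.array([1, 0, 1, 0])                            # 1 = brake, 0 = keep going
model = LogisticRegression().fit(X, y)

print(rule_based_brake(5, 10))        # True, and you can point at exactly why
print(model.predict([[5, 10]]))       # typically [1], but the 'why' sits in fitted weights
print(model.coef_, model.intercept_)  # numbers, not a human-readable explanation
```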

Researchers, philosophers, scientists and car makers are now trying to explore this question. Humans by their very nature make decisions on the fly, depending on what’s happening in front of them at that moment. They may justify going over the speed limit to keep up safely with the flow of traffic, where slowing down could actually cause a bottleneck and increase the chance of a crash.

How can a driverless car make these second-by-second judgement calls? Should these vehicles protect their occupants above everyone else, even jaywalkers? To some, the answer is clear: the priority should be to protect the person inside the car over the person outside it, while still accepting damage to the vehicle in order to save a life outside of it.

In a nutshell, egotistical vehicles should not be allowed to rule the roads. It may seem like common sense, but it still involves engineered software deciding whose lives matter more. There’s a moral imperative behind these debates and others like them, yet the march of technology continues all the same.

Split-Second Decisions

As a driver, you have to make split-second decisions every time you get behind the wheel. Some are moral decisions that you don’t even know you’re making. This is what it means to be human. We can think outside the box, weigh outcomes, and make decisions in the blink of an eye.

Take this scenario: it’s 10 years from now. You’re in your autonomous car driving down a busy street. While you trust your vehicle’s on-board computer to make all the necessary decisions to get you where you need to be safely, you’re still watching your surroundings carefully. Your car speeds up to go through a green light. But just then, a small child runs into traffic while a pickup truck passes on your left. It’s too late to avoid a collision, and the car has to make an impossible choice: will it save you (the occupant it’s been entrusted to protect) or the child? This is where things get scary. If you feel uneasy that such life-or-death decisions will be left up to a machine, you’re not alone.

The question of morality comes up a lot. In one recent survey, 58% of participants said they believe the AI in driverless cars cannot be taught morality. However, different generations define “morality” in different ways:

  • 89% of Baby Boomer participants said that autonomous cars should make a maneuver to save one person as opposed to three in the event of a crash.
  • 53% of Generation X participants said autonomous cars should protect as many people as they could, even if that means putting the passengers at risk.
  • 51% of Millennial participants said that autonomous cars should save as many people as they could in the case of an accident.

What’s Being Done About It

Today’s self-driving cars use sensors, predictive modeling, pre-programmed obstacle-avoidance algorithms, and “smart” object differentiation; with such algorithmic tools, companies like Uber, Google and Tesla have been fine-tuning their self-driving vehicles to handle situations like the one above, says BU News Service.
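
For readers who like to see the moving parts, here is a deliberately over-simplified sketch of that perceive-predict-plan idea. Every object type and threshold in it is invented for illustration; the real pipelines at Uber, Google and Tesla are far more complex and aren’t public in this level of detail.

```python
# A deliberately over-simplified perceive-predict-plan loop. All thresholds and
# object types are invented for illustration; real systems are far more complex.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str                 # "pedestrian", "vehicle", ... (object differentiation)
    distance_m: float         # from the sensor suite
    closing_speed_mps: float  # how fast the gap is shrinking

def time_to_collision(obj: DetectedObject) -> float:
    # Predictive modeling, reduced to a single constant-velocity estimate.
    return obj.distance_m / max(obj.closing_speed_mps, 0.01)

def plan_action(objects: list[DetectedObject]) -> str:
    # Pre-programmed obstacle avoidance: react to the most urgent object first,
    # and brake earlier for pedestrians than for other vehicles.
    for obj in sorted(objects, key=time_to_collision):
        threshold_s = 3.0 if obj.kind == "pedestrian" else 1.5
        if time_to_collision(obj) < threshold_s:
            return "emergency_brake"
    return "continue"

print(plan_action([DetectedObject("pedestrian", 12.0, 5.0),
                   DetectedObject("vehicle", 50.0, 2.0)]))  # emergency_brake
```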

Researchers are now focusing on a kind of machine learning called deep learning, which relies on a computational structure known as a neural network. This kind of network mimics how the human brain learns: it is shaped by the data it is given, forming and strengthening connections between nodes in a complex web. Computer scientists can inspect the inputs and outputs of these networks, but the formation of new connections (the actual thinking component) takes place in so-called “hidden units.”
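
To give a feel for what those “hidden units” are, here is a minimal, NumPy-only sketch of a tiny network with random, untrained weights. The inputs and the output are easy to inspect; the hidden activations are just numbers with no built-in human meaning, which is exactly the interpretability problem researchers are wrestling with.

```python
# A minimal neural network with four hidden units and random, untrained weights.
# Illustration only: it shows where the "hidden" part of the computation lives.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # weights from 3 inputs into 4 hidden units
W2 = rng.normal(size=(4, 1))   # weights from the hidden units to 1 output

def forward(sensor_features):
    hidden = np.tanh(sensor_features @ W1)      # hidden-unit activations
    output = 1 / (1 + np.exp(-(hidden @ W2)))   # e.g. a probability of braking
    return hidden, output

x = np.array([0.8, 0.1, 0.5])   # made-up, already-normalized sensor readings
hidden, output = forward(x)
print("output:", output)        # the decision is easy to observe...
print("hidden units:", hidden)  # ...but these numbers carry no obvious meaning
```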

Autonomous vehicles are starting to use these deep learning models, but the mystery around the computers’ choices is making some potential drivers uneasy. That said, 53% of people say they feel safe crossing the road with autonomous cars driving on it.

Scientists are working hard to open up those black boxes in a variety of ways, such as adding a second neural network that acts as an interpreter between the car’s thought processes and its human occupant. Essentially, the car’s computer system would explain its plan to the human user, confirming that it understood the occupant’s commands.
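
As a very rough sketch of that interpreter idea (and only a sketch: the mapping and the explanation templates below are invented and untrained, unlike the research systems described above), a second, simpler model could read the driving network’s hidden activations and pick a human-readable explanation:

```python
# Toy "interpreter": a second model that maps the driving network's hidden
# activations to a canned, human-readable explanation. The weights and the
# explanation templates are invented purely for illustration.
import numpy as np

EXPLANATIONS = [
    "Slowing down: pedestrian detected ahead.",
    "Changing lanes: obstacle in the current lane.",
    "Maintaining speed: the road ahead looks clear.",
]

rng = np.random.default_rng(1)
W_interp = rng.normal(size=(4, len(EXPLANATIONS)))  # 4 hidden units -> 3 messages

def explain(hidden_activations):
    scores = hidden_activations @ W_interp   # score each candidate message
    return EXPLANATIONS[int(np.argmax(scores))]

# Feeding in hidden activations like those from the sketch above:
print(explain(np.array([0.2, -0.7, 0.9, 0.1])))
```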

Some say any morality involved in such technology rests solely on the shoulders of the programmers. And as long as humans are making those decisions, some worry that programmers will focus more on legal issues than moral ones, hoping to reduce the number of lawsuits from drivers and to avoid making value judgments about pedestrians.

Whether ethics can be accurately taught to a computer remains to be seen. That isn’t stopping the advancement of AI for self-driving cars. Barriers are being overcome day after day, with the technology advancing at an astronomical rate. The logistics and the technical pieces are there; the ethical part of the equation may not be there just yet, but it’s making big strides.

About the author:
Giles Kirkland is an environmentally conscious car expert with a passion for combining the newest technologies with a healthy lifestyle. He gives sustainable living and driving tips and shares his ideas on everything from electric vehicles to alternative energy sources. Giles’ articles are available at Oponeo and on Twitter.