Connected Cars

AI in Self-Driving Cars: Deep Learning Deciding Life or Death

Oct 17, 2017


Creating a safer, less congested and ecologically friendly world is the main goal of driverless vehicles, and artificial intelligence, or AI, in self-driving cars is the brain that makes these modern marvels work. To get these automobiles on the road, the global community is doing its best to support autonomous car development. The current U.S. Department of Transportation Secretary, Elaine Chao, held a press conference at the University of Michigan's Mcity in late September to announce the easing of restrictions on autonomous vehicle R&D and testing, and a bill addressing how this will be done is currently making its way through the U.S. Senate and House. Europe is already testing autonomous shuttles in some cities — driverless public transport has long been seen as a natural fit for the technology — and companies like nuTonomy in Singapore and Uber in various cities of the United States have been trying out autonomous taxis and ride-hailing options on public roads for a few years now.

The inputs and outputs AI has to handle just for the normal operation of a vehicle on everyday roads are vast and complex. Add to that the unique life-and-death decisions placed on its neural networks, and innovators are looking for ways to make those choices less fraught. After all, as much as many look forward to a day when the driverless car makes traffic jams and automobile collisions a thing of the past, one issue keeps coming to the surface: what happens when an unmanned vehicle is faced with whom to save in an impending accident?

The Spock conundrum of logic vs. emotion

Left Brain (logic) vs. Right Brain (emotion)

Up until recently — and in some cases still — self-driving cars have been seen as a thing of fantasy and science fiction. To that point, sci-fi stories have long used logic to make choices that emotion might very well undermine. Logic-based life-forms and sentient beings tend to react in ways that seem counter to human instinct, yet their choices often end up being about salvation rather than destruction. The purpose, obviously, is to show the human side of a machine or emotionless entity in a way that gets the audience to embrace the character. However, when you break down those decisions, you realize that they are logical, even as they tug at our emotions.

In the classic film Star Trek II: The Wrath of Khan, as the being synonymous with bridging the gap between emotion and logic sacrifices himself for the crew, Spock opines, "The needs of the many outweigh the needs of the few… or the one." It is a logical choice when faced with such a situation — one or a few lives as opposed to masses. Arnold Schwarzenegger's reformed terminator in Terminator 2: Judgment Day concludes that sacrificing himself to save a race he was once programmed to destroy is the right course, a conclusion drawn from the deep learning of his neural networks: he has come to understand and care about humans, and sacrificing himself so that the needs of the many are met is the logical choice. K-2SO going out in a hail of laser cannon fire in Rogue One: A Star Wars Story so that Jyn and Cassian can get the plans to the Death Star, saving billions of lives in the process, is also a logical decision made by another sentient being. And then there is the doe-eyed, heartstring-pulling WALL-E, hitching an unscheduled ride on the spaceship that has taken his beloved and suddenly catatonic EVE (EE-vah), only to discover that the entire population of the planet he has been cleaning up has been reduced to unhealthy, overweight drones, forever lost in space. Giving up his hard-won humanity to save them all is, once again, the logical choice. One trash-gathering robot vs. the whole human race? No question.

Therefore, programming the AI in self-driving cars to react in a way that serves the greater good makes sense, right? However, road collisions are frequently one-to-one situations. They rarely enter the "needs of the many outweigh the needs of the few" category, at least on the surface. It's that grey area that adds an extra dimension to reaching a conclusion that makes the most sense. For human drivers, this is a moral dilemma that is only realized after much thought (weighing cause and effect) and consideration (the emotional burden of a decision that may lead to tragedy). The "moral" only comes into play when a person is operating the vehicle; when artificial intelligence in a self-driving car consults its programming to determine what to do, deep learning takes over. Understood. Except, what is deep learning?

Teaching machines how to learn

deep learning in a neural network

Deep learning basically takes massive amounts of data and layers it, building conclusions that lead to a human-like recognition of what something as abstract as an image or a sound actually is. The "deep" comes from those stacked levels of information: over time, the way the system experiences the data teaches it more about what it is gathering, allowing the AI to correct itself and get better at recognizing the input and, therefore, reacting to it appropriately. Correcting its own mistakes, just like you and me.
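To make that less abstract, here is a deliberately tiny sketch of the idea in plain Python and NumPy. The "sensor readings," the labels and the network size are all invented for illustration; real perception systems train on millions of images, not four rows of numbers. What it does show is the core loop: stacked layers form a prediction, the error is measured, and the weights are nudged so the next attempt is a little less wrong.

```python
import numpy as np

# Hypothetical toy data: each row is a made-up "sensor reading" and y says
# whether the scene contains an obstacle (1) or not (0). Real perception
# stacks train on millions of images, not four hand-picked rows.
X = np.array([[0, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 1, 1]], dtype=float)
y = np.array([[0], [0], [1], [1]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 8))   # first layer of "depth"
W2 = rng.normal(scale=0.5, size=(8, 1))   # second layer


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


for step in range(5000):
    # Forward pass: the layers stack to form a prediction.
    hidden = sigmoid(X @ W1)
    pred = sigmoid(hidden @ W2)

    # Self-correction: the error between prediction and truth is pushed back
    # through the layers (backpropagation) and the weights are nudged so the
    # next attempt is a little less wrong.
    err = pred - y
    delta_out = err * pred * (1 - pred)
    delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * (hidden.T @ delta_out)
    W1 -= 0.5 * (X.T @ delta_hidden)

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # approaches [0, 0, 1, 1]
```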

Heady stuff, right? Well, these are exceptional amounts of data being assimilated, and an amazing capability that is pushing automakers and tech giants closer to creating truly effective driverless solutions. However, it still doesn't answer the question: what happens when a machine is faced with deciding who lives and who dies on the road?

a car stopping for a pedestrian

Here's the dilemma, and one we discussed in our Child Safety article: you're in your self-driving car, barrelling happily down the street, when a pedestrian runs out. On one side is oncoming traffic, on the other a sheer cliff. What do you do? If you, a human, were driving, many people say they would go off the cliff, hoping to miss the pedestrian AND the many in oncoming traffic. But if no one is behind the wheel, if you're not manning it, the AI in the self-driving car kicks in and makes the decision for you. And per automakers, that decision is to save the passenger in the vehicle, not the people on the road. What would be a big "Whoa!" moment that you would live with for the rest of your life is not so for something like NVIDIA's Drive PX. Because the AI in a self-driving car doesn't have the ability to emotionalize things, it will never wake up in the middle of the night, sweating over the choice it made. And because of this lack of a moral dilemma or ethical consideration, the AI in self-driving cars bases the decision on algorithms and probabilities, the patterns of recognition that feed deep learning, not emotion. And how is all of that processing getting artificial intelligence to the right decision?

Good question. Because, when you think about it, that whole "the needs of the many outweigh the needs of the few" is flipped in a driverless world. It becomes, "the needs of the ones in the car outweigh the needs of the bunches out there on the street." It's a choice, sure, but is it a truly logical one?
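To see how lopsided that flip can be, here is a purely hypothetical sketch (not how any automaker's software actually works) of the kind of cold, outcome-weighing arithmetic being described. The maneuvers, probabilities and weights are all made up; the point is only that whoever sets the weights sets the ethics.

```python
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    p_harm_passenger: float   # estimated chance of harming the passenger
    p_harm_others: float      # estimated chance of harming people outside


# Made-up numbers for the cliff-road scenario described above.
options = [
    Maneuver("brake hard, stay in lane", 0.10, 0.60),
    Maneuver("swerve into oncoming traffic", 0.70, 0.80),
    Maneuver("swerve off the cliff", 0.95, 0.05),
]

# A passenger-first weighting, the stance the article says automakers favor.
# Flip the weights and the "needs of the many" policy falls out instead.
W_PASSENGER, W_OTHERS = 10.0, 1.0


def expected_cost(m: Maneuver) -> float:
    return W_PASSENGER * m.p_harm_passenger + W_OTHERS * m.p_harm_others


print(min(options, key=expected_cost).name)   # "brake hard, stay in lane"
```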

The reasoning power of AI in self-driving cars

How AI in self-driving cars may react and see things

Before we go into what kind of information artificial intelligence is taking in to help it decide who lives and who dies, perhaps we can play a bit with what we, the humans, do with the inputs we’re given.

The Massachusetts Institute of Technology (MIT) worked on a self-driving car project that included considering this dilemma and created a site, known as the Moral Machine, that allows people to test what they would do in these potential crash situations. As you move through the different scenarios, it gets harder and harder to decide what is "right," to the point that you wonder whether there is any "right" in these instances. And if we, as thinking humans, can't figure out the right thing to do, how does the AI in self-driving cars? By taking the emotional connection to the problem out of the equation and using algorithms that feed deep learning, autonomous vehicles are able to do what they need to do to get from Point A to Point B, efficiently and seamlessly. Collateral damage may very well be something with which to contend, but, as automakers have made clear, the technology behind each driverless car is designed to slow down, swerve or brake in enough time to avoid loss of life and catastrophic auto accidents.

However, it's still a rather unique "moral" problem. From ethicist Philippa Foot to philosopher Judith Jarvis Thomson and beyond, figuring out how to address this ethical issue has been a challenge, and everything about the Trolley Problem is based upon a series of factors you may never face. Yet the world is moving forward with autonomous vehicles, and the reality of the AI in a self-driving car using its neural network to make a decision that is beyond passenger control is on the horizon. One supplier of this technology is NVIDIA, Jen-Hsun Huang's company, whose Drive PX platform offers small and large options for a vehicle's self-driving brain. That something like this is already on the market makes understanding the mechanism and reasoning behind such choices — deep learning — all the more pressing.

Algorithm image by Docurbs via Wikimedia Commons

Machine learning, the algorithm way

AI in self-driving cars becomes smarter thanks to algorithms. But what exactly are these? And how do they learn or contribute to the “smartness” of your car?

An algorithm maps inputs to specific outputs. It's a series of instructions fed into a centralized mechanical brain that tell it how to take INPUT and create actionable OUTPUT, initiating an appropriate response. An algorithm is often likened to a recipe, the ingredients being the inputs and the meal the output. It's a helpful tool in the world of machine learning and, in the case of AI in self-driving cars, a huge influence on creating a safer, more seamless experience.
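If the recipe analogy feels too abstract, a toy example may help. The following function is an invented, oversimplified "algorithm" in exactly that sense: a couple of inputs go in, a fixed set of rules is applied, and an actionable output comes out. The thresholds are illustrative only and have nothing to do with any real vehicle controller.

```python
def braking_output(speed_mps: float, distance_m: float) -> str:
    """A deliberately tiny 'recipe': the ingredients (inputs) are speed and
    distance to an obstacle, and the meal (output) is a braking command.
    The thresholds are invented and not taken from any real vehicle."""
    time_to_obstacle = distance_m / max(speed_mps, 0.1)
    if time_to_obstacle < 2.0:
        return "brake hard"
    if time_to_obstacle < 4.0:
        return "brake gently"
    return "maintain speed"


print(braking_output(speed_mps=15.0, distance_m=50.0))  # "brake gently"
```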

How the information used to create these algorithms is truly assimilated, and whether humans can account for everything it does or does not contain, is a real concern. An example given in the article "Why AI Makes It Hard to Prove That Self-Driving Cars Are Safe" by Andrew Silver captures the worry over what is being considered and what isn't: "imagine if you test drive your self-driving car and want it to learn how to avoid pedestrians. So you have people in orange safety shirts stand around and you let the car loose. It might be training to recognize hands, arms, and legs—or maybe it's training to recognize an orange shirt."

Because algorithms use statistics to define their results, they can emphasize some pieces of information and ignore others on the way to a conclusion. That incompleteness breeds a narrow familiarity which, in instances like this, can breed danger by creating blind spots. This kind of rote learning is a bit like getting to know one area of town really well while not understanding other parts at all, which would make successfully navigating those unfamiliar roads, street signs and pedestrian interactions virtually impossible.
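Silver's orange-shirt worry is easy to reproduce on paper. The sketch below builds a fake dataset in which "amount of orange in the frame" happens to be an almost perfect stand-in for "pedestrian present" during training, then shows the same model stumbling once that shortcut disappears. Everything here (the features, the numbers, the scikit-learn model) is a hypothetical illustration, not anyone's actual perception stack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Hypothetical training data from the staged test: feature 0 is a crude
# "human-shaped outline" score, feature 1 is "amount of orange in the frame".
# Because everyone stood around in orange safety shirts, the orange cue is
# nearly a perfect stand-in for "pedestrian present".
person = rng.integers(0, 2, n)
shape_score = person + rng.normal(0, 0.5, n)      # noisy, honest signal
orange_score = person + rng.normal(0, 0.1, n)     # clean, spurious shortcut
X_train = np.column_stack([shape_score, orange_score])

model = LogisticRegression(max_iter=1000).fit(X_train, person)
print(model.coef_)        # most of the weight lands on the orange feature

# Deployment: real pedestrians, no orange shirts, so the shortcut vanishes.
person_test = rng.integers(0, 2, n)
shape_test = person_test + rng.normal(0, 0.5, n)
orange_test = rng.normal(0, 0.1, n)
X_test = np.column_stack([shape_test, orange_test])
print(model.score(X_test, person_test))           # accuracy falls sharply
```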

Difference between Traditional Modeling and Machine Learning. Image from ZEISS International

There are two broad types of algorithms used to make AI in self-driving cars effective: computer vision and machine learning. The computer vision algorithm is the more traditional form, using a cascading sort of learning process with explicitly encoded programming that leads to a predicted result. The newer, more innovative and precise machine learning algorithm, built on deep neural networks, goes beyond hand-written code and uses sample data to "learn" and infer results for situations it has yet to experience, thereby broadening its range of output. In the case of deep learning, the data accumulated to feed those decisions goes into something called a "black box." It holds all of that information so it can be accessed and used by the machine's brain. However, the process that turns the inputs into outputs is so involved that comprehending what led to a particular decision is beyond human reach. This means that should the system react incorrectly, it is virtually impossible for a person to take everything gathered in the black box and determine what caused the wrong decision. And if they can't figure that out, they can't fix the process that led to that conclusion so it won't happen again.
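A minimal way to feel the difference between the two styles, and the black-box problem that comes with the second, is to put them side by side. In the hedged sketch below, the "traditional" detector is a rule a person wrote and can defend line by line, while the learned version behaves similarly but explains itself only as a grid of weights. The features, thresholds and network are all invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier


# The "traditional" style: explicit rules a person wrote and can point to.
def rule_based_is_stop_sign(redness: float, octagon_score: float) -> bool:
    # Thresholds are invented for illustration.
    return redness > 0.6 and octagon_score > 0.7


# The "machine learning" style: the rules live inside learned weights.
rng = np.random.default_rng(0)
X = rng.random((500, 2))                      # hypothetical image features
y = ((X[:, 0] > 0.6) & (X[:, 1] > 0.7)).astype(int)

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X, y)

print(net.predict([[0.9, 0.9], [0.2, 0.9]]))  # should mimic the rule: [1 0]
print(net.coefs_[0])                          # the "why" is a wall of numbers
```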

AI in self-driving car as the vehicle sees it

Introduce into that the task of deciding whom on the street to save in case of a collision. If the mechanical brain of the autonomous vehicle has been trained on probabilities with only a finite number of outcomes and finds itself facing one it has never encountered before, how does it make the decision? It pulls from what it knows, adjusting to the situation as best it can with the information it has been given. Much like humans, actually, but with a more logical and detached view that allows the car to react in a way that protects its passenger above all else. Simple, straightforward and unfailing.

Because that whole idea of "the needs of the many outweigh the needs of the few" has a lot of deep layers. Whose many? What few? And how many are affected if that few is lost? It goes on and on until, honestly, some sort of stance has to be taken, and just as the MIT project shows, figuring out a clean, clear choice when there are so many variables involved is virtually impossible. For a machine, picking a lane comes without any baggage. But for the human on the receiving end of that decision? It is a weight far greater than anything a machine can comprehend. The capacity for understanding flips: people know there is sacrifice and know that its outcome brings pain and confusion that must be borne. Machines only act, and as much as they may or may not learn from their actions, they will never feel the enormity of them.
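To picture that "pulls from what it knows" behavior above in the simplest possible terms, here is a hypothetical sketch: a system that has only ever seen three scenarios and, faced with a new one, falls back on whichever past experience looks closest. Real autonomous systems are vastly more sophisticated, but the generalize-from-experience idea is the same.

```python
import numpy as np

# Hypothetical "experience": scenarios the system has already encountered,
# described by two invented features, and the maneuver chosen for each.
known_scenarios = np.array([
    [0.9, 0.1],   # pedestrian ahead, road otherwise clear
    [0.8, 0.9],   # pedestrian ahead, heavy traffic around
    [0.1, 0.2],   # nothing ahead
])
known_actions = ["brake", "swerve", "continue"]


def react(novel_scenario: np.ndarray) -> str:
    """Never-seen-before input: fall back on the closest past experience."""
    distances = np.linalg.norm(known_scenarios - novel_scenario, axis=1)
    return known_actions[int(np.argmin(distances))]


print(react(np.array([0.85, 0.4])))   # nearest to the first row: "brake"
```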

An enduring question

It is a given that humans are fallible. There is enough data to warrant a hard look at the high cost of human error when it comes to driving motor vehicles. Something Silicon Valley is also learning through its testing — from companies like Uber to Waymo and beyond — is that people can be detrimental to a self-driving car project as well: the bulk of accidents involving autonomous cars have been traced back to humans rather than the machines. And studies have shown that machines react much faster than a man or woman ever could. But if the AI in self-driving cars isn't sure what it's reacting to or how it's supposed to react, is it even possible for it to take action at all, let alone appropriate action?

A view of Silicon Valley at dusk, where AI in self-driving cars is taking shape

It's a unique conundrum, surely, and there are no easy answers in any of this. And so, as the various Silicon Valley giants and automakers discuss, confer, and keep coming up with technologies such as NVIDIA's Drive PX, deep-learning algorithms, and neural networks in general, the technology gets smarter, but the questions get harder.
