AI in Self-Driving Cars: Deep Learning Deciding Life or Death


Creating a safer, less congested and ecologically friendly world is the main goal of driverless vehicles, and artificial intelligence, or AI, in self-driving cars is the brain that makes these modern marvels work. To get these automobiles on the road, the global community is doing its best to support autonomous car development. The current U.S. Department of Transportation Secretary, Elaine Chao, held a press conference at the University of Michigan's Mcity in late September to announce the simplification of restrictions on autonomous vehicle R&D and testing, and a bill addressing how this will be done is currently making its way through the U.S. Senate and House. Europe is already testing autonomous shuttles in some cities (driverless public transport has long been seen as a natural fit for the technology), and companies like nuTonomy in Singapore and Uber in various U.S. cities have been trying out autonomous taxis and ride-hailing options on public roads for a few years now. The inputs and outputs AI has to process just for the normal operation of a vehicle on everyday roads are vast and complex; add to that the unique life-and-death decisions placed on the neural networks, and innovators are seeking ways to make those decisions less polarizing. After all, as much as many look forward to a day when the driverless car makes traffic jams and automobile collisions a thing of the past, one issue keeps coming to the surface: what happens when an unmanned vehicle is faced with whom to save in an impending accident?

The Spock conundrum of logic vs. emotion


Left Brain (logic) vs. Right Brain (emotion)

Until recently, and in some cases still, self-driving cars have been seen as the stuff of fantasy and science fiction. To that point, sci-fi stories have long used logic to make choices that emotion might very well undermine. Logic-based life-forms and sentient machines tend to react in ways that seem counter to human instinct, but their choices often end up being about salvation rather than destruction. The purpose, obviously, is to show the human side of a machine or emotionless entity in a way that gets the audience to embrace the character. Yet when you break down those decisions, you realize that those choices are logical, even as they tug at our emotions.

In the classic film Star Trek II: The Wrath of Khan, as the being synonymous with bridging the gap between emotion and logic sacrifices himself for the crew, Spock opines, “The needs of the many outweigh the needs of the few… or the one.” It is a logical choice when faced with such a situation: one life, or a few, as opposed to masses. Arnold Schwarzenegger’s reformed terminator in Terminator 2: Judgment Day concludes that sacrificing himself will save the race he was once programmed to destroy, a conclusion drawn from the deep learning of his neural networks. He has come to understand and care about humans, and sacrifice is the logical choice so that the needs of the many are met. K-2SO going out in a hail of laser cannon fire in Rogue One: A Star Wars Story so that Jyn and Cassian can get the plans to the Death Star, saving billions of lives in the process, is also a logical decision made by a sentient being. And then there is the doe-eyed, heartstring-pulling WALL-E, who hitches an unscheduled ride on the spaceship carrying his beloved and suddenly catatonic EVE (EE-vah), only to discover that the entire population of the planet he has been cleaning up has been reduced to unhealthy, overweight drones, forever lost in space. His willingness to give up his learned humanity to save them all is, once again, the logical choice. One trash-gathering robot vs. the whole human race? No question.

Therefore, programming the AI in self-driving cars to react in a way that serves the greater good makes sense, right? However, road collisions are frequently one-to-one situations. They rarely enter the “needs of the many outweigh the needs of the few” category, at least on the surface. It’s that grey area that adds an extra dimension to reaching a conclusion that makes the most sense. For human drivers, this is a moral dilemma that is only realized after much thought, weighing cause and effect, and much consideration, bearing the emotional burden of a decision that could lead to tragedy. The “moral” only comes into play when a person is operating a vehicle; when artificial intelligence in self-driving cars accesses digital programming to determine what to do, deep learning comes in. Understood. Except, what is deep learning?

Teaching machines how to learn


Deep learning basically takes massive amounts of data and passes it through stacked layers of processing, building conclusions that lead to a human-like recognition of what something as abstract as an image or a sound actually is. The “deep” comes from those layers: over time, as levels of information accumulate and the system experiences more data, it learns more about what it is gathering, allowing the AI to correct itself and get better at recognizing, and therefore reacting appropriately to, its input. Correcting its own mistakes, just like you and me.
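To make the layering idea concrete, here is a toy sketch in Python of a two-layer network. Everything in it is illustrative: the weights are set by hand so the network solves a tiny exclusive-or problem, whereas in real deep learning those numbers are found by training on data.

```python
import math

def sigmoid(x):
    # squashes any number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # each neuron takes a weighted sum of the previous layer's outputs,
    # then passes it through a nonlinearity -- this is one "layer" of depth
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hand-picked weights for a two-layer network that computes exclusive-or.
# In real deep learning these numbers are learned from data, not written down.
hidden_w, hidden_b = [[20, 20], [-20, -20]], [-10, 30]  # OR-ish and NAND-ish neurons
out_w, out_b = [[20, 20]], [-30]                        # ANDs the two together

def predict(x1, x2):
    hidden = layer([x1, x2], hidden_w, hidden_b)   # first layer of recognition
    return layer(hidden, out_w, out_b)[0]          # second layer builds on it

# The stacked layers recognize a pattern no single neuron could:
# predict(0, 0) and predict(1, 1) come out near 0; predict(0, 1) near 1.
```

The point of the sketch is that each layer builds on what the one below it recognized, which is exactly the "layering upon each other" described above.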

Heady stuff, right? Well, these are exceptional amounts of data being assimilated, and an amazing capability that is pushing automakers and tech giants closer to creating truly effective driverless solutions. HOWEVER, it still doesn’t answer the question: what happens when a machine is faced with who lives and who dies on the road?


Here’s the dilemma, one we discussed in our Child Safety article: You’re in your self-driving car, barrelling happily down the street, when a pedestrian runs out. On one side is oncoming traffic, on the other a sheer cliff. What do you do? If you, a human, were driving, many people say they would go off the cliff, hoping to miss the pedestrian AND the many in oncoming traffic. But if there is no one behind your wheel, the AI in self-driving cars kicks in and makes the decision for you. And per automakers, that decision is to save the passenger in the vehicle, not the people on the road. What would be a big “Whoa!” moment that you would live with for the rest of your life is not so for something like NVIDIA’s Drive PX. Because AI in self-driving cars doesn’t have the ability to emotionalize things, it will never wake up in the middle of the night sweating over the choice it made. With no moral dilemma or ethical hand-wringing, the AI bases the decision on algorithms and probabilities, on the patterns of recognition that feed deep learning, not on emotion. So how does all of that processing get artificial intelligence to the right decision?

Good question. Because, when you think of it, that whole “the needs of the many outweigh the needs of the few” is flipped in a driverless world. It becomes, “the needs of the ones in the car outweigh the needs of the bunches out there on the street.” It’s a choice, sure, but is it a truly logical one?

The reasoning power of AI in self-driving cars


Before we go into what kind of information artificial intelligence is taking in to help it decide who lives and who dies, perhaps we can play a bit with what we, the humans, do with the inputs we’re given.

The Massachusetts Institute of Technology (MIT) worked on a self-driving car project that considered this very dilemma and created a site, the Moral Machine, that lets people test what they would do in these potential crash situations. As you move through the different scenarios, it gets harder and harder to decide what is “right,” to the point that you wonder whether there is any “right” in these instances. And if we, as thinking humans, can’t figure out the right thing to do, how does the AI in self-driving cars manage it? By taking the emotional connection out of the equation and using algorithms to feed deep learning, autonomous vehicles are able to do what they need to do to get from Point A to Point B, efficiently and seamlessly. Collateral damage may very well be something with which to contend, but, as automakers have made clear, the technology behind each driverless car is designed to slow down, swerve, or brake in enough time to avoid loss of life and catastrophic auto accidents.

However, it’s still a rather unique “moral” problem. From ethicist Philippa Foot to philosopher Judith Jarvis Thomson and beyond, figuring out how to address this ethical issue has been a challenge. Everything about the Trolley Dilemma is based upon a series of factors you may never face. Yet the world is moving forward with autonomous vehicles, and the reality of the AI in self-driving cars using its neural network to make a decision beyond passenger control is on the horizon. An example of a supplier of this technology is NVIDIA, whose Drive PX platform, championed by CEO Jen-Hsun Huang, offers small and large options for a vehicle’s self-driving brain. That something like this is now available makes understanding deep learning, the mechanism and reasoning behind such choices, all the more pressing.

Algorithm image by Docurbs via Wikimedia Commons

Machine learning, the algorithm way

AI in self-driving cars becomes smarter thanks to algorithms. But what exactly are these? And how do they learn or contribute to the “smartness” of your car?

An algorithm is made up of inputs that prompt specific outputs: a series of bits of information fed into a centralized mechanical brain that tells it how to take that INPUT and create actionable OUTPUT, initiating an appropriate response. An algorithm is often likened to a recipe, the ingredients being the inputs and the finished meal the output. It’s a helpful tool in the world of machine learning, and in the case of AI in self-driving cars, a huge influence on creating a safer, more seamless experience.
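As an illustration of the recipe idea, here is a hypothetical and wildly oversimplified decision rule in Python: sensor readings go in as INPUT, an action comes out as OUTPUT. The function name, thresholds and action labels are invented for this sketch and bear no relation to any real vehicle's software.

```python
def braking_decision(distance_m, speed_mps):
    """Toy 'recipe': sensor readings (ingredients) in, one action (meal) out."""
    # time until reaching the obstacle at the current speed
    time_to_impact = distance_m / speed_mps if speed_mps > 0 else float("inf")
    if time_to_impact < 2.0:
        return "emergency_brake"
    elif time_to_impact < 5.0:
        return "slow_down"
    return "maintain_speed"

# 10 m away at 10 m/s leaves 1 second: this INPUT maps to an emergency OUTPUT
print(braking_decision(10, 10))   # -> emergency_brake
```

A real autonomous system chains thousands of such input-to-output mappings, most of them learned rather than hand-written, but the ingredients-to-meal shape is the same.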

Whether the information used to create these algorithms is truly assimilated the way its human creators intend is a concern. An example given in the article “Why AI Makes It Hard to Prove That Self-Driving Cars Are Safe” by Andrew Silver poses the worry over what is being considered and what isn’t: “imagine if you test drive your self-driving car and want it to learn how to avoid pedestrians. So you have people in orange safety shirts stand around and you let the car loose. It might be training to recognize hands, arms, and legs—or maybe it’s training to recognize an orange shirt.”

Because algorithms use statistics to define results, they can end up weighting some information and discarding other information on the way to a conclusion. That incompleteness breeds a false familiarity which, in instances like this, can breed danger by creating hidden limitations. That type of rote learning can mean something as simple as getting to know a certain area of town really well while not understanding other parts at all, making it virtually impossible to successfully navigate those alternative roads, street signs and pedestrian interactions.
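The orange-shirt worry can be sketched in a few lines of Python. The "learner" below is deliberately naive, classifying by whatever single feature it is told to key on; the training set and feature names are invented purely for illustration:

```python
from collections import Counter

# Invented training set: every pedestrian example happens to wear orange,
# so shirt colour and "pedestrian" are perfectly (and accidentally) correlated.
training_data = [
    ({"color": "orange", "shape": "human"}, "pedestrian"),
    ({"color": "orange", "shape": "human"}, "pedestrian"),
    ({"color": "grey",   "shape": "box"},   "obstacle"),
    ({"color": "grey",   "shape": "box"},   "obstacle"),
]

def predict(example, feature):
    # naive statistical learner: vote by whichever label most often
    # co-occurred with this one feature's value during training
    votes = Counter(label for feats, label in training_data
                    if feats[feature] == example[feature])
    return votes.most_common(1)[0][0] if votes else "unknown"

# Keyed on colour, an orange traffic cone looks like a pedestrian...
cone = {"color": "orange", "shape": "cone"}
# ...and a pedestrian in a blue shirt is not recognized at all.
blue_pedestrian = {"color": "blue", "shape": "human"}
```

Here `predict(cone, "color")` answers "pedestrian" while `predict(blue_pedestrian, "color")` finds nothing familiar: the statistics were complete for the training lot, and dangerously incomplete for the real street.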

Difference between Traditional Modeling and Machine Learning. Image from ZEISS International

There are two distinct types of algorithms used to make AI in self-driving cars effective: Computer Vision and Machine Learning. The Computer Vision algorithm is the more traditional form, using a cascading sort of learning process with encoded programming that leads to a predicted result. The newer, more innovative and precise Machine Learning algorithm, embodied in deep neural networks, goes beyond fixed code and uses sample data to “learn” and generalize to what it has yet to experience, thereby broadening its range of outputs.

In the case of deep learning, the data accumulated to feed those decisions goes into something called a “black box.” It holds all of that information so it can be accessed and used by the machine’s brain. However, the actual process leading from inputs to outputs is so intensive that comprehending what led to a given decision is beyond human tracing. This means that should the system react incorrectly, it’s virtually impossible for a person to take everything gathered in the black box and determine what caused the wrong decision. And if they can’t figure that out, they can’t fix the process that led to that conclusion so it won’t happen again.
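A minimal sketch of that difference, with every feature name and number invented for illustration: the hand-coded rule can be read and debugged line by line, while the learned model's behaviour is buried in numbers that explain nothing on their own.

```python
# Hand-coded, "Computer Vision"-style rule: every test is a line a human
# wrote, so a wrong answer can be traced back to a specific, fixable condition.
def rule_based_is_pedestrian(obj):
    return obj["height_m"] > 1.0 and obj["moving"] and obj["aspect_ratio"] < 0.6

# Learned model: behaviour lives in numbers produced by training.
# Reading these weights tells a human nothing about WHY a score came out
# the way it did -- the black-box problem in miniature.
learned_weights = [0.83, -1.92, 2.41]   # arbitrary stand-ins for trained values

def learned_is_pedestrian(features):
    score = sum(w * f for w, f in zip(learned_weights, features))
    return score > 0   # which feature tipped the balance? The weights won't say.
```

When the rule-based version misfires, an engineer can point at the offending threshold; when the learned version misfires, all there is to point at is a list of weights.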


Now introduce into that the question of who on the street to save in the event of a collision. If the mechanical brain of the autonomous vehicle has input probabilities with only a finite number of outcomes and finds itself facing one it has never encountered before, how does it decide? It pulls from what it knows, adjusting to the situation as best it can with the information it’s been given. Much like humans, actually, though with a more logical and detached view that lets the car react in a way that protects its passenger above all else. Simple, straightforward and unfailing.

Because that whole idea of “the needs of the many outweigh the needs of the few” has a lot of deep layers. Whose many? What few? And how many are affected if that few is lost? It goes on and on until, honestly, some sort of stance needs to be taken, and just as the MIT project shows, figuring out a clean, clear choice with so many variables involved is virtually impossible. For a machine, picking a lane comes without any baggage. But for the human affected by that decision? It is a weight far greater than anything a machine can comprehend. The capacity for understanding flips: people know there is sacrifice, and know its outcome brings with it pain and confusion that must be borne. Machines only act, and as much as they may or may not learn from their actions, they will never feel the enormity of them.

An enduring question

It is a given that humans are fallible. There is enough data on the high cost of human error behind the wheel to warrant a hard look. Something Silicon Valley is also learning through its testing, from companies like Uber to Waymo and beyond, is that people can be detrimental to a self-driving car project: the bulk of autonomous car accidents are due to human intervention. And studies have shown that machines react much faster than a man or woman ever could. But if the AI in self-driving cars isn’t sure what it’s reacting to or how it’s supposed to react, is it even possible for it to take action at all, let alone appropriate action?


A view of Silicon Valley at dusk

It’s a unique conundrum, surely, and there are no easy answers in any of this. And so, as the various Silicon Valley giants and automakers discuss, confer, and keep coming up with technologies like NVIDIA’s Drive PX, deep learning algorithms, and neural networks in general, the technology gets smarter, but the questions get harder.


Zero-Emission Vehicles: Golden State Goes All In


Per a recent study of data collected between 2013 and 2015 by the American Lung Association, the Golden State is the dirtiest state in the union, with six of the top ten worst cities on the list located on the West Coast. With more cars per capita than some countries (approximately 749 automobiles per thousand residents), it’s no wonder that California consistently pushes to lower its carbon footprint. That push was the impetus for then-Governor Ronald Reagan and his administration to create the California Air Resources Board (CARB) in 1967. The legacy of supporting clean air and healthy living in the region lives on, as shown by Governor Jerry Brown recently signing 12 bills to further strengthen California’s near-zero and zero-emission vehicle, or ZEV, markets.

Strengthening the rules for zero-emission vehicles on the road

These bills cover a broad but clean-energy-focused spectrum: dedicated on-street public parking spaces for charging a parked electric car, extended access to high-occupancy vehicle (HOV) lanes for certain clean alternative fuel vehicles, a clean-car program to help low-income residents replace their high-polluting cars with zero-emission vehicles, and more. One sweeping bill, SB 498, sponsored by Senator Nancy Skinner (D-Berkeley), raises the requirement for the state’s light-duty vehicle fleet to become zero-emission from the current 25 percent by 2020 to 50 percent or more by 2025. Each of the new bills pushes for more effective and active ZEV support to get the state to a cleaner, healthier place, and to move it off that list of the dirtiest.

Assisting the greening of commercial fleets


Heavy-duty vehicles were also addressed in the bills. Commercial vehicles in general, and the greenhouse gas they generate, have been a subject of much discussion across the country for years. FedEx’s commitment to clean energy and to utilizing alternative fuel cells in its heavy-duty trucks has been a breakthrough in the battle against climate change. The “new normal” the delivery giant has successfully established for itself is one that other commercial companies are starting to see as something they can embrace. Bills AB 739 and AB 1073 both support that transformation by specifically dealing with ways to reduce the carbon emissions associated with heavy-duty trucks and vehicles. AB 739, drafted by Assemblymember Ed Chau (D-Monterey Park), will require that at least 15 percent of certain newly purchased state heavy-duty vehicles be ZEV starting in 2025, and 30 percent or more beginning in 2030. AB 1073, drafted by Assemblymember Eduardo Garcia (D-Coachella), extends a current requirement to fund the early deployment of clean heavy-duty trucks; it is part of California’s existing Clean Truck, Bus, and Off-Road Vehicle program.

The bills intentionally do not call out any specific type of clean energy automobile, such as the plug-in hybrid or electric vehicle. By targeting the near-zero or full ZEV market, legislation is able to cover a broad range of alternative fuel cell cars that will help stem greenhouse gas issues on a variety of levels. These zero-emission options include the plug-in electric vehicle, the plug-in hybrid electric car, hydrogen fuel cells, natural gas — basically, anything that burns clean energy and won’t add to the greenhouse gas problem.

Governor Brown’s response to concerns about the effects of climate change on the state’s residents and environment came on the heels of the head of the EPA announcing the scrapping of the Clean Power Plan. California has long considered getting rid of its petroleum cars, with the state government putting together plans for all new cars to be zero-emission only by 2050. That proposed ban on gasoline engines, along with the state’s decision to stand by the Paris Agreement even as the current administration considers pulling out, is a clear sign of California’s commitment to its near-zero and zero-emission future. These green vehicle initiatives are nothing new in California, as mentioned, but strengthening them joins the support of autonomous car R&D as a way to make ground transport safer, cleaner and more efficient.

Self-driving not to be outdone


Following the governor’s signing of the new zero-emission initiatives, the California Department of Motor Vehicles (DMV) released revisions to its autonomous vehicle regulations. The move supports the recent Department of Transportation (DOT) announcement loosening restrictions and requirements for driverless auto development and testing on public roads. The state has had autonomous vehicle rules in place since 2014, with 42 companies currently allowed to test their cars on its roads. This welcoming atmosphere is making California a haven for automakers seeking to test and expand their self-driving capabilities, and to grow the technology into a viable business that can finally be put to practical use on the road.

A focus on saving lives


Creating innovative legislation to further stem greenhouse gas emissions, addressing climate change to establish a cleaner, healthier future, and setting forth clearer laws for the development and testing of autonomous vehicles on the roads are all part of California’s desire to make the state that much safer for its residents. The goal is for the electric vehicle, plug-in hybrid, plug-in electric vehicle and other alternative fuel cell technologies to be the norm, not the exception, on West Coast roads sooner rather than later. That includes creating a more equitable and accessible ground for testing and growing the autonomous vehicle market in the Golden State. With that in mind, it will be interesting to see how fast the rest of the nation follows suit. California consistently sets the drumbeat for environmental and technological innovation, and these recent changes certainly continue that trend.

But no matter how the rest of the nation, or the world, reacts, California remains steadfast in its mission to clean up and innovate ground travel at home. The new zero-emission legislation and the DMV’s autonomous vehicle changes combine to move California away from being the dirtiest and among the most congested states toward one where residents can breathe and move around more easily, assured of a comfortable, efficient and safe journey in whatever form of transportation they choose.

