AI in Self-Driving Cars: Deep Learning Deciding Life or Death


Creating a safer, less congested and more ecologically friendly world is the main goal of driverless vehicles, and artificial intelligence (AI) is the brain that makes these modern marvels work. To get these automobiles on the road, the global community is doing its best to support autonomous car development. The current U.S. Transportation Secretary, Elaine Chao, held a press conference at the University of Michigan’s Mcity in late September to announce the simplification of restrictions on autonomous vehicle R&D and testing, and a bill addressing how this will be done is currently making its way through the U.S. Senate and House. Europe is already testing autonomous shuttles in some cities — driverless public transport has long been seen as a natural fit for the technology — and companies like nuTonomy in Singapore and Uber in various U.S. cities have been trying out autonomous taxis and ride-hailing options on public roads for a few years now. The inputs and outputs AI has to process just for the normal operation of a vehicle on everyday roads are vast and complex; add to that the unique life-and-death decisions placed on its neural networks, and innovators are seeking ways to make those less polarizing. After all, as much as many look forward to a day when the driverless car makes traffic jams and automobile collisions a thing of the past, one issue keeps coming to the surface: what happens when an unmanned vehicle is faced with whom to save in an impending accident?

The Spock conundrum of logic vs. emotion


Left Brain (logic) vs. Right Brain (emotion)

Up until recently — and in some cases still — self-driving cars have been seen as the stuff of fantasy and science fiction. To that point, sci-fi stories have long used logic to make choices that emotion might very well undermine. Logic-based life-forms and sentient beings tend to react in ways that seem counter to human instinct but often end up being about salvation rather than destruction. The purpose, obviously, is to show the human side of a machine or emotionless entity in a way that gets the audience to embrace the character. However, when you break those decisions down, you realize that they are logical, even as they tug at our emotions.

In the classic film Star Trek II: The Wrath of Khan, as the being synonymous with bridging the gap between emotion and logic sacrifices himself for the crew, Spock opines, “The needs of the many outweigh the needs of the few… or the one.” It is a logical choice when faced with such a situation: one life, or a few, as opposed to masses. Arnold Schwarzenegger’s reformed terminator in Terminator 2: Judgment Day concludes that he must sacrifice himself to save a race he was once programmed to destroy, a conclusion drawn from the deep learning of his neural networks. He has come to understand and care about humans, and sacrificing himself to save them is the logical choice so that the needs of the many are met. K-2SO going out in a hail of laser cannon fire in Rogue One: A Star Wars Story, so that Jyn and Cassian can get the plans to the Death Star and save billions of lives in the process, is also a logical decision made by a sentient machine. And then there is the doe-eyed, heartstring-pulling WALL-E, who hitches an unscheduled ride on the spaceship that has taken his beloved and suddenly catatonic EVE (EE-vah), only to discover that the entire population of the planet he has been cleaning up has been reduced to unhealthy, overweight drones forever lost in space. His willingness to give up his learned humanity to save them all is, once again, the logical choice. One trash-gathering robot vs. the whole human race? No question.

Therefore, programming the AI in self-driving cars to react in a way that serves the greater good makes sense, right? However, road collisions are frequently one-to-one situations. They rarely enter the “needs of the many outweigh the needs of the few” category, at least on the surface. It’s that grey area that adds an extra dimension to reaching a conclusion that makes the most sense. For human drivers, this is a moral dilemma realized only after much thought (weighing cause and effect) and consideration (the emotional burden of a decision that may lead to tragedy). The “moral” only comes into play when a person is operating a vehicle; when the artificial intelligence in a self-driving car consults its programming to determine what to do, deep learning takes over. Understood. Except, what is deep learning?

Teaching machines how to learn

deep learning in a neural network

Deep learning takes massive amounts of data and layers it into successive levels of representation, building conclusions that lead to a human-like recognition of what something as abstract as an image or a sound actually is. The “deep” comes from those layers: over time, the levels of information gathered, and the way the system experiences that data, teach it more about what it is taking in, allowing the AI to correct itself and get better at recognizing the input and, therefore, reacting to it appropriately. Correcting its own mistakes, just like you and me.
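That self-correction loop can be sketched in a few lines of code. The toy single-weight “neuron” below is purely illustrative (the data, learning rate and function name are all invented for this article); real driving networks stack millions of such units in many layers, but the guess-measure-error-adjust cycle is the same basic idea.

```python
# A toy illustration, not a real driving system: a single "neuron"
# repeatedly corrects its own error, the core loop behind deep learning.

def train_neuron(samples, lr=0.1, epochs=200):
    """Learn a weight w so that the prediction w * x approximates y."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            error = pred - y          # how wrong was the guess?
            w -= lr * error * x       # nudge the weight to shrink the error
    return w

# Teach it the rule y = 2x from examples alone; it converges on w ≈ 2.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_neuron(samples)
```

The point is that nobody hard-codes the rule “multiply by two”; the system discovers it by repeatedly correcting its own mistakes, which is exactly the behavior the paragraph above describes.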

Heady stuff, right? Well, these are exceptional amounts of data being assimilated, and an amazing capability that is pushing automakers and tech giants closer to creating truly effective driverless solutions. HOWEVER, it still doesn’t answer the question: what happens when a machine is faced with who lives and who dies on the road?

a car stopping for a pedestrian

Here’s the dilemma—and one we discussed in our Child Safety article: You’re in your self-driving car, barrelling happily down the street, when a pedestrian runs out. On one side is oncoming traffic, on the other a sheer cliff. What do you do? If you, a human, were driving, many people say they would go off the cliff, hoping to miss the pedestrian AND the many in oncoming traffic. But if no one is behind your wheel, the AI in the self-driving car kicks in and makes the decision for you. And per automakers, that decision is to save the passenger in the vehicle, not the people on the road. What would be a big “Whoa!” moment that you would live with for the rest of your life is not so for something like NVIDIA’s Drive PX. Because the AI in a self-driving car doesn’t have the ability to emotionalize things, it will never wake up in the middle of the night sweating about the choice it made. Lacking any moral dilemma or ethical consideration, the AI bases its decision on algorithms and probabilities, the patterns of recognition that feed deep learning, not on emotion. And how is all of that processing getting artificial intelligence to the right decision?

Good question. Because, when you think of it, that whole “the needs of the many outweigh the needs of the few” is flipped in a driverless world. It becomes, “the needs of the ones in the car outweigh the needs of the bunches out there on the street.” It’s a choice, sure, but is it a truly logical one?

The reasoning power of AI in self-driving cars

How AI in self-driving cars may react and see things

Before we go into what kind of information artificial intelligence is taking in to help it decide who lives and who dies, perhaps we can play a bit with what we, the humans, do with the inputs we’re given.

The Massachusetts Institute of Technology (MIT) worked on a self-driving car project that considered this dilemma and created a site, the Moral Machine, that lets people test what they would do in these potential crash situations. As you move through the different scenarios, it gets harder and harder to decide what is “right,” to the point that you wonder whether there is any “right” in these instances. And if we, as thinking humans, can’t figure out the right thing to do, how does the AI in self-driving cars manage it? By taking the emotional connection out of the equation and using algorithms that feed deep learning, autonomous vehicles are able to do what they need to do to get from Point A to Point B, efficiently and seamlessly. Collateral damage may very well be something with which to contend, but, as automakers have made clear, the technology behind each driverless car is built to slow down, swerve or brake in time to avoid loss of life and catastrophic accidents.

However, it’s still a rather unique “moral” problem. From ethicist Philippa Foot to philosopher Judith Jarvis Thomson and beyond, figuring out how to address this ethical issue has been a challenge, and everything about the trolley problem is based upon a series of factors you may never face. Yet the world is moving forward with autonomous vehicles, and the reality of the AI in a self-driving car using its neural network to make a decision beyond passenger control is on the horizon. One supplier of this technology is NVIDIA, Jen-Hsun Huang’s company, whose Drive PX platform offers small and large options for a vehicle’s self-driving brain. That something like this is now available makes understanding the mechanism and reasoning behind such choices, deep learning, all the more pressing.

Algorithm image by Docurbs via Wikimedia Commons

Machine learning, the algorithm way

AI in self-driving cars becomes smarter thanks to algorithms. But what exactly are these? And how do they learn or contribute to the “smartness” of your car?

An algorithm comprises inputs that prompt specific outputs: a series of pieces of information fed into a centralized mechanical brain that tell it how to take that INPUT and create actionable OUTPUT, initiating an appropriate response. An algorithm is often likened to a recipe, the ingredients being the inputs and the finished meal the output. It’s a helpful tool in the world of machine learning and, in the case of AI in self-driving cars, a huge influence on creating a safer, more seamless experience.
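The recipe analogy can be made concrete with a minimal sketch. The function below maps two invented “ingredient” inputs (speed and obstacle distance, with a made-up stopping heuristic) to one actionable output. None of these numbers or names reflect any real vehicle’s logic; they only show the input-to-output shape of an algorithm.

```python
# A hedged sketch of the "recipe" idea: fixed inputs map
# deterministically to one actionable output.

def braking_output(speed_mph, obstacle_distance_m):
    """Turn two sensor inputs into one actionable output."""
    # Crude invented heuristic: assume roughly half a meter of
    # stopping distance per mph of speed.
    stopping_margin = obstacle_distance_m - speed_mph * 0.5
    if stopping_margin < 0:
        return "brake_hard"
    elif stopping_margin < 10:
        return "brake_gently"
    return "maintain_speed"

# The same inputs always produce the same output.
a = braking_output(30, 5)    # obstacle very close -> "brake_hard"
b = braking_output(30, 20)   # some margin -> "brake_gently"
c = braking_output(30, 60)   # plenty of room -> "maintain_speed"
```

Notice that every rule here was written by a human in advance; the machine-learning algorithms discussed below differ precisely in that they derive such rules from data instead.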

How the information used to create these algorithms is actually assimilated, and how well human beings can audit it, is a concern. An example given in the article “Why AI Makes It Hard to Prove That Self-Driving Cars Are Safe” by Andrew Silver poses the worry over what is being learned and what isn’t: “imagine if you test drive your self-driving car and want it to learn how to avoid pedestrians. So you have people in orange safety shirts stand around and you let the car loose. It might be training to recognize hands, arms, and legs—or maybe it’s training to recognize an orange shirt.”

Because algorithms use statistics to reach their results, they can weight some information heavily and discard other information entirely on the way to a conclusion. That incompleteness breeds a narrow familiarity which, in instances like this, can breed danger by creating hard limitations. This kind of rote learning is like getting to know one part of town really well while not understanding the other parts at all, making it virtually impossible to successfully navigate those other roads, street signs and pedestrian interactions.

Difference between Traditional Modeling and Machine Learning. Image from ZEISS International

There are two distinct types of algorithms used to make AI in self-driving cars effective: computer vision and machine learning. The computer vision algorithm is the more traditional form, using a cascading learning process with hand-encoded programming that leads to a predicted result. The newer, more precise machine learning algorithm, built on deep neural networks, goes beyond fixed code and uses sample data to “learn” and infer results for situations it has yet to experience, thereby broadening its range of outputs. In the case of deep learning, the data feeding those decisions goes into something called a “black box,” which holds all of that information so it can be accessed and used by the machine’s brain. However, the actual process that turns the inputs into the outputs is so involved that comprehending what led to a given decision is beyond human thought. This means that should the system react incorrectly, it is virtually impossible for a person to take everything gathered in the black box and determine what caused the wrong decision. And if engineers can’t figure that out, they can’t fix the process that led to that conclusion so it won’t happen again.
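The contrast between the two styles can be shown with a deliberately tiny example. Below, the “traditional” classifier applies a threshold a programmer chose by hand, while the “learning” version derives its own threshold from labeled samples. The one-dimensional brightness data is invented for illustration and bears no relation to real perception pipelines, which learn from millions of images rather than four numbers.

```python
# Traditional style: a rule a human encoded in advance.
def traditional_is_pedestrian(brightness):
    return brightness > 0.5            # threshold chosen by a programmer

# Learning style: derive the rule from labeled examples instead.
def learn_threshold(samples):
    """Pick the candidate threshold that best separates the labels."""
    best_t, best_correct = 0.0, -1
    for t in [s[0] for s in samples]:
        correct = sum((b > t) == label for b, label in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Invented training data: (brightness, is_pedestrian) pairs.
samples = [(0.1, False), (0.3, False), (0.7, True), (0.9, True)]
t = learn_threshold(samples)           # learned, not hand-coded
```

Even in this four-line “model,” the learned threshold exists only as a number with no attached explanation, a small taste of the black-box problem the paragraph above describes at real-world scale.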

AI in self-driving car as the vehicle sees it

Now introduce into that the problem of deciding whom on the street to save in the event of a collision. If the mechanical brain of the autonomous vehicle has input probabilities with only a finite number of outcomes and finds itself facing one it has never encountered before, how does it decide? It pulls from what it knows, adjusting to the situation as best it can with the information it has been given. Much like humans, actually, but with a more logical, detached view that lets the car react in a way that protects its passenger above all else. Simple, straightforward and unfailing. Because that whole idea of “the needs of the many outweigh the needs of the few” has a lot of deep layers. Whose many? Which few? And how many are affected if that few is lost? It goes on and on until, honestly, some sort of stance has to be taken, and just as the MIT project shows, finding a clean, clear choice when there are so many variables involved is virtually impossible. For a machine, picking a lane comes without any baggage. But for the human at the effect of that decision? It is a weight far greater than anything a machine can comprehend. The capacity for understanding flips: people know there is sacrifice and know its outcome brings pain and confusion that must be borne. Machines only act, and as much as they may or may not learn from their actions, they will never feel the enormity of them.
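To make the “picking a lane” point concrete: a machine’s baggage-free choice can be sketched as minimizing expected harm over a table of scored maneuvers. Every maneuver name, probability and harm value below is invented for illustration; production systems are vastly more complex and, per the automakers cited above, are engineered to brake or swerve early enough that such a choice never arises.

```python
# A deliberately simplified sketch: score each maneuver by
# expected harm (probability * estimated harm) and take the minimum.

def choose_maneuver(options):
    """options maps a maneuver name to (collision_probability, harm)."""
    return min(options, key=lambda name: options[name][0] * options[name][1])

# Invented numbers for three hypothetical maneuvers.
options = {
    "brake_straight":  (0.6, 2.0),   # expected harm 1.2
    "swerve_left":     (0.9, 5.0),   # into oncoming traffic: 4.5
    "swerve_off_road": (1.0, 1.0),   # off the road: 1.0
}
choice = choose_maneuver(options)    # lowest expected harm wins
```

The machine simply returns the minimum and moves on; the entire moral weight of how those probabilities and harm values were assigned stays with the humans who supplied them.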

An enduring question

It is a given that humans are fallible. There is enough data to warrant a hard look at the high cost of human error behind the wheel. Something Silicon Valley is also learning through its testing, from companies like Uber to Waymo and beyond, is that people can be detrimental to a self-driving car project: the bulk of autonomous car accidents are attributable to human intervention. And studies have shown that machines react much faster than a human ever could. But if the AI in a self-driving car isn’t sure what it’s reacting to or how it’s supposed to react, is it even possible for it to take action at all, let alone the appropriate action?

where AI in self-driving cars is taking shape

A view of Silicon Valley at dusk

It’s a unique conundrum, surely, and there are no easy answers in any of this. And, so, as these various Silicon Valley giants and automakers discuss, confer, and keep coming up with such technologies as Jen-Hsun Huang’s Drive PX, deep learning of algorithms, and neural networks in general, the technology gets smarter, but the questions get harder.


DARPA Autonomous Vehicle Research And Self-Driving Cars

Home of DARPA

Aerial view of The Pentagon, home of DARPA

The Defense Advanced Research Projects Agency (DARPA) has been on the cutting edge of homeland security innovation since the age of Sputnik, and its autonomous vehicle research is prompting a collaboration among different industries committed to changing how consumers (not just the military) will travel in the decades to come.

As soon as the Russian satellite Sputnik was launched in 1957, the United States went on high alert. That momentous event led President Dwight D. Eisenhower to create one of the most innovative agencies in the Federal government. The Defense Advanced Research Projects Agency (DARPA), which at the time simply went by Advanced Research Projects Agency (ARPA)—the “D” wouldn’t be added until 1972—was assembled to make America a leader in strategic technologies rather than playing catch-up. In the decades since its inception, DARPA has gone on to influence and initiate projects that have moved homeland security forward in unique ways, as well as establishing benchmark technologies that forever changed the face of the world. More recently, DARPA autonomous vehicle research laid the groundwork for the self-driving cars that are creating a new mode of consumer travel across a variety of key industries, and thanks to the agency, momentum is building.

A government agency on the cutting edge

From the day it was created in 1958, DARPA has been pushing the boundaries of technology and innovation. It initiated rocket research that same year, and in 1959 it turned over its Television and Infrared Observation Satellites (TIROS) Program to NASA; that program became the basis for today’s global weather forecasting, reporting and research by the Department of Defense (DoD), NASA and the National Oceanic and Atmospheric Administration (NOAA).

DARPA’s purpose

While the agency’s focus is, and always has been, national security, and the technologies it develops are heavily military and government based, DARPA’s overarching goal is to push technology forward in a global sense. The group has been instrumental in advancing some of the most critical innovations and technologically sophisticated inventions in the world. Among these are the internet—which began life as ARPAnet back in the 1960s—GPS and the computer mouse.

DARPA is constantly changing and innovating, never staying with one team for too long in order to remain nimble and fresh. Part of that fluidity is to create access to its tools for universities, industries and small businesses in addition to the armed forces. The agency’s goal is to constantly move forward by addressing real world concerns, strategically and practically. While the bulk of its research is centered around defending the country and creating better ways to arm and support the military, DARPA makes its technologies and findings available across all manner of divisions—universities, small businesses, industry and the public—as well as encouraging input and proposals from those same communities. In the words of the organization’s website, DARPA “works within an innovation ecosystem that includes academic, corporate and governmental partners, with a constant focus on the Nation’s military Services, which work with DARPA to create new strategic opportunities and novel tactical options.”

And that is where the role of DARPA autonomous vehicle research in the creation of the self-driving car comes in.

The story of The Grand Challenges

In the early 2000s, Congress gave DARPA a mandate: implement unmanned vehicles into the military by 2015. Building actual working self-driving vehicles had been a quest since the days of Leonardo da Vinci, and while NASA’s unmanned Mars Exploration Rovers would launch in 2003, nothing sustainable for broader, everyday use had yet come to fruition. To successfully pursue its autonomous vehicle research, the agency felt it needed to do something more than run through the usual internal discovery process. Pushing the boundaries of autonomous vehicle technology required inspiring the field in a wholly unique way. DARPA did this by creating a contest, inviting a variety of great minds to use their skills and imagination to come up with different solutions from which the best possible features could be chosen. The organization asked for and received Congressional approval for the event and cast a broad net across the academic and engineering communities for participants. This became a seminal moment in the self-driving car movement.

The First Grand Challenge

On July 30, 2002, DARPA took over the Petersen Automotive Museum in Los Angeles, attracting hundreds of techies and observers, to announce the First DARPA Grand Challenge. The object of the contest was to create an autonomous robotic vehicle that could complete an as-yet-undetermined 150-200 mile course between Los Angeles and Las Vegas for a $1 million prize. The terrain was to reflect the desert conditions of places like Fallujah, where U.S. troops were engaging in combat. By the time of the actual challenge on March 13, 2004, 15 vehicles of the original 21 qualifiers were deemed road ready for a gruelling 142-mile course across the Mojave Desert between Barstow, California, and Primm, just across the Nevada border. All of the finalists used a combination of sensors, robotics and cameras to make their dream of an autonomous ground vehicle a reality. Unfortunately, of those that ran the course, the furthest any of them got was the Carnegie Mellon University (CMU) Red Team car, which traveled 7.4 miles. A successful robotic car remained elusive and the prize money unclaimed.

The Second Grand Challenge

Photo by DARPA via Wikimedia Commons

Stanford Racing Team’s “Stanley,” winner of the Second Grand Challenge

But neither DARPA nor the contestants were daunted. The agency was heartened by the commitment shown by the participants and announced the Second DARPA Grand Challenge a day later. This time it was to be a 132-mile course, once again through the Mojave Desert, in the autumn of 2005, with a prize of $2 million for the winning crew. Teams took what they had learned in the first challenge and reworked their vehicles, incorporating various sensors, cameras and more. 195 teams entered and 5 successfully finished, with the Stanford Racing Team winning with its robotic car “Stanley” and earning the prize money. Now that the academic, engineering and tech communities had shown they could navigate the difficult desert terrain of the course, DARPA turned its mind to encouraging autonomous vehicle innovation on city streets.

The DARPA Urban Challenge

Carnegie Mellon’s Tartan team wins the DARPA Urban Challenge. Photo by Rob NREC via Wikimedia Commons

The third robotic vehicle challenge, conducted in 2007, was called the DARPA Urban Challenge. The call to action now required driverless vehicles to navigate a complicated course in a staged environment in Victorville, California, in which they would need to move through traffic and obstacles while obeying California traffic laws. Again, the prize money was $2 million. 11 teams qualified for the final event and 6 finished. The “Tartan Racing” team from Carnegie Mellon University placed first, taking the prize money, and all that had been learned through each challenge began to feed serious research into making self-driving cars a reality for all.

Influencing unmanned vehicle innovation for all

These races sparked the imagination of the engineering and automotive communities in an expansive way. Virginia Tech, one of the finalists in the urban challenge, went on to collaborate with TORC, a company founded by alumni of the Virginia Tech robotics department, to create the Ground Unmanned Support Surrogate (GUSS) for the U.S. Marine Corps, an autonomous ground vehicle designed for mass casualty evacuations from combat or compromised areas and for re-supplying troops and carrying their heavy loads. Per a 2015 article written by Chris Urmson for the National Academy of Engineering, DARPA’s challenges threw down a gauntlet to the engineering community as a whole: take the innovation inspired by, and the lessons learned from, the grand challenges and bring them to life in the real world. According to Urmson, the technologies used to develop consumer-oriented autonomous features—LIDAR, radar, cameras—were the same overarching tools used to meet the DARPA Grand Challenges. While the purpose of these contests was to push engineering forward to meet the Congressional mandate for self-driving vehicles in the military by 2015, the benefits have reached much further.

In the world of the military, unmanned is not the same as autonomous. Many of the unmanned ground vehicles (UGVs) created are remote controlled or tele-operated. These machines can still get into tight spots and deal with sensitive situations, such as the active mine removal capability of the Abrams Panther and small-space surveillance with the urban robot (URBOT), also known as Urbie, without endangering the lives of soldiers. But autonomous ground vehicles are making their way out of the armed forces and into the consumer world on a large scale, thanks to the imagination and creativity DARPA autonomous vehicle research inspired and pushed forward with its grand challenges. The urban challenge, in particular, opened a doorway to seeing how the world of self-driving cars could have everyday implications.

The role of DARPA autonomous vehicle research in the military

Since the first three grand challenges, DARPA has pursued a robotics challenge and a cyber challenge, and is currently ruminating over what to present next to the scientific, technology and engineering communities. But the inspiration of DARPA autonomous vehicle research has gone far beyond unmanned ground vehicles and the driverless car.

By U.S. Navy, Photo by John Williams

Sea Hunter, the DARPA supported ACTUV

As part of the agency’s focus on anti-submarine warfare (ASW), it has created the ASW Continuous Trail Unmanned Vessel (ACTUV). Its role is to quietly track diesel-powered enemy subs across miles of sea for long periods of time without a single crew member aboard. With everything DARPA autonomous vehicle research has prompted, the word “vehicle” is far-reaching, addressing all of the domains sensitive to homeland security: land, sea, air and space.

Among these are unmanned aerial vehicles like the Tactically Exploited Reconnaissance Node (TERN), a medium-altitude, long-endurance (MALE) unmanned aircraft system that provides persistent intelligence, surveillance and reconnaissance (ISR) and can engage mobile targets anywhere in the world at any time of day or night. There is also the dual-purpose Aerial Reconfigurable Embedded System (ARES), part of the Transformer TX program. Capable of traveling by both air and land, it can drop supplies from the air to specific points, extract soldiers and casualties from combat zones, and then drive away on the ground. It is part of a Vertical Take-Off and Landing (VTOL) project with Lockheed Martin’s Skunk Works and others.

By DARPA via Wikimedia Commons

An artist’s rendering of the HTV-2 in flight

In the realm of space, beyond the unmanned transporters to Mars, there have been the hypersonic technology vehicles (HTV) created through the Falcon project. Both the HTV-1 and the HTV-2 were developed and then scrapped, but enough research was compiled to push forward other potential uses and ways to lower costs. These were unmanned hypersonic glide vehicles designed to test technologies for extremely fast, long-range flight without a crew aboard. Now working under the name of the Tactical Boost Glide (TBG) program, these types of vehicles are being considered within the parameters of cost efficiency, feasibility and effectiveness.

Drones are certainly among the unmanned vehicles DARPA’s research has inspired. These small, economical surveillance and delivery systems serve a variety of purposes and have already entered civilian life, for fun and for business. But the focus now is on making it possible for UGVs to transport human beings on a grand scale—the autonomous car and beyond—both in combat and in day-to-day life.

DARPA of tomorrow

What the world of tomorrow looks like is anybody’s guess, but DARPA hopes to maintain its role as a leader in advanced technology for homeland security and consumer use. Its grasp extends across a variety of inventions, and its research is constant.

As we look ahead to unmanned transport, what DARPA has done to promote the autonomous vehicle technology most of us know today is vast. The Grand Challenges alone sparked an extraordinary renaissance in self-driving cars and pushed forward highly beneficial unmanned ground, air and sea vessels in the military that have implications for commercial and consumer use. While the agency has become less of a player in the tech world than in its earlier days, due to the advances made in Silicon Valley and to how DARPA’s initial innovations were made available to so many companies, universities and organizations, the goal has always been to inspire broader growth and forward movement of global value in addition to protecting the U.S. That is what makes this agency such a unique player on the government stage. Its organizational makeup and work practices have prompted countless organizations to imitate it, because the amount of progress made within DARPA is unparalleled. It is a highly influential agency, as creative as it is regimented. Remaining fluid and nimble is key to its continued success, and as the world of the autonomous car grows even bigger, DARPA will keep in step and, frequently, lead the way.
