New Training Model Helps Autonomous Cars See AI’s Blind Spots

A new training model developed by MIT and Microsoft can help identify and correct an autonomous car’s AI when it makes potentially deadly mistakes.
John Loeffler

Since their introduction several years ago, autonomous vehicles have slowly been making their way onto the road in greater and greater numbers, but the public remains wary of them despite the undeniable safety advantages they offer.

Autonomous vehicle companies are fully aware of the public's skepticism. Every crash makes it harder to earn public trust, and the fear is that if companies do not manage the autonomous vehicle roll-out properly, the backlash could close the door on self-driving car technology the way the Three Mile Island accident in 1979 stalled the growth of nuclear power plants in the United States.

Making autonomous vehicles safer than they already are means identifying the cases that programmers might never have thought of, cases the AI will fail to respond to appropriately but that a human driver would intuitively recognize as potentially dangerous. New research from a joint effort by MIT and Microsoft may help to bridge this gap between machine learning and human intuition to produce the safest autonomous vehicles yet.

Reassuring a Wary Public

Were public hesitancy not a factor, every car on the road would be replaced with an autonomous vehicle within a couple of years. Every truck would be fully autonomous by now, and there would be no Uber or Lyft drivers, only shuttle cabs that you would order by phone and that would pull up smoothly to the curb a couple of minutes later without a driver in sight.

Accidents would still happen and people would still die as a result, but by some estimates, 90% of traffic fatalities around the world could be prevented with autonomous vehicles. Autonomous cars may need to recharge, but they don't need to sleep or take breaks, and they are single-mindedly focused on carrying out the instructions in their programming.

Self Driving Truck
Source: Daimler

For companies that rely on transportation to move goods and people from point A to point B, replacing drivers with self-driving vehicles saves on labor, insurance, and the other ancillary costs that come with a large human workforce.

The cost savings and the safety gains are simply too great to keep humans on the road and behind the wheel.

We fall asleep, we drive drunk, we get distracted, or we are simply bad at driving, and the consequences are both costly and deadly. A little over a million people die on the world's roads every year, and the move to autonomous commercial trucking alone could cut transportation costs for some companies in half.

Yet the public is not convinced, and it grows more skeptical with each report of an accident involving a self-driving car.

Edge Cases: The Achilles Heel of Self-Driving Cars?


Whether it is fair or not, the burden of demonstrating autonomous vehicle safety is on those advocating for self-driving vehicle technology. To do this, companies must work to identify and address the edge cases that can cause high-profile accidents and erode public confidence in otherwise safe technology.

What happens when a vehicle driving down the road spots a weather-beaten, bent, misshapen, and faded stop sign? The situation is obviously rare; a transportation department would likely have replaced such a sign long before it reached that state. But edge cases are exactly this kind of situation.

An edge case is a low-probability event that should not happen but does occur in the real world, exactly the kind of case that programmers and machine-learning processes might not account for.

Awful Stop Sign
Source: KNOW MALTA by Peter Grima / Flickr

In a real-world scenario, the autonomous vehicle might detect the sign but have no idea that it is a stop sign. Not treating it as one, it could decide to proceed through the intersection at speed and cause an accident.

A human driver may have a hard time identifying the stop sign too, but that is much less likely for experienced drivers. We know what a stop sign is, and if it is in anything other than complete ruin, we will know to stop at the intersection rather than proceed through it.

This kind of situation is exactly what researchers at MIT and Microsoft have come together to identify and solve, which could improve autonomous vehicle safety and, hopefully, reduce the kinds of accidents that might slow or prevent the adoption of autonomous vehicles on our roads.

Modeling at the Edge

In two papers, presented at last year's Autonomous Agents and Multiagent Systems conference and the upcoming Association for the Advancement of Artificial Intelligence conference, the researchers describe a new model for training autonomous systems such as self-driving cars that uses human input to identify and correct these “blind spots” in AI systems.

The researchers run the AI through simulated training exercises, much as traditional systems are trained, but here a human observes the machine's actions and flags when it is about to make, or has already made, a mistake.

The researchers then combine the machine's training data with the human observer's feedback and run it through a machine-learning system. That system produces a model researchers can use to identify situations where the AI is missing critical information about how it should behave, especially in edge cases.
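To make the idea concrete, the sketch below shows one way simulation rollouts and human feedback could be combined in practice. It is only an illustrative outline, not the researchers' implementation: the feature vectors, labels, and variable names are all hypothetical, and a simple logistic-regression classifier stands in for whatever learning system the papers actually use.

```python
# Illustrative sketch only: combine simulated state data with human
# acceptable/unacceptable feedback to flag candidate "blind spot" states.
# All data and names here are hypothetical; this is not the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in features describing each simulated state (in a real system
# these would come from the simulator or the car's perception stack).
n_states = 1000
states = rng.normal(size=(n_states, 4))

# Human observer's feedback on the AI's action in each state:
# 1 = acceptable, 0 = the human flagged the action as a mistake.
human_labels = (states[:, 0] + 0.3 * rng.normal(size=n_states) > -1.5).astype(int)

# Learn a model of "was the AI's behavior acceptable in this state?"
clf = LogisticRegression().fit(states, human_labels)

# States where the model is unsure, or predicts the behavior is unacceptable,
# are candidate blind spots that need more training data or human review.
p_acceptable = clf.predict_proba(states)[:, 1]
blind_spots = np.where(p_acceptable < 0.6)[0]
print(f"{len(blind_spots)} candidate blind-spot states flagged for review")
```

The point of such a model is not to drive the car itself but to tell engineers which regions of the state space the AI cannot yet be trusted in.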

Autonomous Sight
Source: Berkeley Deep Drive

“The model helps autonomous systems better know what they don’t know,” according to Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory at MIT and the lead author of the study.

“Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

The problem arises when a situation occurs, such as the distorted stop sign, in which most of the cases the AI has been trained on do not reflect the real-world condition it needs to recognize. The AI has been trained that stop signs have a certain shape, color, and so on. It may even have built a list of shapes that could be stop signs and would know to stop for those, but if it cannot identify a stop sign properly, the situation could end in disaster.

“[B]ecause unacceptable actions are far rarer than acceptable actions, the system will eventually learn to predict all situations as safe, which can be extremely dangerous,” says Ramakrishnan.
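Ramakrishnan's point about rarity is easy to demonstrate. In the hypothetical sketch below (again, synthetic data and a generic classifier, not the researchers' code), only about one percent of simulated states carry an "unacceptable" label, so a model that simply calls everything safe scores roughly 99 percent accuracy while catching none of the dangerous cases. Re-weighting the rare class is shown only as one common mitigation, not as the method the papers use.

```python
# Illustration of the class-imbalance problem described above.
# Synthetic data and generic classifiers; not the researchers' approach.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))
# Roughly 1% of states involve an unacceptable action (label 0).
y = (X[:, 0] > -2.33).astype(int)

# A model that always predicts "acceptable" looks ~99% accurate...
always_safe = DummyClassifier(strategy="most_frequent").fit(X, y)
print("always-safe accuracy:", round(always_safe.score(X, y), 3))

# ...but it never flags a single dangerous state. Weighting the rare class
# during training is one common mitigation (an assumption here, not
# necessarily what the papers do).
weighted = LogisticRegression(class_weight="balanced").fit(X, y)
caught = int((weighted.predict(X[y == 0]) == 0).sum())
print(f"dangerous states caught: {caught} of {int((y == 0).sum())}")
```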

Meeting the Highest Standards for Safety

By showing researchers where an AI's data is incomplete, the model can make autonomous systems safer at the edges, where high-profile accidents occur. If it succeeds, we may reach the point where public trust in autonomous systems starts to grow and the rollout of autonomous vehicles begins in earnest, making us all safer as a result.
