How to eliminate gaps in an autonomous car's artificial intelligence

Autonomous vehicles are becoming increasingly sophisticated, but the safety of such systems remains a major problem. Creating an autonomous system capable of driving safely in laboratory conditions is one thing; being confident that such a system can move through the real world is another.

It is this problem that scientists at the Massachusetts Institute of Technology (MIT) are now working on, studying the differences between what autonomous systems are taught and the problems that arise in reality. They created a model of situations in which what autonomous systems are taught does not coincide with the events taking place on the road. One example the scientists cite is telling the difference between a large white car and an ambulance. If an autonomous car has not been trained and is not equipped with sensors that help distinguish the two vehicles, it may fail to give way to an ambulance approaching on the road. The scientists call these scenarios “gaps” in learning.
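To make the ambulance example concrete, here is a minimal Python sketch of such a gap: two real-world situations that the car's limited sensor features cannot distinguish, so a policy trained without ambulances treats them identically. All names, features, and actions below are illustrative assumptions, not part of the MIT model.

# Hypothetical sketch of a learning "gap": an ambulance and an ordinary large
# white van collapse to the same feature vector, so the trained policy cannot
# tell that one of them requires giving way.

def extract_features(vehicle):
    # Assumed feature extractor: sees only size and colour, not sirens or lights.
    return (vehicle["size"], vehicle["color"])

def trained_policy(features):
    # Assumed policy learned in simulation, where every large white vehicle
    # was ordinary traffic, so the learned response is simply to keep the lane.
    return "keep_lane" if features == ("large", "white") else "slow_down"

ambulance = {"size": "large", "color": "white", "siren_on": True}
white_van = {"size": "large", "color": "white", "siren_on": False}

# Both vehicles look identical to the policy -- this mismatch is the gap.
print(trained_policy(extract_features(ambulance)))  # keep_lane (should yield)
print(trained_policy(extract_features(white_van)))  # keep_lane (correct)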

To identify these gaps, the researchers used a person who monitors the artificial intelligence (AI) system during training and comments on the mistakes the system makes. The person's feedback is then compared with the training data to identify situations in which the AI needs more information in order to make the right, safe choice.
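A rough Python sketch of this comparison step might look as follows: human comments on the AI's mistakes are aggregated per situation, and any situation where the human disagrees with the learned behaviour often enough is flagged as needing more information. The situation keys, feedback format, and threshold are assumptions made for illustration only.

from collections import defaultdict

# (situation as seen by the AI, did the human flag the AI's action as wrong?)
feedback = [
    (("large", "white"), True),
    (("large", "white"), True),
    (("large", "white"), False),
    (("small", "red"), False),
]

# situation -> [times flagged as a mistake, total observations]
counts = defaultdict(lambda: [0, 0])
for situation, flagged_as_error in feedback:
    counts[situation][1] += 1
    if flagged_as_error:
        counts[situation][0] += 1

GAP_THRESHOLD = 0.5  # assumed cut-off for "needs more information"
gaps = {s: err / total for s, (err, total) in counts.items()
        if err / total >= GAP_THRESHOLD}
print(gaps)  # {('large', 'white'): 0.67} -> candidate learning gap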

“This approach can help autonomous systems learn what they don’t know,” says Ramya, a graduate student at the Computer Science and Artificial Intelligence Laboratory. “Many times, when such systems are deployed, the models they have learned do not match real-world situations, and they can make mistakes, for example, cause an accident. The idea is to use people to bridge simulation and reality, so that we can reduce some of these errors.”

This could work in the real world with a person sitting in the driver’s seat of an autonomous car. As long as the AI drives the car correctly, the person does nothing. But if the system makes a mistake, the person can take the wheel, transmitting a corresponding signal to the system. This teaches the AI how to behave in situations where there is a conflict between what the machine would do and what the person actually does.
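A minimal sketch of that human-in-the-loop driving loop, in Python, could look like the following. The environment, policy, and human-intervention interfaces are assumptions introduced here for illustration; the article does not specify how the correction signal is implemented.

# Hypothetical sketch: the AI drives until the human takes the wheel, and each
# takeover is recorded as a correction for the situation where the learned
# action and the human action conflicted, so the model can be retrained on it.

def drive_with_oversight(env, policy, human, corrections):
    state = env.reset()
    done = False
    while not done:
        proposed = policy(state)           # what the AI would do
        override = human.intervene(state)  # None while the human stays passive
        if override is not None:
            # Conflict between the learned action and the human's action:
            # store it as a labelled example of a situation the AI gets wrong.
            corrections.append((state, proposed, override))
            action = override
        else:
            action = proposed
        state, done = env.step(action)
    return corrections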

The system is currently being tested in a video-game format. The next step will be to deploy it on real-world roads and test it in real cars.