Autonomous cars are still susceptible to taking wrong turns, typically because AI training cannot account for every condition. MIT and Microsoft may have a solution: they have created a model that can capture what MIT calls these virtual “blind spots.”

In the method, the artificial intelligence (AI) compares a human's behavior in a given situation with its own and adjusts its actions based on how closely the two match. If a self-driving car does not know what to do when an ambulance is rushing down the road, it could learn by observing a human driver pulling over to the side.
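The comparison step described above can be sketched roughly as follows. This is a minimal illustration, not MIT and Microsoft's actual implementation; all names here (the policy function, the record format) are hypothetical.

```python
# Hypothetical sketch: replay situations, compare the AI's planned action
# with what a human actually did, and record any mismatch as a candidate
# "blind spot" for further training.

def find_candidate_blind_spots(situations, ai_policy, human_actions):
    """Return situations where the AI's choice differs from the human's."""
    candidates = []
    for state, human_action in zip(situations, human_actions):
        ai_action = ai_policy(state)
        if ai_action != human_action:
            candidates.append((state, ai_action, human_action))
    return candidates

# Toy usage: a policy that always keeps driving, versus a human who
# pulls over when an ambulance appears.
def toy_policy(state):
    return "drive"

situations = ["clear road", "ambulance behind"]
human_actions = ["drive", "pull over"]

print(find_candidate_blind_spots(situations, toy_policy, human_actions))
# [('ambulance behind', 'drive', 'pull over')]
```

In practice the mismatches would feed back into training rather than just being listed, but the core idea is the same: disagreement with a human marks a spot the AI may not understand.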

The model also works in real time: if the AI does something wrong, a human driver can take over, signaling that something is not right.

The researchers also have a way to keep the self-driving car from becoming overconfident and labeling every instance of a given response as safe. A machine learning model not only flags acceptable and unacceptable responses but uses probability calculations to spot patterns and determine whether a behavior is truly safe or still leaves room for problems. Even if a response is correct 90 percent of the time, the model may still identify a blind spot that needs attention.
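The "90 percent is not good enough" idea can be illustrated with a toy sketch: instead of a majority vote over human feedback, estimate the error rate per situation and flag anything non-negligible. The threshold, labels, and function names below are illustrative assumptions, not values from the research.

```python
# Hedged sketch: aggregate noisy human feedback labels per situation and
# flag the situation as a potential blind spot whenever the estimated
# error rate exceeds a small threshold, rather than trusting a majority.

def flag_blind_spots(feedback, threshold=0.05):
    """feedback maps a situation to a list of 'ok'/'error' labels.
    Returns situations whose estimated error rate exceeds threshold."""
    flagged = {}
    for situation, labels in feedback.items():
        error_rate = labels.count("error") / len(labels)
        if error_rate > threshold:
            flagged[situation] = error_rate
    return flagged

feedback = {
    "merge at speed": ["ok"] * 9 + ["error"],  # right 90% of the time
    "clear highway": ["ok"] * 10,
}
print(flag_blind_spots(feedback))
# {'merge at speed': 0.1}
```

A simple majority vote would call "merge at speed" safe; the probabilistic view keeps it flagged, which mirrors the overconfidence safeguard the researchers describe.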

This tech is not ready for practical use yet. The researchers have tested their model only in video games, which offer near-ideal conditions and limited variables. MIT and Microsoft still need to test it with real vehicles. If it works, it could help make autonomous cars practical. Today's cars still struggle with simple conditions like snow, not to mention fast-moving traffic where a wrong turn can cause a crash. This approach could help them handle tough situations without carefully crafted custom solutions or putting passengers' lives at risk.