Press "Enter" to skip to content

Self-driving cars may soon see better, emulate human drivers

Leslie

Human drivers are typically multitaskers. Even as they drive, they are thinking about and responding to multiple phenomena at once: their own speed, safety considerations, and their own comfort and that of their passengers. Humans do this all the time without paying much attention, but it poses an extremely difficult computational challenge for autonomous vehicles.

Arizona State University researchers who took a crack at this issue have published their results in the IEEE/CAA Journal of Automatica Sinica.

“We set ourselves a challenge that is simple to state but hard to achieve with respect to trajectory planning: A passenger in a self-driving car has to feel as if it were driven by a human,” paper author and engineer Kayvan Majd of Arizona State University said in a press release.

What makes the new optimization method a leap forward, according to the researchers, is that it ticks all the boxes: stable trajectory tracking with minimal errors in position, velocity and acceleration, all while keeping computational overhead low.
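The article doesn't reproduce the team's formulation, but a minimal sketch of the kind of cost such a trajectory tracker minimizes, weighted squared errors against a reference in position, velocity and acceleration, might look like the following. The weights and the quadratic form are illustrative assumptions, not details from the paper.

```python
import numpy as np

def tracking_cost(candidate, reference, w_pos=1.0, w_vel=0.5, w_acc=0.1):
    """Weighted squared tracking error over a discretized trajectory.

    candidate, reference: dicts with 'pos', 'vel', 'acc' arrays of
    shape (T, 2) -- T timesteps in the x-y plane. All names and
    weights here are hypothetical stand-ins for the paper's cost.
    """
    e_pos = np.sum((candidate["pos"] - reference["pos"]) ** 2)
    e_vel = np.sum((candidate["vel"] - reference["vel"]) ** 2)
    e_acc = np.sum((candidate["acc"] - reference["acc"]) ** 2)
    # The acceleration term doubles as a crude comfort penalty:
    # keeping it small is part of what makes a ride feel human.
    return w_pos * e_pos + w_vel * e_vel + w_acc * e_acc
```

A planner would search over candidate trajectories, subject to the vehicle's dynamics, for the one that minimizes this cost; the balance between tracking accuracy and the comfort-like terms is where the "feels human" requirement enters.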

Autonomous driving like a human

The researchers now plan to account for additional, even more realistic variables such as tyre forces and side slip. This will allow the cars to operate more accurately at high speeds and under harsh road conditions.

Improving sight too

Researchers have also demonstrated that self-driving cars can learn simply by observing human operators complete the same task, aided, of course, by an improved vision system.

The researchers from Deakin University in Australia published their results in IEEE/CAA Journal of Automatica Sinica — a joint publication of the Institute of Electrical and Electronics Engineers (IEEE) and the Chinese Association of Automation.

The team implemented imitation learning, also called learning from demonstration. A human operator drives a vehicle outfitted with three cameras, observing the environment from the front and each side of the car.

The data is then processed through a neural network, which allows the vehicle to make decisions based on what it learned from watching the human make similar decisions.
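The article doesn't include the team's code, but a minimal behavior-cloning sketch under common assumptions, pairing each recorded camera frame with the human driver's steering command at that moment, could look like this. The class name, tensor shapes and the steering-angle label are all illustrative.

```python
import torch
from torch.utils.data import Dataset

class DemonstrationDataset(Dataset):
    """Pairs of (what a camera saw, what the human did).

    frames: tensor of shape (N, 3, H, W), images from the cameras.
    steering: tensor of shape (N,), the human's steering angle for
    each frame. Both are hypothetical stand-ins for the team's data.
    """

    def __init__(self, frames, steering):
        self.frames = frames
        self.steering = steering

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, i):
        # Each sample: the scene as seen, and the demonstrated action.
        return self.frames[i], self.steering[i]

# Toy usage: 100 random "frames" paired with random steering labels.
data = DemonstrationDataset(torch.randn(100, 3, 66, 200), torch.randn(100))
frame, angle = data[0]
```

Training a network to map frames to commands over such a dataset is what lets the model later make decisions based on what it learned from watching the human.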

“The expectation of this process is to generate a model solely from the images taken by the cameras,” paper author Saeid Nahavandi, Alfred Deakin Professor, pro vice-chancellor, chair of engineering and director for the Institute for Intelligent Systems Research and Innovation at Deakin University, said in a release. “The generated model is then expected to drive the car autonomously.”

Neural nets in play

The processing system is specifically a convolutional neural network, with an input layer, an output layer and any number of processing layers between them. The input layer translates visual information into dots, which are then continuously compared as more visual information comes in.
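As a concrete sketch of that description, a convolutional network with an input layer, stacked processing layers and an output layer might look like the following. The layer counts, sizes and single steering output are assumptions for illustration, not the team's configuration.

```python
import torch
import torch.nn as nn

class DrivingCNN(nn.Module):
    """Minimal illustrative CNN: camera frame in, driving command out."""

    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            # Processing layers: each convolution condenses the visual
            # information into a smaller grid of learned features
            # (the "dots" the article describes).
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            # Output layer: here, a single steering command.
            nn.Linear(48, 1),
        )

    def forward(self, x):
        return self.layers(x)

model = DrivingCNN()
command = model(torch.randn(1, 3, 66, 200))  # one 66x200 RGB frame
```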

By reducing the visual information this way, the network can quickly process changes in the environment: a shift in the dots ahead could indicate an obstacle in the road. Combined with the knowledge gained from observing the human operator, this tells the algorithm that a sudden obstacle in the road should trigger the vehicle to stop fully to avoid an accident.
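In code terms, the decision that paragraph describes, a full stop when the network reports a sudden obstacle, reduces to something like the sketch below. The obstacle-probability output and the threshold are illustrative assumptions, not the paper's mechanism.

```python
def decide(obstacle_probability, threshold=0.9):
    """Map a hypothetical obstacle score to a driving action."""
    if obstacle_probability > threshold:
        return {"throttle": 0.0, "brake": 1.0}  # full stop
    return {"throttle": 0.3, "brake": 0.0}      # continue cruising
```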

That said, the approach comes with trade-offs. On the one hand, imitation learning speeds up the training process and reduces the amount of training data required to produce a good model. On the other, convolutional neural networks require a significant amount of training data to find the optimal configuration of layers and filters, the components that help organize the data, before they yield a properly generated model capable of driving an autonomous vehicle.

The researchers plan to study more intelligent and efficient techniques, including genetic and evolutionary algorithms, to obtain the optimal set of parameters and better produce a self-learning, self-driving vehicle.
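The article doesn't detail these techniques, but a toy evolutionary search over two hypothetical hyperparameters gives the flavor of what the team is proposing. The fitness function below is a stand-in for the expensive step it would replace: actually training a network with each configuration and scoring how well it drives.

```python
import random

def fitness(cfg):
    # Stand-in: pretend the best network has 5 layers and 48 filters.
    # In practice this would train and evaluate a model.
    return -((cfg["layers"] - 5) ** 2 + (cfg["filters"] - 48) ** 2)

def mutate(cfg):
    # Nudge each hyperparameter by a small random step.
    return {
        "layers": max(1, cfg["layers"] + random.choice([-1, 0, 1])),
        "filters": max(8, cfg["filters"] + random.choice([-8, 0, 8])),
    }

# Start from a random population of configurations.
population = [{"layers": random.randint(1, 10),
               "filters": random.choice([16, 32, 64, 96])}
              for _ in range(20)]

for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                       # keep the fittest
    children = [mutate(random.choice(parents)) for _ in range(15)]
    population = parents + children                # next generation

print(max(population, key=fitness))
```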