An Autonomous Car Learned How to Drive Itself in 20 Minutes Using Reinforcement Learning

Pranav Dar 09 Jul, 2018 • 3 min read

Overview

  • A UK company, Wayve, has built the first autonomous car that learns to drive using reinforcement learning
  • This approach helped them teach the car how to drive in just 15-20 minutes!
  • The system is powered by a deep neural network that has 4 convolutional layers and 3 fully connected layers
  • When the car veers off track, a safety driver guides it back; the car is “rewarded” for driving farther before such an intervention is needed

 

Introduction

Self-driving cars are understandably the most attention-grabbing application of artificial intelligence. Until recently, we had only seen prototypes of these vehicles in showrooms or in sci-fi movies, with everything else left to our imagination. But with advances in technology, hardware and machine learning, this wonderful concept has taken on a life of its own.

The autonomous vehicles we have seen so far have relied on enormous amounts of training data, hand-crafted rules, hours upon hours of learning from that data, and quite a lot of hardware. Now a UK company, Wayve, has built the first autonomous car that learns to drive using reinforcement learning. Their approach enables the car to learn how to drive in just 15-20 minutes!

So how did they do this? That’s where it gets a bit more complicated. The researchers used a popular reinforcement learning algorithm called Deep Deterministic Policy Gradients (DDPG) to solve the task of following the lane ahead of the car. As mentioned in Wayve’s blog post, the policy is a deep neural network with 4 convolutional layers and 3 fully connected layers, totalling just under 10,000 parameters. In contrast, state-of-the-art image classification networks have millions of parameters.
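Wayve hasn’t published the exact layer sizes, but a network of that shape is small enough to sketch. The PyTorch snippet below is a minimal, illustrative actor with 4 convolutional and 3 fully connected layers; the channel counts, the 64x64 input resolution, and the two-dimensional action (steering and speed) are assumptions made for illustration, not Wayve’s published configuration.

```python
import torch
import torch.nn as nn

class LaneFollowingActor(nn.Module):
    """Compact actor network in the spirit of the one Wayve describes:
    4 convolutional layers, 3 fully connected layers, only a few
    thousand parameters. All sizes here are illustrative assumptions."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # With a 64x64 input image, four stride-2 convs leave an 8x4x4 feature map.
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8 * 4 * 4, 32),
            nn.ReLU(),
            nn.Linear(32, 16),
            nn.ReLU(),
            nn.Linear(16, 2),   # e.g. steering angle and target speed
            nn.Tanh(),          # bound the actions to [-1, 1]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(image))

actor = LaneFollowingActor()
print(sum(p.numel() for p in actor.parameters()))  # roughly 7k parameters with these sizes
action = actor(torch.zeros(1, 3, 64, 64))          # a single-camera frame
```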

Only a single camera was used to capture the car’s surroundings and follow the road, and all of the processing was done on board, on a single GPU!

Of course, the trial-and-error process of training the car’s system did not start on the road (too much of a safety hazard in public), but in Wayve’s workspace. The graph below shows the distance the car traveled autonomously against the number of training episodes. Before the car went on the road at all, though, a lot of testing was done in simulated environments to understand the task and fine-tune the hyperparameters of the reinforcement learning algorithm.
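Wayve hasn’t said exactly which hyperparameters were searched in simulation, but a sweep over the usual DDPG knobs is easy to sketch. The snippet below is purely illustrative: the parameter names, value ranges, and the stand-in evaluation function are assumptions, not Wayve’s actual setup.

```python
import itertools

# Hypothetical hyperparameter grid for the simulation stage.
grid = {
    "actor_lr": [1e-4, 3e-4],
    "critic_lr": [1e-3, 3e-3],
    "exploration_noise_std": [0.1, 0.3],
    "discount": [0.9, 0.99],
}

def evaluate_in_simulation(config, episodes=50):
    """Placeholder for a simulated lane-following run.

    In a real setup this would train a DDPG agent in the simulator with
    `config` and return the mean autonomous distance per episode."""
    # Stand-in score so the sketch runs end to end.
    return sum(config.values())

best_config, best_score = None, float("-inf")
for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    score = evaluate_in_simulation(config)
    if score > best_score:
        best_config, best_score = config, score

print("Best simulated config:", best_config)
```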

As you can see in the video below, whenever the car veered off track, the safety driver steered it back onto the road. The car’s “reward” (a reinforcement learning term) was tied to how far it drove before the driver had to step in, so every takeover told it where it had gone wrong. You can read about the algorithm in more detail in this research paper.
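Wayve’s code isn’t public, but the episodic loop described above (drive, let the safety driver intervene, reset, update the policy with DDPG) is easy to sketch. The snippet below is a minimal, self-contained illustration with a placeholder environment and agent; the distances, intervention probability, episode count and batch size are made up for the example, not Wayve’s actual system.

```python
import random
from collections import deque

class FakeLaneEnv:
    """Stand-in for the real car (or simulator) interface."""
    def reset(self):
        self.steps = 0
        return [0.0]  # placeholder camera observation

    def step(self, action):
        self.steps += 1
        distance = 0.5                      # metres covered this step
        driver_took_over = random.random() < 0.05 or self.steps >= 200
        return [0.0], distance, driver_took_over

def policy(obs):
    # Placeholder for the actor network plus exploration noise.
    return random.uniform(-1.0, 1.0)

def update_agent(batch):
    # Placeholder for one DDPG actor/critic gradient step.
    pass

replay_buffer = deque(maxlen=100_000)
env = FakeLaneEnv()

for episode in range(20):                   # a handful of short on-road episodes
    obs, done, travelled = env.reset(), False, 0.0
    while not done:
        action = policy(obs)
        next_obs, reward, done = env.step(action)   # done == safety driver intervened
        replay_buffer.append((obs, action, reward, next_obs, done))
        travelled += reward
        obs = next_obs
    if len(replay_buffer) >= 64:
        update_agent(random.sample(list(replay_buffer), 64))
    print(f"episode {episode}: drove {travelled:.1f} m before intervention")
```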

 

Our take on this

I was just talking to my colleague last week about how almost all reinforcement learning examples and studies are done in gaming environments (AlphaGo, Dota 2, etc.). So this is a welcome change in that regard and helps expand the scope of where and how RL can be applied in real-life scenarios.

The applications are HUGE – if the initial pilot phase of this technology goes well, one can imagine it being used for ferrying people around the city (perhaps as a fleet of taxis?). Given how quickly the reinforcement learning algorithm learns (95% accuracy in under 20 trials, compared to DeepMind’s Atari algorithm which took millions of attempts), it could teach itself to be 99% accurate within a week!

 

Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!

 

