DataHack Radio Episode #8: How Self-Driving Cars Work with Drive.ai’s Brody Huval

Pranav Dar 13 Jun, 2019 • 5 min read

Introduction

Self-driving cars are expected to rule the streets in the next few years. In fact, countries like the USA, China, and Japan have already started deploying them in real-world situations! One of the leaders in this space is the Andrew Ng-backed Drive.ai, a self-driving car startup based in California.

So how do these autonomous cars work? How difficult is it to build one from scratch? What kind of machine learning techniques are used? In this podcast, Drive.ai’s co-founder Brody Huval sheds light on these questions, put forward by Kunal, along with other really intriguing facets of autonomous vehicles. It’s a podcast you won’t want to miss!

Check out all the key takeaways from this really cool podcast below. You can also check out the research paper Brody co-authored on highway driving and deep learning (explained in the podcast) here. Happy listening!

You can subscribe to DataHack Radio and listen to this, and all previous episodes, on any of the below platforms:

 

Brody’s Background

Brody completed his undergraduate degree in mechanical engineering at the University of Florida in 2010. Back then, AI had just started to become popular and enter the mainstream industry space. He became interested in the field and started applying to computer science programs in the United States.

His application was accepted by Stanford in 2011 for their mechanical engineering program, but Brody almost immediately switched to the computer science stream to work with Andrew Ng. His first couple of projects there were in deep learning and natural language processing (NLP). In total, he spent 4 years at Stanford doing various projects and research in machine learning and deep learning. After that, in 2015, he co-founded Drive.ai along with fellow students from Andrew Ng’s Stanford AI lab.

 

Interest in Self-Driving Cars and Founding Drive.ai

Brody’s interest in self-driving cars dates back to 2012. He and his fellow researchers worked on a project where they tried to replicate Google’s work in large-scale deep learning: Google had used 16,000 CPU cores to train large neural networks on unlabeled YouTube videos. Brody’s group decided to use 12 GPUs instead to replicate the power of those 16,000 CPUs.

At the end of this project, the team was left with a cluster of approximately 64 machines (each with 4 GPUs). Brody and his team knew at this point that they wanted to work with tons of data, and eventually they landed on the idea of self-driving cars. They would use a camera-only approach, along with other hardware equipment to automatically annotate the data. In short, they were looking to solve a meaningful problem with tons of data, and that’s how Drive.ai was born.

 

The Challenges of Annotating Data

Unsurprisingly, it took a significant amount of time to label the data! For lanes, they had a special GPS unit with which they could map the path the car was driving. Based on this route, the team could work out how far the lanes were from the car.
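The podcast doesn’t spell out the details, but the core geometric step in this kind of GPS-based annotation is measuring how far a point (say, a lane marking) lies from the driven path. A toy sketch (the coordinates and function names below are illustrative, not from Drive.ai’s pipeline):

```python
import math

def lateral_offset(point, path):
    """Shortest distance from `point` to a polyline `path`
    (the GPS trace of where the car drove), in the same units."""
    def seg_dist(p, a, b):
        # Project p onto segment a-b, clamping to the segment ends.
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        seg_len_sq = dx * dx + dy * dy
        if seg_len_sq == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
        cx, cy = ax + t * dx, ay + t * dy
        return math.hypot(px - cx, py - cy)
    # Distance to the closest segment of the path.
    return min(seg_dist(point, a, b) for a, b in zip(path, path[1:]))

# Car drove straight along y=0; a lane-marking point sits 1.8 m to the side.
path = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
print(lateral_offset((5.0, 1.8), path))  # 1.8
```

In practice this would run over every frame of a drive, turning one GPS trace into lane labels for thousands of images — which is exactly why this approach is so much cheaper than hand-labeling.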

For dynamic obstacles like pedestrians, Brody set up a manual annotation pipeline, and that’s when he ran into the difficulties of labeling data by hand. The process involved a lot of logistics and other overhead, something only folks who have set up an ML project from scratch will truly understand.

The initial testing produced mixed results. Because they had collected a ton of highway driving data, the testing on highways went pretty well. But in urban areas, the system did not perform as well because of a lack of proper data. One of the biggest weak points in their testing was the huge number of false positives when detecting the side of the road.

 

Camera-Based Systems vs. LIDAR

After trying out various camera-based approaches, the Drive.ai team started exploring LIDAR- and RADAR-based solutions. They tested different sensors with the aim of getting much better precision and recall for their system.
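As a quick refresher on the metrics being optimized here, precision and recall can be computed directly from detection counts (a generic sketch, not Drive.ai’s actual evaluation code):

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Compute precision and recall from raw detection counts.

    precision: of all detections the system made, how many were real objects.
    recall: of all real objects present, how many the system detected.
    """
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Example: a detector that flags 90 real obstacles, 30 phantom ones
# (e.g. falsely "seeing" the side of the road), and misses 10 real ones.
p, r = precision_recall(true_positives=90, false_positives=30, false_negatives=10)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.90
```

The false-positive problem mentioned in the previous section is a precision problem: every phantom detection drags precision down, even when recall looks healthy.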

Brody mentioned that he believes camera-based approaches will definitely get better in urban areas with time. You need a lot of data to capture all the nuances of the images the cameras collect, so it ultimately comes down to how much computational power you have.

This section was a really insightful explanation of LIDAR, RADAR and camera sensors. If you’re interested in self-driving cars, make sure you listen to this part especially carefully.

 

Other Aspects of Drive.ai’s Self-Driving Cars

Right now, Drive.ai has 7 fully functional self-driving cars on the streets of Texas, which can serve up to 10,000 people. They have been set up to tackle the ‘micro-transit problem’: distances that are too far to walk but too short to drive. This is especially useful in Texas, where it can get blisteringly hot during the summer months.

Some of the challenges Brody and his team currently face are with perception and performance. Perception here refers to understanding where objects are placed, and how the system maneuvers around these dynamic objects given the uncertainty in its predictions.


One way of dealing with constantly changing scenarios (like re-painted roads or construction zones) is to stay in touch with the local government and learn well in advance what changes are expected. Another option Drive.ai has explored is tele-operators, who connect over the network to guide the car along different routes.

Rain is also a major problem for LIDAR units, while cameras are not at their best at night. These are acknowledged weaknesses that AI has not yet fully solved. Currently, Drive.ai’s cars operate during daylight, and if there’s inclement weather, the company pauses the service until it clears up.

Simulators are also becoming a major part of any self-driving car setup. They help the team understand where certain things are going awry, how long a certain scenario takes to run, and so on. These are all machine learning problems in their own right.

 

Techniques used in Self-Driving Cars

There are certain aspects where classic machine learning or deep learning techniques fit best. Deep learning algorithms, for example, work really well on the perception side of these cars (like geospatial data). The motion planning system, on the other hand, is a combination of classical techniques and learned components.

Brody made a great point about how reinforcement learning, as powerful as it is, hasn’t advanced as far as deep learning. It’s getting better, and the Drive.ai team does experiment with it, but using it in a real-world deployment isn’t feasible right now.

 

The Different Components of a Self-Driving Car Team

Below are the general teams Brody listed that go into a full-fledged self-driving car project:

  • Perception team
  • Prediction team
  • Motion planning team
  • Calibration team
  • Annotation team (fastest growing and critical component of a project)
  • Mapping and localization team
  • Devops and infrastructure team
  • Vehicle hardware components team
  • Tele-operations team

 

End Notes

An absolute goldmine for all self-driving car enthusiasts! There are a ton of details in the podcast about these cars – how they’re deployed, how they’re made, the technical machine learning aspects that go into designing them, the major obstacles every startup in this space faces, and Brody’s take on how to overcome them. I personally found the camera sensors vs. LIDAR section particularly insightful.


Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.
