Andrew Ng-Backed Drive.ai Launches its First Self-Driving Car
- Andrew Ng-backed Drive.ai has announced its first driverless car service in Texas
- It will initially be tested over a six-month period, with a human driver on board to take over in case of emergency
- The technology is powered by deep learning. The car can also understand hand gestures, so it does not rely solely on object detection to decide when to move
The concept of driverless cars has been floating around for a number of years, but it has remained a distant and elusive dream for the many organizations trying to turn it into reality. Uber's recent tragic accident only hindered efforts to turn self-driving prototypes into real-life deployments.
But Andrew Ng, the super professor and pioneer of so much in the machine learning, deep learning and AI community, announced yesterday that Drive.ai would be launching its first self-driving car in Texas, USA. If you have been following Andrew Ng's work in recent years, you'll know how close the concept of autonomous vehicles is to him.
Drive.ai was founded in 2015 by Andrew Ng's graduate students from Stanford's AI Lab. Drive.ai leverages deep learning to create self-driving systems that are adaptable and scalable.
Initially, this car will not be available throughout the city. Instead, it will run in a six-month testing phase – driving people on public roads between areas that are too far apart to walk, but where the relatively short distance makes a human driver wasteful. People who want to use this service can schedule their rides through Drive.ai's phone application.
With the recent scrutiny over self-driving cars and accidents, this testing phase will include a human driver ready to take over in case anything goes wrong. The ultimate goal, and the plan for the next year, is of course to make the service available throughout the city to everyone on every route. This will mean phasing out the human driver entirely.
So how does this AI work?
- A full software stack was developed in-house for self-driving, including perception, motion planning, mapping, localization, fleet management software, a mobile app, and communications (using "tele-choice", their remote assistance system). Owning the whole stack lets the team resolve any dependencies between systems
- Computer vision is not yet advanced enough to reliably understand and differentiate between various hand gestures (say, from the traffic police). To address this, the developers built a system that can interpret realistic road scenarios, such as the hand gestures of a construction worker waving a car through. According to the company, no other self-driving team has built such a realistic system until now
- Another noteworthy feature is that Drive.ai uses exterior panels to communicate with pedestrians, letting them know when it is safe to cross
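To make the modular-stack idea above concrete, here is a minimal, purely illustrative sketch of how a perception module might feed a motion planner, with a fallback to the human safety driver when confidence is low. None of these class or function names come from Drive.ai; they are assumptions for the sake of the example.

```python
# Hypothetical sketch of a modular self-driving pipeline:
# perception -> motion planning, with a human-takeover fallback.
# All names here are illustrative assumptions, not Drive.ai's API.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str        # e.g. "pedestrian_crossing", "hand_gesture_proceed"
    confidence: float


@dataclass
class VehicleCommand:
    action: str       # "proceed", "stop", or "handover_to_human"


def perceive(sensor_frame: dict) -> List[Detection]:
    """Stand-in for the perception module: turns raw sensor data
    into labeled detections (objects, gestures, signals)."""
    return [Detection(d["label"], d["conf"]) for d in sensor_frame["detections"]]


def plan(detections: List[Detection], min_conf: float = 0.8) -> VehicleCommand:
    """Stand-in for motion planning: pick an action, and hand over
    to the human safety driver when confidence is too low -- mirroring
    the human-takeover design of the testing phase."""
    for d in detections:
        if d.confidence < min_conf:
            return VehicleCommand("handover_to_human")
        if d.label == "pedestrian_crossing":
            return VehicleCommand("stop")
        if d.label == "hand_gesture_proceed":
            return VehicleCommand("proceed")
    return VehicleCommand("proceed")


frame = {"detections": [{"label": "hand_gesture_proceed", "conf": 0.95}]}
print(plan(perceive(frame)).action)  # proceed
```

In a real stack each stage would be a separate, heavily tested service; the point here is only the separation of concerns the article describes.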
The company has released the video below to promote the launch:
Our take on this
The negative media coverage following Uber's incident has soured the mood around driverless cars recently. But when it comes to Andrew Ng, there's a certain degree of confidence that this launch will go smoothly. He has been championing the case for autonomous vehicles for years, so it's great to see his dream turn into a practical use case.
As a data scientist, this is the kind of technology you want to be working on! You can check out Baidu's open-source self-driving dataset (the largest of its kind) and work with it to understand how this kind of AI works.
Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!