- Google Duplex is an AI system that can hold conversations with human-like tone and perform real-world tasks
- At the heart of Duplex is a recurrent neural network built using TensorFlow Extended
- To make the AI sound more like a human, speech disfluencies (“um”, “hmm”, etc.) have been added
When you get a call from an automated machine, you can usually tell right away. A lot of marketing and customer service calls are routed through machines this way. You have probably had plenty of experience with these calls (think of phoning up your bank and taking ages to get through!).
But what if you couldn’t tell the difference between a human’s voice on the phone and a robot’s? We have seen a lot of improvements in recent years in natural language processing thanks to advancements in deep learning. But it can still be a frustrating experience when the voice on the other end of the line is unable to decipher what you’re trying to tell it. We have to adjust for the machine, instead of the machine adjusting for us.
Google Duplex is an AI system that bridges this gap. Announced yesterday at the Google I/O conference in a stunning demo, it can conduct natural conversations and perform practical, real-world tasks over the phone!
The brains behind Google Duplex unveiled this technology with two pre-recorded examples, each around a minute long. In the first example, the machine holds a conversation to set up an appointment at a hair salon. It is a truly mind-blowing back-and-forth conversation – you won’t be able to tell the difference between the human and the machine. In the second example, Google Duplex calls up a restaurant to reserve a table. It’s incredible technology, it really is.
How does this technology work?
At the heart of Google Duplex is a recurrent neural network (RNN), built using TensorFlow Extended. To make the voice behind Duplex sound human-like, the developers used a combination of a concatenative text-to-speech (TTS) engine and a synthesis TTS engine to vary the tone of the machine.
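To make the idea of combining two TTS engines concrete, here is a minimal sketch. The class and function names (`ConcatenativeTTS`, `SynthesisTTS`, `render_utterance`) are purely illustrative assumptions, not Duplex’s actual API: the intuition is that common phrases can be stitched from pre-recorded units, with a synthesis model as the fallback for arbitrary text.

```python
class ConcatenativeTTS:
    """Stitches together pre-recorded audio units for common phrases."""

    def __init__(self, recorded_phrases):
        self.recorded_phrases = set(recorded_phrases)

    def can_render(self, text):
        return text in self.recorded_phrases

    def speak(self, text):
        # Placeholder for concatenating stored audio clips.
        return f"[recorded] {text}"


class SynthesisTTS:
    """Generates speech for arbitrary text (think of a neural model)."""

    def speak(self, text):
        # Placeholder for generating audio from scratch.
        return f"[synthesized] {text}"


def render_utterance(text, concat_engine, synth_engine):
    # Prefer natural pre-recorded units when available; otherwise synthesize.
    if concat_engine.can_render(text):
        return concat_engine.speak(text)
    return synth_engine.speak(text)
```

A real system would of course operate on audio and phoneme units rather than strings, but the routing logic is the interesting part: each utterance is sent to whichever engine can produce it most naturally.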
Speech disfluencies (“um”, “hmm”, etc.) have been added to make the AI sound even more human-like. The machine also understands when to respond slowly and when to respond quickly, falling back on faster, lower-confidence approximation models when low latency is expected.
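The two tricks described above – inserting filler words and trading accuracy for speed – can be sketched in a few lines. This is a toy illustration under assumed names (`add_disfluency`, `choose_response_path`, the threshold values), not how Duplex is actually implemented:

```python
import random

DISFLUENCIES = ["um", "hmm", "uh"]


def add_disfluency(sentence, rng=None):
    """Occasionally prepend a filler word so speech sounds less robotic."""
    rng = rng or random.Random()
    if rng.random() < 0.5:
        return f"{rng.choice(DISFLUENCIES)}, {sentence}"
    return sentence


def choose_response_path(expected_latency_ms, confidence):
    """Pick a model based on how quickly the caller expects a reply.

    When an immediate reply is expected (e.g. right after "hello?") or the
    system is unsure, use a fast approximate model; otherwise take the time
    to run the full model. Thresholds here are made up for illustration.
    """
    if expected_latency_ms < 100 or confidence < 0.5:
        return "fast_approximate_model"
    return "full_model"
```

The key design point is that latency itself is treated as a signal: sounding human is not just about *what* is said, but about *when* and *how hesitantly* it is said.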
The developers used real-time supervised training to train the system in new domains. This is akin to a teacher instructing a student on a subject with various examples.
Google Duplex will be integrated into Google Assistant and rolled out to the public in July. Check out the video below to see the two examples mentioned above:
Our take on this
We have come such a long way in the field of NLP. The days of just analysing sentiment in tweets feel like ages ago. Audio processing combined with NLP is a truly powerful thing, and Google has tapped into that potential with all its might. The demo at the I/O conference floored the audience, and it has inspired us as well.
It’s both scary and inspiring how awesome deep learning married with real-life applications can be. What are your thoughts on this mind-blowing AI by Google? Use the comments section below to let us know your thoughts!
Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!