- Researchers have developed an AI agent that dreams up scenarios and learns from them by itself (unsupervised learning)
- The structure of the model is divided into three units: vision, memory (RNN model) and controller
- On 100 randomly selected tracks, the model’s average score was almost three times higher than that of DeepMind’s initial Deep Q-Learning algorithm!
A tennis player on the receiving end of a booming 150 km/hr serve has milliseconds to decide which way the ball is coming, how high it will bounce, and how to swing the racket to send the ball where they want. The player predicts all these things subconsciously, based on the images the brain generates.
We tend to create a mental image of the world around us based on events perceived by our limited senses. The decisions we make and the actions we take are built around these mental “models”. We take in a VAST amount of information every single day; we observe something and remember only an abstract version of it. Think about this for a minute – it is true for all of us.
Two researchers, David Ha and Jürgen Schmidhuber, have developed an AI model that not only plays video games with impressive accuracy, but can also conjure up new scenarios (or dreams), learn from them, and then apply that learning to the game itself. The model can be trained in an unsupervised manner to learn a “spatial and temporal representation of the environment”. It was trained to play a car racing game and VizDoom.
How does the algorithm work?
The researchers tackled Reinforcement Learning tasks by dividing the agent into two parts: a large world model and a small controller model. First, they trained the large world model so it could learn a model of the agent’s environment in an unsupervised manner, and then trained the smaller controller model to perform tasks using the representations the world model had built. Below is the structure of the final model:
- Vision model: The system receives a high-dimensional input observation, usually a 2D image frame that is part of a video sequence. The team uses a Variational Autoencoder (VAE) as the vision model; it compresses each frame into a small, representative latent code.
- Memory model (RNN model): Serves as the predictive model for the future codes the vision model is expected to produce. It stores previous experiences, which helps the system generate new gameplay that the AI has not seen before, and it makes predictions about future codes based on past information.
- Controller model: A simple single-layer linear model that decides how to play the game or tackle a new scenario. To make decisions, it uses the representations created by the vision and memory components.
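The three components above can be sketched as a single step of play. This is a minimal, illustrative sketch only: the dimensions, weights, and stand-in functions below are assumptions for demonstration, not the paper’s actual VAE or RNN.

```python
import numpy as np

# Illustrative dimensions (assumed, not the paper's exact values)
Z_DIM, H_DIM, ACTION_DIM = 32, 256, 3
rng = np.random.default_rng(0)

def vision_encode(frame):
    """Stand-in for the VAE encoder: compress a 2D frame into a small latent code z."""
    flat = frame.reshape(-1)
    W = rng.standard_normal((Z_DIM, flat.size)) * 0.01  # hypothetical weights
    return np.tanh(W @ flat)

def memory_step(z, h):
    """Stand-in for the RNN: update hidden state h, which summarizes past codes."""
    Wz = rng.standard_normal((H_DIM, Z_DIM)) * 0.01
    Wh = rng.standard_normal((H_DIM, H_DIM)) * 0.01
    return np.tanh(Wz @ z + Wh @ h)

def controller_act(z, h, Wc, bc):
    """The controller really is this simple: one linear layer over [z, h]."""
    return np.tanh(Wc @ np.concatenate([z, h]) + bc)

# One step of play on a dummy 64x64 frame
frame = rng.standard_normal((64, 64))
h = np.zeros(H_DIM)
Wc = rng.standard_normal((ACTION_DIM, Z_DIM + H_DIM)) * 0.01
bc = np.zeros(ACTION_DIM)

z = vision_encode(frame)        # vision: frame -> code
h = memory_step(z, h)           # memory: code -> updated hidden state
action = controller_act(z, h, Wc, bc)  # controller: [code, state] -> action
```

The key design choice is that only the tiny controller (a single weight matrix and bias) is trained on the task itself; all the heavy lifting of understanding frames and predicting the future lives in the world model, which was trained without any reward signal.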
In the racing game, on 100 randomly selected tracks, the model’s average score was almost three times higher than that of DeepMind’s initial Deep Q-Learning algorithm!
This concept is wonderfully explained in the video below:
Our take on this
A truly jaw-dropping application of AI. We are getting closer and closer to machines imitating humans. It’s quite similar to DeepMind’s AlphaZero (which has basically become the benchmark for similar algorithms), but this AI does its training in an unsupervised manner. It is ideal for games where the rules are complex and not straightforward.
Most deep learning models need gigantic datasets to be trained on, and an algorithm like this really brings the data requirement down, consequently saving a ton of money for researchers and organizations. And since it learns by itself, it takes a lot of the human effort out of the equation as well.
I highly recommend checking out all the resources we have provided in this article.
Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!