DeepMind’s Recurrent Neural Network Explores the Role of Dopamine in Machine Learning

Pranav Dar • 18 May 2018 • 2 min read

Overview

  • DeepMind’s latest study aims to help machines replicate the role of dopamine in our brain
  • The researchers performed six meta-learning experiments to develop their algorithm
  • At the core of the model is a recurrent neural network (RNN) which uses standard reinforcement learning techniques


Introduction

Machines have already started outperforming humans at certain tasks, like classifying images, reading lips, forecasting sales, and curating content. But there is a caveat attached – they require enormous amounts of data to learn and to train a model. Even some of the best systems, like DeepMind’s AlphaGo, need huge datasets and hundreds of hours of play to understand the rules of a game and master it. Humans can usually do this in a single sitting.

DeepMind’s latest research aims to figure out how machines can learn something in just a few hours, replicating human behavior. The researchers behind this study believe it might have something to do with dopamine, the brain’s pleasure signal. Dopamine has long been associated with the reward prediction error signal used in AI reinforcement learning algorithms – systems that learn to act by trial and error, guided by reward.
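The reward prediction error the article refers to is usually formalized as a temporal-difference (TD) error. The snippet below is a minimal illustrative sketch – the function name and example values are my own, not taken from the paper:

```python
def reward_prediction_error(reward, value_next, value_current, gamma=0.9):
    """Temporal-difference (TD) error: the 'surprise' signal that
    dopamine is thought to encode. Positive when an outcome is better
    than predicted, negative when it is worse, zero when fully expected."""
    return reward + gamma * value_next - value_current

# An unexpected reward produces a positive error signal...
print(reward_prediction_error(reward=1.0, value_next=0.0, value_current=0.5))  # 0.5
# ...while a fully predicted reward produces no error at all.
print(reward_prediction_error(reward=1.0, value_next=0.0, value_current=1.0))  # 0.0
```

In reinforcement learning, this error is what drives updates to the value estimates; the parallel to dopamine is that neurons appear to fire in proportion to the same kind of surprise.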

The researchers propose that dopamine’s role includes helping us learn efficiently and rapidly, while allowing flexibility when the context changes. They tested this theory by recreating six meta-learning experiments from the field of neuroscience, each requiring the model to perform tasks that draw on the same underlying skills. They did this by training a recurrent neural network (RNN) using standard reinforcement learning methods. Finally, the team compared their results to real-world data collected from the original neuroscience experiments.
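A key trick in this style of meta-reinforcement learning is to feed the RNN its previous action and reward at every timestep, so its hidden state can accumulate task knowledge within a single episode, without any weight updates. The sketch below illustrates only that input construction; the function name and shapes are my own assumptions, not DeepMind’s code:

```python
import numpy as np

def rnn_step_input(observation, prev_action, prev_reward, n_actions):
    """Build the input the recurrent policy sees at each timestep.
    Concatenating the previous action (one-hot) and previous reward
    onto the observation lets the RNN infer the current task's reward
    statistics on the fly - 'learning to learn' inside an episode."""
    one_hot = np.zeros(n_actions)
    one_hot[prev_action] = 1.0
    return np.concatenate([observation, one_hot, [prev_reward]])

x = rnn_step_input(observation=np.array([0.3, 0.7]),
                   prev_action=1, prev_reward=1.0, n_actions=2)
print(x.tolist())  # [0.3, 0.7, 0.0, 1.0, 1.0]
```

The RNN itself is then trained with ordinary reinforcement learning across many tasks, while the fast within-episode adaptation emerges in its recurrent activity.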

So what did the team find? When they ran a task known as the ‘Harlow Experiment’, their model learned new associations remarkably quickly, in a manner resembling the animals in the original study (two monkeys, in this case). The model was also able to adapt to previously unseen situations just as fast.

The majority of the learning for the model took place within the RNN, which confirmed their initial theory – that dopamine plays a far more pivotal role in the meta-learning process than we previously imagined.

You can read more about the experiments on DeepMind’s blog here and read the entire research paper here.


Our take on this

This isn’t the first effort in this field, but it is the first to produce a solid theory of how machines might replicate the workings of the human brain. If machines stop requiring massive amounts of data to learn, it will be quite an achievement – imagine the time and money it would save organizations and data scientists worldwide, with learning times dropping from days to a mere few hours.

This study also goes to show how ripe the field of neuroscience is for deep learning applications. What are your thoughts on this research? Let us know in the comments section below!


Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!



Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.
