DeepMind Open Sources Dataset to Measure the Reasoning Ability of Neural Networks

Pranav Dar 07 May, 2019 • 3 min read

Overview

  • DeepMind researchers have developed an approach to measure the abstract reasoning ability of neural networks
  • They have also released the accompanying dataset publicly to help make progress with this study
  • The results are not spectacular but offer promise; the model reached 75% accuracy when the training and test sets involved the same abstract factors

 

Introduction

Artificial General Intelligence (AGI) has long been the stuff of science fiction rather than hardcore research. One of the primary reasons for that is the black-box nature of deep neural networks. These networks are making great headway in certain areas, but they remain limited to the one task they were designed for.

The current state of neural networks prevents these models from generalizing to other tasks. For example, an algorithm designed to pick up objects cannot be repurposed to also drive a vehicle. This has been a significant roadblock preventing researchers from reaching the AGI stage and building unified systems.

If anyone can achieve a breakthrough, though, I would imagine it would be DeepMind. They have the required computational power, and some of the best research scientists are working there to get us closer to AGI. Their latest effort comes in the form of an approach and a challenge: they have attempted to measure the abstract reasoning ability of neural networks in order to understand the nature of generalization.

Instead of trying to transfer knowledge from real-world scenarios to visual reasoning problems, the DeepMind research team studied knowledge transfer from one set of visual reasoning problems to another. They have also designed a novel architecture that performs significantly better on these problems than popular models like ResNet.
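The puzzles in question are Raven-style matrices: eight context panels plus a missing ninth, with eight candidate answers to choose from. For intuition, here is a minimal sketch in the spirit of the paper's Wild Relation Network (WReN), which scores each candidate by reasoning over pairs of panels. The layer sizes and module names below are illustrative assumptions, not DeepMind's actual implementation:

```python
import torch
import torch.nn as nn

class RelationScorer(nn.Module):
    """Toy relation-network-style scorer: embeds the 8 context panels
    plus one candidate answer, applies a pairwise relation function g
    to every pair of panel embeddings, sums the results, and maps them
    to a scalar score. Purely illustrative, not DeepMind's WReN code."""

    def __init__(self, panel_dim=512, embed_dim=256):
        super().__init__()
        self.embed = nn.Linear(panel_dim, embed_dim)  # per-panel embedding (a CNN in the real model)
        self.g = nn.Sequential(                       # pairwise relation function
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.f = nn.Sequential(                       # maps aggregated relations to a score
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def score(self, panels):
        # panels: (batch, 9, panel_dim) = 8 context panels + 1 candidate
        e = self.embed(panels)                        # (batch, 9, embed_dim)
        n = e.shape[1]
        # form all ordered pairs of panel embeddings
        a = e.unsqueeze(2).expand(-1, n, n, -1)
        b = e.unsqueeze(1).expand(-1, n, n, -1)
        pairs = torch.cat([a, b], dim=-1)             # (batch, 9, 9, 2*embed_dim)
        rel = self.g(pairs).sum(dim=(1, 2))           # aggregate pairwise relations
        return self.f(rel).squeeze(-1)                # (batch,)

    def forward(self, context, candidates):
        # context: (batch, 8, panel_dim); candidates: (batch, 8, panel_dim)
        scores = [self.score(torch.cat([context, candidates[:, i:i + 1]], dim=1))
                  for i in range(candidates.shape[1])]
        return torch.stack(scores, dim=1)             # (batch, 8), one score per candidate
```

Each candidate is scored in the context of the eight given panels, and the eight scores can then be passed through a softmax and trained with cross-entropy against the index of the correct panel.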

Their research makes for an intriguing read, because they admit that while their model is definitely proficient at certain things, it is weak in a few areas as well. When the training and test questions focused on the same abstract factors, the model did moderately well, with a 75% accuracy rate. But when the two sets differed, the model struggled badly to find the right answer.
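To make the distinction between these two regimes concrete, here is a small, purely hypothetical sketch of how train/test splits over abstract factors can be constructed; the factor names are placeholders, not the dataset's actual taxonomy:

```python
import random

# Hypothetical abstract factors a puzzle generator might combine;
# the real dataset uses its own taxonomy of relations and attributes.
RELATIONS = ["progression", "XOR", "AND"]
ATTRIBUTES = ["shape", "colour", "size"]

all_combos = [(r, a) for r in RELATIONS for a in ATTRIBUTES]
random.seed(0)
random.shuffle(all_combos)

# Regime 1: train and test puzzles draw on the same factor combinations
# (the setting where the article reports 75% accuracy).
interp_train = interp_test = set(all_combos)

# Regime 2: some factor combinations are never seen during training,
# so the test set probes genuinely novel structure (where models struggled).
held_out = set(all_combos[:3])
extrap_train = set(all_combos) - held_out
extrap_test = held_out

print("unseen at training time:", sorted(extrap_test))
```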

Their research paper (link below) also shows how the model's ability to generalize improves quite a bit when it is trained to predict symbolic explanations of each puzzle's underlying structure alongside the answer itself. The researchers have also publicly released their abstract reasoning dataset in the hopes of progressing this study with the help of the community.
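On the "symbolic explanations" point: in the paper this takes the form of an auxiliary binary "meta-target" that encodes which relations and attributes a puzzle uses, predicted alongside the answer. A minimal sketch of such a multi-task loss, with the weighting factor and tensor shapes as illustrative assumptions, might look like this:

```python
import torch
import torch.nn.functional as F

def combined_loss(answer_logits, answer_target, meta_logits, meta_target, beta=10.0):
    """Multi-task objective: classify the correct answer panel AND
    predict a binary meta-target encoding the puzzle's abstract
    relations. beta and the shapes are illustrative assumptions.

    answer_logits: (batch, 8)  scores over the 8 candidate panels
    answer_target: (batch,)    index of the correct panel
    meta_logits:   (batch, K)  logits for K binary relation indicators
    meta_target:   (batch, K)  0/1 encoding of the relations present
    """
    answer_loss = F.cross_entropy(answer_logits, answer_target)
    meta_loss = F.binary_cross_entropy_with_logits(meta_logits, meta_target)
    return answer_loss + beta * meta_loss

# Toy usage with random tensors:
logits = torch.randn(4, 8)
target = torch.randint(0, 8, (4,))
meta_logits = torch.randn(4, 12)
meta_target = torch.randint(0, 2, (4, 12)).float()
print(combined_loss(logits, target, meta_logits, meta_target))
```

The intuition is that forcing the network to explicitly name the structure it sees provides a richer training signal than the answer index alone.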

You can delve into a more in-depth explanation of DeepMind's approach, and try it out yourself, using the links below:

  • Research paper: Measuring Abstract Reasoning in Neural Networks (https://arxiv.org/abs/1807.04225)
  • Abstract reasoning dataset: https://github.com/deepmind/abstract-reasoning-matrices

Our take on this

While the results released by DeepMind are not what one would hope for, they still offer a lot of promise. No one to date has been able to crack the AGI challenge, so even a hint of a solution has to be taken as progress. Neural networks are powerful tools, and the sooner we can dispel their black-box image, the better it will be for machine learning and AI research.

The good news is that we at least have a way to measure work in this field, and I'm sure the methodology will be refined going forward. Meanwhile, make sure you play around with the dataset and see if you can come up with insights of your own!

 


 

