Check Out this Entirely Different Approach by IBM to Understanding Machine Learning Models

Aishwarya Singh 10 May, 2019 • 3 min read

Overview

  • IBM researchers have pioneered an approach to understanding machine learning models that examines the features absent from the input
  • The researchers claim that such an approach is already in use in fields like healthcare and criminology
  • The research has been validated on three datasets (with impressive results) – MNIST, procurement fraud and brain images

 

Introduction

When we’re asked to describe a person, we usually mention his/her features – height, hair color, etc. At times we also mention features that are not there, but are still distinct in their absence (like “does not wear glasses”, or “not over 6 feet”). It’s a human tendency to draw conclusions from absent features.

So a team of researchers from IBM decided to explore this aspect of human nature and attempted to integrate it into the world of machine learning. They have used this idea to explain how a machine learning model performs the task of classification. The objective of their study was to use these “missing” features to explain the inner workings of machine learning models, and to strip away the black box reputation surrounding them.

Taking another example, if a model were trained to identify a car, the model might use information such as – does it have wheels? How about headlights? The object does not have legs. The researchers claim that the features that are missing also play an important part in understanding how a model performs and arrives at its final conclusion.

Based on this idea, the team performed their experiments on three different datasets, namely:

  • Handwritten Digits (using the popular MNIST dataset)
  • Procurement Fraud
  • Brain Functional Imaging

The team has also presented a paper (link below), where they explain deep neural network classifications based on the characteristics present (wheels, headlights) and absent (legs). They have created a system for “contrastive explanations” that specifically looks for missing information in the data. The contrastive explanation method has two parts (a rough code sketch of the idea follows the list below):

  • Finding the Pertinent Negative: identifying what must remain absent from the input for the model’s prediction to hold
  • Finding the Pertinent Positive: identifying the minimal set of features that must be present in the input to justify the prediction
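The paper formulates both searches as an optimization over perturbations of the input (with sparsity regularization, and optionally an autoencoder to keep perturbations realistic). As a rough intuition only, here is a minimal, greedy sketch on a toy binary-feature classifier – the synthetic data, the logistic regression model and the greedy search are all illustrative assumptions, not the authors’ actual method:

```python
# Toy, greedy illustration of pertinent positives / pertinent negatives.
# NOT the paper's optimization (which solves a regularized perturbation
# problem on a deep network); it only conveys the intuition of the two searches.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: 6 binary "presence" features (think wheels, headlights, legs, ...)
X = rng.integers(0, 2, size=(500, 6))
# The label depends on features 0 and 1 being present and feature 2 being absent
y = ((X[:, 0] == 1) & (X[:, 1] == 1) & (X[:, 2] == 0)).astype(int)

clf = LogisticRegression().fit(X, y)

def pertinent_positive(x, clf):
    """Greedily keep the smallest set of present features that preserves the prediction."""
    target = clf.predict([x])[0]
    keep = x.copy()
    for i in np.flatnonzero(x):           # try dropping each present feature
        trial = keep.copy()
        trial[i] = 0
        if clf.predict([trial])[0] == target:
            keep = trial                   # feature i was not needed for the prediction
    return np.flatnonzero(keep)            # indices that must stay present

def pertinent_negative(x, clf):
    """Greedily find an absent feature whose addition would flip the prediction."""
    target = clf.predict([x])[0]
    for i in np.flatnonzero(x == 0):       # try adding each absent feature
        trial = x.copy()
        trial[i] = 1
        if clf.predict([trial])[0] != target:
            return i                        # adding feature i changes the class
    return None

x = np.array([1, 1, 0, 0, 1, 0])            # an example input classified as class 1
print("prediction:", clf.predict([x])[0])
print("pertinent positive (must be present):", pertinent_positive(x, clf))
print("pertinent negative (must stay absent):", pertinent_negative(x, clf))
```

On this toy example, the pertinent positive comes out as features 0 and 1 (they alone support the prediction), while feature 2 is a pertinent negative: its continued absence is what keeps the classification in place.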

Each experiment was evaluated with the help of domain experts, and the method performed fairly well in all of them. You can read the research paper in full here to dive deeper into the various experiments they conducted and how they arrived at their final conclusions.

 

Our take on this

This is a pretty fascinating approach to understanding models. One of the most common issues with models today is how complex they can become (especially deep neural networks). Explaining them to the client or end user is a mammoth task and often ends in failure. This approach, while certainly nascent right now, should help strip away some of the misunderstandings around machine learning.

If you knew why you were being recommended something, there is a higher chance that you would buy it (as opposed to something you perceived as a random recommendation). This approach is ideal for problems involving a binary decision – like a rejected loan application. Not only will it explain what was present in the application (like a previous default), but also what wasn’t there (the lack of a college degree).

As a data scientist, does this approach appeal to you? Do you see any upside in this? Let us know in the comments below!

 

Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!

 

