
Google Brain’s Image Manipulation Algorithm Fools Both Humans and Machines

Overview

  • Researchers at Google Brain have developed an algorithm that generates images capable of fooling both humans and machines
  • The algorithm changes an image only slightly, but the tweak is enough to trick both
  • In some instances, 10 out of 10 machine learning models misidentified the object in the image

 

Introduction

The dangers of AI have been well documented recently. This study from Google Brain will only add to that concern.

Researchers at Google Brain have developed an algorithm that can manipulate images in such a way that neither humans, nor machines, are able to identify the object in the picture correctly.

A deep convolutional neural network (CNN) was tested on a slightly manipulated picture of a cat. Remarkably, it misidentified it as a dog. See the image below for reference – the left frame is an unmodified image of a cat, while the right frame has a slight tweak to the cat's face; enough to fool the CNN.

More importantly (and disconcertingly), humans were likewise fooled into thinking it was a dog.

Previously, it was already easy to trick a CNN into misidentifying objects in images: introduce a slight perturbation, such as a misplaced pixel or a bit of white noise. But those attacks targeted a single image classifier at a time.
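To make the single-classifier idea concrete, here is a toy sketch (not the paper's method) of the classic "fast gradient sign" style of attack, demonstrated on a simple linear classifier rather than a real CNN. A small perturbation aligned with the sign of the loss gradient is enough to flip the model's prediction:

```python
import numpy as np

# Illustrative only: the fast gradient sign idea on a toy linear
# classifier sign(w.x + b), not the CNNs used in the actual study.
def fgsm_perturb(x, w, y, eps):
    """Return an adversarial copy of x for the model sign(w.x + b)."""
    # For a margin-style loss, the gradient w.r.t. x points along -y * w;
    # stepping eps in the sign of that gradient pushes x across the boundary.
    grad = -y * w
    return x + eps * np.sign(grad)

# Toy example: a 2-feature "image" correctly classified as +1.
w = np.array([1.0, -0.5])
b = 0.0
x = np.array([0.6, 0.2])
y = 1
assert np.sign(w @ x + b) == y       # clean input is classified correctly

x_adv = fgsm_perturb(x, w, y, eps=0.4)
assert np.sign(w @ x_adv + b) == -y  # small sign-aligned step flips the label
```

On real images the same principle applies pixel-wise: the change to each pixel is bounded by eps, so the perturbed image looks nearly identical to a human while the classifier's output flips.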

In this particular study, the researchers at Google Brain created a model that can fool multiple systems at once by generating “adversarial” images. How did they do this? They added perturbations that are “human meaningful” – altering the edges of objects, playing with texture, and modifying the parts of the photo that most strongly draw attention away from the object.

Some images managed to fool 10 out of 10 CNNs at a time!
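One intuition for why a single perturbation can transfer across many models: attack the models as an ensemble, perturbing along their averaged gradient instead of any one model's. The sketch below is a hypothetical toy with linear classifiers, not the paper's CNN ensemble, but it shows the same principle of one perturbation fooling every model at once:

```python
import numpy as np

# Hypothetical illustration: one perturbation computed against the
# *average* gradient of several toy linear models (w, b) fools them all.
def ensemble_perturb(x, models, y, eps):
    """Perturb x against every linear model sign(w.x + b) in `models`."""
    avg_grad = np.mean([-y * w for w, b in models], axis=0)
    return x + eps * np.sign(avg_grad)

models = [(np.array([1.0, -0.4]), 0.0),
          (np.array([0.8, -0.6]), 0.0),
          (np.array([1.2, -0.2]), 0.0)]
x, y = np.array([0.5, 0.3]), 1

# All three models classify the clean input correctly...
assert all(np.sign(w @ x + b) == y for w, b in models)

# ...and a single shared perturbation flips all three at once.
x_adv = ensemble_perturb(x, models, y, eps=0.5)
assert all(np.sign(w @ x_adv + b) == -y for w, b in models)
```

Because the models make similar mistakes in similar directions, the averaged attack generalizes – which is the same transferability that let some of the study's images fool 10 out of 10 CNNs.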

You can read the research paper on the image manipulation here.

 

Our take on this

If humans are unable to tell the difference between a cat and a dog thanks to an algorithm, it’s time to take the discussion around regulating AI a little more seriously. Experts have expressed concern that this technology could be misused – a politician subtly enhancing his image on social media to appear more appealing to audiences, advertisers using it to exploit biases in the human brain, and so on.

However, this is still major progress in the AI field. On the positive side of things, it could be used for making boring photos (government announcements, traffic news, etc. come to mind) a bit more engaging to the audience.

 

Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!

 
