Pranav Dar — June 25, 2018
AVbytes

Overview

  • Adobe is relying on machine learning as it attempts to detect image manipulation for deceptive purposes
  • The model is a deep neural network that was trained on tens of thousands of images
  • The research team is focusing on 3 common tampering methods – splicing, copy-move, and removal

 

Introduction

Dealing with fake news has become one of the most pressing challenges of the digital age. With so many fake videos and images flying around on social media, it has become extremely difficult to stem the tide. Facebook has been in the news recently because of various scandals, but companies like Amazon have been fighting this battle for a long time (weeding out fake reviews, for example).

Adobe knows better than most how images get photoshopped. In its latest blog post, the company acknowledges that while Photoshop has always had its upside, people have also used it to doctor images for deceptive purposes. So Adobe decided to invest in machine learning and fight back against this rising menace.

Researchers at the organization have built a model that is able to differentiate between authentic and tampered images using image manipulation detection. The team focused on the three most common tampering methods:

  • Splicing: Parts of two separate images are combined
  • Copy-move: Parts or objects in an image are copied from one place to another
  • Removal: Parts or objects in an image are completely removed, making it look like they were never there in the first place
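To make the copy-move case concrete, here is a deliberately naive sketch (not Adobe's method) that flags a copy-move edit by hashing fixed-size pixel blocks of a grayscale image and reporting duplicates. Real detectors compare robust features rather than raw pixels, so treat this only as an illustration of the idea:

```python
import numpy as np

def find_copy_move(img, block=8):
    """Naive copy-move check: hash every block x block patch of a
    grayscale image and report coordinate pairs with identical content."""
    h, w = img.shape
    seen = {}
    matches = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Tiny demo: paste one block of a random image onto another location.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
img[16:24, 16:24] = img[0:8, 0:8]  # simulate a copy-move edit
print(find_copy_move(img))         # flags the duplicated block pair
```

A scheme this simple breaks as soon as the copied region is rescaled, rotated, or recompressed, which is exactly why learned features are needed in practice.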

To train the R-CNN (region-based convolutional neural network) to recognize manipulated images, tens of thousands of examples were used. Two different techniques were combined to build this neural network: the first makes use of an RGB stream, while the second uses a noise stream filter. The collection of images below shows how the final model works:

Note that this is not the same as the traditional object detection techniques we have covered previously. Image manipulation detection focuses far more on tampering artefacts than on the content of the image. The team has also published a full research paper describing the technique in detail, which you can read here.
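The noise stream mentioned above rests on the idea that different cameras leave different noise fingerprints, so a spliced region often carries noise that does not match the rest of the image. A minimal sketch of that intuition, using one SRM-style high-pass kernel (the kernel values here are an illustrative choice, not copied from the paper's exact configuration):

```python
import numpy as np
from scipy.signal import convolve2d

# An SRM-style high-pass kernel: it suppresses image content and
# exposes local noise residuals, which a noise-stream CNN can then learn from.
SRM_KERNEL = np.array([[-1,  2,  -2,  2, -1],
                       [ 2, -6,   8, -6,  2],
                       [-2,  8, -12,  8, -2],
                       [ 2, -6,   8, -6,  2],
                       [-1,  2,  -2,  2, -1]], dtype=np.float64) / 12.0

def noise_residual(gray):
    """High-pass filter a grayscale image: smooth content goes to ~0,
    while regions with mismatched sensor noise stand out."""
    return convolve2d(gray, SRM_KERNEL, mode="same", boundary="symm")

# A constant image carries no noise, so its residual is (near) zero.
flat = np.full((16, 16), 128.0)
print(np.abs(noise_residual(flat)).max())
```

In the full model, residual maps like this feed one stream of the network while the raw RGB pixels feed the other, and the two are fused for the final tampering prediction.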

 

Our take on this

Even with machine learning, this is quite an ambitious project from Adobe. Adobe itself admits that the model does not solve the problem of establishing the “absolute truth” of an image. It’s a step in the right direction, but one feels we are still quite far from what Adobe had in mind when it started this project.

I would like to see other features (not just the two streams described above) used in the neural network. As the paper mentions, illumination consistency across the entire image and compression factors can and should be included if this is to be truly effective. Let me know your take on this technique in the comments section below.

 

Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!

 

About the Author

Pranav Dar

Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.
