NVIDIA’s DL Model can Complete the Missing Parts in a Photo with Incredible Results!

Pranav Dar 25 Apr, 2018 • 2 min read

Overview

  • NVIDIA’s deep learning model can fill in the missing parts of an incomplete image with realistic results
  • The researchers trained the deep neural network by generating over 55,000 masks with holes of different shapes and sizes
  • The results they have shown so far are state-of-the-art for image inpainting

 

Introduction

Imagine you’re given half a photo and asked to fill in the other half. Even with the variety of image editing software on the market, producing realistic results would be a tall order.

Researchers at NVIDIA have unveiled a state-of-the-art deep learning model that can edit images and reconstruct incomplete ones. It can “understand” the image and fill in the missing pixels. The technique behind this is known as “image inpainting”.

To train their deep neural network, the researchers generated over 55,000 masks with holes of different shapes and sizes. Since no model is complete without a test set, they also generated 25,000 such masks for testing. To improve the accuracy of the reconstructed photos, these holes were divided into six categories based on their size relative to the input images.
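To make this concrete, here is a minimal sketch of how such hole masks might be generated. It punches rectangular holes for simplicity, whereas the masks described in the paper are irregular, brush-stroke-like shapes; the function name and parameters are hypothetical, not NVIDIA's code.

```python
import numpy as np

def random_hole_mask(height=512, width=512, num_holes=8, max_size=128, rng=None):
    """Generate a binary mask: 1 = known pixel, 0 = hole to be inpainted.

    Illustrative only -- rectangular holes stand in for the irregular
    masks used in the actual paper.
    """
    rng = rng or np.random.default_rng()
    mask = np.ones((height, width), dtype=np.float32)
    for _ in range(num_holes):
        h = rng.integers(16, max_size)
        w = rng.integers(16, max_size)
        y = rng.integers(0, height - h)
        x = rng.integers(0, width - w)
        mask[y:y + h, x:x + w] = 0.0  # punch a rectangular hole
    return mask
```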

You might be wondering at this point about the underlying algorithm. According to NVIDIA’s blog post, “using NVIDIA Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework, the team trained their neural network by applying the generated masks to images from the ImageNet, Places2 and CelebA-HQ datasets.”

In the training phase, the missing parts (or holes, as mentioned above) are shown to the model along with the complete images so that it can learn how to perform the reconstruction. In the testing phase, masks that were held out from training are introduced, which keeps the accuracy results unbiased.
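As a rough idea of what this training setup could look like in PyTorch, here is a hedged sketch of a single training step. The `model`, `loss_fn`, and `optimizer` are placeholders for illustration only, not NVIDIA's actual training code.

```python
import torch

def apply_mask(images, masks):
    """Zero out hole pixels so the network only sees the known region.

    images: (N, 3, H, W) float tensor; masks: (N, 1, H, W), 1 = known, 0 = hole.
    """
    return images * masks

def train_step(model, loss_fn, optimizer, images, masks):
    """One hypothetical training step: the network receives the masked image
    and the mask, and is trained to reconstruct the original, complete image."""
    masked = apply_mask(images, masks)
    predicted = model(masked, masks)          # inpainting models typically take the mask as input too
    loss = loss_fn(predicted, images, masks)  # e.g. reconstruction loss weighted over hole vs. valid regions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```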

The team claims this is the best model of its kind in the industry, and the results shown so far back up that claim.

NVIDIA has also published the research paper, which you can read here. Check out the two-minute video below where the researchers showcase this algorithm:

 

Our take on this

Deep learning never ceases to amaze me. Except perhaps for the eyes in the above video, everything looks remarkably lifelike. We have previously covered NVIDIA’s FastPhotoStyle library, but this is quite a breakthrough in the image processing field.

Previous studies in this area have typically applied a standard convolutional network over the corrupted image, but this model uses partial convolutions instead. I suggest you go through the research paper to gain a better understanding of this concept.
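For a rough intuition of what a partial convolution does, here is a simplified PyTorch sketch: the convolution is computed only over valid (non-hole) pixels, the result is re-normalized by how many valid pixels fall under the kernel, and the mask is updated so that any location that saw at least one valid pixel becomes valid for the next layer. This is an illustrative approximation, not NVIDIA's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Simplified partial convolution layer (illustrative sketch)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # Fixed all-ones kernel used only to count valid pixels under each window.
        self.register_buffer("weight_mask", torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding
        self.window_size = kernel_size * kernel_size

    def forward(self, x, mask):
        # x: (N, C, H, W) features; mask: (N, 1, H, W), 1 = valid, 0 = hole.
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.weight_mask,
                                   stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)                       # convolve only the valid pixels
        bias = self.conv.bias.view(1, -1, 1, 1)
        scale = self.window_size / valid_count.clamp(min=1.0)
        out = (out - bias) * scale + bias               # re-normalize, then re-add the bias
        new_mask = (valid_count > 0).float()            # update the mask for the next layer
        return out * new_mask, new_mask
```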

 

Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!

 

