- NVIDIA researchers have developed a machine learning algorithm that fixes bad images by learning from corrupted or grainy images alone
- The neural network was trained on 50,000 images from the popular ImageNet dataset
- The results are stunning: even with synthetic noise added, the neural network restored high-quality images in milliseconds
How many times have we taken photos on our phone that turned out blurry? Maybe the person or animal moved at the last moment, or perhaps the light was low and the shot came out dark and grainy. I could go on and on.
NVIDIA has recently been making waves with its work in computer vision and image processing, so it’s no surprise to see the company tackle this common problem. Using the power of deep learning, its researchers have developed an algorithm, called Noise2Noise, that can fix bad images by learning from corrupted images only. It’s a fairly unique approach to image processing.
As you might have guessed, the popular ImageNet dataset was used to train the algorithm; 50,000 images were drawn from it for the training process. And this is where NVIDIA flexed its deep learning muscles: the model was trained on NVIDIA Tesla P100 GPUs with the cuDNN-accelerated TensorFlow framework (you can read their blog post here).
As you can see in the above image, the results are remarkably impressive, especially when compared to the ‘Ground truth’ image. Crucially, the neural network learned only from grainy or corrupted images. To validate the results, the researchers ran experiments introducing different varieties of synthetic noise (Gaussian, Poisson, Bernoulli and random-valued impulse noise). The results were still outstanding; the team claims it took mere milliseconds for the model to perform its magic and restore quality to the images.
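To make the noise varieties above concrete, here is a minimal sketch of how such synthetic corruptions can be applied to an image array. This is an illustrative numpy example, not NVIDIA's actual data pipeline; the function names and parameter values are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma=0.1):
    # Additive zero-mean Gaussian noise, clipped back to the valid [0, 1] range
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_bernoulli_noise(img, keep_prob=0.5):
    # Bernoulli (multiplicative) noise: each pixel is independently kept or zeroed
    mask = rng.random(img.shape) < keep_prob
    return img * mask

def add_impulse_noise(img, p=0.3):
    # Random-valued impulse noise: a fraction p of pixels is replaced
    # with a uniformly random value
    mask = rng.random(img.shape) < p
    impulses = rng.random(img.shape)
    return np.where(mask, impulses, img)

clean = rng.random((32, 32))        # stand-in for a grayscale image in [0, 1]
noisy = add_gaussian_noise(clean)   # one corrupted training sample
```

In the paper's setup, both the network input and its training target are corrupted versions of the underlying image, so generators like these stand in for the clean labels a conventional pipeline would require.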
The neural network can also remove text overlaid on images, which raises a significant concern for copyrighted images: a watermark could simply be stripped away.
Of course there is a research paper for this approach which you can read in full here. The researchers will be presenting this paper at the International Conference on Machine Learning (ICML) in Stockholm this week. You can also check out the best papers at ICML 2018 before the conference begins!
You can see a video demonstrating this technique below:
Our take on this
Previous related studies have used a neural network (or several) to clean up low-light photos, but those approaches required clean images for training. Two things set NVIDIA’s approach apart:
- It can restore bad images using corrupted images only; you don’t need clean, high-quality images to feed the neural network
- As mentioned, the paper shows that the technique takes only milliseconds to render and output the restored image
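The first point rests on a simple statistical observation: with zero-mean noise, the minimizer of an L2 loss against noisy targets is the same as against clean ones, so noisy images can serve as training labels. A toy numpy sketch of that intuition (my own illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(42)

# A "clean signal" standing in for the pixels of one image
clean = rng.random(100)

# Many independently corrupted observations of the same signal
# (zero-mean Gaussian noise, as in one of the paper's experiments)
noisy_targets = clean + rng.normal(0.0, 0.2, (1000, 100))

# The minimizer of the L2 loss against noisy targets is their mean,
# which converges to the clean signal because the noise averages out
estimate = noisy_targets.mean(axis=0)
error = float(np.abs(estimate - clean).max())
```

Here `error` shrinks toward zero as more noisy observations are averaged, which is why a network trained only on corrupted targets can still learn to produce clean output.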
It can potentially be used in healthcare to restore MRI scan images, or in satellite imagery to clean up and see distant objects in an area, among various other things.
Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!