NVIDIA’s Machine Learning Model Converts a Standard Video into Stunning Slow Motion

Pranav Dar 17 Jul, 2018 • 2 min read

Overview

  • NVIDIA researchers have developed a deep learning model that converts standard videos into high-quality slow-motion videos
  • A convolutional neural network (CNN) is at the core of the entire system
  • The CNN was trained on 1,132 video clips shot at 240 frames per second, containing 300,000 individual video frames


Introduction

Converting a standard video into slow motion may sound like a simple concept, but it actually takes a lot of effort, time and skill to master. Simply recording slow-motion video (like on your phone) is also a tricky affair. If you don’t record enough frames, the resulting slow-mo becomes blurry, choppy and, frankly, unwatchable.

This is where the wonderful field of computer vision (CV) steps in. Researchers from NVIDIA have delved into this problem and developed a CV algorithm that can convert standard videos into high-quality slow-motion videos. The deep learning model, powered by convolutional neural networks, turns a 30-frames-per-second video into a jaw-dropping 240-frames-per-second slow-motion video.

The system was trained (on NVIDIA Tesla V100 GPUs with cuDNN) using 1,132 video clips shot at 240 frames per second, containing 300,000 individual video frames. After training, the convolutional neural network was able to predict the extra frames for a 30-frames-per-second video. A separate dataset was used to validate the accuracy of the system. Using a series of clips from the popular YouTube series ‘The Slow Mo Guys’, the system generated videos slowed down four times while retaining stunningly high resolution.
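To make “predicting extra frames” concrete, here is a minimal Python sketch of the naive baseline that NVIDIA’s CNN improves upon: simply cross-fading between consecutive frames with OpenCV. This is not NVIDIA’s method (their network models motion to synthesize sharp intermediate frames), and the file names and the 4x factor below are purely illustrative assumptions.

```python
import cv2

# Hypothetical paths -- substitute your own files.
IN_PATH, OUT_PATH = "input_30fps.mp4", "slowmo_naive.mp4"
FACTOR = 4  # insert 3 synthetic frames between every pair of real frames

cap = cv2.VideoCapture(IN_PATH)
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Writing 4x the frames at the original fps plays back as 4x slow motion.
out = cv2.VideoWriter(OUT_PATH, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

ok, prev = cap.read()
while ok:
    ok, nxt = cap.read()
    if not ok:
        out.write(prev)  # last real frame
        break
    out.write(prev)
    for i in range(1, FACTOR):
        alpha = i / FACTOR
        # Naive cross-fade; a CNN would instead estimate motion and warp
        # the neighbouring frames to produce a sharp in-between frame.
        mid = cv2.addWeighted(prev, 1 - alpha, nxt, alpha, 0)
        out.write(mid)
    prev = nxt

cap.release()
out.release()
```

The cross-faded frames look ghosted wherever there is motion, which is exactly the artefact the learned interpolation avoids.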

To get a more in-depth feel for the technology, you can read NVIDIA’s blog post and research paper. Also, check out the video below, which shows how this deep learning system works:


Our take on this

This is not the first attempt to use machine learning for manipulating videos, but it’s certainly unique in the way NVIDIA has approached the challenge. Of course, since it’s NVIDIA, it’s no surprise that the resulting slow-mo is gorgeous. The research paper is a must-read for any data scientist interested in working in the computer vision field – it contains a detailed explanation of how the researchers arrived at the model after several experiments.

Once NVIDIA polishes up the algorithm and smartphones become even more computationally powerful, I expect this technology to be adopted quickly. It’s also a much cheaper alternative to the other options currently available in the market.


Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!


Pranav Dar 17 Jul 2018

Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.


Responses From Readers


KUMARSWAMY HOSMATH 17 Jul, 2018

Where and how one can try this?