Pranav Dar — Published on July 17, 2018


  • NVIDIA researchers have developed a deep learning model that can convert a standard video into a high-quality slow-motion video
  • A convolutional neural network (CNN) is at the core of the entire system
  • The CNN was trained on 1,132 video clips recorded at 240 frames per second, containing 300,000 individual video frames



Converting a standard video into slow motion may sound like a simple concept, but it actually requires a lot of effort, time and skill to master. Simply recording slow-motion video (like on your phone) is also a tricky affair. If you don’t record enough frames, the resulting slow-mo becomes blurry, choppy and, frankly, unwatchable.

This is where the wonderful field of computer vision (CV) steps in. Researchers from NVIDIA have delved into this problem and developed a CV algorithm that can convert standard videos into high-quality slow-motion videos. The deep learning model, powered by convolutional neural networks, turns a 30-frames-per-second video into a jaw-dropping 240-frames-per-second slow-motion video.

The system was trained (using NVIDIA Tesla V100 GPUs and cuDNN) on 1,132 video clips recorded at 240 frames per second, containing 300,000 individual video frames. After training, the convolutional neural network was able to predict the extra frames needed to slow down a 30-frames-per-second video. A separate dataset was used to validate the accuracy of the system. Using a series of clips from the popular YouTube series ‘The Slow Mo Guys’, the system generated videos slowed down four times while maintaining impressively high resolution.
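To build intuition for what "predicting extra frames" means, here is a deliberately naive baseline: linearly blending each pair of adjacent frames to insert seven synthetic in-between frames (turning 30 fps into 240 fps). NVIDIA's model does something far more sophisticated, using a CNN and optical flow to synthesize motion-aware intermediate frames; this linear blend is only an illustrative stand-in, with frames represented as simple lists of grayscale pixel values.

```python
# Naive frame-interpolation baseline: linearly blend adjacent frames to
# upsample 30 fps to 240 fps (7 synthetic frames between each real pair).
# NVIDIA's model instead *predicts* intermediate frames with a CNN and
# optical flow; this linear blend is only an illustrative stand-in.

def interpolate_frames(frames, factor=8):
    """Return a frame list upsampled `factor`x by linear blending.

    `frames` is a list of frames; each frame is a list of pixel
    intensities (grayscale, for simplicity).
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor
            # Blend pixel-by-pixel: (1 - t) * a + t * b
            out.append([(1 - t) * pa + t * pb for pa, pb in zip(a, b)])
    out.append(frames[-1])
    return out

# Two 3-pixel "frames": 8x upsampling yields 7 blended frames in between.
slow = interpolate_frames([[0.0, 0.0, 0.0], [8.0, 8.0, 8.0]], factor=8)
print(len(slow))   # 9 frames total
print(slow[4][0])  # midpoint pixel: 4.0
```

Linear blending like this is exactly what produces the ghosting and blur the article mentions — which is why learning to predict motion between frames, rather than averaging them, is the core contribution of the research.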

To get a more in-depth feel for the technology, you can read NVIDIA’s blog post and research paper. Also, check out the video below, which shows how this deep learning system works:


Our take on this

This is not the first attempt to use machine learning for manipulating videos, but it’s certainly unique in the way NVIDIA has approached the challenge. Of course, since it’s NVIDIA, it’s no surprise that the resulting slow-mo is gorgeous. The research paper is a must-read for any data scientist interested in working in computer vision – it contains a detailed explanation of how the researchers arrived at the model after several experiments.

Once NVIDIA polishes up the algorithm, and as smartphones become more and more computationally powerful, I expect this technology to be adopted quickly. It’s also a much cheaper alternative to the other options currently available in the market.


Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!


About the Author

Pranav Dar

Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.


2 thoughts on "NVIDIA’s Machine Learning Model Converts a Standard Video into Stunning Slow Motion"

KUMARSWAMY HOSMATH says: July 17, 2018 at 8:52 pm
Where and how can one try this?
Pranav Dar says: July 17, 2018 at 9:32 pm
Hi Kumarswamy, NVIDIA has not open sourced the code for this. I expect them to commercialize it in their products once they have polished the algorithm. What they have done, however, is release their approach in the research paper I've linked above. That will give you an excellent and detailed idea of how the algorithm was formed and how it works.
