NVIDIA Develops Deep-Learning System That Transforms Standard Video Into High-Quality Slow Motion

The system creates high-quality slow-motion video from standard 30-frame-per-second footage.
Jessica Miley

NVIDIA has developed a way to transform 30-frame-per-second video into high-quality slow motion. Researchers at the company built the system using deep learning.

The new research will be presented at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah this week. The accompanying video contrasts the smooth slow motion the technology produces with conventionally slowed-down footage.

“There are many memorable moments in your life that you might want to record with a camera in slow-motion because they are hard to see clearly with your eyes: the first time a baby walks, a difficult skateboard trick, a dog catching a ball,” the researchers wrote in their research paper. 

“While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,” the team explained. The deep-learning system lets users slow down their footage after it is shot. 

The system was trained on more than 11,000 videos of everyday and sports activities shot at 240 frames per second. Once trained, the neural network could predict the extra frames needed to produce slow motion.
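To make that training setup concrete, here is a minimal sketch (not the authors' code) of how supervision pairs could be built from 240-frame-per-second footage: the network sees two frames spaced as in a 30 fps recording, and the frames the camera actually captured in between serve as ground truth. The function names and the stride of 8 are illustrative assumptions, not details from the paper.

```python
import numpy as np

def make_training_examples(frames, stride=8):
    """Build (inputs, targets) pairs from high-frame-rate footage.

    frames: sequence of frames from a 240 fps clip.
    stride: 8 mimics downsampling 240 fps to 30 fps (an assumption for
            illustration; the paper's exact sampling may differ).
    Each example pairs two "30 fps" frames with the 7 real frames
    between them, which the network learns to reconstruct.
    """
    examples = []
    for i in range(0, len(frames) - stride, stride):
        inputs = (frames[i], frames[i + stride])   # what a 30 fps camera keeps
        targets = frames[i + 1:i + stride]         # what it throws away
        examples.append((inputs, targets))
    return examples

# Toy usage with random "frames" standing in for real video.
clip = [np.random.rand(240, 320, 3) for _ in range(64)]
pairs = make_training_examples(clip)
print(len(pairs), "training examples, each with", len(pairs[0][1]), "target frames")
```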

“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers said. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”
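The quoted claim is about filling in frames at arbitrary points in time between two captured frames. The hypothetical sketch below only illustrates that multi-frame querying pattern: a model is evaluated at several timesteps t between 0 and 1 to produce the extra frames. The simple cross-fade here is a stand-in for the learned network, which in the actual work predicts motion-compensated intermediate frames rather than blends.

```python
import numpy as np

def crossfade(frame0, frame1, t):
    """Placeholder 'model': a linear blend at time t.
    The real network would produce a motion-aware intermediate frame;
    this stand-in only demonstrates how such a model is queried."""
    return (1.0 - t) * frame0 + t * frame1

def interpolate_clip(frame0, frame1, num_intermediate=7, model=crossfade):
    """Generate several intermediate frames between two inputs;
    7 extras per gap would turn 30 fps footage into 240 fps playback."""
    ts = np.linspace(0.0, 1.0, num_intermediate + 2)[1:-1]  # exclude the endpoints
    return [model(frame0, frame1, t) for t in ts]

# Toy usage on two dummy frames.
f0 = np.zeros((240, 320, 3))
f1 = np.ones((240, 320, 3))
mids = interpolate_clip(f0, f1)
print(len(mids), "intermediate frames generated")
```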