
Deep-VFX: Deep Action Recognition Driven VFX for Short Video

Human motion is a key channel for communicating information. Short-form mobile video apps such as TikTok are popular worldwide, and their users like to add visual effects (VFX) to express creativity and personality. Short-video platforms offer many special effects, which give users more ways to show their individuality. The common, traditional approach is to provide VFX templates. However, to synthesize a satisfying result, users must make tedious attempts to grasp the timing and rhythm of each new template, which is not easy to use, especially in a mobile app. This paper aims to change VFX synthesis by driving it with the user's motion instead of traditional template matching. We propose an AI-based method to improve VFX synthesis. In detail, since the special effects are attached to the human body, skeleton extraction is essential in our system. We also propose a novel form of LSTM that recognizes the user's action to infer their intention. Experiments show that our system generates VFX for short videos more easily and efficiently.
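To illustrate the kind of pipeline the abstract describes, below is a minimal sketch of skeleton-based action recognition with an LSTM in PyTorch. It is not the paper's model: the class name, joint count, layer sizes, and number of action classes are all illustrative assumptions, and the paper's "novel form of LSTM" is replaced here by a standard nn.LSTM.

# Minimal sketch: LSTM over extracted skeleton keypoints to classify the user's
# action, which would then trigger a VFX template. All names/sizes are hypothetical.
import torch
import torch.nn as nn

class SkeletonActionLSTM(nn.Module):
    def __init__(self, num_joints=17, coord_dim=2, hidden_size=128, num_actions=10):
        super().__init__()
        # Each frame is a flattened skeleton: num_joints * coord_dim values.
        self.lstm = nn.LSTM(
            input_size=num_joints * coord_dim,
            hidden_size=hidden_size,
            num_layers=2,
            batch_first=True,
        )
        # Map the final hidden state to action classes (the user's intended trigger).
        self.classifier = nn.Linear(hidden_size, num_actions)

    def forward(self, skeleton_seq):
        # skeleton_seq: (batch, time, num_joints * coord_dim)
        _, (h_n, _) = self.lstm(skeleton_seq)
        return self.classifier(h_n[-1])

# Usage: a batch of 4 clips, 30 frames each, 17 joints with (x, y) coordinates.
model = SkeletonActionLSTM()
logits = model(torch.randn(4, 30, 17 * 2))
action_id = logits.argmax(dim=1)  # predicted action used to drive the VFX

In a motion-driven system of this kind, the recognized action (rather than manual timing by the user) determines when and which effect is synthesized onto the body.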
