Skeleton-aided Articulated Motion Generation

4 Jul 2017  ·  Yichao Yan, Jingwei Xu, Bingbing Ni, Xiaokang Yang

This work makes the first attempt to generate an articulated human motion sequence from a single image. On the one hand, we use paired inputs, consisting of human skeleton information as a motion embedding and a single human image as an appearance reference, to generate novel motion frames based on a conditional GAN framework. On the other hand, a triplet loss is employed to encourage appearance smoothness between consecutive frames. Because the proposed framework jointly exploits the image appearance space and the articulated/kinematic motion space, it generates realistic articulated motion sequences, in contrast to most previous video generation methods, which yield blurred motion effects. We evaluate our model on two human action datasets, KTH and Human3.6M, and the proposed framework produces very promising results on both.
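The abstract describes two ingredients that are easy to misread without a concrete picture: (1) the generator is conditioned on a paired input, a skeleton map concatenated with a reference appearance image, and (2) a triplet loss pulls consecutive generated frames together in appearance. Below is a minimal PyTorch-style sketch of both ideas. The module name, layer sizes, and the choice to compute the triplet loss directly on pixels are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of skeleton-conditioned generation and the appearance
# triplet loss. All names and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkeletonConditionedGenerator(nn.Module):
    """Generates a motion frame from a skeleton map (motion embedding)
    and a reference image (appearance), concatenated channel-wise."""
    def __init__(self, img_channels=3, skel_channels=1, base=64):
        super().__init__()
        in_ch = img_channels + skel_channels
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, ref_image, skeleton_map):
        # Paired input: appearance reference + skeleton motion embedding.
        return self.net(torch.cat([ref_image, skeleton_map], dim=1))

def appearance_triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss pulling consecutive frames (anchor/positive) together
    in appearance while pushing a non-adjacent frame (negative) away."""
    d_pos = F.pairwise_distance(anchor.flatten(1), positive.flatten(1))
    d_neg = F.pairwise_distance(anchor.flatten(1), negative.flatten(1))
    return F.relu(d_pos - d_neg + margin).mean()

# Usage on dummy tensors:
G = SkeletonConditionedGenerator()
ref = torch.randn(2, 3, 64, 64)      # single appearance reference image
skel_t = torch.randn(2, 1, 64, 64)   # skeleton map at time t
skel_t1 = torch.randn(2, 1, 64, 64)  # skeleton map at time t+1
frame_t, frame_t1 = G(ref, skel_t), G(ref, skel_t1)
loss = appearance_triplet_loss(frame_t, frame_t1, torch.randn_like(frame_t))
```

In this sketch the positive is the generated frame adjacent in time, and the negative would be a frame from a distant time step or a different sequence, so the margin enforces that consecutive frames stay closer in appearance than non-consecutive ones.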


Datasets

KTH, Human3.6M
Task                            Dataset         Model  Metric  Value    Global Rank
Gesture-to-Gesture Translation  NTU Hand Digit  SAMG   PSNR    28.0185  #6
Gesture-to-Gesture Translation  NTU Hand Digit  SAMG   IS      2.4919   #2
Gesture-to-Gesture Translation  NTU Hand Digit  SAMG   AMT     2.6      #6
Gesture-to-Gesture Translation  Senz3D          SAMG   PSNR    26.9545  #4
Gesture-to-Gesture Translation  Senz3D          SAMG   IS      3.3285   #4
Gesture-to-Gesture Translation  Senz3D          SAMG   AMT     2.3      #6
