Scanning and animating characters dressed in multiple-layer garments

9 May 2017  ·  Pengpeng Hu, Taku Komura, Daniel Holden, Yueqi Zhong

Despite the development of user-friendly interfaces for modeling garments and dressing characters in them, preparing a character dressed in multiple layers of garments remains very time-consuming and tedious. In this paper, we propose a novel scanning-based solution for modeling and animating characters wearing multiple layers of clothes, achieved by making use of real clothes and human bodies. We first scan the naked body of a subject with an RGBD camera and fit a statistical body model to the scanned data, which yields a skinned, articulated model of the subject. The subject is then asked to put on one garment after another, and the articulated body model dressed up to the previous step is fit to the newly scanned data. Each new garment is segmented in a semi-automatic fashion and added as an additional layer to the multi-layer garment model. At runtime, the skinned character is driven by motion capture data, and the multi-layer garment model is controlled by blending the movements computed by physical simulation with those from linear blend skinning, so that the cloth preserves its shape while exhibiting realistic physical motion. We present results in which the character wears multiple layers of garments, including a shirt, a coat, and a skirt. Our framework can be useful for preparing and animating dressed characters for computer games and films.
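The runtime garment update described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: each cloth vertex receives a linear-blend-skinning (LBS) position driven by the body skeleton and a physically simulated position, and the two are blended per vertex. The function names, array shapes, and the per-vertex blend weight `alpha` are all illustrative assumptions.

```python
import numpy as np

def skin_vertices(rest_pts, bone_mats, skin_weights):
    """Linear blend skinning.

    rest_pts     : (V, 3) rest-pose vertex positions
    bone_mats    : (B, 4, 4) current bone transforms (rest-to-posed)
    skin_weights : (V, B) per-vertex bone weights, rows summing to 1
    Returns (V, 3) skinned positions.
    """
    V = rest_pts.shape[0]
    rest_h = np.hstack([rest_pts, np.ones((V, 1))])          # homogeneous (V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_mats, rest_h)   # each bone's transform of every vertex
    blended = np.einsum('vb,bvi->vi', skin_weights, per_bone)
    return blended[:, :3]

def blend_with_simulation(skinned, simulated, alpha):
    """Per-vertex blend of LBS and simulated positions.

    alpha : (V, 1) weights in [0, 1]; 1 = follow skinning (shape-preserving),
            0 = follow the physical simulation (free cloth motion).
    """
    return alpha * skinned + (1.0 - alpha) * simulated
```

In a setup like this, vertices near tight-fitting regions would typically use a high `alpha` (cloth follows the body), while loose regions such as a skirt hem would use a low `alpha` (cloth follows the simulation); how the paper actually computes the blend is not specified in the abstract.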
