Motion Synthesis
89 papers with code • 9 benchmarks • 13 datasets
Latest papers with no code
OAKINK2: A Dataset of Bimanual Hands-Object Manipulation in Complex Task Completion
Based on the 3-level abstraction of OAKINK2, we explore a task-oriented framework for Complex Task Completion (CTC).
Contact-aware Human Motion Generation from Textual Descriptions
This paper addresses the problem of generating 3D interactive human motion from text.
Generative Motion Stylization within Canonical Motion Space
Our key insight is to embed motion style into a cross-modality latent space that perceives cross-structure skeleton topologies, allowing motion stylization within a canonical motion space.
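To make the idea of stylization in a shared latent space concrete, here is a minimal toy sketch (not this paper's model): a "content" code is taken from one motion, a "style" code from another, and a decoder recombines them. The `encode`/`decode` functions and the statistics they use are entirely hypothetical stand-ins for learned networks.

```python
import numpy as np

# Hypothetical sketch of stylization via a shared latent space:
# keep the content code of one motion, swap in the style code of another.
rng = np.random.default_rng(1)

def encode(motion):
    # Placeholder encoder: mean pose as "content", per-joint
    # deviation as "style". A real model would learn both codes.
    return motion.mean(axis=0), motion.std(axis=0)

def decode(content, style, n_frames):
    # Placeholder decoder: re-synthesize frames around the content
    # pose with the reference style's variability.
    noise = rng.standard_normal((n_frames, content.shape[0]))
    return content + style * noise

source = rng.standard_normal((40, 66))           # motion whose content we keep
reference = 2.0 * rng.standard_normal((40, 66))  # motion whose style we borrow

content, _ = encode(source)
_, style = encode(reference)
stylized = decode(content, style, n_frames=40)
print(stylized.shape)  # (40, 66)
```

The same swap works across skeletons if both are mapped into one canonical latent space, which is the point the abstract is making.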
Scaling Up Dynamic Human-Scene Interaction Modeling
Confronting the challenges of data scarcity and advanced motion synthesis in human-scene interaction modeling, we introduce the TRUMANS dataset alongside a novel HSI motion synthesis method.
DEMOS: Dynamic Environment Motion Synthesis in 3D Scenes via Local Spherical-BEV Perception
To handle this problem, we propose the first Dynamic Environment MOtion Synthesis framework (DEMOS) to predict future motion instantly according to the current scene, and use it to dynamically update the latent motion for final motion synthesis.
Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation
To generate composite animations from a multi-track timeline, we propose a new test-time denoising method.
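As a rough illustration of timeline composition (this is a generic sketch, not the paper's test-time denoising method), per-track predictions placed on a shared timeline can be blended by averaging wherever tracks overlap; all shapes and values below are made up.

```python
import numpy as np

# Illustrative composition of a motion from a multi-track timeline:
# each track covers a frame interval; overlaps are averaged.
n_frames, dim = 120, 66
tracks = [
    # (start_frame, end_frame, prediction) -- predictions are placeholders.
    (0, 80, np.full((80, dim), 1.0)),
    (60, 120, np.full((60, dim), 3.0)),
]

acc = np.zeros((n_frames, dim))
weight = np.zeros((n_frames, 1))
for start, end, pred in tracks:
    acc[start:end] += pred
    weight[start:end] += 1.0

# Average in overlapping regions, pass single-track regions through.
composed = acc / np.maximum(weight, 1.0)
print(composed[70, 0])  # frames 60-79 overlap both tracks: (1.0 + 3.0) / 2
```

A real denoising-based method would apply this kind of merge inside each diffusion step rather than once at the end.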
EgoGen: An Egocentric Synthetic Data Generator
To address this challenge, we introduce EgoGen, a new synthetic data generator that can produce accurate and rich ground-truth training data for egocentric perception tasks.
MACS: Mass Conditioned 3D Hand and Object Motion Synthesis
To improve the naturalness of synthesized 3D hand-object motions, this work proposes MACS, the first MAss Conditioned 3D hand and object motion Synthesis approach.
Ponymation: Learning 3D Animal Motions from Unlabeled Online Videos
We introduce Ponymation, a new method for learning a generative model of articulated 3D animal motions from raw, unlabeled online videos.
Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model
In addition, we introduce a multi-denoiser framework for the advanced diffusion model to ease the learning of the high-dimensional model and fully explore its generative potential.
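To illustrate the basic-to-advanced idea, here is a toy two-stage denoising loop (not the paper's implementation): a standard DDPM-style update runs over a motion tensor, with a coarse "basic" denoiser handling early (noisy) steps and a second "advanced" denoiser handling late steps. Both denoisers, the schedule, and all shapes are hypothetical placeholders.

```python
import numpy as np

# Toy two-stage (basic-to-advanced) diffusion sampling over a motion
# tensor of shape (frames, joints * 3). Denoisers are placeholders.
rng = np.random.default_rng(0)
T_STEPS = 50
betas = np.linspace(1e-4, 0.02, T_STEPS)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def basic_denoiser(x, t):
    # Stand-in for a coarse text-conditioned noise predictor.
    return 0.1 * x

def advanced_denoiser(x, t):
    # Stand-in for a fine-grained noise predictor refining details.
    return 0.05 * x

def ddpm_step(x, t, eps_pred):
    # Standard DDPM posterior-mean update given predicted noise eps_pred.
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (x - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:
        mean += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return mean

x = rng.standard_normal((60, 22 * 3))  # 60 frames, 22 joints in 3D
for t in reversed(range(T_STEPS)):
    # Basic denoiser on the noisy half, advanced denoiser on the clean half.
    denoiser = basic_denoiser if t >= T_STEPS // 2 else advanced_denoiser
    x = ddpm_step(x, t, denoiser(x, t))

print(x.shape)  # (60, 66)
```

A multi-denoiser setup splits the step range among several such networks so that no single model has to cover the whole noise spectrum of the high-dimensional motion space.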