Motion Synthesis

90 papers with code • 9 benchmarks • 13 datasets

Latest papers with no code

Motion Flow Matching for Human Motion Synthesis and Editing

no code yet • 14 Dec 2023

In this paper, we propose Motion Flow Matching, a novel generative model for human motion generation that offers efficient sampling and is effective in motion editing applications.
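Flow matching is a general technique for training continuous-time generative models. As a rough illustration only (this is not the paper's architecture, and all names below are made up), the standard conditional flow-matching objective regresses a velocity field onto the straight-line path between noise and data:

```python
# Minimal sketch of the conditional flow-matching training objective,
# assuming the common linear interpolation path x_t = (1 - t) x0 + t x1.
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, x1, x0, t):
    """Regress the model's predicted velocity onto the straight-line target.

    Path:   x_t = (1 - t) * x0 + t * x1
    Target: dx_t/dt = x1 - x0
    """
    x_t = (1.0 - t)[:, None] * x0 + t[:, None] * x1
    target_v = x1 - x0
    pred_v = model(x_t, t)
    return np.mean((pred_v - target_v) ** 2)

# Toy stand-in "model": predicts a constant velocity regardless of input.
def constant_model(x_t, t):
    return np.ones_like(x_t)

x1 = rng.normal(size=(8, 3))   # stand-in for data samples (e.g. motion frames)
x0 = rng.normal(size=(8, 3))   # Gaussian noise samples
t = rng.uniform(size=8)        # per-sample times in [0, 1]

loss = flow_matching_loss(constant_model, x1, x0, t)
```

Sampling then amounts to integrating the learned velocity field from t = 0 to t = 1, which typically needs far fewer steps than diffusion-style denoising; that is the source of the efficiency claim for flow-matching models in general.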

Digital Life Project: Autonomous 3D Characters with Social Intelligence

no code yet • 7 Dec 2023

In this work, we present Digital Life Project, a framework that uses language as the universal medium for building autonomous 3D characters that can engage in social interactions and express themselves through articulated body motions, thereby simulating life in a digital environment.

EMDM: Efficient Motion Diffusion Model for Fast and High-Quality Motion Generation

no code yet • 4 Dec 2023

We introduce Efficient Motion Diffusion Model (EMDM) for fast and high-quality human motion generation.

AAMDM: Accelerated Auto-regressive Motion Diffusion Model

no code yet • 2 Dec 2023

This paper introduces the Accelerated Auto-regressive Motion Diffusion Model (AAMDM), a novel motion synthesis framework designed to achieve quality, diversity, and efficiency simultaneously.

OmniMotionGPT: Animal Motion Generation with Limited Data

no code yet • 30 Nov 2023

Our paper aims to generate diverse and realistic animal motion sequences from textual descriptions without relying on a large-scale animal text-motion dataset.

Probabilistic Speech-Driven 3D Facial Motion Synthesis: New Benchmarks, Methods, and Applications

no code yet • 30 Nov 2023

While existing models can achieve high-quality lip articulation for speakers in the training set, they are unable to capture the full and diverse distribution of 3D facial motions that accompany speech in the real world.

ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions

no code yet • 28 Nov 2023

Current approaches for 3D human motion synthesis generate high-quality animations of digital humans performing a wide variety of actions and gestures.

A Unified Framework for Multimodal, Multi-Part Human Motion Synthesis

no code yet • 28 Nov 2023

The field has made significant progress in synthesizing realistic human motion driven by various modalities.

AvatarGPT: All-in-One Framework for Motion Understanding, Planning, Generation and Beyond

no code yet • 28 Nov 2023

AvatarGPT treats each task as one type of instruction, fine-tuning a shared LLM across all of them.

TLControl: Trajectory and Language Control for Human Motion Synthesis

no code yet • 28 Nov 2023

Controllable human motion synthesis is essential for applications in AR/VR, gaming, movies, and embodied AI.