Search Results for author: Ronghui Li

Found 9 papers, 3 papers with code

Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives

1 code implementation • 15 Mar 2024 • Ronghui Li, Yuxiang Zhang, Yachao Zhang, Hongwen Zhang, Jie Guo, Yan Zhang, Yebin Liu, Xiu Li

In contrast, the second stage is the local diffusion, which generates detailed motion sequences in parallel under the guidance of the dance primitives and choreographic rules.
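
The coarse-to-fine, two-stage scheme described above can be illustrated with a minimal Python sketch, assuming trivial stand-in denoisers and an arbitrary 72-D pose representation; generate_dance, global_denoiser, local_denoiser, and POSE_DIM are hypothetical names for illustration, not the authors' implementation.

```python
# Minimal sketch of a coarse-to-fine, two-stage diffusion pipeline; NOT the
# authors' code. The denoisers are trivial stand-ins for learned models.
import numpy as np

POSE_DIM = 72  # assumed pose dimensionality, for illustration only

def global_denoiser(x, music, t):
    # stand-in for a learned global diffusion model over sparse primitives
    return 0.95 * x

def local_denoiser(x, prim_a, prim_b, music, t):
    # stand-in: pull the segment toward its two boundary primitives
    blend = np.linspace(0.0, 1.0, len(x))[:, None]
    return 0.9 * x + 0.1 * ((1 - blend) * prim_a + blend * prim_b)

def generate_dance(music, n_primitives=8, seg_len=64, steps=50):
    # Stage 1 (coarse): denoise a sparse set of characteristic dance primitives.
    prims = np.random.randn(n_primitives, POSE_DIM)
    for t in reversed(range(steps)):
        prims = global_denoiser(prims, music, t)
    # Stage 2 (fine): each segment between neighbouring primitives is denoised
    # independently, so the segments can be generated in parallel.
    segments = []
    for a, b in zip(prims[:-1], prims[1:]):
        x = np.random.randn(seg_len, POSE_DIM)
        for t in reversed(range(steps)):
            x = local_denoiser(x, a, b, music, t)
        segments.append(x)
    return np.concatenate(segments, axis=0)  # stitched long dance sequence
```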

Motion Synthesis

MambaTalk: Efficient Holistic Gesture Synthesis with Selective State Space Models

no code implementations • 14 Mar 2024 • Zunnan Xu, Yukang Lin, Haonan Han, Sicheng Yang, Ronghui Li, Yachao Zhang, Xiu Li

Gesture synthesis is a vital realm of human-computer interaction, with wide-ranging applications in fields such as film, robotics, and virtual reality.

Harmonious Group Choreography with Trajectory-Controllable Diffusion

no code implementations • 10 Mar 2024 • Yuqin Dai, Wanlu Zhu, Ronghui Li, Zeping Ren, Xiangzheng Zhou, Xiu Li, Jun Li, Jian Yang

Specifically, to tackle dancer collisions, we introduce a Dance-Beat Navigator capable of generating trajectories for multiple dancers based on the music, complemented by a Distance-Consistency loss to maintain appropriate spacing among trajectories within a reasonable threshold.
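
As a rough illustration of how a distance-consistency term could keep multi-dancer trajectories within a spacing band, here is a hedged NumPy sketch; the function name distance_consistency_loss, the thresholds d_min/d_max, and the squared-hinge form are assumptions, not the paper's definition.

```python
# Hedged sketch of a distance-consistency-style loss: penalize pairwise dancer
# distances that leave an assumed [d_min, d_max] band (collisions or drifting apart).
import numpy as np

def distance_consistency_loss(trajs, d_min=0.5, d_max=3.0):
    """trajs: array of shape (num_dancers, num_frames, 2) with xy positions."""
    n = trajs.shape[0]
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(trajs[i] - trajs[j], axis=-1)  # per-frame distance
            too_close = np.clip(d_min - d, 0.0, None)         # collision penalty
            too_far = np.clip(d - d_max, 0.0, None)           # scattering penalty
            loss += np.mean(too_close ** 2 + too_far ** 2)
            pairs += 1
    return loss / max(pairs, 1)

# Example: four dancers, 120 frames of random 2-D positions.
print(distance_consistency_loss(np.random.randn(4, 120, 2)))
```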

CreativeSynth: Creative Blending and Synthesis of Visual Arts based on Multimodal Diffusion

1 code implementation • 25 Jan 2024 • Nisha Huang, WeiMing Dong, Yuxin Zhang, Fan Tang, Ronghui Li, Chongyang Ma, Xiu Li, Changsheng Xu

Large-scale text-to-image generative models have made impressive strides, showcasing their ability to synthesize a vast array of high-quality images.

Image Generation, Style Transfer

Exploring Multi-Modal Control in Music-Driven Dance Generation

no code implementations • 1 Jan 2024 • Ronghui Li, Yuqin Dai, Yachao Zhang, Jun Li, Jian Yang, Jie Guo, Xiu Li

Existing music-driven 3D dance generation methods mainly concentrate on high-quality dance generation, but lack sufficient control during the generation process.

Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control

no code implementations • 26 Dec 2023 • Zunnan Xu, Yachao Zhang, Sicheng Yang, Ronghui Li, Xiu Li

We introduce a novel method that separates priors from speech and employs multimodal priors as constraints for generating gestures.

Gesture Generation

FineDance: A Fine-grained Choreography Dataset for 3D Full Body Dance Generation

1 code implementation • ICCV 2023 • Ronghui Li, Junfan Zhao, Yachao Zhang, Mingyang Su, Zeping Ren, Han Zhang, Yansong Tang, Xiu Li

To address these problems, we propose FineDance, which contains 14.6 hours of music-dance paired data, with fine-grained hand motions, fine-grained genres (22 dance genres), and accurate posture.

Motion Synthesis, Retrieval

Multi-View Spatial-Temporal Network for Continuous Sign Language Recognition

no code implementations • 19 Apr 2022 • Ronghui Li, Lu Meng

The first part is a Multi-View Spatial-Temporal Feature Extractor Network (MSTN), which directly extracts spatial-temporal features from RGB and skeleton data; the second is a Transformer-based sign language encoder network, which learns long-term dependencies; the third is a Connectionist Temporal Classification (CTC) decoder network, which predicts the overall meaning of the continuous sign language.
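
The three-part pipeline described above (spatial-temporal extractor, Transformer encoder, CTC decoding) can be sketched in PyTorch as follows; CSLRSketch, the Linear stand-in for MSTN, and all layer sizes are assumptions for illustration, not the paper's architecture.

```python
# Rough PyTorch sketch of an extractor -> Transformer encoder -> CTC pipeline.
import torch
import torch.nn as nn

class CSLRSketch(nn.Module):
    def __init__(self, feat_dim=256, d_model=256, vocab_size=1000):
        super().__init__()
        # stand-in for the multi-view spatial-temporal extractor (MSTN),
        # which would fuse RGB and skeleton streams
        self.extractor = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.classifier = nn.Linear(d_model, vocab_size)  # gloss logits incl. blank

    def forward(self, feats):                      # feats: (batch, time, feat_dim)
        x = self.encoder(self.extractor(feats))    # model long-term dependencies
        return self.classifier(x).log_softmax(-1)  # (batch, time, vocab)

model = CSLRSketch()
log_probs = model(torch.randn(2, 100, 256)).permute(1, 0, 2)  # (T, B, C) for CTC
targets = torch.randint(1, 1000, (2, 20))                     # dummy gloss labels
ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 100),
           target_lengths=torch.full((2,), 20))
```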

Sign Language Recognition