Search Results for author: Changpeng Yang

Found 5 papers, 2 papers with code

Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation

no code implementations • 12 Mar 2024 • Likun Li, Haoqi Zeng, Changpeng Yang, Haozhe Jia, Di Xu

The objective of personalization and stylization in text-to-image generation is to instruct a pre-trained diffusion model to analyze new concepts introduced by users and incorporate them into expected styles.

Text-to-Image Generation

TransFace: Unit-Based Audio-Visual Speech Synthesizer for Talking Head Translation

no code implementations • 23 Dec 2023 • Xize Cheng, Rongjie Huang, Linjun Li, Tao Jin, Zehan Wang, Aoxiong Yin, Minglei Li, Xinyu Duan, Changpeng Yang, Zhou Zhao

However, talking head translation, converting audio-visual speech (i.e., talking head video) from one language into another, still confronts several challenges compared to audio speech: (1) Existing methods invariably rely on cascading, synthesizing via both audio and text, resulting in delays and cascading errors.

Self-Supervised Learning • Speech-to-Speech Translation +1

DisControlFace: Disentangled Control for Personalized Facial Image Editing

no code implementations • 11 Dec 2023 • Haozhe Jia, Yan Li, Hengfei Cui, Di Xu, Changpeng Yang, Yuwang Wang, Tao Yu

Our DisControlNet can perform robust editing on any facial image after training on large-scale 2D in-the-wild portraits, and it also supports low-cost fine-tuning with a few additional images to further learn diverse personalized priors of a specific person.
