Search Results for author: Chengfei Lv

Found 13 papers, 6 papers with code

GaussianTalker: Speaker-specific Talking Head Synthesis via 3D Gaussian Splatting

no code implementations · 22 Apr 2024 · Hongyun Yu, Zhan Qu, Qihang Yu, Jianchuan Chen, Zhonghua Jiang, Zhiwen Chen, Shengyu Zhang, Jimin Xu, Fei Wu, Chengfei Lv, Gang Yu

In this paper, we propose GaussianTalker, a novel method for audio-driven talking head synthesis based on 3D Gaussian Splatting.
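
For context on the entry above: a 3D Gaussian Splatting scene is typically parameterized per Gaussian by a center, per-axis scales plus a rotation (which together factor the covariance), and an opacity, with view-dependent color on top. The sketch below is illustrative only, not GaussianTalker's implementation; all names are assumptions.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class GaussianCloud:
        """Minimal 3DGS scene container (illustrative, not GaussianTalker's code).
        View-dependent SH color coefficients are omitted for brevity."""
        means: np.ndarray      # (N, 3) Gaussian centers
        scales: np.ndarray     # (N, 3) per-axis standard deviations
        quats: np.ndarray      # (N, 4) unit quaternions (w, x, y, z)
        opacities: np.ndarray  # (N,) values in [0, 1]

        def covariances(self) -> np.ndarray:
            """Sigma = R S S^T R^T, the standard 3DGS covariance factorization."""
            w, x, y, z = self.quats.T
            R = np.stack([
                np.stack([1 - 2*(y*y + z*z), 2*(x*y - w*z), 2*(x*z + w*y)], -1),
                np.stack([2*(x*y + w*z), 1 - 2*(x*x + z*z), 2*(y*z - w*x)], -1),
                np.stack([2*(x*z - w*y), 2*(y*z + w*x), 1 - 2*(x*x + y*y)], -1),
            ], axis=1)                                  # (N, 3, 3) rotations
            S = self.scales[:, :, None] * np.eye(3)     # (N, 3, 3) diagonal scales
            RS = R @ S
            return RS @ RS.transpose(0, 2, 1)           # (N, 3, 3) covariances

    cloud = GaussianCloud(
        means=np.zeros((2, 3)),
        scales=np.ones((2, 3)),
        quats=np.tile([1.0, 0.0, 0.0, 0.0], (2, 1)),    # identity rotations
        opacities=np.full(2, 0.5),
    )
    print(cloud.covariances()[0])  # identity covariance for unit scales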

AUTOACT: Automatic Agent Learning from Scratch via Self-Planning

1 code implementation · 10 Jan 2024 · Shuofei Qiao, Ningyu Zhang, Runnan Fang, Yujie Luo, Wangchunshu Zhou, Yuchen Eleanor Jiang, Chengfei Lv, Huajun Chen

Further analysis demonstrates the effectiveness of the division-of-labor strategy, with the trajectories generated by AutoAct significantly outperforming those of other methods in quality.

Question Answering

FactCHD: Benchmarking Fact-Conflicting Hallucination Detection

1 code implementation · 18 Oct 2023 · Xiang Chen, Duanzheng Song, Honghao Gui, Chenxi Wang, Ningyu Zhang, Jiang Yong, Fei Huang, Chengfei Lv, Dan Zhang, Huajun Chen

Despite their impressive generative capabilities, LLMs are hindered by fact-conflicting hallucinations in real-world applications.

Benchmarking · Hallucination

Making Language Models Better Tool Learners with Execution Feedback

1 code implementation · 22 May 2023 · Shuofei Qiao, Honghao Gui, Chengfei Lv, Qianghuai Jia, Huajun Chen, Ningyu Zhang

To meet this need, we propose Tool leaRning wIth exeCution fEedback (TRICE), a two-stage end-to-end framework that enables the model to continually learn through feedback derived from tool execution, thereby learning when and how to use tools effectively.

Language Modelling · Large Language Model +1
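
To make the execution-feedback idea concrete, here is a schematic stand-in for the kind of reward signal TRICE's second stage learns from (toy code, not the paper's implementation; the tool registry and reward values are assumptions):

    def calculator(expr: str) -> str:
        # toy tool: arithmetic evaluation with builtins disabled
        return str(eval(expr, {"__builtins__": {}}, {}))

    TOOLS = {"calculator": calculator}

    def execution_reward(tool_call: str, gold_answer: str) -> float:
        """Score a generated tool call by actually executing it.

        Correct executions are rewarded and malformed or wrong ones
        penalized, so a model trained on this signal learns when and
        how to call tools."""
        try:
            name, _, rest = tool_call.partition("(")
            result = TOOLS[name](rest.rstrip(")"))
            return 1.0 if result == gold_answer else -1.0
        except Exception:
            return -1.0

    print(execution_reward("calculator(23*7)", "161"))   # 1.0
    print(execution_reward("search(weather)", "sunny"))  # -1.0 (unknown tool)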

Walle: An End-to-End, General-Purpose, and Large-Scale Production System for Device-Cloud Collaborative Machine Learning

no code implementations · 30 May 2022 · Chengfei Lv, Chaoyue Niu, Renjie Gu, Xiaotang Jiang, Zhaode Wang, Bin Liu, Ziqi Wu, Qiulin Yao, Congyu Huang, Panos Huang, Tao Huang, Hui Shu, Jinde Song, Bin Zou, Peng Lan, Guohuan Xu, Fei Wu, Shaojie Tang, Fan Wu, Guihai Chen

Walle consists of a deployment platform, which distributes ML tasks to billion-scale devices in a timely manner; a data pipeline, which efficiently prepares task input; and a compute container, which provides a cross-platform, high-performance execution environment and facilitates daily task iteration.
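
A hypothetical task descriptor can make the three-part split concrete. The field names below are illustrative assumptions, not Walle's actual schema (the paper's compute container does run script-plus-model tasks, e.g. on the MNN engine):

    # Hypothetical descriptor for one device-cloud ML task, mirroring
    # Walle's deployment / data pipeline / compute container split.
    # All field names are assumptions for illustration only.
    task = {
        "deployment": {                 # pushed by the deployment platform
            "task_id": "ctr-rerank-v42",
            "target_devices": "all",    # billion-scale rollout
            "rollout": {"canary_pct": 1, "full_after_hours": 24},
        },
        "data_pipeline": {              # prepares task input on device
            "features": ["click_seq", "dwell_time"],
            "window_s": 300,
        },
        "compute": {                    # runs in the cross-platform container
            "script": "rerank.py",      # task logic shipped as a script
            "model": "rerank.mnn",      # e.g. an MNN model file
            "deadline_ms": 50,
        },
    }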

Federated Submodel Optimization for Hot and Cold Data Features

1 code implementation · 16 Sep 2021 · Yucheng Ding, Chaoyue Niu, Fan Wu, Shaojie Tang, Chengfei Lv, Yanghe Feng, Guihai Chen

We theoretically prove the convergence rate of FedSubAvg by deriving an upper bound under a new metric called the element-wise gradient norm.

Federated Learning
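
FedSubAvg's core departure from plain federated averaging is per-coordinate aggregation: each parameter is averaged only over the clients whose submodels (and thus local data) involve it, so cold features are not diluted by non-participating clients. A toy sketch under that reading, with equal client weights assumed:

    import numpy as np

    def fed_sub_avg(global_w: np.ndarray, updates: list) -> np.ndarray:
        """Toy per-coordinate submodel averaging (simplified sketch).

        updates: list of (index_list, new_values) pairs, one per client;
        each client only touches the coordinates its local data involves.
        """
        delta_sum = np.zeros_like(global_w)
        count = np.zeros_like(global_w)
        for idx, new_vals in updates:
            delta_sum[idx] += new_vals - global_w[idx]
            count[idx] += 1
        mask = count > 0
        # Average each coordinate over the clients that actually updated it.
        global_w[mask] += delta_sum[mask] / count[mask]
        return global_w

    w = fed_sub_avg(np.zeros(5), [([0, 1], np.array([1.0, 1.0])),
                                  ([1, 2], np.array([1.0, 1.0]))])
    print(w)  # [1. 1. 1. 0. 0.]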

Data-Free Evaluation of User Contributions in Federated Learning

no code implementations · 24 Aug 2021 · Hongtao Lv, Zhenzhe Zheng, Tie Luo, Fan Wu, Shaojie Tang, Lifeng Hua, Rongfei Jia, Chengfei Lv

We evaluate the performance of PCA and Fed-PCA using the MNIST dataset and a large industrial product recommendation dataset.

Federated Learning · Product Recommendation

Toward Understanding the Influence of Individual Clients in Federated Learning

no code implementations · 20 Dec 2020 · Yihao Xue, Chaoyue Niu, Zhenzhe Zheng, Shaojie Tang, Chengfei Lv, Fan Wu, Guihai Chen

Federated learning allows mobile clients to jointly train a global model without sending their private data to a central server.

Federated Learning
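
For readers new to the setting, here is a minimal sketch of one round of standard federated averaging, the baseline in which this paper studies client influence (simplified, equal client weights; raw data never leaves the clients):

    import numpy as np

    def fed_avg_round(global_w, client_datas, local_step):
        """One FedAvg round: clients train locally, server averages weights.
        local_step is any function mapping (weights, data) -> new weights."""
        client_ws = [local_step(global_w.copy(), d) for d in client_datas]
        return np.mean(client_ws, axis=0)

    # toy usage: each "client" nudges the weights toward its local mean
    clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
    step = lambda w, d: w + 0.5 * (d - w)
    print(fed_avg_round(np.zeros(2), clients, step))  # [1.  1.5]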

Secure Federated Submodel Learning

1 code implementation · 6 Nov 2019 · Chaoyue Niu, Fan Wu, Shaojie Tang, Lifeng Hua, Rongfei Jia, Chengfei Lv, Zhihua Wu, Guihai Chen

Nevertheless, the "position" of a client's truly required submodel corresponds to her private data, and its disclosure to the cloud server during interactions inevitably breaks the tenet of federated learning.

Federated Learning · Position
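
A toy illustration of why the submodel "position" is sensitive (not the paper's protocol): in recommendation, the embedding rows a client requests are exactly the item IDs in her private interaction history.

    # Toy illustration of the position-leakage problem (assumed example).
    user_history = [17, 42, 9000]               # private: items she interacted with
    requested_rows = sorted(set(user_history))  # submodel index set sent to the cloud
    # A server observing `requested_rows` in the clear recovers the user's
    # history; secure federated submodel learning hides this index set,
    # e.g. via a private set union protocol.
    print(requested_rows)  # [17, 42, 9000]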
