Search Results for author: Junfeng Yao

Found 13 papers, 3 papers with code

CMU-Flownet: Exploring Point Cloud Scene Flow Estimation in Occluded Scenario

no code implementations • 16 Apr 2024 • Jingze Chen, Junfeng Yao, Qiqin Lin, Lei Li

Occlusions hinder point cloud frame alignment in LiDAR data, a challenge inadequately addressed by scene flow models tested mainly on occlusion-free datasets.

Occlusion Estimation · Occlusion Handling · +1

Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal

no code implementations • 2 Mar 2024 • Jianheng Huang, Leyang Cui, Ante Wang, Chengyi Yang, Xinting Liao, Linfeng Song, Junfeng Yao, Jinsong Su

When conducting continual learning from a publicly released LLM checkpoint, the original training data may be unavailable.

Continual Learning · In-Context Learning

TDAG: A Multi-Agent Framework based on Dynamic Task Decomposition and Agent Generation

no code implementations • 15 Feb 2024 • Yaoxiang Wang, Zhiyong Wu, Junfeng Yao, Jinsong Su

The emergence of Large Language Models (LLMs) like ChatGPT has inspired the development of LLM-based agents capable of addressing complex, real-world tasks.

DRSM: Efficient Neural 4D Decomposition for Dynamic Reconstruction in Stationary Monocular Cameras

no code implementations • 1 Feb 2024 • Weixing Xie, Xiao Dong, Yong Yang, Qiqin Lin, Jingze Chen, Junfeng Yao, Xiaohu Guo

With the popularity of monocular videos from video-sharing and live-streaming applications, reconstructing and editing dynamic scenes captured by stationary monocular cameras has become a specialized yet much-anticipated technology.

Dynamic Reconstruction · Neural Rendering

SSFlowNet: Semi-supervised Scene Flow Estimation On Point Clouds With Pseudo Label

no code implementations • 23 Dec 2023 • Jingze Chen, Junfeng Yao, Qiqin Lin, Rongzhou Zhou, Lei Li

This paper introduces SSFlowNet, a semi-supervised approach to scene flow estimation that uses a blend of labeled and unlabeled data, balancing labeling cost against model accuracy.

Pseudo Label · Scene Flow Estimation

Revisiting Non-Autoregressive Translation at Scale

1 code implementation • 25 May 2023 • Zhihao Wang, Longyue Wang, Jinsong Su, Junfeng Yao, Zhaopeng Tu

Experimental results on the large-scale WMT20 En-De benchmark show that an asymmetric architecture (e.g., a bigger encoder paired with a smaller decoder) achieves performance comparable to the scaled model while retaining the decoding-speed advantage of standard NAT models.

Translation

Improving Tree-Structured Decoder Training for Code Generation via Mutual Learning

no code implementations • 31 May 2021 • Binbin Xie, Jinsong Su, Yubin Ge, Xiang Li, Jianwei Cui, Junfeng Yao, Bin Wang

However, such a decoder exploits only the preceding actions from a preorder traversal, which are insufficient to ensure correct action predictions.

Code Generation
