Search Results for author: Mengfei Li

Found 4 papers, 0 papers with code

Intensive Vision-guided Network for Radiology Report Generation

no code implementations · 6 Feb 2024 · Fudan Zheng, Mengfei Li, Ying Wang, Weijiang Yu, Ruixuan Wang, Zhiguang Chen, Nong Xiao, Yutong Lu

Given the above limitation in feature extraction, we propose a Globally-intensive Attention (GIA) module in the medical image encoder to simulate and integrate multi-view vision perception.

RustNeRF: Robust Neural Radiance Field with Low-Quality Images

no code implementations · 6 Jan 2024 · Mengfei Li, Ming Lu, Xiaofang Li, Shanghang Zhang

First, existing methods assume enough high-quality images are available for training the NeRF model, ignoring real-world image degradation.

Novel View Synthesis

Weakly-Supervised Emotion Transition Learning for Diverse 3D Co-speech Gesture Generation

no code implementations · 29 Nov 2023 · Xingqun Qi, Jiahao Pan, Peng Li, Ruibin Yuan, Xiaowei Chi, Mengfei Li, Wenhan Luo, Wei Xue, Shanghang Zhang, Qifeng Liu, Yike Guo

In addition, the lack of large-scale datasets containing emotional transition speech and corresponding 3D human gestures further limits progress on this task.

Audio Inpainting · Gesture Generation

Learning from Inside: Self-driven Siamese Sampling and Reasoning for Video Question Answering

no code implementations · NeurIPS 2021 · Weijiang Yu, Haoteng Zheng, Mengfei Li, Lei Ji, Lijun Wu, Nong Xiao, Nan Duan

To incorporate the interdependent knowledge between contextual clips into the network inference, we propose a Siamese Sampling and Reasoning (SiaSamRea) approach, which consists of a siamese sampling mechanism that generates sparse and similar clips (i.e., siamese clips) from the same video, and a novel reasoning strategy that integrates the interdependent knowledge between contextual clips into the network.

Multimodal Reasoning · Question Answering · +1
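
The siamese sampling idea described above can be illustrated with a small sketch: two sparse clips are drawn from the same video so that they overlap heavily but are not identical. This is only a minimal illustration; the clip length and jitter parameters below are assumed for the example and are not taken from the paper, and the SiaSamRea reasoning strategy itself is not reproduced here.

# Minimal sketch of sampling two sparse, similar ("siamese") clips from one video.
# clip_len and jitter are hypothetical parameters chosen for illustration.
import random

def sample_siamese_clips(num_frames, clip_len=8, jitter=2, seed=None):
    """Return two lists of frame indices sampled sparsely from the same video.

    Both clips share the same evenly spaced anchor frames (so they are similar),
    but each applies a small independent jitter (so they are not identical).
    """
    rng = random.Random(seed)
    stride = max(1, num_frames // clip_len)
    anchors = [i * stride for i in range(clip_len)]

    def jittered():
        return [
            min(max(a + rng.randint(-jitter, jitter), 0), num_frames - 1)
            for a in anchors
        ]

    return jittered(), jittered()

if __name__ == "__main__":
    clip_a, clip_b = sample_siamese_clips(num_frames=300, seed=0)
    print(clip_a)  # e.g. [1, 38, 74, 112, ...]
    print(clip_b)  # similar indices with small offsets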
