no code implementations • 7 Apr 2024 • Yiqun Duan, Qiang Zhang, Renjing Xu
The use of Large Language Models (LLMs) in reinforcement learning, particularly as planners, has attracted significant attention in recent scholarly literature.
2 code implementations • 4 Mar 2024 • Yiqian Yang, Yiqun Duan, Qiang Zhang, Renjing Xu, Hui Xiong
In this paper, we explore the brain-to-text translation of MEG signals in a speech-decoding formulation.
1 code implementation • 1 Dec 2023 • Xianda Guo, Juntao Lu, Chenming Zhang, Yiqi Wang, Yiqun Duan, Tian Yang, Zheng Zhu, Long Chen
Based on OpenStereo, we conducted experiments that match or surpass the performance metrics reported in the original papers.
no code implementations • 21 Sep 2023 • Jinzhao Zhou, Yiqun Duan, Yu-Cheng Chang, Yu-Kai Wang, Chin-Teng Lin
The proposed BELT method is a generic and efficient framework that bootstraps EEG representation learning using off-the-shelf large-scale pretrained language models (LMs).
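Bootstrapping a brain-signal representation against a pretrained LM typically means aligning the two embedding spaces with a contrastive objective. The sketch below is a generic symmetric InfoNCE loss between (toy) EEG embeddings and LM text embeddings; it illustrates the alignment idea under that assumption and is not BELT's exact objective or architecture.

```python
import numpy as np

def info_nce(eeg_emb, text_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss aligning EEG embeddings with
    pretrained-LM text embeddings. Matched rows of the two matrices are
    treated as positive pairs, all other rows as negatives."""
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = eeg @ txt.T / temperature          # cosine similarities, scaled
    labels = np.arange(len(logits))             # i-th EEG matches i-th text

    def ce(l):
        # numerically stable cross-entropy with the diagonal as targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average EEG->text and text->EEG directions
    return 0.5 * (ce(logits) + ce(logits.T))

rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 16))
txt = eeg.copy()                       # perfectly aligned toy pairs
aligned_loss = info_nce(eeg, txt)
shuffled_loss = info_nce(eeg, txt[::-1].copy())  # deliberately mismatched
```

With matched pairs the loss is near zero; shuffling the text side raises it, which is the signal that drives the EEG encoder toward the LM's space.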
2 code implementations • 12 Apr 2023 • Yi Li, Hualiang Wang, Yiqun Duan, Xiaomeng Li
Contrastive Language-Image Pre-training (CLIP) is a powerful large-scale multimodal vision model that has demonstrated significant benefits for downstream tasks, including many zero-shot learning and text-guided vision tasks.
Ranked #2 on Open Vocabulary Semantic Segmentation on COCO-Stuff-171 (mIoU metric)
1 code implementation • 9 Mar 2023 • Yiqun Duan, Xianda Guo, Zheng Zhu
We propose DiffusionDepth, a new approach that reformulates monocular depth estimation as a denoising diffusion process.
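Framing depth estimation as denoising diffusion means the clean depth map is progressively noised in a forward process, and a learned model iteratively denoises random noise back into depth, conditioned on image features. The sketch below shows the standard DDPM-style forward process and its inversion given a noise prediction; the "oracle" noise here stands in for the learned denoiser and the schedule is illustrative, not the paper's configuration.

```python
import numpy as np

def forward_noise(depth, t, betas, noise):
    """DDPM-style forward process: produce noisy depth x_t from clean x_0."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    return np.sqrt(alpha_bar) * depth + np.sqrt(1.0 - alpha_bar) * noise

def predict_x0(x_t, t, betas, pred_noise):
    """Invert the forward process given a noise prediction; during inference
    the model's predicted noise replaces the oracle used below."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    return (x_t - np.sqrt(1.0 - alpha_bar) * pred_noise) / np.sqrt(alpha_bar)

rng = np.random.default_rng(0)
depth = rng.random((8, 8))            # toy "ground-truth" depth map
betas = np.linspace(1e-4, 0.02, 50)   # short noise schedule for illustration
noise = rng.standard_normal(depth.shape)

x_t = forward_noise(depth, 49, betas, noise)          # fully noised depth
recovered = predict_x0(x_t, 49, betas, noise)         # oracle -> exact x_0
```

With the true noise, the inversion recovers the depth map exactly; training teaches the network to approximate that noise from the image, so iterating the step refines random noise into a depth prediction.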
no code implementations • 19 Dec 2022 • Jinzhao Zhou, Yiqun Duan, Zhihong Chen, Yu-Cheng Chang, Chin-Teng Lin
Making sense of multiple modalities can yield a more comprehensive description of real-world phenomena.
1 code implementation • 1 Oct 2022 • Yiqun Duan, Zhen Wang, Yi Li, Jianhang Tang, Yu-Kai Wang, Chin-Teng Lin
Recently, various neural network approaches have been proposed to improve the accuracy of EEG signal recognition.
1 code implementation • 15 Sep 2022 • Yi Li, Hualiang Wang, Yiqun Duan, Hang Xu, Xiaomeng Li
To address this problem, we propose Explainable Contrastive Language-Image Pre-training (ECLIP), which corrects the explainability via Masked Max Pooling.
no code implementations • CVPR 2022 • Zhen Wang, Liu Liu, Yiqun Duan, Yajing Kong, DaCheng Tao
Continual learning methods aim to train a neural network from sequential data with streaming labels while alleviating catastrophic forgetting.
1 code implementation • 14 Dec 2021 • Yi Li, Yiqun Duan, Zhanghui Kuang, Yimin Chen, Wayne Zhang, Xiaomeng Li
We therefore aim to improve WSSS from the perspective of noise mitigation.
Ranked #23 on Weakly-Supervised Semantic Segmentation on COCO 2014 val
no code implementations • 7 Jun 2021 • Haiqin Yang, Xiaoyuan Yao, Yiqun Duan, Jianping Shen, Jie Zhong, Kun Zhang
More specifically, PHED deploys a Conditional Variational AutoEncoder (CVAE) on top of a Transformer to incorporate one aspect of the attributes at each stage.
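A CVAE head on Transformer hidden states conditions the latent on one attribute at a time: the attribute embedding is combined with the hidden state, mapped to a latent Gaussian, and sampled via the reparameterization trick. The sketch below illustrates one such stage under those assumptions; the linear maps and dimensions are toy stand-ins, not PHED's actual layers.

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """z = mu + sigma * eps, keeping the sampling step differentiable
    with respect to mu and logvar during training."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def cvae_stage(hidden, attribute, w_mu, w_logvar, rng):
    """One conditioning stage: concatenate ONE attribute embedding with the
    Transformer hidden state, map to a latent Gaussian, and sample z."""
    cond = np.concatenate([hidden, attribute], axis=-1)
    mu = cond @ w_mu
    logvar = cond @ w_logvar
    return reparameterize(mu, logvar, rng), mu, logvar

rng = np.random.default_rng(0)
d_hidden, d_attr, d_latent = 32, 8, 16
hidden = rng.standard_normal((1, d_hidden))     # Transformer output (toy)
attribute = rng.standard_normal((1, d_attr))    # one attribute embedding
w_mu = rng.standard_normal((d_hidden + d_attr, d_latent)) * 0.1
w_logvar = rng.standard_normal((d_hidden + d_attr, d_latent)) * 0.1
z, mu, logvar = cvae_stage(hidden, attribute, w_mu, w_logvar, rng)
```

Stacking several such stages, each fed a different attribute, is one way to inject attributes one aspect at a time.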
no code implementations • 1 Jan 2021 • Zhen Wang, Liu Liu, Yiqun Duan, DaCheng Tao
In this work, we formulate and study few-shot streaming label learning (FSLL), which models emerging new labels with only a few annotated examples by utilizing the knowledge learned from past labels.
no code implementations • ICLR 2019 • Yiqun Duan
In this paper, we bridge these two by proposing a new network structure with locally dense yet externally sparse connections.
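"Locally dense yet externally sparse" connectivity can be pictured as a layer-to-layer mask that is fully connected inside small blocks but only sparsely wired between blocks. The sketch below builds such a mask; the block sizes and inter-block density are illustrative assumptions, not the paper's exact wiring scheme.

```python
import numpy as np

def block_connectivity(n_blocks, block_size, inter_density, rng):
    """Boolean connectivity mask: dense within diagonal blocks (local),
    random sparse links between blocks (external)."""
    n = n_blocks * block_size
    mask = np.zeros((n, n), dtype=bool)
    for b in range(n_blocks):
        s = b * block_size
        mask[s:s + block_size, s:s + block_size] = True  # dense local block
    inter = rng.random((n, n)) < inter_density           # sparse external links
    inter &= ~mask                                       # keep them off-block
    return mask | inter

rng = np.random.default_rng(0)
mask = block_connectivity(4, 8, 0.05, rng)  # 32x32 mask, 4 dense blocks
```

Multiplying a weight matrix elementwise by such a mask keeps most parameters inside the dense blocks while retaining a few cross-block paths, which is the combination the connection pattern above aims for.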