no code implementations • 3 Apr 2024 • Chuang Li, Shuai Shao, Willian Mikason, Rubing Lin, Yantong Liu
By deploying a computer vision system, our research aims to improve the efficiency and accuracy of vaccine safety assessments.
no code implementations • 22 Feb 2024 • Chuang Li, Rubing Lin, Yantong Liu, Yichen Wei
Cognitive impairments in older adults represent a significant public health concern, necessitating accurate diagnostic and monitoring strategies.
no code implementations • 6 Feb 2024 • Chuang Li, Yichen Wei, Chao Qin, ShiFan Chen, Xiaolong Shao
In response to infections, host cells initiate a variety of cell death pathways, including apoptosis, pyroptosis, necrosis, and lysosomal cell death, which are essential for amplifying immune responses and controlling pathogen dissemination.
no code implementations • 14 Jan 2024 • Hengchang Hu, Qijiong Liu, Chuang Li, Min-Yen Kan
Specifically, we introduce a novel method that enhances the learning of embeddings in sequential recommendation (SR) through the supervision of modality correlations.
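The "supervision of modality correlations" idea can be sketched as an auxiliary alignment loss that pulls an item's learned ID embedding toward its modality feature (e.g. a text or image encoding). Everything below is an illustrative assumption, not the paper's actual objective: the shapes, the InfoNCE-style loss form, and the temperature are all placeholders.

```python
import numpy as np

def l2norm(x):
    # Normalize rows to unit length so dot products become cosine similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def modality_alignment_loss(id_emb, mod_emb, temperature=0.1):
    """InfoNCE-style alignment: each item's ID embedding should be most
    similar to its own modality feature among all items in the batch."""
    sim = l2norm(id_emb) @ l2norm(mod_emb).T / temperature  # (B, B) cosine sims
    log_z = np.log(np.exp(sim).sum(axis=1))                 # log partition per row
    return float(np.mean(log_z - np.diag(sim)))             # -log softmax of diagonal

rng = np.random.default_rng(0)
mod = rng.standard_normal((8, 16))                          # toy modality features
# ID embeddings close to their modality features vs. unrelated random ones
aligned = modality_alignment_loss(mod + 0.01 * rng.standard_normal((8, 16)), mod)
random_ = modality_alignment_loss(rng.standard_normal((8, 16)), mod)
print(aligned < random_)  # aligned embeddings incur a lower loss
```

Minimizing such a loss jointly with the recommendation objective would supervise the ID embeddings with modality information, which is one plausible reading of the abstract's claim.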
1 code implementation • 16 Oct 2023 • Chuang Li, Yan Zhang, Min-Yen Kan, Haizhou Li
Previous zero-shot dialogue state tracking (DST) methods only apply transfer learning, ignoring unlabelled data in the target domain.
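One generic way to exploit unlabelled target-domain data, as this entry hints, is self-training with confidence-filtered pseudo-labels. The sketch below uses a toy nearest-centroid classifier as a stand-in for the DST model; the model, data, and threshold are all assumptions for illustration, not the paper's method.

```python
import numpy as np

class CentroidClassifier:
    """Toy stand-in for a DST model: predicts the nearest class centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict_with_confidence(self, X):
        d = np.linalg.norm(X[:, None] - self.centroids_[None], axis=-1)
        probs = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        return self.classes_[probs.argmax(axis=1)], probs.max(axis=1)

def self_train(model, Xs, ys, Xt, confidence=0.6, rounds=3):
    """Fit on labelled source data, then repeatedly pseudo-label the
    confident target examples and retrain on the combined set."""
    model.fit(Xs, ys)
    for _ in range(rounds):
        pred, conf = model.predict_with_confidence(Xt)
        keep = conf >= confidence                      # keep confident pseudo-labels
        model.fit(np.concatenate([Xs, Xt[keep]]),
                  np.concatenate([ys, pred[keep]]))
    return model

rng = np.random.default_rng(1)
# Source domain: two well-separated clusters; target domain: shifted clusters
Xs = np.concatenate([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
ys = np.array([0] * 20 + [1] * 20)
Xt = np.concatenate([rng.normal(0.2, 0.3, (30, 2)), rng.normal(1.8, 0.3, (30, 2))])
model = self_train(CentroidClassifier(), Xs, ys, Xt)
pred, _ = model.predict_with_confidence(Xt)
print((pred[:30] == 0).mean(), (pred[30:] == 1).mean())
```

The same loop structure applies to a neural DST model: only the `fit` and `predict_with_confidence` implementations change.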
1 code implementation • 14 Sep 2023 • Chuang Li, Hengchang Hu, Yan Zhang, Min-Yen Kan, Haizhou Li
However, not all CRS approaches use human conversations as their source of interaction data; the majority of prior CRS work simulates interactions by exchanging entity-level information.
1 code implementation • ICCV 2023 • 19 Aug 2023 • Song Tang, Chuang Li, Pu Zhang, RongNian Tang
In this paper, we propose a new recurrent cell, SwinLSTM, which integrates Swin Transformer blocks and the simplified LSTM, an extension that replaces the convolutional structure in ConvLSTM with the self-attention mechanism.
Ranked #7 on Video Prediction on Moving MNIST
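The core SwinLSTM idea described above, swapping ConvLSTM's convolutional gate computations for self-attention over spatial tokens, can be sketched minimally. This is a simplified single-head illustration; the actual model uses full Swin Transformer blocks with shifted windows, and all shapes and parameter names here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # x: (tokens, dim) -- one attention layer standing in for a Swin block
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def swinlstm_cell(x, h, c, p):
    """Simplified LSTM update whose gates are computed by attention over
    the concatenated input and hidden tokens, instead of convolutions."""
    z = self_attention(np.concatenate([x, h], axis=-1), p["Wq"], p["Wk"], p["Wv"])
    i = 1 / (1 + np.exp(-(z @ p["Wi"])))   # input gate
    f = 1 / (1 + np.exp(-(z @ p["Wf"])))   # forget gate
    o = 1 / (1 + np.exp(-(z @ p["Wo"])))   # output gate
    g = np.tanh(z @ p["Wg"])               # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy shapes: 16 spatial tokens (e.g. 4x4 patches), 8-dim input and hidden state
rng = np.random.default_rng(0)
T, D = 16, 8
p = {k: rng.standard_normal((2 * D, 2 * D)) * 0.1 for k in ("Wq", "Wk", "Wv")}
p.update({k: rng.standard_normal((2 * D, D)) * 0.1 for k in ("Wi", "Wf", "Wo", "Wg")})
x = rng.standard_normal((T, D))
h, c = np.zeros((T, D)), np.zeros((T, D))
h, c = swinlstm_cell(x, h, c, p)
print(h.shape)  # (16, 8)
```

For video prediction, such a cell would be unrolled over time, with each frame's patch embeddings fed in as `x` and the hidden state decoded back to pixels.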