no code implementations • 17 Apr 2024 • Jiao Ou, Jiayu Wu, Che Liu, Fuzheng Zhang, Di Zhang, Kun Gai
Existing methods take the instructions in real instruction dialogues as a learning target and fine-tune a user simulator to pose instructions.
no code implementations • 24 Mar 2024 • Yinda Chen, Che Liu, Xiaoyu Liu, Rossella Arcucci, Zhiwei Xiong
The burgeoning integration of 3D medical imaging into healthcare has led to a substantial increase in the workload of medical professionals.
no code implementations • 11 Mar 2024 • Che Liu, Zhongwei Wan, Cheng Ouyang, Anand Shah, Wenjia Bai, Rossella Arcucci
Through multimodal learning on ECG records and associated reports, MERL is capable of performing zero-shot ECG classification with text prompts, eliminating the need for training data in downstream tasks.
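As a rough illustration of the zero-shot mechanism described in the abstract (a generic sketch, not MERL's actual implementation), classification can reduce to comparing an ECG embedding against the embeddings of per-class text prompts in the shared multimodal space; all names and toy vectors below are hypothetical:

```python
import numpy as np

def zero_shot_classify(ecg_embedding, prompt_embeddings):
    """Pick the class whose text-prompt embedding is closest to the
    ECG embedding under cosine similarity in a shared space."""
    ecg = ecg_embedding / np.linalg.norm(ecg_embedding)
    prompts = prompt_embeddings / np.linalg.norm(
        prompt_embeddings, axis=1, keepdims=True)
    scores = prompts @ ecg          # one cosine similarity per class prompt
    return int(np.argmax(scores)), scores

# Toy embeddings standing in for encoder outputs
ecg = np.array([0.9, 0.1, 0.0])
prompts = np.array([[1.0, 0.0, 0.0],   # e.g. "ECG of a healthy heart"
                    [0.0, 1.0, 0.0]])  # e.g. "ECG showing atrial fibrillation"
label, _ = zero_shot_classify(ecg, prompts)
```

Because the class set lives entirely in the text prompts, no downstream training data is needed to add or change classes.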
no code implementations • 7 Mar 2024 • Zhongwei Wan, Che Liu, Xin Wang, Chaofan Tao, Hui Shen, Zhenwu Peng, Jie Fu, Rossella Arcucci, Huaxiu Yao, Mi Zhang
The electrocardiogram (ECG) serves as the primary non-invasive diagnostic tool for monitoring cardiac conditions and is crucial in assisting clinicians.
no code implementations • 19 Feb 2024 • Hongjin Su, Shuyang Jiang, Yuhang Lai, Haoyuan Wu, Boao Shi, Che Liu, Qian Liu, Tao Yu
Recently, the retrieval-augmented generation (RAG) paradigm has attracted much attention for its potential to incorporate external knowledge into large language models (LLMs) without further training.
no code implementations • 16 Feb 2024 • Yihong Tang, Jiao Ou, Che Liu, Fuzheng Zhang, Di Zhang, Kun Gai
Experiments on models improved by RoleAD indicate that our adversarial dataset ameliorates this deficiency, with the improvements demonstrating a degree of generalizability in ordinary scenarios.
no code implementations • 2 Jan 2024 • Jiuming Qin, Che Liu, Sibo Cheng, Yike Guo, Rossella Arcucci
Modern healthcare often utilises radiographic images alongside textual reports for diagnostics, encouraging the use of Vision-Language Self-Supervised Learning (VL-SSL) with large pre-trained models to learn versatile medical vision representations.
3 code implementations • 6 Dec 2023 • Zhongwei Wan, Xin Wang, Che Liu, Samiul Alam, Yu Zheng, Jiachen Liu, Zhongnan Qu, Shen Yan, Yi Zhu, Quanlu Zhang, Mosharaf Chowdhury, Mi Zhang
Large Language Models (LLMs) have demonstrated remarkable capabilities in important tasks such as natural language understanding, language generation, and complex reasoning and have the potential to make a substantial impact on our society.
no code implementations • 3 Dec 2023 • Che Liu, Cheng Ouyang, Yinda Chen, César Quilodrán-Casas, Lei Ma, Jie Fu, Yike Guo, Anand Shah, Wenjia Bai, Rossella Arcucci
This underlines T3D's potential in representation learning for 3D medical image analysis.
no code implementations • 3 Dec 2023 • Che Liu, Cheng Ouyang, Sibo Cheng, Anand Shah, Wenjia Bai, Rossella Arcucci
G2D achieves superior performance across 6 medical imaging tasks and 25 diseases, particularly in semantic segmentation, which necessitates fine-grained, semantically-grounded image features.
no code implementations • 3 Nov 2023 • Jiao Ou, Junda Lu, Che Liu, Yihong Tang, Fuzheng Zhang, Di Zhang, Kun Gai
In this paper, we propose DialogBench, a dialogue evaluation benchmark that contains 12 dialogue tasks to probe the capabilities that LLMs should have as human-like dialogue systems.
no code implementations • 3 Nov 2023 • Weiying Lin, Che Liu, Xin Zhang, Zhen Wei, Sizhe Li, Xun Ma
The process begins with histogram equalization to enhance the original image, followed by the use of Mask RCNN to identify the preliminary positions and outlines of oil tanks, the ground, and areas of potential oil contamination.
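The histogram-equalization preprocessing step mentioned above can be sketched in plain NumPy (a generic textbook implementation, not the paper's code; in practice OpenCV's `cv2.equalizeHist` performs the same operation):

```python
import numpy as np

def histogram_equalize(img):
    """Spread a uint8 image's intensity histogram across the full 0-255
    range, boosting contrast before downstream segmentation (e.g. Mask R-CNN)."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)       # ignore empty bins
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255 / (
        cdf_masked.max() - cdf_masked.min())
    lut = np.ma.filled(cdf_scaled, 0).astype(np.uint8)
    return lut[img]                                # remap every pixel

# A low-contrast toy image: only two nearby gray levels
img = np.array([[100, 110], [100, 110]], dtype=np.uint8)
out = histogram_equalize(img)
```

After equalization the two gray levels are pushed to the extremes of the range, which is what makes faint tank and spill boundaries easier for a detector to pick up.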
1 code implementation • 24 Oct 2023 • Sibo Cheng, Che Liu, Yike Guo, Rossella Arcucci
We introduce a novel variational DA scheme, named Voronoi-tessellation Inverse operator for VariatIonal Data assimilation (VIVID), that incorporates a DL inverse operator into the assimilation objective function.
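For orientation, a variational DA objective of this kind typically augments the standard 3D-Var cost with an extra term tying the state to the learned inverse operator's output. The following is a generic sketch under conventional notation, not VIVID's exact formulation: $\mathbf{x}_b$ is the background state, $\mathbf{y}$ the observations, $\mathcal{H}$ the observation operator, $\mathbf{B}$ and $\mathbf{R}$ the background and observation error covariances, and $f_\theta$ (a notational assumption here) the DL inverse operator with weight $\lambda$:

```latex
J(\mathbf{x}) =
  \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\top}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
+ \tfrac{1}{2}\bigl(\mathbf{y}-\mathcal{H}(\mathbf{x})\bigr)^{\top}\mathbf{R}^{-1}\bigl(\mathbf{y}-\mathcal{H}(\mathbf{x})\bigr)
+ \tfrac{\lambda}{2}\,\bigl\|\mathbf{x}-f_\theta(\mathbf{y})\bigr\|^{2}
```

Minimizing $J$ then balances fidelity to the background, to the observations, and to the inverse operator's reconstruction.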
2 code implementations • 16 Oct 2023 • Tianbao Xie, Fan Zhou, Zhoujun Cheng, Peng Shi, Luoxuan Weng, Yitao Liu, Toh Jing Hua, Junning Zhao, Qian Liu, Che Liu, Leo Z. Liu, Yiheng Xu, Hongjin Su, Dongchan Shin, Caiming Xiong, Tao Yu
Language agents show potential in utilizing natural language for varied and intricate tasks in diverse environments, particularly when built upon large language models (LLMs).
no code implementations • 11 Oct 2023 • Che Liu, Sibo Cheng, Miaojing Shi, Anand Shah, Wenjia Bai, Rossella Arcucci
The framework derives multi-level visual features from the chest X-ray (CXR) images and separately aligns these features with the descriptive and the conclusive text encoded in the hierarchical medical report.
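A common way to align visual features with report text, as described above, is a symmetric contrastive (InfoNCE-style) loss over paired image and text features. The sketch below is a generic CLIP-style loss, not the paper's exact multi-level objective; the function name and temperature are assumptions:

```python
import numpy as np

def info_nce(image_feats, text_feats, temperature=0.1):
    """Symmetric contrastive loss: matched image-text pairs (the diagonal
    of the similarity matrix) are pulled together, mismatches pushed apart."""
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # pairwise cosine similarities
    labels = np.arange(len(img))            # matched pairs lie on the diagonal

    def xent(lg):
        # cross-entropy of each row against its diagonal (matched) entry
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image->text and text->image directions
    return (xent(logits) + xent(logits.T)) / 2
```

In a multi-level setup, a loss of this form can be applied separately at each feature level against the corresponding section of the report.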
no code implementations • 11 Oct 2023 • Yuchong Sun, Che Liu, Jinwen Huang, Ruihua Song, Fuzheng Zhang, Di Zhang, Zhongyuan Wang, Kun Gai
In this paper, we address these challenges by introducing Parrot, a highly scalable solution designed to automatically generate high-quality instruction-tuning data, which are then used to enhance the effectiveness of chat models in multi-turn conversations.
1 code implementation • 10 Oct 2023 • Che Liu, Anand Shah, Wenjia Bai, Rossella Arcucci
The advent of text-guided generative models raises a compelling question: Can VLP be implemented solely with synthetic images generated from genuine radiology reports, thereby mitigating the need for extensive pairing and curation of image-text datasets?
no code implementations • 6 Sep 2023 • Che Liu, Zhongwei Wan, Sibo Cheng, Mi Zhang, Rossella Arcucci
In the domain of cardiovascular healthcare, the Electrocardiogram (ECG) serves as a critical, non-invasive diagnostic tool.
1 code implementation • 17 Jul 2023 • Che Liu, Sibo Cheng, Chen Chen, Mengyun Qiao, Weitong Zhang, Anand Shah, Wenjia Bai, Rossella Arcucci
The proposed method, named Medical vision-language pre-training with Frozen language models and Latent spAce Geometry optimization (M-FLAG), leverages a frozen language model for training stability and efficiency and introduces a novel orthogonality loss to harmonize the latent space geometry.
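One standard form of orthogonality loss over latent features, offered here as a generic sketch rather than M-FLAG's exact formulation, penalizes the deviation of the feature Gram matrix from the identity:

```python
import numpy as np

def orthogonality_loss(z):
    """Penalize deviation of latent dimensions from orthonormality:
    || Z^T Z / n - I ||_F^2. Driving this to zero decorrelates feature
    dimensions and regularizes the latent-space geometry."""
    n = z.shape[0]
    gram = z.T @ z / n                       # empirical feature Gram matrix
    eye = np.eye(gram.shape[0])
    return float(np.sum((gram - eye) ** 2))  # squared Frobenius norm
```

A term like this is typically added to the main pre-training objective with a small weight, so the encoder spreads information across dimensions instead of collapsing them.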
no code implementations • 7 Jun 2023 • Yinda Chen, Che Liu, Wei Huang, Sibo Cheng, Rossella Arcucci, Zhiwei Xiong
To address these challenges, we present Generative Text-Guided 3D Vision-Language Pretraining for Unified Medical Image Segmentation (GTGM), a framework that extends VLP to 3D medical images without relying on paired textual descriptions.
1 code implementation • NeurIPS 2023 • Zhongwei Wan, Che Liu, Mi Zhang, Jie Fu, Benyou Wang, Sibo Cheng, Lei Ma, César Quilodrán-Casas, Rossella Arcucci
Med-UniC reaches superior performance across 5 medical image tasks and 10 datasets encompassing over 30 diseases, offering a versatile framework for unifying multi-modal medical data within diverse linguistic communities.
no code implementations • 22 Mar 2023 • Jun Li, Che Liu, Sibo Cheng, Rossella Arcucci, Shenda Hong
In downstream classification tasks, METS achieves around a 10% performance improvement via zero-shot classification without using any annotated data, compared to other supervised and SSL baselines that rely on annotated data.
no code implementations • 18 Mar 2023 • Sibo Cheng, Cesar Quilodran-Casas, Said Ouala, Alban Farchi, Che Liu, Pierre Tandeo, Ronan Fablet, Didier Lucor, Bertrand Iooss, Julien Brajard, Dunhui Xiao, Tijana Janjic, Weiping Ding, Yike Guo, Alberto Carrassi, Marc Bocquet, Rossella Arcucci
Data Assimilation (DA) and Uncertainty quantification (UQ) are extensively used in analysing and reducing error propagation in high-dimensional spatial-temporal dynamics.
1 code implementation • 10 Jan 2023 • Che Liu, Sibo Cheng, Weiping Ding, Rossella Arcucci
The robust performance of SCDNN provides a new perspective to exploit knowledge across deep learning models from time and spectral domains.
1 code implementation • 27 Oct 2022 • Che Liu, Rui Wang, Junfeng Jiang, Yongbin Li, Fei Huang
In this paper, we introduce the task of learning unsupervised dialogue embeddings.
1 code implementation • EMNLP 2021 • Che Liu, Rui Wang, Jinghua Liu, Jian Sun, Fei Huang, Luo Si
Learning sentence embeddings from dialogues has drawn increasing attention due to its low annotation cost and high domain adaptability.
no code implementations • 16 May 2020 • Chao Xiong, Che Liu, Zijun Xu, Junfeng Jiang, Jieping Ye
In this work, we propose a matching network, called sequential sentence matching network (S2M), to use the sentence-level semantic information to address the problem.