1 code implementation • 21 Jan 2024 • Haoqiang Guo, Sendong Zhao, Haochun Wang, Yanrui Du, Bing Qin
The agent accentuates task-relevant features in the molecular representation by understanding the natural language description of the task, just as a tailor customizes clothes for clients.
no code implementations • 7 Dec 2023 • Yanrui Du, Sendong Zhao, Ming Ma, Yuhan Chen, Bing Qin
The jailbreak idea of our method is "Inherent Response Tendency Analysis", which identifies real-world instructions that can inherently induce LLMs to generate affirmative responses. The corresponding jailbreak strategy, "Real-World Instructions-Driven Jailbreak", strategically splices the real-world instructions identified by this analysis around the malicious instruction.
no code implementations • 20 Oct 2023 • Yanrui Du, Sendong Zhao, Haochun Wang, Yuhan Chen, Rui Bai, Zewen Qiang, MuZhen Cai, Bing Qin
Through extensive experiments on five reasoning datasets from the ERASER benchmark, we demonstrate that our framework not only establishes a more reliable link between the generated rationale and the model's decision but also achieves competitive task performance and rationale quality.
1 code implementation • 11 Sep 2023 • Yuhan Chen, Nuwa Xi, Yanrui Du, Haochun Wang, Jianyu Chen, Sendong Zhao, Bing Qin
Furthermore, our method shows a sustained improvement as the volume of pseudo data increases, revealing the great potential of pseudo data in advancing low-resource cross-modal molecule discovery.
1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Zewen Qiang, Zijian Li, Nuwa Xi, Yanrui Du, MuZhen Cai, Haoqiang Guo, Yuhan Chen, Haoming Xu, Bing Qin, Ting Liu
To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation.
1 code implementation • 8 Sep 2023 • Yanrui Du, Sendong Zhao, MuZhen Cai, Ming Ma, Danyang Zhao, Jiawei Cao, Bing Qin
We conduct several experiments to analyze the dual logic ability of LLMs by examining the consistency of the stance in responses to paired questions about the same fact.
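The dual-logic consistency described above can be sketched as follows — a minimal, hypothetical illustration (the stance detector and toy answers are assumptions, not the paper's method): a model is consistent on a fact if it affirms the positive phrasing of a question and denies the negated phrasing, or vice versa.

```python
# Hypothetical sketch of a dual-logic consistency check on paired questions.
# A model is "consistent" on a fact if its stances on the positive and
# negated phrasings of the question are opposite.

def stance(answer: str) -> bool:
    """True if the answer expresses an affirmative stance (toy heuristic)."""
    return answer.strip().lower().startswith("yes")

def is_consistent(answer_pos: str, answer_neg: str) -> bool:
    """Stances on a fact and on its negation must disagree."""
    return stance(answer_pos) != stance(answer_neg)

def consistency_rate(paired_answers):
    """Fraction of question pairs answered with consistent stances."""
    hits = sum(is_consistent(pos, neg) for pos, neg in paired_answers)
    return hits / len(paired_answers)

# Toy model outputs for (positive question, negated question) pairs.
answers = [
    ("Yes, water boils at 100 C at sea level.", "No, that is incorrect."),    # consistent
    ("Yes, the Earth orbits the Sun.", "Yes, the Earth does not orbit it."),  # inconsistent
]
print(consistency_rate(answers))  # 0.5
```

A real evaluation would replace the toy stance heuristic with proper answer parsing and query an actual LLM for each phrasing.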
no code implementations • 8 Sep 2023 • Yanrui Du, Sendong Zhao, Yuhan Chen, Rui Bai, Jing Liu, Hua Wu, Haifeng Wang, Bing Qin
To address this issue, it is crucial to analyze and mitigate the influence of superficial clues on STM models.
1 code implementation • 25 May 2022 • Yanrui Du, Jing Yan, Yan Chen, Jing Liu, Sendong Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Bing Qin
In this study, we focus on the spurious correlation between word features and labels that models learn from the biased data distribution of training data.
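One simple way to surface such spurious word-label correlations is to measure how skewed each word's label distribution is in the training data. The sketch below is a hedged illustration with toy data, not the paper's procedure; the function name and threshold interpretation are assumptions.

```python
# Hypothetical sketch: flagging words whose label distribution in the
# training data is highly skewed, i.e. potential shortcut features.
from collections import Counter, defaultdict

def label_skew(dataset):
    """For each word, return p(majority label | word) over the examples
    containing it; values near 1.0 suggest a word-label shortcut."""
    word_labels = defaultdict(Counter)
    for text, label in dataset:
        for word in set(text.lower().split()):
            word_labels[word][label] += 1
    return {w: max(c.values()) / sum(c.values()) for w, c in word_labels.items()}

# Toy sentiment data: "terrible" co-occurs only with the negative label.
data = [
    ("the movie was terrible", 0),
    ("terrible pacing but fine acting", 0),
    ("fine acting overall", 1),
]
skew = label_skew(data)
print(skew["terrible"])  # 1.0 -> perfectly predictive word, a likely shortcut
print(skew["fine"])      # 0.5 -> balanced across labels
```

Words scoring near 1.0 with enough support are candidates for debiasing, e.g. by reweighting or filtering the examples that carry them.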