1 code implementation • 26 Mar 2024 • Rui Pan, Xiang Liu, Shizhe Diao, Renjie Pi, Jipeng Zhang, Chi Han, Tong Zhang
To address this deficiency, we investigate the layerwise properties of LoRA on fine-tuning tasks and observe an unusual skewness of weight norms across layers.
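A minimal sketch of the kind of measurement described above: compute a norm per layer and check whether the distribution of norms is skewed. The toy layers, the ×10 inflation of the last two, and the helper names are illustrative assumptions, not the paper's actual procedure or data.

```python
# Hypothetical sketch: measuring skewness of per-layer weight norms.
# Layer weights here are toy random matrices; in practice they would
# come from a fine-tuned model's per-layer (e.g. LoRA adapter) weights.
import math
import random

random.seed(0)

def frobenius_norm(matrix):
    return math.sqrt(sum(x * x for row in matrix for x in row))

def sample_skewness(values):
    # Fisher-Pearson coefficient of skewness: E[(x - mu)^3] / sigma^3
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / n
    third = sum((v - mu) ** 3 for v in values) / n
    return third / (var ** 1.5)

# Toy "layers": most have small weights, a few have much larger ones,
# mimicking a skewed norm distribution across layers.
layers = [[[random.gauss(0, 0.02) for _ in range(8)] for _ in range(8)]
          for _ in range(10)]
for layer in layers[-2:]:          # inflate the last two layers
    for row in layer:
        for j in range(len(row)):
            row[j] *= 10

norms = [frobenius_norm(w) for w in layers]
print(sample_skewness(norms))  # positive value -> right-skewed norms
```

A strongly positive skewness here indicates that a few layers dominate the total weight norm, which is the sort of layerwise imbalance the entry refers to.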
1 code implementation • 5 Dec 2023 • Bowen Jin, Gang Liu, Chi Han, Meng Jiang, Heng Ji, Jiawei Han
Moreover, although LLMs have shown pure text-based reasoning ability, it remains underexplored whether this ability generalizes to graphs (i.e., graph-based reasoning).
no code implementations • 27 Nov 2023 • Chi Han, Jialiang Xu, Manling Li, Hanning Zhang, Tarek Abdelzaher, Heng Ji
Social media play a significant role in shaping public opinion and influencing ideological communities through information propagation.
no code implementations • 31 Oct 2023 • Sha Li, Chi Han, Pengfei Yu, Carl Edwards, Manling Li, Xingyao Wang, Yi R. Fung, Charles Yu, Joel R. Tetreault, Eduard H. Hovy, Heng Ji
The recent explosion of performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field's 80-year history.
1 code implementation • 30 Aug 2023 • Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, Sinong Wang
As a result, their performance suffers drastically on inputs longer than those encountered during training, substantially limiting their applications in real-world tasks involving long contexts such as encoding scientific articles, code repositories, or long dialogues.
2 code implementations • 23 May 2023 • Cheng Qian, Chi Han, Yi R. Fung, Yujia Qin, Zhiyuan Liu, Heng Ji
Additionally, we introduce the Creation Challenge dataset, featuring 2K diverse questions, to emphasize the necessity and benefits of LLMs' tool creation ability.
no code implementations • 22 May 2023 • Chi Han, Jialiang Xu, Manling Li, Yi Fung, Chenkai Sun, Nan Jiang, Tarek Abdelzaher, Heng Ji
As pre-training and fine-tuning are costly and can degrade model performance, it is desirable to adapt an existing model efficiently to different conditions, such as styles, sentiments, or narratives, when facing different audiences or scenarios.
no code implementations • 22 May 2023 • Chi Han, Ziqi Wang, Han Zhao, Heng Ji
Then, we empirically investigate the in-context behaviors of language models.
1 code implementation • 22 May 2023 • Chi Han, Qizheng He, Charles Yu, Xinya Du, Hanghang Tong, Heng Ji
A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph.
Ranked #9 on Link Prediction on WN18RR
no code implementations • 21 May 2023 • Ziqi Wang, Chi Han, Wenxuan Bao, Heng Ji
However, such data augmentation methods are sub-optimal for knowledge distillation, since the teacher model can provide full label distributions and is more tolerant of semantic shifts.
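A minimal sketch (not the paper's method) of why a teacher's label distribution carries more signal than a hard label in knowledge distillation: the soft-label loss compares full distributions, so probability mass on non-argmax classes is preserved. The logits and temperature below are made-up illustrative values.

```python
# Toy soft-label distillation loss: student matches the teacher's full
# output distribution rather than only the argmax label.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(target_dist, pred_dist):
    return -sum(t * math.log(p) for t, p in zip(target_dist, pred_dist))

teacher_logits = [2.0, 1.5, -1.0]   # teacher is unsure between classes 0 and 1
student_logits = [1.8, 1.2, -0.5]

teacher_probs = softmax(teacher_logits, temperature=2.0)
student_probs = softmax(student_logits, temperature=2.0)

# Distillation loss against the soft teacher distribution; a hard label
# would discard the teacher's probability mass on class 1 entirely.
loss = cross_entropy(teacher_probs, student_probs)
print(round(loss, 3))
```

The hard label here would be class 0, erasing the teacher's near-equal preference for class 1; the soft distribution keeps that information in the training signal.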
no code implementations • 19 May 2023 • Tianci Xue, Ziqi Wang, Zhenhailong Wang, Chi Han, Pengfei Yu, Heng Ji
To detect factual inconsistency, RCoT first asks LLMs to reconstruct the problem based on generated solutions.
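A hedged sketch of the reconstruction step described above: ask a model to reconstruct the original problem from a generated solution, then flag inconsistency when the reconstruction drifts from the true problem. `call_llm` is a toy stand-in for a real LLM API, and the word-overlap check is a deliberately simple proxy, not the paper's comparison method.

```python
# Hypothetical sketch of problem reconstruction for inconsistency detection.

def call_llm(prompt):
    # Toy stand-in: a real system would query a language model here.
    if "Reconstruct" in prompt:
        return "Alice has 3 apples and buys 2 more. How many does she have?"
    return "unused"

def word_overlap(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def detect_inconsistency(problem, solution, threshold=0.5):
    reconstruction = call_llm(
        f"Reconstruct the problem this solution answers:\n{solution}\n"
    )
    # Low overlap between the original and reconstructed problem suggests
    # the solution misread or dropped a condition of the problem.
    return word_overlap(problem, reconstruction) < threshold, reconstruction

problem = "Alice has 3 apples and buys 2 more. How many does she have?"
solution = "3 + 2 = 5, so Alice has 5 apples."
inconsistent, _ = detect_inconsistency(problem, solution)
print(inconsistent)  # False: reconstruction matches the original problem
```

If the reconstructed problem diverges from the original (e.g., different quantities or conditions), the overlap drops below the threshold and the solution is flagged for re-checking.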
3 code implementations • 17 Apr 2023 • Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, Yi Ren Fung, Yusheng Su, Huadong Wang, Cheng Qian, Runchu Tian, Kunlun Zhu, Shihao Liang, Xingyu Shen, Bokai Xu, Zhen Zhang, Yining Ye, Bowen Li, Ziwei Tang, Jing Yi, Yuzhang Zhu, Zhenning Dai, Lan Yan, Xin Cong, Yaxi Lu, Weilin Zhao, Yuxiang Huang, Junxi Yan, Xu Han, Xian Sun, Dahai Li, Jason Phang, Cheng Yang, Tongshuang Wu, Heng Ji, Zhiyuan Liu, Maosong Sun
Considering the lack of a systematic tool learning evaluation in prior works, we experiment with 18 representative tools and show the potential of current foundation models in skillfully utilizing tools.
1 code implementation • 7 Nov 2022 • Chi Han, Hengzhi Pei, Xinya Du, Heng Ji
To this end, we propose the framework CLORE (Classification by LOgical Reasoning on Explanations).
no code implementations • 29 Sep 2021 • Heng Dong, Tonghan Wang, Jiayuan Liu, Chi Han, Chongjie Zhang
Promoting cooperation among self-interested agents is a long-standing and interdisciplinary problem, but it has received less attention in multi-agent reinforcement learning (MARL).
2 code implementations • Findings (ACL) 2021 • Chi Han, Mingxuan Wang, Heng Ji, Lei Li
By projecting audio and text features to a common semantic representation, Chimera unifies MT and ST tasks and boosts the performance on ST benchmarks, MuST-C and Augmented Librispeech, to a new state-of-the-art.
no code implementations • 23 Apr 2021 • Heng Dong, Tonghan Wang, Jiayuan Liu, Chi Han, Chongjie Zhang
We propose a novel learning framework to encourage homophilic incentives and show that it achieves stable cooperation in sequential social dilemmas (SSDs) of both public goods and the tragedy of the commons.
1 code implementation • NeurIPS 2019 • Chi Han, Jiayuan Mao, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu
Humans reason with concepts and metaconcepts: we recognize red and green from visual input; we also understand that they describe the same property of objects (i.e., the color).