1 code implementation • 31 Mar 2024 • Mathieu Ravaut, Bosheng Ding, Fangkai Jiao, Hailin Chen, Xingxuan Li, Ruochen Zhao, Chengwei Qin, Caiming Xiong, Shafiq Joty
With the rise of Large Language Models (LLMs) in recent years, new opportunities are emerging, but also new challenges, and data contamination is quickly becoming a critical concern.
1 code implementation • 28 Nov 2023 • Hailin Chen, Fangkai Jiao, Xingxuan Li, Chengwei Qin, Mathieu Ravaut, Ruochen Zhao, Caiming Xiong, Shafiq Joty
Since its release in late 2022, ChatGPT has brought a seismic shift to the entire landscape of AI, both in research and in commerce.
1 code implementation • 28 Oct 2023 • Hailin Chen, Amrita Saha, Steven Hoi, Shafiq Joty
With the rise of powerful closed-source LLMs (ChatGPT, GPT-4), there is increasing interest in distilling the capabilities of closed-source LLMs into smaller open-source LLMs.
Ranked #50 on Code Generation on HumanEval
1 code implementation • 13 Oct 2023 • Hung Le, Hailin Chen, Amrita Saha, Akash Gokul, Doyen Sahoo, Shafiq Joty
We find that by naturally encouraging the LLM to reuse previously developed and verified sub-modules, CodeChain can significantly boost both the modularity and the correctness of the generated solutions, achieving relative pass@1 improvements of 35% on APPS and 76% on CodeContests.
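The sub-module reuse idea described above can be sketched as follows. This is an illustrative toy example, not CodeChain's actual implementation: it extracts the function definitions from a previously generated candidate solution, so that their names could be surfaced to the model for reuse in a later generation round.

```python
import ast

def extract_submodules(source: str) -> list[str]:
    """Collect top-level and nested function names from a candidate solution.

    A real pipeline would also keep the function bodies and verify them
    before offering them for reuse; here we only list the names.
    """
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)]

# Hypothetical previously generated solution:
candidate = """
def parse_input(s):
    return [int(x) for x in s.split()]

def solve(nums):
    return sum(nums)
"""

print(extract_submodules(candidate))  # ['parse_input', 'solve']
```

In a chain-of-revisions loop, these extracted sub-modules would be clustered across candidates and the representative ones fed back into the next prompt.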
Ranked #2 on Code Generation on CodeContests (Test Set pass@1 metric)
no code implementations • 6 Aug 2023 • Mathieu Ravaut, Hailin Chen, Ruochen Zhao, Chengwei Qin, Shafiq Joty, Nancy Chen
Prompt tuning (PT), a parameter-efficient technique that only tunes the additional prompt embeddings while keeping the backbone pre-trained language model (PLM) frozen, has shown promising results in language understanding tasks, especially in low-resource scenarios.
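The core mechanism of prompt tuning can be sketched in a few lines. This is a minimal illustration with hypothetical shapes, not the paper's exact setup: the backbone's embedding table stays frozen, and only the small matrix of prompt embeddings would receive gradient updates during training.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, hidden = 100, 16
num_prompt_tokens = 5

# Backbone embedding table: frozen, never updated during prompt tuning.
frozen_embedding_table = rng.normal(size=(vocab_size, hidden))
# Prompt embeddings: the only trainable parameters.
prompt_embeddings = rng.normal(size=(num_prompt_tokens, hidden))

def build_input(token_ids: np.ndarray) -> np.ndarray:
    """Prepend the learned prompt embeddings to the frozen token embeddings."""
    token_embeds = frozen_embedding_table[token_ids]
    return np.concatenate([prompt_embeddings, token_embeds], axis=0)

inputs = build_input(np.array([3, 7, 42]))
print(inputs.shape)  # (8, 16): 5 prompt vectors + 3 token vectors
```

The parameter savings are what make PT attractive in low-resource settings: here only 5 × 16 = 80 values are tuned, versus the full backbone.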
no code implementations • 20 Mar 2023 • Ruochen Zhao, Hailin Chen, Weishi Wang, Fangkai Jiao, Xuan Long Do, Chengwei Qin, Bosheng Ding, Xiaobao Guo, Minzhi Li, Xingxuan Li, Shafiq Joty
As Large Language Models (LLMs) become popular, an important trend has emerged of using multimodality to augment their generation abilities, enabling LLMs to better interact with the world.
1 code implementation • 30 Nov 2022 • Hailin Chen, Amrita Saha, Shafiq Joty, Steven C. H. Hoi
Machine learning models usually assume i.i.d. data during training and testing, but data and tasks in the real world often change over time.
no code implementations • 15 Aug 2018 • Feng Nie, Hailin Chen, Jinpeng Wang, Jin-Ge Yao, Chin-Yew Lin, Rong Pan
Recent neural models for data-to-document generation have achieved remarkable progress in producing fluent and informative texts.
no code implementations • 8 Jul 2017 • Hailin Chen, Shengping Cui, Sebastian Li
Through this project, we researched transfer learning methods and their applications to real-world problems.