no code implementations • 11 Apr 2024 • Ruibo Liu, Jerry Wei, Fangyu Liu, Chenglei Si, Yanzhe Zhang, Jinmeng Rao, Steven Zheng, Daiyi Peng, Diyi Yang, Denny Zhou, Andrew M. Dai
The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs.
no code implementations • 5 Mar 2024 • Chenglei Si, Yanzhe Zhang, Zhengyuan Yang, Ruibo Liu, Diyi Yang
In this work, we formalize this as a Design2Code task and conduct comprehensive benchmarking.
1 code implementation • 3 Oct 2023 • Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, Diyi Yang
We further design an automatic agent team optimization algorithm based on an unsupervised metric termed the $\textit{Agent Importance Score}$, enabling the selection of the best agents based on each agent's contribution.
1 code implementation • 29 Jun 2023 • Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
Instruction tuning unlocks the superior capability of Large Language Models (LLMs) to interact with humans.
1 code implementation • 17 Feb 2023 • Albert Lu, Hongxin Zhang, Yanzhe Zhang, Xuezhi Wang, Diyi Yang
The limits of open-ended generative models are unclear, yet increasingly important.
1 code implementation • 7 Feb 2023 • Yanzhe Zhang, Lu Jiang, Greg Turk, Diyi Yang
Text-to-image models, which can generate high-quality images based on textual input, have recently enabled various content-creation tools.
1 code implementation • 19 Oct 2022 • Hongxin Zhang, Yanzhe Zhang, Ruiyi Zhang, Diyi Yang
Demonstration-based learning has shown great potential in stimulating pretrained language models' abilities under limited-data scenarios.
1 code implementation • Findings (ACL) 2022 • Aaron Reich, Jiaao Chen, Aastha Agrawal, Yanzhe Zhang, Diyi Yang
We found that state-of-the-art NER systems trained on CoNLL 2003 training data drop dramatically in performance on our challenging set.
2 code implementations • ACL 2022 • Yanzhe Zhang, Xuezhi Wang, Diyi Yang
Continual learning is essential for real-world deployment when there is a need to quickly adapt the model to new tasks without forgetting knowledge of old tasks.
1 code implementation • NAACL 2021 • Yufan Huang, Yanzhe Zhang, Jiaao Chen, Xuezhi Wang, Diyi Yang
Continual learning has become increasingly important as it enables NLP models to constantly learn and gain knowledge over time.