no code implementations • 19 Mar 2024 • Danqing Luo, Chen Zhang, Yan Zhang, Haizhou Li
Training or finetuning large-scale language models (LLMs) requires substantial computational resources, motivating recent efforts to explore parameter-efficient adaptation to downstream tasks.
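The abstract does not specify which parameter-efficient method the paper uses, but a widely used family is low-rank adaptation (LoRA), where the frozen pretrained weight is augmented with a small trainable low-rank update. The sketch below is a minimal, illustrative numpy version of that idea, not the paper's actual method; all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 8, 8, 2  # rank r << d gives the parameter savings

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight (not updated)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def adapted_forward(x):
    # Adapted layer: frozen path plus the low-rank update (B @ A) @ x.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer exactly matches the frozen one,
# so training starts from the pretrained model's behavior.
print(np.allclose(adapted_forward(x), W @ x))
```

Here only `A` and `B` would be trained: `r * (d_in + d_out) = 32` parameters instead of the full `d_in * d_out = 64`, a gap that grows dramatically at LLM scale.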
no code implementations • 23 May 2023 • Danqing Luo, Chen Zhang, Jiahui Xu, Bin Wang, Yiming Chen, Yan Zhang, Haizhou Li
To achieve this, we treat the black-box model as a feature extractor and train a classifier with the augmented text data.
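The setup described here, using a black-box model purely as a feature extractor and fitting a lightweight classifier on top, can be sketched as follows. This is an illustrative toy version under stated assumptions: the paper's extractor would be a black-box LLM (e.g., an embedding API), whereas the stand-in below is a simple bag-of-words featurizer, and the classifier is plain logistic regression trained with gradient descent.

```python
import numpy as np

texts = ["great movie", "loved it", "terrible film", "hated it",
         "great film", "terrible movie"]
labels = np.array([1, 1, 0, 0, 1, 0])

# Deterministic vocabulary built from the (toy) training texts.
vocab = {tok: j for j, tok in enumerate(
    sorted({w for t in texts for w in t.lower().split()}))}

def extract_features(texts, vocab):
    # Stand-in for the black-box feature extractor: in the paper's setting
    # this would query the frozen LLM for embeddings; here we just count tokens.
    feats = np.zeros((len(texts), len(vocab)))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            if tok in vocab:
                feats[i, vocab[tok]] += 1.0
    return feats

def train_classifier(X, y, lr=0.1, steps=500):
    # Logistic-regression head trained on top of the frozen features.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad = p - y                            # gradient of the log loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

X = extract_features(texts, vocab)
w, b = train_classifier(X, labels)
preds = (X @ w + b > 0).astype(int)
print(preds.tolist())
```

Only the small classifier head is trained; the extractor itself is never updated, which is what makes the approach viable when the model is accessible only as a black box.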