1 code implementation • 19 Mar 2024 • Sai Ashish Somayajula, Youwei Liang, Abhishek Singh, Li Zhang, Pengtao Xie
Pretrained Language Models (PLMs) have substantially advanced Natural Language Processing (NLP) tasks, but finetuning PLMs on low-resource datasets poses significant challenges such as instability and overfitting.
no code implementations • 14 Mar 2024 • Ruiyi Zhang, Rushi Qiang, Sai Ashish Somayajula, Pengtao Xie
Large-scale pretraining followed by task-specific finetuning has achieved great success in various NLP tasks.
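For context, a minimal plain-PyTorch sketch of the generic pretrain-then-finetune workflow this entry refers to; the backbone, head, and data below are toy stand-ins and assumptions, not the paper's setup.

```python
# Minimal sketch (assumptions, not the paper's code): load a pretrained
# backbone, attach a fresh task head, and finetune on task data.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))
# In practice the backbone's weights would come from large-scale pretraining:
# backbone.load_state_dict(torch.load("pretrained_backbone.pt"))

head = nn.Linear(256, 2)                 # fresh task-specific classification head
model = nn.Sequential(backbone, head)

opt = torch.optim.AdamW(model.parameters(), lr=2e-5)  # small LR, typical for finetuning
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 128)                 # toy batch standing in for task data
y = torch.randint(0, 2, (32,))
for _ in range(3):                       # a few finetuning steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```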
1 code implementation • 28 Feb 2024 • Han Guo, Ramtin Hosseini, Ruiyi Zhang, Sai Ashish Somayajula, Ranak Roy Chowdhury, Rajesh K. Gupta, Pengtao Xie
Masked Autoencoder (MAE) is a notable method for self-supervised pretraining in visual representation learning.
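For readers unfamiliar with MAE, a minimal sketch of its core idea: mask most patch tokens, encode only the visible ones, and reconstruct the masked ones. The shapes, layer sizes, and one-layer decoder below are illustrative assumptions, not the paper's configuration.

```python
# Minimal MAE-style sketch (assumptions, not the authors' code).
import torch
import torch.nn as nn

B, N, D = 8, 196, 64
mask_ratio = 0.75
patches = torch.randn(B, N, D)              # toy patch embeddings (batch, patches, dim)

# Random masking: shuffle patch indices, keep 25%, treat the rest as targets.
num_keep = int(N * (1 - mask_ratio))
ids_shuffle = torch.rand(B, N).argsort(dim=1)
ids_keep, ids_mask = ids_shuffle[:, :num_keep], ids_shuffle[:, num_keep:]
visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
targets = torch.gather(patches, 1, ids_mask.unsqueeze(-1).expand(-1, -1, D))

# The encoder sees only visible patches, which is what makes MAE cheap.
enc_layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
latent = encoder(visible)

# Toy decoder: predict each masked patch from a pooled visible-context summary.
mask_token = nn.Parameter(torch.zeros(1, 1, D))
context = latent.mean(dim=1, keepdim=True)                  # (B, 1, D) summary
queries = mask_token.expand(B, N - num_keep, D) + context   # one query per masked patch
pred = nn.Linear(D, D)(queries)

loss = ((pred - targets) ** 2).mean()   # reconstruction loss on masked patches only
```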
1 code implementation • 28 Feb 2024 • Mingjia Huo, Sai Ashish Somayajula, Youwei Liang, Ruisi Zhang, Farinaz Koushanfar, Pengtao Xie
Large language models generate high-quality responses that can nonetheless carry misinformation, underscoring the need to regulate their use by distinguishing AI-generated from human-written text.
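As background, a sketch of the "green-list" watermarking family this line of work builds on, not the paper's token-specific method: a pseudo-random subset of the vocabulary is favored at each generation step, and detection counts green-token hits via a z-score. The vocabulary, hash seeding, and split ratio below are toy assumptions.

```python
# Minimal green-list watermarking sketch (an assumption, not this paper's scheme).
import hashlib
import math
import random

VOCAB = list(range(1000))   # toy vocabulary of token ids
GAMMA = 0.5                 # fraction of the vocabulary that is "green"

def green_list(prev_token):
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(GAMMA * len(VOCAB))])

def detect(tokens):
    """z-score of the green-token count against the chance rate GAMMA."""
    n = len(tokens) - 1
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

# A toy "watermarked" sequence always samples from the current green list:
seq = [0]
for _ in range(100):
    seq.append(random.choice(sorted(green_list(seq[-1]))))
print(detect(seq))          # large positive z-score; unbiased text gives ~0
```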
no code implementations • 30 Nov 2021 • Ruisi Zhang, Youwei Liang, Sai Ashish Somayajula, Pengtao Xie
We introduce a training strategy called "Differentiable Architecture Search with a Generative Model" (DASGM).
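DASGM's generative-model component is not described in this snippet; as background, here is a minimal DARTS-style mixed operation showing how architecture choices are made differentiable so they can be searched by gradient descent. The candidate operations below are illustrative assumptions.

```python
# Generic differentiable architecture search sketch (DARTS-style), shown for
# context; DASGM's use of a generative model is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted mixture of candidate operations; the mixture weights
    (architecture parameters) are learned jointly with the model weights."""
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                  # skip connection
            nn.Linear(dim, dim),                            # linear transform
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),  # nonlinear transform
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture params

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

op = MixedOp(16)
x = torch.randn(4, 16)
loss = op(x).pow(2).mean()
loss.backward()             # gradients flow into both op weights and alpha
print(op.alpha.grad)        # the architecture parameters receive a gradient too
```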