1 code implementation • 13 Oct 2023 • Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng
Through benchmark evaluations of two popular LLMs, ChatGPT and Alpaca, we reveal significant gender biases in LLM-generated recommendation letters.
no code implementations • 28 Apr 2023 • George Pu, Anirudh Jain, Jihan Yin, Russell Kaplan
As foundation models continue to exponentially scale in size, efficient methods of adaptation become increasingly critical.
no code implementations • 22 Mar 2021 • George Pu, Yanlin Zhou, Dapeng Wu, Xiaolin Li
Federated learning allows distributed devices to collectively train a model without sharing or disclosing their local datasets to a central server.
1 code implementation • 17 Sep 2020 • Yanlin Zhou, George Pu, Xiyao Ma, Xiaolin Li, Dapeng Wu
DOSFL serves as an inexpensive method to quickly converge on a performant pre-trained model, with less than 0.1% of the communication cost of traditional methods.