Search Results for author: Noah Lee

Found 3 papers, 3 papers with code

ORPO: Monolithic Preference Optimization without Reference Model

2 code implementations • 12 Mar 2024 • Jiwoo Hong, Noah Lee, James Thorne

While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence.
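The title refers to an odds-ratio preference loss that works without a frozen reference model. As background (not the paper's exact implementation), a minimal sketch of that loss for a single preference pair, assuming per-response log-probabilities are available; all names and the default `lam` are illustrative:

```python
import math

def orpo_loss(logp_chosen, logp_rejected, lam=0.1):
    """Sketch of an ORPO-style loss: SFT term plus an odds-ratio penalty.

    logp_chosen / logp_rejected are (length-normalized) log-probabilities
    of the chosen and rejected responses under the policy being trained.
    """
    def log_odds(logp):
        # log odds(y) = log p - log(1 - p), with p = exp(logp)
        return logp - math.log(1.0 - math.exp(logp))

    # Log of the odds ratio between chosen and rejected responses.
    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    # Penalty is the negative log-sigmoid of that ratio.
    l_or = -math.log(1.0 / (1.0 + math.exp(-ratio)))
    # Standard SFT term: negative log-likelihood of the chosen response.
    l_sft = -logp_chosen
    return l_sft + lam * l_or
```

Because the odds ratio is computed from the policy itself, no reference model is needed, and the SFT term keeps the loss usable as a single monolithic training objective.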

Tasks: Llama

Robust Fine-Tuning of Vision-Language Models for Domain Generalization

1 code implementation • 3 Nov 2023 • Kevin Vogt-Lowell, Noah Lee, Theodoros Tsiligkaridis, Marc Vaillant

To address these gaps, we present a new recipe for few-shot fine-tuning of the popular vision-language foundation model CLIP and evaluate its performance on challenging benchmark datasets with realistic distribution shifts from the WILDS collection.
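The paper's specific fine-tuning recipe is not reproduced here; as context, CLIP-style models classify an image by comparing its embedding against embeddings of class-name text prompts, which is the zero-shot baseline such fine-tuning starts from. A minimal, framework-free sketch with illustrative feature vectors (the `temperature` value is an assumption):

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def clip_style_probs(image_feat, class_text_feats, temperature=0.07):
    """Softmax over temperature-scaled image/text cosine similarities,
    one similarity per class prompt embedding."""
    logits = [cosine(image_feat, t) / temperature for t in class_text_feats]
    # Numerically stable softmax.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

Few-shot fine-tuning adjusts the encoders (or a subset of their parameters) so that these similarity scores stay discriminative under distribution shift, which is what benchmarks like WILDS stress-test.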

Tasks: Domain Generalization, Few-Shot Learning, +1

Can Large Language Models Capture Dissenting Human Voices?

1 code implementation • 23 May 2023 • Noah Lee, Na Min An, James Thorne

Large language models (LLMs) have shown impressive achievements in solving a broad range of tasks.

Tasks: Natural Language Inference, Natural Language Understanding
