Search Results for author: Olga Golovneva

Found 11 papers, 5 papers with code

Reverse Training to Nurse the Reversal Curse

no code implementations • 20 Mar 2024 • Olga Golovneva, Zeyuan Allen-Zhu, Jason Weston, Sainbayar Sukhbaatar

Large language models (LLMs) have a surprising failure: when trained on "A has a feature B", they do not generalize to "B is a feature of A", which is termed the Reversal Curse.
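
The paper's proposed fix, reverse training, additionally trains on reversed copies of the data so both orderings are seen. As a rough illustration only (a naive word-level reversal; the authors also study entity-preserving variants, and this is not their code):

```python
def reverse_words(text: str) -> str:
    """Reverse the word order of a training example."""
    return " ".join(reversed(text.split()))

def build_reverse_training_set(examples: list[str]) -> list[str]:
    """Return the original examples plus word-reversed copies,
    so the model sees both "A ... B" and "B ... A" orderings."""
    augmented = []
    for ex in examples:
        augmented.append(ex)                  # forward direction
        augmented.append(reverse_words(ex))   # reversed direction
    return augmented

print(build_reverse_training_set(["Paris is the capital of France"]))
# ['Paris is the capital of France', 'France of capital the is Paris']
```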

Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM

1 code implementation • 12 Mar 2024 • Sainbayar Sukhbaatar, Olga Golovneva, Vasu Sharma, Hu Xu, Xi Victoria Lin, Baptiste Rozière, Jacob Kahn, Daniel Li, Wen-tau Yih, Jason Weston, Xian Li

We investigate efficient methods for training Large Language Models (LLMs) to possess capabilities in multiple specialized domains, such as coding, math reasoning and world knowledge.

Arithmetic Reasoning · Code Generation · +6
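
Branch-Train-MiX branches a seed model into separately trained domain experts and then merges them into a single mixture-of-experts model. A minimal sketch of the mixing step, under simplifying assumptions (dense softmax routing instead of the paper's top-k routing; the class and names are hypothetical):

```python
import torch
import torch.nn as nn

class MixedExpertFFN(nn.Module):
    """Combine the feedforward sublayers of separately trained expert
    models into one MoE layer with a learned token-level router."""

    def __init__(self, expert_ffns: list[nn.Module], hidden_dim: int):
        super().__init__()
        self.experts = nn.ModuleList(expert_ffns)              # one FFN per domain
        self.router = nn.Linear(hidden_dim, len(expert_ffns))  # routing weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.router(x), dim=-1)        # (B, T, n_experts)
        outputs = torch.stack([ffn(x) for ffn in self.experts], dim=-1)  # (B, T, H, n_experts)
        return (outputs * weights.unsqueeze(-2)).sum(dim=-1)   # weighted mix, (B, T, H)
```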

Efficient Tool Use with Chain-of-Abstraction Reasoning

no code implementations • 30 Jan 2024 • Silin Gao, Jane Dwivedi-Yu, Ping Yu, Xiaoqing Ellen Tan, Ramakanth Pasunuru, Olga Golovneva, Koustuv Sinha, Asli Celikyilmaz, Antoine Bosselut, Tianlu Wang

LLM agents trained with our method also show more efficient tool use, with inference on average ~1.4x faster than baseline tool-augmented LLMs.

Math · Mathematical Reasoning · +1
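
The core idea of chain-of-abstraction is that the LLM first plans a reasoning chain with abstract placeholders, and tools then fill in the concrete values, which enables the faster tool use. A toy sketch of the fill-in stage (the placeholder syntax and the eval-based "calculator" are illustrative, not the paper's implementation):

```python
import re

abstract_chain = "The total cost is [y1 = 12 * 7], so after a 10 dollar discount it is [y2 = y1 - 10]."

def fill_placeholders(chain: str) -> str:
    """Resolve each [var = expr] placeholder with a calculator 'tool'."""
    values: dict[str, float] = {}
    def solve(match: re.Match) -> str:
        var, expr = match.group(1), match.group(2)
        values[var] = eval(expr, {}, values)  # stand-in for a real tool call
        return str(values[var])
    return re.sub(r"\[(\w+) = ([^\]]+)\]", solve, chain)

print(fill_placeholders(abstract_chain))
# The total cost is 84, so after a 10 dollar discount it is 74.
```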

PathFinder: Guided Search over Multi-Step Reasoning Paths

no code implementations • 8 Dec 2023 • Olga Golovneva, Sean O'Brien, Ramakanth Pasunuru, Tianlu Wang, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz

Using constrained reasoning, PathFinder integrates novel quality constraints, pruning, and exploration methods to enhance the efficiency and quality of generation.

Pathfinder
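
In spirit, PathFinder's decoding resembles a beam search over reasoning steps with quality-based pruning. A generic sketch of that pattern (the `generate_steps` and `score_path` callables are placeholders, not PathFinder's actual components):

```python
def search_reasoning_paths(question, generate_steps, score_path,
                           beam_width=4, max_steps=8, min_score=0.0):
    """Expand the best-scoring partial reasoning paths step by step,
    pruning candidates that fail a quality threshold."""
    beam = [[]]  # start from an empty reasoning path
    for _ in range(max_steps):
        candidates = []
        for path in beam:
            for step in generate_steps(question, path):   # sample next steps
                new_path = path + [step]
                score = score_path(question, new_path)
                if score >= min_score:                     # quality constraint
                    candidates.append((score, new_path))
        if not candidates:
            break
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = [path for _, path in candidates[:beam_width]]  # prune to beam
    return beam[0] if beam else []
```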

DOMINO: A Dual-System for Multi-step Visual Language Reasoning

1 code implementation • 4 Oct 2023 • Peifang Wang, Olga Golovneva, Armen Aghajanyan, Xiang Ren, Muhao Chen, Asli Celikyilmaz, Maryam Fazel-Zarandi

By fine-tuning the System-2 module (LLaMA-2 70B) on only a small amount of multi-step reasoning data, our method's accuracy improves further, surpassing the best fully supervised end-to-end approach by 5.7% and a pipeline approach with FlanPaLM (540B) by 7.5% on a challenging dataset with human-authored questions.

Arithmetic Reasoning · Language Modelling · +2

Shepherd: A Critic for Language Model Generation

1 code implementation • 8 Aug 2023 • Tianlu Wang, Ping Yu, Xiaoqing Ellen Tan, Sean O'Brien, Ramakanth Pasunuru, Jane Dwivedi-Yu, Olga Golovneva, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz

As large language models improve, there is increasing interest in techniques that leverage these models' capabilities to refine their own outputs.

Language Modelling
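
Shepherd is a trained critic model for exactly this kind of refinement. A generic critique-and-refine loop it could slot into (a sketch only; `llm` and `critic` are hypothetical callables, not Shepherd's API):

```python
def refine_with_critic(prompt: str, llm, critic, max_rounds: int = 2) -> str:
    """Generate an answer, ask a critic model for feedback, and revise."""
    answer = llm(prompt)
    for _ in range(max_rounds):
        feedback = critic(prompt, answer)   # e.g., a Shepherd-style critique
        if not feedback:                    # critic found nothing to fix
            break
        answer = llm(f"{prompt}\n\nDraft answer: {answer}\n"
                     f"Feedback: {feedback}\nRevised answer:")
    return answer
```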

ALERT: Adapting Language Models to Reasoning Tasks

no code implementations • 16 Dec 2022 • Ping Yu, Tianlu Wang, Olga Golovneva, Badr Alkhamissy, Gargi Ghosh, Mona Diab, Asli Celikyilmaz

Current large language models can perform reasonably well on complex tasks that require step-by-step reasoning with few-shot learning.

Few-Shot Learning · Language Modelling · +1

ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning

1 code implementation • 15 Dec 2022 • Olga Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz

Large language models show improved downstream task performance when prompted to generate step-by-step reasoning to justify their final answers.

Informativeness · Text Generation
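
For flavor, a simple embedding-based score of the kind such metric suites build on: how well each reasoning step is grounded in the source, via its best cosine similarity to a source sentence (an illustration, not one of ROSCOE's actual metrics; `embed` is a hypothetical sentence encoder):

```python
import numpy as np

def grounding_score(source_sents, reasoning_steps, embed):
    """Average, over reasoning steps, of each step's best cosine
    similarity to any source sentence."""
    src = np.array([embed(s) for s in source_sents])        # (n_src, d)
    steps = np.array([embed(s) for s in reasoning_steps])   # (n_steps, d)
    src /= np.linalg.norm(src, axis=1, keepdims=True)
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)
    sims = steps @ src.T                    # (n_steps, n_src) cosine matrix
    return float(sims.max(axis=1).mean())
```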

Generative Adversarial Networks for Annotated Data Augmentation in Data Sparse NLU

no code implementations • ICON 2020 • Olga Golovneva, Charith Peris

In this paper, we present our results on boosting NLU model performance through training data augmentation using a sequential generative adversarial network (GAN).

Data Augmentation · Generative Adversarial Network · +3
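
At a high level, the augmentation pipeline samples synthetic annotated utterances from a trained generator and mixes plausible ones into the training set. A very rough sketch (the `generator` and `discriminator` objects and their methods are hypothetical; the paper's sequential GAN training itself is more involved):

```python
def augment_with_gan(real_examples, generator, discriminator,
                     n_candidates=1000, quality_threshold=0.5):
    """Keep generator samples the discriminator finds realistic enough,
    then mix them with the real annotated training data."""
    synthetic = []
    for _ in range(n_candidates):
        utterance, labels = generator.sample()   # synthetic annotated example
        if discriminator.realism(utterance) >= quality_threshold:
            synthetic.append((utterance, labels))
    return list(real_examples) + synthetic
```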

Evaluating Cross-Lingual Transfer Learning Approaches in Multilingual Conversational Agent Models

no code implementations • COLING 2020 • Lizhen Tan, Olga Golovneva

With the recent explosion in popularity of voice assistant devices, there is a growing interest in making them available to user populations in additional countries and languages.

Cross-Lingual Transfer · Natural Language Understanding · +1
