Search Results for author: Melanie Sclar

Found 9 papers, 4 papers with code

The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning

no code implementations • 4 Dec 2023 • Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, Yejin Choi

We analyze the effect of alignment tuning by examining the token distribution shift between base LLMs and their aligned counterparts.

In-Context Learning
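
To make the analysis concrete, here is a minimal sketch of how a token distribution shift comparison between a base LLM and its aligned counterpart could be set up with Hugging Face transformers; the model names and the per-position KL metric are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: compare base vs. aligned next-token distributions on the same text.
# Model names and the KL-based shift metric are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "meta-llama/Llama-2-7b-hf"        # assumed base model
aligned_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed aligned counterpart

tokenizer = AutoTokenizer.from_pretrained(aligned_name)
base = AutoModelForCausalLM.from_pretrained(base_name)
aligned = AutoModelForCausalLM.from_pretrained(aligned_name)

text = "Q: How do I bake bread?\nA: Start by mixing flour, water, and yeast."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    base_logits = base(ids).logits        # (1, seq_len, vocab)
    aligned_logits = aligned(ids).logits

# Per-position KL(aligned || base): how much alignment tuning reshapes the
# next-token distribution at each position.
base_logp = F.log_softmax(base_logits.float(), dim=-1)
aligned_logp = F.log_softmax(aligned_logits.float(), dim=-1)
kl = (aligned_logp.exp() * (aligned_logp - base_logp)).sum(-1).squeeze(0)

tokens = tokenizer.convert_ids_to_tokens(ids[0].tolist())
for tok, shift in zip(tokens[1:], kl[:-1]):
    print(f"{tok:>12s}  KL={shift.item():.3f}")
```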

FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions

no code implementations • 24 Oct 2023 • Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, Maarten Sap

Theory of mind (ToM) evaluations currently focus on testing models using passive narratives that inherently lack interactivity.

Question Answering

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

1 code implementation • 12 Oct 2023 • Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren

The ability to derive underlying principles from a handful of observations and then generalize to novel situations -- known as inductive reasoning -- is central to human intelligence.

Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play Multi-Character Belief Tracker

no code implementations • 1 Jun 2023 • Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi, Yulia Tsvetkov

We present SymbolicToM, a plug-and-play approach to reason about the belief states of multiple characters in reading comprehension tasks via explicit symbolic representation.

Reading Comprehension
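
As a toy illustration of explicit, symbolic belief tracking across multiple characters (not SymbolicToM itself), the sketch below maintains a per-character belief state and only updates the beliefs of characters who actually observe an event; the event API is an assumption for illustration.

```python
# Toy per-character belief tracker in the spirit of a plug-and-play symbolic
# approach; the event format and update rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class BeliefTracker:
    world: dict = field(default_factory=dict)     # true object locations
    beliefs: dict = field(default_factory=dict)   # character -> {object: location}
    present: set = field(default_factory=set)     # characters currently observing

    def enter(self, character):
        self.present.add(character)
        self.beliefs.setdefault(character, {})

    def leave(self, character):
        self.present.discard(character)

    def move(self, obj, location):
        """Only characters who are present observe the move."""
        self.world[obj] = location
        for character in self.present:
            self.beliefs[character][obj] = location

    def where_does_think(self, character, obj):
        return self.beliefs.get(character, {}).get(obj, "unknown")

# Classic Sally-Anne false-belief scenario.
t = BeliefTracker()
t.enter("Sally"); t.enter("Anne")
t.move("marble", "basket")
t.leave("Sally")
t.move("marble", "box")                           # Sally does not observe this
print(t.where_does_think("Sally", "marble"))      # basket (false belief)
print(t.where_does_think("Anne", "marble"))       # box
```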

Faith and Fate: Limits of Transformers on Compositionality

1 code implementation • NeurIPS 2023 • Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi

We formulate compositional tasks as computation graphs to systematically quantify the level of complexity, and break down reasoning steps into intermediate sub-procedures.
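
A rough sketch of the computation-graph view, assuming networkx and multi-digit multiplication as the compositional task; the decomposition into partial products and the size/depth metrics are illustrative, not the paper's exact graph construction.

```python
# Sketch: represent a compositional task (multi-digit multiplication) as a
# computation graph and read off simple complexity measures. Illustrative only.
import networkx as nx

def multiplication_graph(a: int, b: int) -> nx.DiGraph:
    """Decompose a * b into single-digit partial products feeding a final sum node."""
    g = nx.DiGraph()
    a_digits = [int(d) for d in str(a)][::-1]
    b_digits = [int(d) for d in str(b)][::-1]
    partials = []
    for i, db in enumerate(b_digits):
        for j, da in enumerate(a_digits):
            node = f"mul_{i}_{j}"
            g.add_node(node, value=da * db)
            g.add_edge(f"in_a_{j}", node)     # input digit of a
            g.add_edge(f"in_b_{i}", node)     # input digit of b
            partials.append(node)
    g.add_node("sum", value=a * b)            # final aggregation sub-procedure
    for p in partials:
        g.add_edge(p, "sum")
    return g

g = multiplication_graph(37, 24)
print("sub-procedures:", g.number_of_nodes())
print("reasoning depth:", nx.dag_longest_path_length(g))
```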

Referee: Reference-Free Sentence Summarization with Sharper Controllability through Symbolic Knowledge Distillation

no code implementations • 25 Oct 2022 • Melanie Sclar, Peter West, Sachin Kumar, Yulia Tsvetkov, Yejin Choi

Moreover, we uniquely propose iterative distillation of knowledge, where student models from the previous iteration of distillation serve as teacher models in the next iteration.

Knowledge Distillation, Sentence, +1
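
A high-level sketch of the iterative distillation loop described above, where each round's student becomes the next round's teacher; generate_summary, passes_filters, and finetune are hypothetical stubs standing in for the generation, filtering, and fine-tuning stages.

```python
# Sketch of iterative distillation: the student trained in one round serves as
# the teacher in the next. All helper functions below are hypothetical stubs.
def generate_summary(model, sentence: str) -> str:
    # Stub: a real implementation would sample a summary from the teacher model.
    return sentence.split(",")[0]

def passes_filters(sentence: str, summary: str) -> bool:
    # Stub: e.g. enforce a compression ratio as a simple controllability filter.
    return len(summary) < 0.7 * len(sentence)

def finetune(data):
    # Stub: a real implementation would fine-tune a smaller student model.
    return {"trained_on": len(data)}

def iterative_distillation(corpus, n_rounds: int = 3):
    teacher = None                        # round-0 teacher: a large off-the-shelf LM
    for _ in range(n_rounds):
        pairs = [(s, generate_summary(teacher, s)) for s in corpus]
        kept = [p for p in pairs if passes_filters(*p)]
        student = finetune(kept)          # train a fresh student on filtered pairs
        teacher = student                 # the student becomes the next teacher
    return teacher

corpus = ["The meeting ran long, so dinner was postponed until nine."]
print(iterative_distillation(corpus))
```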

Symmetric Machine Theory of Mind

no code implementations • 29 Sep 2021 • Melanie Sclar, Graham Neubig, Yonatan Bisk

Theory of mind (ToM), the ability to understand others' thoughts and desires, is a cornerstone of human intelligence.
