Search Results for author: Michael J. Q. Zhang

Found 10 papers, 5 papers with code

Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs

no code implementations • 16 Nov 2023 • Michael J. Q. Zhang, Eunsol Choi

In this work, we study how LMs can resolve ambiguity by proposing a task-agnostic framework in which the model asks users clarifying questions.

Machine Translation · Natural Language Inference · +1

Propagating Knowledge Updates to LMs Through Distillation

1 code implementation • NeurIPS 2023 • Shankar Padmanabhan, Yasumasa Onoe, Michael J. Q. Zhang, Greg Durrett, Eunsol Choi

Then, we update the model parameters so that the distribution of the LM (the student) matches the distribution of the LM conditioned on the definition (the teacher) on the transfer set.
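The KL-matching objective described above can be illustrated with a toy NumPy sketch. This is not the authors' implementation: the 5-token vocabulary, the teacher logits, and the learning rate are all invented for illustration, and a real run would compute both distributions with the same pretrained LM over a transfer-set corpus.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Teacher: next-token distribution of the LM *conditioned on the entity
# definition* (toy logits over a 5-token vocabulary, invented for illustration).
teacher = softmax(np.array([2.0, 0.5, 0.1, -1.0, 0.0]))

# Student: the same LM *without* the definition in context; we update its
# logits on a transfer-set context until its distribution matches the teacher's.
student_logits = np.zeros(5)
lr = 0.5
for _ in range(1000):
    # The gradient of KL(teacher || softmax(student_logits)) with respect to
    # the logits is (softmax(student_logits) - teacher).
    student_logits -= lr * (softmax(student_logits) - teacher)
```

After the loop, the student's distribution on this context is nearly identical to the teacher's, which is the sense in which the update "propagates" the definition into the model's parameters.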

knowledge editing · Language Modelling

Selectively Answering Ambiguous Questions

no code implementations • 24 May 2023 • Jeremy R. Cole, Michael J. Q. Zhang, Daniel Gillick, Julian Martin Eisenschlos, Bhuwan Dhingra, Jacob Eisenstein

We investigate selective question answering, focusing on answering a subset of questions with a high degree of accuracy, drawn from a pool in which many questions are inherently ambiguous.

Question Answering

Mitigating Temporal Misalignment by Discarding Outdated Facts

1 code implementation • 24 May 2023 • Michael J. Q. Zhang, Eunsol Choi

While large language models are able to retain vast amounts of world knowledge seen during pretraining, such knowledge is prone to going out of date and is nontrivial to update.

Question Answering · Retrieval · +1

Can LMs Learn New Entities from Descriptions? Challenges in Propagating Injected Knowledge

1 code implementation • 2 May 2023 • Yasumasa Onoe, Michael J. Q. Zhang, Shankar Padmanabhan, Greg Durrett, Eunsol Choi

Pre-trained language models (LMs) are used for knowledge intensive tasks like question answering, but their knowledge gets continuously outdated as the world changes.

Question Answering

DIFFQG: Generating Questions to Summarize Factual Changes

no code implementations • 1 Mar 2023 • Jeremy R. Cole, Palak Jain, Julian Martin Eisenschlos, Michael J. Q. Zhang, Eunsol Choi, Bhuwan Dhingra

We propose representing factual changes between paired documents as question-answer pairs, where the answer to the same question differs between two versions.
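The question-answer-pair representation described above can be sketched with a minimal data structure. The example fact and field names below are purely hypothetical, not drawn from the DIFFQG dataset:

```python
# Each factual change between two versions of a document is represented as a
# question whose answer differs across the versions (hypothetical example).
diff_qa_pairs = [
    {
        "question": "Who is the CEO of the company?",
        "answer_old": "Jane Doe",
        "answer_new": "John Smith",
    },
    {
        "question": "Where is the company headquartered?",
        "answer_old": "Austin",
        "answer_new": "Austin",  # unchanged fact: not a factual change
    },
]

def changed_facts(pairs):
    """Return the questions whose answers differ between the two versions."""
    return [p["question"] for p in pairs if p["answer_old"] != p["answer_new"]]
```

Under this representation, summarizing the factual changes between two document versions reduces to generating such questions and filtering for those whose answers differ.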

Change Detection · Question Generation · +1

Rich Knowledge Sources Bring Complex Knowledge Conflicts: Recalibrating Models to Reflect Conflicting Evidence

no code implementations • 25 Oct 2022 • Hung-Ting Chen, Michael J. Q. Zhang, Eunsol Choi

Question answering models can draw on rich knowledge sources: up to one hundred retrieved passages plus the parametric knowledge stored in a large-scale language model (LM).

Language Modelling · Question Answering · +1

Entity Cloze By Date: What LMs Know About Unseen Entities

no code implementations • Findings (NAACL) 2022 • Yasumasa Onoe, Michael J. Q. Zhang, Eunsol Choi, Greg Durrett

Given its wide coverage of entity knowledge and temporal indexing, our dataset can be used to evaluate LMs and techniques designed to modify or extend their knowledge.

CREAK: A Dataset for Commonsense Reasoning over Entity Knowledge

2 code implementations • 3 Sep 2021 • Yasumasa Onoe, Michael J. Q. Zhang, Eunsol Choi, Greg Durrett

We introduce CREAK, a testbed for commonsense reasoning about entity knowledge, bridging fact-checking about entities (Harry Potter is a wizard and is skilled at riding a broomstick) with commonsense inferences (if you're good at a skill you can teach others how to do it).

Fact Checking · Fact Verification · +1
