Search Results for author: Greg Durrett

Found 97 papers, 60 papers with code

Contemporary NLP Modeling in Six Comprehensive Programming Assignments

no code implementations NAACL (TeachingNLP) 2021 Greg Durrett, Jifan Chen, Shrey Desai, Tanya Goyal, Lucas Kabela, Yasumasa Onoe, Jiacheng Xu

We present a series of programming assignments, adaptable to a range of experience levels from advanced undergraduate to PhD, to teach students design and implementation of modern NLP systems.

EchoGen: Generating Conclusions from Echocardiogram Notes

no code implementations BioNLP (ACL) 2022 Liyan Tang, Shravan Kooragayalu, Yanshan Wang, Ying Ding, Greg Durrett, Justin F. Rousseau, Yifan Peng

Generating a summary from findings has recently been explored (Zhang et al., 2018, 2020) in note types such as radiology reports, which are typically short.

Attribute

Can NLI Models Verify QA Systems’ Predictions?

1 code implementation Findings (EMNLP) 2021 Jifan Chen, Eunsol Choi, Greg Durrett

To build robust question answering systems, we need the ability to verify whether answers to questions are truly correct, not just “good enough” in the context of imperfect QA datasets.

Natural Language Inference Question Answering +1

A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation

no code implementations7 Dec 2023 Jarad Forristal, Niloofar Mireshghallah, Greg Durrett, Taylor Berg-Kirkpatrick

Recent work has shown that energy-based language modeling is an effective framework for controllable text generation because it enables flexible integration of arbitrary discriminators.

Language Modelling Large Language Model +1
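
The paper's title refers to a Metropolis-Hastings sampler over text; the toy accept/reject step below sketches that general idea under a symmetric proposal, with a made-up energy function standing in for the combination of language model and discriminator scores the abstract alludes to. It is not the paper's block proposal.

    import math
    import random

    def energy(text: str) -> float:
        # Toy energy: shorter text is lower-energy. A real controllable-generation
        # setup would combine a language model score with discriminator scores.
        return float(len(text))

    def mh_step(current: str, proposal: str) -> str:
        # Metropolis-Hastings accept/reject for a symmetric proposal:
        # accept with probability min(1, exp(E(current) - E(proposal))).
        accept_prob = min(1.0, math.exp(energy(current) - energy(proposal)))
        return proposal if random.random() < accept_prob else current

    state = "a fairly long candidate sentence produced by the proposal"
    state = mh_step(state, "a shorter candidate")
    print(state)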

MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning

1 code implementation24 Oct 2023 Zayne Sprague, Xi Ye, Kaj Bostrom, Swarat Chaudhuri, Greg Durrett

We evaluate a range of LLMs and prompting techniques on this dataset and characterize the gaps that remain for techniques like chain-of-thought to perform robust reasoning.

QUDEVAL: The Evaluation of Questions Under Discussion Discourse Parsing

1 code implementation23 Oct 2023 Yating Wu, Ritika Mangla, Greg Durrett, Junyi Jessy Li

Questions Under Discussion (QUD) is a versatile linguistic framework in which discourse progresses by continuously raising questions and answering them.

Discourse Parsing Language Modelling +3

A Long Way to Go: Investigating Length Correlations in RLHF

1 code implementation5 Oct 2023 Prasann Singhal, Tanya Goyal, Jiacheng Xu, Greg Durrett

Furthermore, we find that even running RLHF with a reward based solely on length can reproduce most of the downstream improvements over the initial policy model, showing that reward models in these settings have a long way to go.

Question Answering

X-PARADE: Cross-Lingual Textual Entailment and Information Divergence across Paragraphs

no code implementations16 Sep 2023 Juan Diego Rodriguez, Katrin Erk, Greg Durrett

Understanding when two pieces of text convey the same information is a goal touching many subproblems in NLP, including textual entailment and fact-checking.

Fact Checking Machine Translation +1

Deductive Additivity for Planning of Natural Language Proofs

1 code implementation5 Jul 2023 Zayne Sprague, Kaj Bostrom, Swarat Chaudhuri, Greg Durrett

Specifically, we evaluate whether embedding spaces exhibit a property we call deductive additivity: the sum of premise statement embeddings should be close to embeddings of conclusions based on those premises.

Language Modelling Large Language Model +1
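
A minimal sketch of how one might probe the deductive additivity property defined above, assuming a generic off-the-shelf sentence encoder (the sentence-transformers model named here is an illustrative choice, not necessarily one evaluated in the paper):

    from sentence_transformers import SentenceTransformer
    import numpy as np

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice

    premises = ["All birds can fly.", "A sparrow is a bird."]
    conclusion = "A sparrow can fly."

    premise_vecs = encoder.encode(premises)           # one embedding per premise
    conclusion_vec = encoder.encode([conclusion])[0]  # embedding of the conclusion

    combined = premise_vecs.sum(axis=0)               # deductive additivity: sum the premises
    cosine = np.dot(combined, conclusion_vec) / (
        np.linalg.norm(combined) * np.linalg.norm(conclusion_vec)
    )
    print(f"cos(sum of premises, conclusion) = {cosine:.3f}")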

Propagating Knowledge Updates to LMs Through Distillation

1 code implementation NeurIPS 2023 Shankar Padmanabhan, Yasumasa Onoe, Michael J. Q. Zhang, Greg Durrett, Eunsol Choi

Then, we update the model parameters so that the distribution of the LM (the student) matches the distribution of the LM conditioned on the definition (the teacher) on the transfer set.

knowledge editing Language Modelling
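
A minimal sketch of the kind of distillation objective described above: the student is the LM on its own, the teacher is the same LM with the entity definition prepended, and the student is pushed toward the teacher's distribution on a transfer sentence. The model name, the example definition, and the single-next-token loss are illustrative simplifications, not the paper's exact setup.

    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")              # illustrative model choice
    student = AutoModelForCausalLM.from_pretrained("gpt2")
    teacher = AutoModelForCausalLM.from_pretrained("gpt2")   # frozen copy of the same LM
    teacher.eval()

    definition = "Mount Semeru is a volcano in East Java, Indonesia."  # made-up example
    transfer_sentence = "Mount Semeru last erupted"

    def next_token_logits(model, text):
        ids = tok(text, return_tensors="pt").input_ids
        return model(ids).logits[:, -1, :]

    with torch.no_grad():
        # Teacher: the LM conditioned on the definition.
        teacher_logits = next_token_logits(teacher, definition + " " + transfer_sentence)

    # Student: the same LM without the definition in context.
    student_logits = next_token_logits(student, transfer_sentence)

    # KL(teacher || student): push the student toward the conditioned distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    loss.backward()  # in practice, step an optimizer over a whole transfer set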

EEL: Efficiently Encoding Lattices for Reranking

1 code implementation1 Jun 2023 Prasann Singhal, Jiacheng Xu, Xi Ye, Greg Durrett

Standard decoding approaches for conditional text generation tasks typically search for an output hypothesis with high model probability, but this may not yield the best hypothesis according to human judgments of quality.

Conditional Text Generation

Less Likely Brainstorming: Using Language Models to Generate Alternative Hypotheses

no code implementations30 May 2023 Liyan Tang, Yifan Peng, Yanshan Wang, Ying Ding, Greg Durrett, Justin F. Rousseau

To tackle this problem, we propose a controlled text generation method that uses a novel contrastive learning strategy to encourage models to differentiate between generating likely and less likely outputs according to humans.

Contrastive Learning Decision Making +1

Coeditor: Leveraging Contextual Changes for Multi-round Code Auto-editing

no code implementations29 May 2023 Jiayi Wei, Greg Durrett, Isil Dillig

In this work, we explore a multi-round code auto-editing setting, aiming to predict edits to a code region based on recent changes within the same codebase.

Code Completion EDIT Task

Using Natural Language Explanations to Rescale Human Judgments

1 code implementation24 May 2023 Manya Wadhwa, Jifan Chen, Junyi Jessy Li, Greg Durrett

The rise of large language models (LLMs) has brought a critical need for high-quality human-labeled data, particularly for processes like human feedback and evaluation.

Question Answering

Drafting Event Schemas using Language Models

no code implementations24 May 2023 Anisha Gunjal, Greg Durrett

Past work has studied event prediction and event language modeling, sometimes mediated through structured representations of knowledge in the form of event schemas.

Descriptive Language Modelling +2

SatLM: Satisfiability-Aided Language Models Using Declarative Prompting

1 code implementation NeurIPS 2023 Xi Ye, Qiaochu Chen, Isil Dillig, Greg Durrett

In this paper, we propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of LLMs.

Arithmetic Reasoning Language Modelling

Can LMs Learn New Entities from Descriptions? Challenges in Propagating Injected Knowledge

1 code implementation2 May 2023 Yasumasa Onoe, Michael J. Q. Zhang, Shankar Padmanabhan, Greg Durrett, Eunsol Choi

Pre-trained language models (LMs) are used for knowledge intensive tasks like question answering, but their knowledge gets continuously outdated as the world changes.

Question Answering

TypeT5: Seq2seq Type Inference using Static Analysis

1 code implementation16 Mar 2023 Jiayi Wei, Greg Durrett, Isil Dillig

There has been growing interest in automatically predicting missing type annotations in programs written in Python and JavaScript.

Language Modelling Type prediction +1

WiCE: Real-World Entailment for Claims in Wikipedia

1 code implementation2 Mar 2023 Ryo Kamoi, Tanya Goyal, Juan Diego Rodriguez, Greg Durrett

Textual entailment models are increasingly applied in settings like fact-checking, presupposition verification in question answering, or summary evaluation.

Fact Checking Natural Language Inference +3

Modeling Complex Event Scenarios via Simple Entity-focused Questions

1 code implementation14 Feb 2023 Mahnaz Koupaee, Greg Durrett, Nathanael Chambers, Niranjan Balasubramanian

Event scenarios are often complex and involve multiple event sequences connected through different entity participants.

Language Modelling

Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting

1 code implementation9 Feb 2023 Xi Ye, Greg Durrett

We first generate sets of candidate explanations for each example in the prompt using a leave-one-out scheme, then find an effective combination of these explanations with a two-stage framework.

Mathematical Reasoning Natural Language Inference +1
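
A minimal sketch of the leave-one-out candidate generation step described above; llm_generate is a hypothetical stand-in for whatever LLM completion call is available, the tiny prompt set is made up for illustration, and the second-stage combination search is omitted.

    def llm_generate(prompt: str) -> str:
        # Hypothetical stand-in for an LLM completion API; returns a canned
        # string so the sketch runs end to end.
        return "Because 17 has no divisors other than 1 and itself."

    examples = [
        {"question": "Is 17 prime?", "answer": "yes"},
        {"question": "Is 21 prime?", "answer": "no"},
        {"question": "Is 29 prime?", "answer": "yes"},
    ]

    def candidate_explanations(i: int, n: int = 4) -> list:
        # Leave example i out of the prompt, then ask the model to explain
        # example i's answer; sample n candidate explanations.
        context = "\n".join(
            f"Q: {ex['question']}\nA: {ex['answer']}"
            for j, ex in enumerate(examples) if j != i
        )
        held_out = examples[i]
        prompt = (
            f"{context}\nQ: {held_out['question']}\n"
            f"Explain why the answer is {held_out['answer']}:"
        )
        return [llm_generate(prompt) for _ in range(n)]

    print(candidate_explanations(0, n=2))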

Prompted Opinion Summarization with GPT-3.5

1 code implementation29 Nov 2022 Adithya Bhaskar, Alexander R. Fabbri, Greg Durrett

Large language models have shown impressive performance across a wide variety of tasks, including text summarization.

Opinion Summarization

Complementary Explanations for Effective In-Context Learning

1 code implementation25 Nov 2022 Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, Ves Stoyanov, Greg Durrett, Ramakanth Pasunuru

Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts, but there has been limited understanding of exactly how these explanations function or why they are effective.

In-Context Learning

Natural Language Deduction with Incomplete Information

2 code implementations1 Nov 2022 Zayne Sprague, Kaj Bostrom, Swarat Chaudhuri, Greg Durrett

A growing body of work studies how to answer a question or verify a claim by generating a natural language "proof": a chain of deductive inferences yielding the answer based on a set of premises.

Text Generation

Assessing Out-of-Domain Language Model Performance from Few Examples

no code implementations13 Oct 2022 Prasann Singhal, Jarad Forristal, Xi Ye, Greg Durrett

We address the task of predicting out-of-domain (OOD) performance in a few-shot fashion: given a few target-domain examples and a set of models with similar training performance, can we understand how these models will perform on OOD test data?

Language Modelling Natural Language Inference

Discourse Analysis via Questions and Answers: Parsing Dependency Structures of Questions Under Discussion

1 code implementation12 Oct 2022 Wei-Jen Ko, Yating Wu, Cutter Dalton, Dananjay Srinivas, Greg Durrett, Junyi Jessy Li

Human evaluation results show that QUD dependency parsing is possible for language models trained with this crowdsourced, generalizable annotation scheme.

Dependency Parsing Question Answering +1

News Summarization and Evaluation in the Era of GPT-3

1 code implementation26 Sep 2022 Tanya Goyal, Junyi Jessy Li, Greg Durrett

Finally, we evaluate models on a setting beyond generic summarization, specifically keyword-based summarization, and show how dominant fine-tuning approaches compare to prompting.

News Summarization Text Summarization

Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors

1 code implementation25 May 2022 Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yavuz, Wojciech Kryściński, Justin F. Rousseau, Greg Durrett

We compare performance of state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on this stratified benchmark and show that their performance varies significantly across different types of summarization models.

Abstractive Text Summarization

SNaC: Coherence Error Detection for Narrative Summarization

1 code implementation19 May 2022 Tanya Goyal, Junyi Jessy Li, Greg Durrett

In this work, we introduce SNaC, a narrative coherence evaluation framework rooted in fine-grained annotations for long summaries.

Benchmarking Coherence Evaluation +1

Generating Literal and Implied Subquestions to Fact-check Complex Claims

no code implementations14 May 2022 Jifan Chen, Aniruddh Sriram, Eunsol Choi, Greg Durrett

Verifying complex political claims is a challenging task, especially when politicians use various tactics to subtly misrepresent the facts.

Fact Checking

The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning

1 code implementation6 May 2022 Xi Ye, Greg Durrett

Does prompting a large language model (LLM) like GPT-3 with explanations improve in-context learning?

In-Context Learning Language Modelling +3

Entity Cloze By Date: What LMs Know About Unseen Entities

no code implementations Findings (NAACL) 2022 Yasumasa Onoe, Michael J. Q. Zhang, Eunsol Choi, Greg Durrett

Given its wide coverage on entity knowledge and temporal indexing, our dataset can be used to evaluate LMs and techniques designed to modify or extend their knowledge.

Natural Language Deduction through Search over Statement Compositions

no code implementations16 Jan 2022 Kaj Bostrom, Zayne Sprague, Swarat Chaudhuri, Greg Durrett

In settings from fact-checking to question answering, we frequently want to know whether a collection of evidence (premises) entails a hypothesis.

Fact Checking Question Answering

Massive-scale Decoding for Text Generation using Lattices

1 code implementation NAACL 2022 Jiacheng Xu, Siddhartha Reddy Jonnalagadda, Greg Durrett

Conditional neural text generation models generate high-quality outputs, but often concentrate around a mode when what we really want is a diverse set of options.

Document Summarization Machine Translation +2

Discourse Comprehension: A Question Answering Framework to Represent Sentence Connections

1 code implementation1 Nov 2021 Wei-Jen Ko, Cutter Dalton, Mark Simmons, Eliza Fisher, Greg Durrett, Junyi Jessy Li

While there has been substantial progress in text comprehension through simple factoid question answering, more holistic comprehension of a discourse still presents a major challenge (Dunietz et al., 2020).

Question Answering Reading Comprehension +1

Cross-Lingual Fine-Grained Entity Typing

no code implementations15 Oct 2021 Nila Selvaraj, Yasumasa Onoe, Greg Durrett

In this paper, we present a unified cross-lingual fine-grained entity typing model capable of handling over 100 languages and analyze this model's ability to generalize to languages and entities unseen during training.

Entity Typing

ASPECTNEWS: Aspect-Oriented Summarization of News Documents

1 code implementation ACL 2022 Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, Greg Durrett

Generic summaries try to cover an entire document and query-based summaries try to answer document-specific questions.

Query-focused Summarization

Training Dynamics for Text Summarization Models

no code implementations Findings (ACL) 2022 Tanya Goyal, Jiacheng Xu, Junyi Jessy Li, Greg Durrett

Across different datasets (CNN/DM, XSum, MediaSum) and summary properties, such as abstractiveness and hallucination, we study what the model learns at different stages of its fine-tuning process.

Hallucination News Summarization +1

Can Explanations Be Useful for Calibrating Black Box Models?

2 code implementations ACL 2022 Xi Ye, Greg Durrett

Our approach first extracts a set of features combining human intuition about the task with model attributions generated by black box interpretation techniques, then uses a simple calibrator, in the form of a classifier, to predict whether the base model was correct or not.

Extractive Question-Answering Few-Shot Learning +2
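
A minimal sketch of the calibration setup described above: features derived from model attributions feed a simple classifier that predicts whether the base model was correct. The feature values and labels here are made up for illustration; the paper's actual feature set is task-specific.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: attribution-derived features for one example, e.g. attribution
    # mass on the question, attribution mass on overlapping tokens, and the
    # base model's answer probability (all made up here).
    features = np.array([
        [0.82, 0.61, 0.95],
        [0.15, 0.08, 0.70],
        [0.77, 0.58, 0.90],
        [0.20, 0.12, 0.65],
    ])
    was_correct = np.array([1, 0, 1, 0])  # did the base model get this example right?

    calibrator = LogisticRegression().fit(features, was_correct)

    new_example = np.array([[0.70, 0.50, 0.88]])
    print("P(base model is correct):", calibrator.predict_proba(new_example)[0, 1])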

Making Document-Level Information Extraction Right for the Right Reasons

no code implementations14 Oct 2021 Liyan Tang, Dhruv Rajan, Suyash Mohan, Abhijeet Pradhan, R. Nick Bryan, Greg Durrett

We show that regularization with small amounts of evidence supervision during training can substantially improve the quality of extracted evidence.

Sentence slot-filling +1

CREAK: A Dataset for Commonsense Reasoning over Entity Knowledge

2 code implementations3 Sep 2021 Yasumasa Onoe, Michael J. Q. Zhang, Eunsol Choi, Greg Durrett

We introduce CREAK, a testbed for commonsense reasoning about entity knowledge, bridging fact-checking about entities (Harry Potter is a wizard and is skilled at riding a broomstick) with commonsense inferences (if you're good at a skill you can teach others how to do it).

Fact Checking Fact Verification +1

Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution

1 code implementation ACL 2021 Jiacheng Xu, Greg Durrett

Despite the prominence of neural abstractive summarization models, we know little about how they actually form summaries and how to understand where their decisions come from.

Abstractive Text Summarization Language Modelling +3

Flexible Generation of Natural Language Deductions

1 code implementation EMNLP 2021 Kaj Bostrom, Xinyu Zhao, Swarat Chaudhuri, Greg Durrett

Natural language is an attractive representation for this purpose: it is both highly expressive and easy for humans to understand.

Sentence

Did they answer? Subjective acts and intents in conversational discourse

1 code implementation NAACL 2021 Elisa Ferracane, Greg Durrett, Junyi Jessy Li, Katrin Erk

Discourse signals are often implicit, leaving it up to the interpreter to draw the required inferences.

valid

Connecting Attributions and QA Model Behavior on Realistic Counterfactuals

1 code implementation EMNLP 2021 Xi Ye, Rohan Nair, Greg Durrett

When a model attribution technique highlights a particular part of the input, a user might understand this highlight as making a statement about counterfactuals (Miller, 2019): if that part of the input were to change, the model's prediction might change as well.

counterfactual Machine Reading Comprehension +1
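
A minimal sketch of the counterfactual reading of attributions described above: edit the highlighted part of the input and check whether the prediction changes. The QA pipeline and the toy edit are illustrative choices, not the paper's experimental setup.

    from transformers import pipeline

    qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

    question = "Where was the concert held?"
    context = "The concert was held in Austin last spring."
    perturbed = context.replace("Austin", "Dallas")  # counterfactual edit of the highlighted span

    print(qa(question=question, context=context)["answer"])    # original prediction
    print(qa(question=question, context=perturbed)["answer"])  # should shift if the highlight mattered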

Annotating and Modeling Fine-grained Factuality in Summarization

2 code implementations NAACL 2021 Tanya Goyal, Greg Durrett

Recent pre-trained abstractive summarization systems have started to achieve credible performance, but a major barrier to their use in practice is their propensity to output summaries that are not faithful to the input and that contain factual errors.

Abstractive Text Summarization Sentence

Model Agnostic Answer Reranking System for Adversarial Question Answering

no code implementations EACL 2021 Sagnik Majumder, Chinmoy Samant, Greg Durrett

While numerous methods have been proposed as defenses against adversarial examples in question answering (QA), these techniques are often model specific, require retraining of the model, and give only marginal improvements in performance over vanilla models.

Question Answering

Modeling Fine-Grained Entity Types with Box Embeddings

1 code implementation ACL 2021 Yasumasa Onoe, Michael Boratko, Andrew McCallum, Greg Durrett

Neural entity typing models typically represent fine-grained entity types as vectors in a high-dimensional space, but such spaces are not well-suited to modeling these types' complex interdependencies.

Entity Typing
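
A minimal sketch of the box-embedding idea behind the paper above: each type is an axis-aligned box, and type interdependencies such as subtyping can be expressed as box containment. The two-dimensional coordinates are made up for illustration; the paper learns high-dimensional boxes with soft (probabilistic) containment.

    import numpy as np

    # Each box: (lower corner, upper corner) in a tiny 2-D space.
    person = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))
    artist = (np.array([0.2, 0.1]), np.array([0.6, 0.8]))

    def contains(outer, inner) -> bool:
        # "artist" is a subtype of "person" if its box nests inside person's box.
        return bool(np.all(outer[0] <= inner[0]) and np.all(inner[1] <= outer[1]))

    print(contains(person, artist))  # True for these made-up coordinates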

Conditional Generation of Temporally-ordered Event Sequences

no code implementations ACL 2021 Shih-ting Lin, Nathanael Chambers, Greg Durrett

We propose a single model that addresses both temporal ordering, sorting given events into the order they occurred, and event infilling, predicting new events which fit into an existing temporally-ordered sequence.

Denoising Story Completion

Effective Distant Supervision for Temporal Relation Extraction

2 code implementations EACL (AdaptNLP) 2021 Xinyu Zhao, Shih-ting Lin, Greg Durrett

A principal barrier to training temporal relation extraction models in new domains is the lack of varied, high quality examples and the challenge of collecting more.

Relation Temporal Relation Extraction

Compressive Summarization with Plausibility and Salience Modeling

1 code implementation EMNLP 2020 Shrey Desai, Jiacheng Xu, Greg Durrett

Compressive summarization systems typically rely on a crafted set of syntactic rules to determine what spans of possible summary sentences can be deleted, then learn a model of what to actually delete by optimizing for content selection (ROUGE).

Sentence

Understanding Neural Abstractive Summarization Models via Uncertainty

1 code implementation EMNLP 2020 Jiacheng Xu, Shrey Desai, Greg Durrett

An advantage of seq2seq abstractive summarization models is that they generate text in a free-form manner, but this flexibility makes it difficult to interpret model behavior.

Abstractive Text Summarization Sentence +1

Evaluating Factuality in Generation with Dependency-level Entailment

1 code implementation Findings of the Association for Computational Linguistics 2020 Tanya Goyal, Greg Durrett

Experiments show that our dependency arc entailment model trained on this data can identify factual inconsistencies in paraphrasing and summarization better than sentence-level methods or those based on question generation, while additionally localizing the erroneous parts of the generation.

Natural Language Inference Question Generation +3

Inquisitive Question Generation for High Level Text Comprehension

1 code implementation EMNLP 2020 Wei-Jen Ko, Te-Yuan Chen, Yiyan Huang, Greg Durrett, Junyi Jessy Li

Inquisitive probing questions come naturally to humans in a variety of settings, but generating them is a challenging task for automatic systems.

Question Generation Question-Generation +2

Optimal Neural Program Synthesis from Multimodal Specifications

no code implementations Findings (EMNLP) 2021 Xi Ye, Qiaochu Chen, Isil Dillig, Greg Durrett

Multimodal program synthesis, which leverages different types of user input to synthesize a desired program, is an attractive way to scale program synthesis to challenging settings; however, it requires integrating noisy signals from the user, like natural language, with hard constraints on the program's behavior.

Program Synthesis valid

Tradeoffs in Sentence Selection Techniques for Open-Domain Question Answering

no code implementations18 Sep 2020 Shih-ting Lin, Greg Durrett

Current methods in open-domain question answering (QA) usually employ a pipeline of first retrieving relevant documents, then applying strong reading comprehension (RC) models to that retrieved text.

Open-Domain Question Answering Reading Comprehension +3

Narrative Interpolation for Generating and Understanding Stories

no code implementations17 Aug 2020 Su Wang, Greg Durrett, Katrin Erk

We propose a method for controlled narrative/story generation in which we guide the model to produce coherent narratives with user-specified target endings by interpolation: for example, given that Jim went hiking and that at the end he needed to be rescued, we want the model to incrementally generate the steps along the way.

Sentence Story Generation

Neural Syntactic Preordering for Controlled Paraphrase Generation

2 code implementations ACL 2020 Tanya Goyal, Greg Durrett

Paraphrasing natural language sentences is a multifaceted process: it might involve replacing individual words or short phrases, local rearrangement of content, or high-level restructuring like topicalization or passivization.

Machine Translation Paraphrase Generation +2

Benchmarking Multimodal Regex Synthesis with Complex Structures

no code implementations ACL 2020 Xi Ye, Qiaochu Chen, Isil Dillig, Greg Durrett

Existing datasets for regular expression (regex) generation from natural language are limited in complexity; compared to regex tasks that users post on StackOverflow, the regexes in these datasets are simple, and the language used to describe them is not diverse.

Benchmarking

Interpretable Entity Representations through Large-Scale Typing

no code implementations Findings of the Association for Computational Linguistics 2020 Yasumasa Onoe, Greg Durrett

On entity probing tasks involving recognizing entity identity, our embeddings used in parameter-free downstream models achieve competitive performance with ELMo- and BERT-based embeddings in trained models.

Entity Embeddings Entity Typing

Robust Question Answering Through Sub-part Alignment

no code implementations NAACL 2021 Jifan Chen, Greg Durrett

Current textual question answering models achieve strong performance on in-domain test sets, but often do so by fitting surface-level patterns in the data, so they fail to generalize to out-of-distribution settings.

Question Answering

LambdaNet: Probabilistic Type Inference using Graph Neural Networks

1 code implementation ICLR 2020 Jiayi Wei, Maruth Goyal, Greg Durrett, Isil Dillig

Given this program abstraction, we then use a graph neural network to propagate information between related type variables and eventually make type predictions.

Code Completion Type Prediction
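
A toy illustration of the message-passing idea described above: type variables are nodes in a graph, and related variables exchange information over edges for a few rounds before types are predicted. This is not LambdaNet's actual architecture, just a minimal sketch of graph propagation.

    import torch

    num_type_vars, dim = 4, 8
    node_states = torch.randn(num_type_vars, dim)  # one vector per type variable
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # pairs of "related" type variables

    adj = torch.zeros(num_type_vars, num_type_vars)
    for src, dst in edges:
        adj[dst, src] = 1.0                        # messages flow src -> dst

    update = torch.nn.Linear(dim, dim)
    for _ in range(3):                             # a few rounds of propagation
        messages = adj @ node_states               # aggregate neighbor states
        node_states = node_states + torch.relu(update(messages))  # residual update

    type_scores = node_states @ torch.randn(dim, 10)  # score 10 candidate types
    print(type_scores.argmax(dim=-1))                 # predicted type index per variable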

Byte Pair Encoding is Suboptimal for Language Model Pretraining

no code implementations Findings of the Association for Computational Linguistics 2020 Kaj Bostrom, Greg Durrett

We analyze differences between BPE and unigram LM tokenization, finding that the latter method recovers subword units that align more closely with morphology and avoids problems stemming from BPE's greedy construction procedure.

Language Modelling
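
A quick way to see the kind of difference analyzed above is to compare a BPE tokenizer with a unigram-LM (SentencePiece) tokenizer on the same words; GPT-2 and ALBERT are used here only as convenient off-the-shelf stand-ins for the two schemes, not as the exact tokenizers studied in the paper.

    from transformers import AutoTokenizer

    bpe_tok = AutoTokenizer.from_pretrained("gpt2")                # byte-level BPE
    unigram_tok = AutoTokenizer.from_pretrained("albert-base-v2")  # SentencePiece unigram LM

    for word in ["unhappiness", "misunderstanding"]:
        print(word)
        print("  BPE:     ", bpe_tok.tokenize(word))
        print("  Unigram: ", unigram_tok.tokenize(word))
    # The paper's finding is that unigram-LM segmentations tend to align more
    # closely with morpheme boundaries than BPE's greedy merges do.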

Calibration of Pre-trained Transformers

1 code implementation EMNLP 2020 Shrey Desai, Greg Durrett

Pre-trained Transformers are now ubiquitous in natural language processing, but despite their high end-task performance, little is known empirically about whether they are calibrated.

Natural Language Inference

Multi-hop Question Answering via Reasoning Chains

3 code implementations7 Oct 2019 Jifan Chen, Shih-ting Lin, Greg Durrett

Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way.

Multi-hop Question Answering Named Entity Recognition +3

Query-Focused Scenario Construction

no code implementations IJCNLP 2019 Su Wang, Greg Durrett, Katrin Erk

The news coverage of events often contains not one but multiple incompatible accounts of what happened.

Clustering

Fine-Grained Entity Typing for Domain Independent Entity Linking

1 code implementation12 Sep 2019 Yasumasa Onoe, Greg Durrett

For this problem, a domain is characterized not just by the genre of text but even by factors as specific as the particular distribution of entities, since neural models tend to overfit by memorizing properties of frequent entities in a dataset.

Entity Linking Entity Typing

Effective Use of Transformer Networks for Entity Tracking

1 code implementation IJCNLP 2019 Aditya Gupta, Greg Durrett

Tracking entities in procedural language requires understanding the transformations arising from actions on entities as well as those entities' interactions.

Natural Language Understanding

Sketch-Driven Regular Expression Generation from Natural Language and Examples

1 code implementation16 Aug 2019 Xi Ye, Qiaochu Chen, Xinyu Wang, Isil Dillig, Greg Durrett

Our system achieves state-of-the-art performance on the prior datasets and solves 57% of the real-world dataset, which existing neural systems completely fail on.

Embedding time expressions for deep temporal ordering models

3 code implementations ACL 2019 Tanya Goyal, Greg Durrett

Data-driven models have demonstrated state-of-the-art performance in inferring the temporal ordering of events in text.

Learning to Denoise Distantly-Labeled Data for Entity Typing

1 code implementation NAACL 2019 Yasumasa Onoe, Greg Durrett

In this work, we propose a two-stage procedure for handling this type of data: denoise it with a learned model, then train our final model on clean and denoised distant data with standard supervised training.

Denoising Entity Typing

Understanding Dataset Design Choices for Multi-hop Reasoning

no code implementations NAACL 2019 Jifan Chen, Greg Durrett

First, we explore sentence-factored models for these tasks; by design, these models cannot do multi-hop reasoning, but they are still able to solve a large number of examples in both datasets.

Multi-hop Question Answering Multiple-choice +3

Tracking Discrete and Continuous Entity State for Process Understanding

no code implementations WS 2019 Aditya Gupta, Greg Durrett

The global discrete state structure is explicitly modeled with a neural CRF over the changing hidden representation of the entity.

Procedural Text Understanding

Neural Extractive Text Summarization with Syntactic Compression

1 code implementation IJCNLP 2019 Jiacheng Xu, Greg Durrett

In this work, we present a neural model for single-document summarization based on joint extraction and syntactic compression.

Document Summarization Extractive Text Summarization

Domain Agnostic Real-Valued Specificity Prediction

1 code implementation13 Nov 2018 Wei-Jen Ko, Greg Durrett, Junyi Jessy Li

Sentence specificity quantifies the level of detail in a sentence, characterizing the organization of information in discourse.

Dialogue Generation Informativeness +3

Picking Apart Story Salads

no code implementations EMNLP 2018 Su Wang, Eric Holgate, Greg Durrett, Katrin Erk

During natural disasters and conflicts, information about what happened is often confusing, messy, and distributed across many sources.

Clustering

Effective Use of Context in Noisy Entity Linking

1 code implementation EMNLP 2018 David Mueller, Greg Durrett

To disambiguate between closely related concepts, entity linking systems need to effectively distill cues from their context, which may be quite noisy.

Entity Linking

Spherical Latent Spaces for Stable Variational Autoencoders

1 code implementation EMNLP 2018 Jiacheng Xu, Greg Durrett

A hallmark of variational autoencoders (VAEs) for text processing is their combination of powerful encoder-decoder models, such as LSTMs, with simple latent distributions, typically multivariate Gaussians.

Language Modelling Topic Models
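
A minimal sketch of the standard Gaussian-latent text VAE setup the abstract refers to (the paper's contribution is to replace this Gaussian latent with a von Mises-Fisher distribution on the sphere); dimensions are made up and the encoder is reduced to a random placeholder.

    import torch

    hidden = torch.randn(2, 16)        # stand-in for an LSTM encoder's sentence summary
    to_mu = torch.nn.Linear(16, 8)
    to_logvar = torch.nn.Linear(16, 8)

    mu, logvar = to_mu(hidden), to_logvar(hidden)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * logvar) * eps  # reparameterized Gaussian latent fed to the decoder

    # KL term against the standard normal prior, one value per example.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    print(kl)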

Modeling Semantic Plausibility by Injecting World Knowledge

1 code implementation NAACL 2018 Su Wang, Greg Durrett, Katrin Erk

Distributional data tells us that a man can swallow candy, but not that a man can swallow a paintball, since this is never attested.

World Knowledge

Capturing Semantic Similarity for Entity Linking with Convolutional Neural Networks

1 code implementation NAACL 2016 Matthew Francis-Landau, Greg Durrett, Dan Klein

A key challenge in entity linking is making effective use of contextual information to disambiguate mentions that might refer to different entities in different contexts.

Entity Linking Semantic correspondence +2

Learning-Based Single-Document Summarization with Compression and Anaphoricity Constraints

no code implementations ACL 2016 Greg Durrett, Taylor Berg-Kirkpatrick, Dan Klein

We present a discriminative model for single-document summarization that integrally combines compression and anaphoricity constraints.

Document Summarization Sentence

Neural CRF Parsing

no code implementations IJCNLP 2015 Greg Durrett, Dan Klein

This paper describes a parsing model that combines the exact dynamic programming of CRF parsing with the rich nonlinear featurization of neural net approaches.

A Joint Model for Entity Analysis: Coreference, Typing, and Linking

no code implementations TACL 2014 Greg Durrett, Dan Klein

We present a joint model of three core tasks in the entity analysis stack: coreference resolution (within-document clustering), named entity recognition (coarse semantic typing), and entity linking (matching to Wikipedia entities).

Clustering coreference-resolution +4
