Search Results for author: Seth Ebner

Found 7 papers, 2 papers with code

A Closer Look at Claim Decomposition

no code implementations18 Mar 2024 Miriam Wanner, Seth Ebner, Zhengping Jiang, Mark Dredze, Benjamin Van Durme

We investigate how various methods of claim decomposition -- especially LLM-based methods -- affect the result of an evaluation approach such as the recently proposed FActScore, finding that it is sensitive to the decomposition method used.

Attribute
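The paper releases no code, but the evaluation pipeline it studies is easy to outline. Below is a minimal FActScore-style sketch: decompose a claim into atomic subclaims, verify each, and report the fraction supported. The `decompose` and `is_supported` callables are hypothetical stand-ins for an LLM decomposer and a retrieval-based fact checker; they are not components from the paper.

```python
# Hypothetical FActScore-style scoring: the score is the fraction of
# atomic subclaims judged supported by evidence. The decomposer and
# verifier below are toy stand-ins, not the paper's implementation.
from typing import Callable, List

def factscore(claim: str,
              decompose: Callable[[str], List[str]],
              is_supported: Callable[[str], bool]) -> float:
    """Fraction of atomic subclaims judged supported."""
    subclaims = decompose(claim)
    if not subclaims:
        return 0.0
    return sum(is_supported(s) for s in subclaims) / len(subclaims)

# Toy stand-ins: naive sentence splitting and a keyword check.
toy_decompose = lambda c: [s.strip() for s in c.split(".") if s.strip()]
toy_support = lambda s: "Paris" in s

claim = "Marie Curie was born in Paris. She won two Nobel Prizes."
print(factscore(claim, toy_decompose, toy_support))  # 0.5
```

The paper's finding is precisely that the final score moves with whichever decomposer is plugged into the first step.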

An Augmentation Strategy for Visually Rich Documents

no code implementations20 Dec 2022 Jing Xie, James B. Wendt, Yichao Zhou, Seth Ebner, Sandeep Tata

Many business workflows require extracting important fields from form-like documents (e.g., bank statements, bills of lading, purchase orders).

Data Augmentation
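The abstract does not specify the augmentation strategy, so the sketch below shows one generic augmentation for labeled form-like documents, assuming value substitution: resample a field's value while keeping its layout position. All names (`FieldToken`, `VALUE_POOL`) are illustrative, not the paper's API.

```python
# Hypothetical augmentation for form-like documents: swap a labeled
# field's value for another plausible value of the same type, keeping
# the token's bounding box on the page. Names are illustrative only.
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class FieldToken:
    text: str
    field_type: str   # e.g. "date", "amount", "po_number"
    bbox: tuple       # (x0, y0, x1, y1) position on the page

VALUE_POOL = {
    "date": ["01/02/2023", "2022-12-20", "Mar 18, 2024"],
    "amount": ["$1,203.50", "$87.00", "$15,000.00"],
}

def augment(tokens, rng=random):
    """Return a copy of the document with field values resampled."""
    return [replace(t, text=rng.choice(VALUE_POOL[t.field_type]))
            if t.field_type in VALUE_POOL else t
            for t in tokens]

doc = [FieldToken("12/20/2022", "date", (40, 10, 120, 22)),
       FieldToken("$87.00", "amount", (300, 10, 360, 22))]
print([t.text for t in augment(doc)])
```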

Gradual Fine-Tuning for Low-Resource Domain Adaptation

2 code implementations EACL (AdaptNLP) 2021 Haoran Xu, Seth Ebner, Mahsa Yarmohammadi, Aaron Steven White, Benjamin Van Durme, Kenton Murray

Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain.

Domain Adaptation
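Code for this paper is released; the sketch below only illustrates the data schedule the abstract describes: train in stages, mixing all in-domain data with a shrinking sample of out-of-domain data until only in-domain data remains. `train_one_stage` is a placeholder for whatever training loop the model uses.

```python
# Sketch of a gradual fine-tuning schedule. Only the data mixing is
# shown; the out-of-domain fraction decays linearly to zero across
# stages. `train_one_stage` is a placeholder, not the released code.
import random

def gradual_finetune(model, in_domain, out_of_domain,
                     stages=4, train_one_stage=None, seed=0):
    rng = random.Random(seed)
    for stage in range(stages):
        frac = 1.0 - stage / (stages - 1) if stages > 1 else 0.0
        k = int(frac * len(out_of_domain))
        mixed = in_domain + rng.sample(out_of_domain, k)
        rng.shuffle(mixed)
        print(f"stage {stage}: {len(in_domain)} in-domain + {k} out-of-domain")
        if train_one_stage:
            model = train_one_stage(model, mixed)
    return model
```

The linked implementations are the authoritative reference; this shows only the shrinking-mixture idea.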

Reading the Manual: Event Extraction as Definition Comprehension

no code implementations EMNLP (spnlp) 2020 Yunmo Chen, Tongfei Chen, Seth Ebner, Aaron Steven White, Benjamin Van Durme

We ask whether text understanding has progressed to where we may extract event information through incremental refinement of bleached statements derived from annotation manuals.

Event Extraction
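The abstract's "incremental refinement of bleached statements" can be pictured as filling one slot of a generic template at a time. The sketch below assumes that framing; `select_span` stands in for a machine reading comprehension model and is not the paper's method.

```python
# Illustrative refinement of a "bleached" statement: start from a
# generic template derived from an annotation manual and fill slots
# one at a time with spans selected from the text.
BLEACHED = "[agent] attacked [target] at [place]."

def refine(statement, text, slots, select_span):
    for slot in slots:                       # fill one slot per step
        span = select_span(statement, slot, text)
        statement = statement.replace(f"[{slot}]", span)
        print(statement)                     # show each refinement
    return statement

# Toy span selector keyed on the slot name.
toy_spans = {"agent": "The rebels", "target": "the convoy", "place": "Mosul"}
select = lambda stmt, slot, text: toy_spans[slot]

text = "The rebels attacked the convoy near Mosul on Tuesday."
refine(BLEACHED, text, ["agent", "target", "place"], select)
```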

Multi-Sentence Argument Linking

no code implementations ACL 2020 Seth Ebner, Patrick Xia, Ryan Culkin, Kyle Rawlins, Benjamin Van Durme

We present a novel document-level model for finding argument spans that fill an event's roles, connecting related ideas in sentence-level semantic role labeling and coreference resolution.

Coreference Resolution Semantic Role Labeling +2
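As a rough picture of document-level argument linking, the sketch below scores every candidate span in a document against each (trigger, role) pair and keeps the best match, which may come from a different sentence than the trigger. The `score` function stands in for a learned compatibility model; nothing here is the paper's architecture.

```python
# Hypothetical document-level argument linking: for each (trigger, role)
# pair, pick the highest-scoring candidate span anywhere in the document.
def link_arguments(triggers, roles, candidate_spans, score, threshold=0.5):
    links = {}
    for trig in triggers:
        for role in roles:
            best = max(candidate_spans, key=lambda s: score(trig, role, s))
            if score(trig, role, best) >= threshold:
                links[(trig, role)] = best  # span may cross sentences
    return links

# Toy scorer backed by a hand-written table.
gold = {("bombing", "Place"): "downtown Kabul"}
score = lambda t, r, s: 1.0 if gold.get((t, r)) == s else 0.0

spans = ["downtown Kabul", "the embassy", "Tuesday"]
print(link_arguments(["bombing"], ["Place", "Target"], spans, score))
```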

Bag-of-Words Transfer: Non-Contextual Techniques for Multi-Task Learning

no code implementations WS 2019 Seth Ebner, Felicity Wang, Benjamin Van Durme

Many architectures for multi-task learning (MTL) have been proposed to take advantage of transfer among tasks, often involving complex models and training procedures.

Multi-Task Learning Sentence +1
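To make "non-contextual" concrete: the sketch below uses a single shared bag-of-words encoder (a mean of token embeddings, with no word order) and one linear head per task. Dimensions and task names are placeholders, not the paper's setup.

```python
# Minimal non-contextual multi-task setup: a shared bag-of-words
# encoder with per-task linear classifiers. Sizes are placeholders.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 1000, 64
emb = rng.normal(size=(VOCAB, DIM))               # shared embedding table
heads = {"sentiment": rng.normal(size=(DIM, 2)),  # per-task classifiers
         "topic": rng.normal(size=(DIM, 5))}

def bow_encode(token_ids):
    """Order-free sentence representation: mean of token embeddings."""
    return emb[token_ids].mean(axis=0)

def predict(token_ids, task):
    return int(np.argmax(bow_encode(token_ids) @ heads[task]))

sent = [3, 17, 256, 901]
print(predict(sent, "sentiment"), predict(sent, "topic"))
```

The appeal of such models is that transfer happens entirely through the shared table, without the complex architectures and training procedures the abstract mentions.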
