Search Results for author: Ansel MacLaughlin

Found 5 papers, 1 paper with code

Federated Learning with Noisy User Feedback

no code implementations NAACL 2022 Rahul Sharma, Anil Ramakrishna, Ansel MacLaughlin, Anna Rumshisky, Jimit Majmudar, Clement Chung, Salman Avestimehr, Rahul Gupta

Federated learning (FL) has recently emerged as a method for training ML models on edge devices using sensitive user data and is seen as a way to mitigate concerns over data privacy.

Federated Learning, text-classification +1
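The snippet above only sketches the FL setting, not the paper's method. As a rough illustration of the federated averaging idea that FL training typically builds on, here is a minimal NumPy sketch; the linear-model setup, function names, and data are all assumptions, not from the paper:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One client's local gradient step on a toy linear model (hypothetical setup)."""
    w = weights.copy()
    preds = X @ w
    grad = X.T @ (preds - y) / len(y)
    return w - lr * grad

def fed_avg(global_w, client_data):
    """Average client updates, weighted by local dataset size (FedAvg-style)."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy run: 3 clients, each with private data that never leaves the "device".
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(50):
    w = fed_avg(w, clients)
print(w)  # approaches true_w without centralizing raw data
```

Each client computes its update on data that stays local and shares only model weights, which is the privacy motivation the abstract refers to.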

Recovering Lexically and Semantically Reused Texts

1 code implementation Joint Conference on Lexical and Computational Semantics 2021 Ansel MacLaughlin, Shaobin Xu, David A. Smith

In extensive experiments, we study the relative performance of four classes of neural and bag-of-words models on three LTRD tasks: detecting plagiarism, modeling journalists' use of press releases, and identifying scientists' citation of earlier papers.

Semantic Textual Similarity
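The bag-of-words baselines are not specified in this snippet; a common instantiation for text reuse detection is TF-IDF cosine similarity between a source passage and candidate reuses. A minimal scikit-learn sketch, with invented example texts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source = "The senator announced a new infrastructure bill on Tuesday."
candidates = [
    "On Tuesday the senator unveiled a new bill on infrastructure.",  # lexical reuse
    "Local weather will be sunny with mild winds this weekend.",      # unrelated
]

# Fit the vocabulary on all texts, then score each candidate against the source.
vec = TfidfVectorizer().fit([source] + candidates)
sims = cosine_similarity(vec.transform([source]), vec.transform(candidates))[0]
for text, s in sorted(zip(candidates, sims), key=lambda p: -p[1]):
    print(f"{s:.2f}  {text}")
```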

Content-based Models of Quotation

no code implementations EACL 2021 Ansel MacLaughlin, David Smith

We explore the task of quotability identification, in which, given a document, we aim to identify which of its passages are the most quotable, i.e., the most likely to be directly quoted by later derived documents.

Passage Ranking, Sentence
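The abstract frames quotability identification as ranking a document's passages. A toy sketch of that framing, with a deliberately simplistic placeholder scorer that is not the paper's content-based model:

```python
def rank_passages(passages, score_fn):
    """Rank passages by a quotability score, highest first."""
    return sorted(((score_fn(p), p) for p in passages), key=lambda pair: -pair[0])

# Placeholder scorer: favors short, punchy sentences (purely illustrative).
def toy_score(passage):
    words = passage.split()
    return 1.0 / (1 + abs(len(words) - 8))

doc = [
    "Brevity is the soul of wit.",
    "The committee convened at nine o'clock to review the quarterly budget figures in detail.",
]
for score, p in rank_passages(doc, toy_score):
    print(f"{score:.2f}  {p}")
```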

Evaluating the Effectiveness of Efficient Neural Architecture Search for Sentence-Pair Tasks

no code implementations EMNLP (insights) 2020 Ansel MacLaughlin, Jwala Dhamala, Anoop Kumar, Sriram Venkatapathy, Ragav Venkatesan, Rahul Gupta

Neural Architecture Search (NAS) methods, which automatically learn entire neural model or individual neural cell architectures, have recently achieved competitive or state-of-the-art (SOTA) performance on a variety of natural language processing and computer vision tasks, including language modeling, natural language inference, and image classification.

Image Classification, Language Modelling +7
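ENAS itself (weight sharing plus a learned controller) is too involved for a snippet, but the search-over-architectures framing can be illustrated with a toy random search over a hypothetical cell search space; the operations, sizes, and scores below are all invented:

```python
import random

SEARCH_SPACE = {
    "op": ["conv3x3", "conv5x5", "max_pool", "identity"],
    "activation": ["relu", "tanh"],
    "hidden_size": [64, 128, 256],
}

def sample_cell(rng):
    """Sample one candidate cell architecture from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(cell):
    """Stand-in for training + validation; a real NAS method trains each candidate."""
    score = {"conv3x3": 0.8, "conv5x5": 0.7, "max_pool": 0.5, "identity": 0.4}[cell["op"]]
    return score + 0.01 * (cell["hidden_size"] / 256)

rng = random.Random(0)
best = max((sample_cell(rng) for _ in range(50)), key=evaluate)
print(best)
```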

Context-Based Quotation Recommendation

no code implementations 17 May 2020 Ansel MacLaughlin, Tao Chen, Burcu Karagol Ayan, Dan Roth

Our experiments confirm the strong performance of BERT-based methods on this task, which outperform bag-of-words and neural ranking baselines by more than 30% relative across all ranking metrics.

Open-Domain Question Answering
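As a reference point for the "more than 30% relative" phrasing, relative improvement on a ranking metric such as mean reciprocal rank (MRR) is computed as below; the numbers are toy values, not the paper's results:

```python
def mean_reciprocal_rank(ranked_relevance):
    """MRR over queries; each entry lists relevance (1/0) in ranked order."""
    total = 0.0
    for rels in ranked_relevance:
        for i, r in enumerate(rels, start=1):
            if r:
                total += 1.0 / i
                break
    return total / len(ranked_relevance)

baseline = mean_reciprocal_rank([[0, 1, 0], [0, 0, 1]])  # (1/2 + 1/3) / 2
improved = mean_reciprocal_rank([[1, 0, 0], [0, 1, 0]])  # (1 + 1/2) / 2
print(f"{(improved - baseline) / baseline * 100:.0f}% relative improvement")
```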
