Search Results for author: Graham Neubig

Found 356 papers, 216 papers with code

Systematic Inequalities in Language Technology Performance across the World’s Languages

1 code implementation ACL 2022 Damián Blasi, Antonios Anastasopoulos, Graham Neubig

Natural language processing (NLP) systems have become a central technology in communication, education, medicine, artificial intelligence, and many other domains of research and development.

Dependency Parsing Machine Translation +4

XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalisation

2 code implementations ICML 2020 Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson

However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing.

Retrieval Sentence +1

CMU’s IWSLT 2022 Dialect Speech Translation System

no code implementations IWSLT (ACL) 2022 Brian Yan, Patrick Fernandes, Siddharth Dalmia, Jiatong Shi, Yifan Peng, Dan Berrebbi, Xinyi Wang, Graham Neubig, Shinji Watanabe

We use additional paired Modern Standard Arabic data (MSA) to directly improve the speech recognition (ASR) and machine translation (MT) components of our cascaded systems.

Knowledge Distillation Machine Translation +3

Project MAIA: Multilingual AI Agent Assistant

no code implementations EAMT 2020 André F. T. Martins, Joao Graca, Paulo Dimas, Helena Moniz, Graham Neubig

This paper presents the Multilingual Artificial Intelligence Agent Assistant (MAIA), a project led by Unbabel with the collaboration of CMU, INESC-ID and IT Lisbon.

BIG-bench Machine Learning Translation

Better Synthetic Data by Retrieving and Transforming Existing Datasets

1 code implementation 22 Apr 2024 Saumya Gandhi, Ritu Gala, Vijay Viswanathan, Tongshuang Wu, Graham Neubig

Recent work has studied prompt-driven synthetic data generation using large language models, but these generated datasets tend to lack complexity and diversity.

Synthetic Data Generation

VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?

no code implementations 9 Apr 2024 Junpeng Liu, Yifan Song, Bill Yuchen Lin, Wai Lam, Graham Neubig, Yuanzhi Li, Xiang Yue

Multimodal Large Language models (MLLMs) have shown promise in web-related tasks, but evaluating their performance in the web domain remains a challenge due to the lack of comprehensive benchmarks.

Optical Character Recognition (OCR)

CMULAB: An Open-Source Framework for Training and Deployment of Natural Language Processing Models

1 code implementation 3 Apr 2024 Zaid Sheikh, Antonios Anastasopoulos, Shruti Rijhwani, Lindia Tjuatja, Robbie Jimerson, Graham Neubig

Effectively using Natural Language Processing (NLP) tools in under-resourced languages requires a thorough understanding of the language itself, familiarity with the latest models and training methodologies, and technical expertise to deploy these models.

Optical Character Recognition (OCR) speech-recognition +1

An Incomplete Loop: Deductive, Inductive, and Abductive Learning in Large Language Models

no code implementations 3 Apr 2024 Emmy Liu, Graham Neubig, Jacob Andreas

Modern language models (LMs) can learn to perform new tasks in different ways: in instruction following, the target task is described explicitly in natural language; in few-shot prompting, the task is specified implicitly with a small number of examples; in instruction inference, LMs are presented with in-context examples and are then prompted to generate a natural language task description before making predictions.

Instruction Following Machine Translation

Evaluating Text-to-Visual Generation with Image-to-Text Generation

2 code implementations 1 Apr 2024 Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, Deva Ramanan

For instance, the widely-used CLIPScore measures the alignment between a (generated) image and text prompt, but it fails to produce reliable scores for complex prompts involving compositions of objects, attributes, and relations.

Question Answering Text Generation +2

Wav2Gloss: Generating Interlinear Glossed Text from Speech

no code implementations 19 Mar 2024 Taiqi He, Kwanghee Choi, Lindia Tjuatja, Nathaniel R. Robinson, Jiatong Shi, Shinji Watanabe, Graham Neubig, David R. Mortensen, Lori Levin

Thousands of the world's languages are in danger of extinction--a tremendous threat to cultural identities and human language diversity.

RAGGED: Towards Informed Design of Retrieval Augmented Generation Systems

1 code implementation 14 Mar 2024 Jennifer Hsia, Afreen Shaikh, Zhiruo Wang, Graham Neubig

RAGGED offers further insights into LMs' context utilization habits, where we find that encoder-decoder models rely more on contexts and are thus more sensitive to retrieval quality, while decoder-only models tend to rely on knowledge memorized during training.

Question Answering Retrieval

SOTOPIA-π: Interactive Learning of Socially Intelligent Language Agents

1 code implementation 13 Mar 2024 Ruiyi Wang, Haofei Yu, Wenxin Zhang, Zhengyang Qi, Maarten Sap, Graham Neubig, Yonatan Bisk, Hao Zhu

Motivated by this gap, we propose an interactive learning method, SOTOPIA-π, improving the social intelligence of language agents.

Language Modelling Large Language Model

GlossLM: Multilingual Pretraining for Low-Resource Interlinear Glossing

no code implementations 11 Mar 2024 Michael Ginn, Lindia Tjuatja, Taiqi He, Enora Rice, Graham Neubig, Alexis Palmer, Lori Levin

A key aspect of language documentation is the creation of annotated text in a format such as interlinear glossed text (IGT), which captures fine-grained morphosyntactic analyses in a morpheme-by-morpheme format.

What Is Missing in Multilingual Visual Reasoning and How to Fix It

1 code implementation 3 Mar 2024 Yueqi Song, Simran Khanuja, Graham Neubig

NLP models today strive for supporting multiple languages and modalities, improving accessibility for diverse users.

Image Captioning Visual Reasoning

Repetition Improves Language Model Embeddings

1 code implementation 23 Feb 2024 Jacob Mitchell Springer, Suhas Kotha, Daniel Fried, Graham Neubig, Aditi Raghunathan

In this work, we address an architectural limitation of autoregressive models: token embeddings cannot contain information from tokens that appear later in the input.

Language Modelling
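
The architectural limitation above also suggests the simple workaround in the paper's title: feed the input twice, so that token states in the second copy have seen the whole text, and pool only over that second copy. A minimal sketch, assuming a toy causal encoder that returns one vector per token (the encoder, pooling, and names are illustrative stand-ins, not the paper's exact recipe):

    def embed_with_repetition(text, encode):
        """Embed `text` by encoding it twice and mean-pooling the second copy."""
        tokens = text.split()
        doubled = tokens + tokens               # repeat ("echo") the input
        states = encode(doubled)                # one vector per token, causal order
        second = states[len(tokens):]           # positions that have seen all tokens
        dim = len(second[0])
        return [sum(v[i] for v in second) / len(second) for i in range(dim)]

    # Toy stand-in encoder: each token's state is just its position, in 2 dims.
    toy_encode = lambda toks: [[float(i)] * 2 for i in range(len(toks))]
    print(embed_with_repetition("hello world", toy_encode))  # [2.5, 2.5]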

Instruction-tuned Language Models are Better Knowledge Learners

no code implementations 20 Feb 2024 Zhengbao Jiang, Zhiqing Sun, Weijia Shi, Pedro Rodriguez, Chunting Zhou, Graham Neubig, Xi Victoria Lin, Wen-tau Yih, Srinivasan Iyer

The standard recipe for doing so involves continued pre-training on new documents followed by instruction-tuning on question-answer (QA) pairs.

Language Modelling Large Language Model

Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes

1 code implementation 8 Feb 2024 Lucio Dery, Steven Kolawole, Jean-François Kagy, Virginia Smith, Graham Neubig, Ameet Talwalkar

Given the generational gap in available hardware between lay practitioners and the most endowed institutions, LLMs are becoming increasingly inaccessible as they grow in size.

Can Large Language Models be Trusted for Evaluation? Scalable Meta-Evaluation of LLMs as Evaluators via Agent Debate

1 code implementation 30 Jan 2024 Steffi Chern, Ethan Chern, Graham Neubig, Pengfei Liu

Despite the utility of Large Language Models (LLMs) across a wide range of tasks and scenarios, developing a method for reliably evaluating LLMs across varied contexts continues to be challenging.

VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks

1 code implementation 24 Jan 2024 Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Chong Lim, Po-Yu Huang, Graham Neubig, Shuyan Zhou, Ruslan Salakhutdinov, Daniel Fried

Through extensive quantitative and qualitative analysis, we identify several limitations of text-only LLM agents, and reveal gaps in the capabilities of state-of-the-art multimodal language agents.

TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks

1 code implementation 23 Jan 2024 Zhiruo Wang, Daniel Fried, Graham Neubig

Language models (LMs) can solve tasks such as answering questions about tables or images by writing programs.

Math Question Answering

Fine-grained Hallucination Detection and Editing for Language Models

no code implementations 12 Jan 2024 Abhika Mishra, Akari Asai, Vidhisha Balachandran, Yizhong Wang, Graham Neubig, Yulia Tsvetkov, Hannaneh Hajishirzi

On our benchmark, our automatic and human evaluations show that FAVA significantly outperforms ChatGPT and GPT-4 on fine-grained hallucination detection, and edits suggested by FAVA improve the factuality of LM-generated text.

Hallucination Retrieval

An In-depth Look at Gemini's Language Abilities

1 code implementation 18 Dec 2023 Syeda Nahida Akter, Zichun Yu, Aashiq Muhamed, Tianyue Ou, Alex Bäuerle, Ángel Alexander Cabrera, Krish Dholakia, Chenyan Xiong, Graham Neubig

The recently released Google Gemini class of models is the first to comprehensively report results that rival the OpenAI GPT series across a wide variety of tasks.

Instruction Following Math +2

Alignment for Honesty

1 code implementation 12 Dec 2023 Yuqing Yang, Ethan Chern, Xipeng Qiu, Graham Neubig, Pengfei Liu

Recent research has made significant strides in applying alignment techniques to enhance the helpfulness and harmlessness of large language models (LLMs) in accordance with human intentions.

Multitask Learning Can Improve Worst-Group Outcomes

1 code implementation 5 Dec 2023 Atharva Kulkarni, Lucio Dery, Amrith Setlur, Aditi Raghunathan, Ameet Talwalkar, Graham Neubig

We primarily consider the standard setting of fine-tuning a pre-trained model, where, following recent work (Gururangan et al., 2020; Dery et al., 2023), we multitask the end task with the pre-training objective constructed from the end task data itself.

Fairness

Program-Aided Reasoners (better) Know What They Know

1 code implementation 16 Nov 2023 Anubha Kabra, Sanketh Rangreji, Yash Mathur, Aman Madaan, Emmy Liu, Graham Neubig

Our analysis uncovers that prompting styles that produce less diverse generations also yield better-calibrated results; we therefore experiment with inducing lower generation diversity using temperature scaling, and find that for certain temperatures, PAL is not only more accurate but also better calibrated than COT.

Divergences between Language Models and Human Brains

1 code implementation 15 Nov 2023 Yuchen Zhou, Emmy Liu, Graham Neubig, Michael J. Tarr, Leila Wehbe

In this work, we systematically explore the divergences between human and machine language processing by examining the differences between LM representations and human brain responses to language as measured by Magnetoencephalography (MEG) across two datasets in which subjects read and listened to narrative stories.

Emotional Intelligence

Learning to Filter Context for Retrieval-Augmented Generation

1 code implementation 14 Nov 2023 Zhiruo Wang, Jun Araki, Zhengbao Jiang, Md Rizwan Parvez, Graham Neubig

To alleviate these problems, we propose FILCO, a method that improves the quality of the context provided to the generator by (1) identifying useful context based on lexical and information-theoretic approaches, and (2) training context filtering models that can filter retrieved contexts at test time.

Extractive Question-Answering Fact Verification +2
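
A rough illustration of the lexical side of step (1): keep only retrieved passages whose word overlap with the query clears a threshold. The scoring, threshold, and names below are invented for illustration; the paper's trained filtering models go well beyond this.

    def filter_context(query, passages, min_overlap=0.2):
        """Keep passages covering at least `min_overlap` of the query's words."""
        q = set(query.lower().split())
        kept = []
        for p in passages:
            covered = len(q & set(p.lower().split())) / max(len(q), 1)
            if covered >= min_overlap:
                kept.append(p)
        return kept

    passages = ["Charles Darwin wrote On the Origin of Species in 1859.",
                "The weather in London is often rainy."]
    print(filter_context("who wrote the origin of species", passages))
    # keeps only the Darwin passage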

DeMuX: Data-efficient Multilingual Learning

no code implementations 10 Nov 2023 Simran Khanuja, Srinivas Gowriraj, Lucio Dery, Graham Neubig

In this paper, we introduce DEMUX, a framework that prescribes the exact data-points to label from vast amounts of unlabelled multilingual data, having unknown degrees of overlap with the target set.

Active Learning

Do LLMs exhibit human-like response biases? A case study in survey design

1 code implementation 7 Nov 2023 Lindia Tjuatja, Valerie Chen, Sherry Tongshuang Wu, Ameet Talwalkar, Graham Neubig

As large language models (LLMs) become more capable, there is growing excitement about the possibility of using LLMs as proxies for humans in real-world tasks where subjective labels are desired, such as in surveys and opinion polling.

Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning

no code implementations 27 Oct 2023 Aditi Chaudhary, Arun Sampath, Ashwin Sheshadri, Antonios Anastasopoulos, Graham Neubig

This is challenging because i) it requires that such experts be accessible and have the necessary resources, and ii) describing all the intricacies of a language is time-consuming and prone to omission.

SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents

1 code implementation 18 Oct 2023 Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, Maarten Sap

We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and evaluate their social intelligence.

Crossing the Threshold: Idiomatic Machine Translation through Retrieval Augmentation and Loss Weighting

1 code implementation 10 Oct 2023 Emmy Liu, Aditi Chaudhary, Graham Neubig

Idioms are common in everyday language, but often pose a challenge to translators because their meanings do not follow from the meanings of their parts.

4k Machine Translation +2

It's MBR All the Way Down: Modern Generation Techniques Through the Lens of Minimum Bayes Risk

1 code implementation 2 Oct 2023 Amanda Bertsch, Alex Xie, Graham Neubig, Matthew R. Gormley

Minimum Bayes Risk (MBR) decoding is a method for choosing the outputs of a machine learning system based not on the output with the highest probability, but the output with the lowest risk (expected error) among multiple candidates.
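
The decision rule is compact enough to sketch in a few lines. Below, the candidate pool doubles as the set of pseudo-references, and a toy token-overlap utility stands in for a real metric such as BLEU or COMET (all names are illustrative):

    def utility(hyp, ref):
        """Toy utility: Jaccard overlap of token sets (stand-in for BLEU/COMET)."""
        h, r = set(hyp.split()), set(ref.split())
        return len(h & r) / max(len(h | r), 1)

    def mbr_decode(candidates):
        """Return the candidate with the highest expected utility (lowest risk)."""
        def expected_utility(hyp):
            others = [c for c in candidates if c is not hyp]
            return sum(utility(hyp, ref) for ref in others) / max(len(others), 1)
        return max(candidates, key=expected_utility)

    samples = ["the cat sat on the mat", "a cat sat on a mat", "the dog ran away"]
    print(mbr_decode(samples))  # -> "the cat sat on the mat", the consensus choice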

ChatGPT MT: Competitive for High- (but not Low-) Resource Languages

1 code implementation 14 Sep 2023 Nathaniel R. Robinson, Perez Ogayo, David R. Mortensen, Graham Neubig

Without published experimental evidence on the matter, it is difficult for speakers of the world's diverse languages to know how and whether they can use LLMs for their languages.

Machine Translation

Prompt2Model: Generating Deployable Models from Natural Language Instructions

1 code implementation 23 Aug 2023 Vijay Viswanathan, Chenyang Zhao, Amanda Bertsch, Tongshuang Wu, Graham Neubig

In this paper, we propose Prompt2Model, a general-purpose method that takes a natural language task description like the prompts provided to LLMs, and uses it to train a special-purpose model that is conducive to deployment.

Retrieval

WebArena: A Realistic Web Environment for Building Autonomous Agents

1 code implementation 25 Jul 2023 Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig

Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions.

FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios

4 code implementations 25 Jul 2023 I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu

With the above challenges in mind, in this paper, we propose FacTool, a task- and domain-agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT).

Code Generation Fact Checking +1

Large Language Models Enable Few-Shot Clustering

1 code implementation 2 Jul 2023 Vijay Viswanathan, Kiril Gashteovski, Carolin Lawrence, Tongshuang Wu, Graham Neubig

In this paper, we ask whether a large language model can amplify an expert's guidance to enable query-efficient, few-shot semi-supervised text clustering.

Clustering Language Modelling +2

Syntax and Semantics Meet in the "Middle": Probing the Syntax-Semantics Interface of LMs Through Agentivity

1 code implementation 29 May 2023 Lindia Tjuatja, Emmy Liu, Lori Levin, Graham Neubig

Recent advances in large language models have prompted researchers to examine their abilities across a variety of linguistic tasks, but little has been done to investigate how models handle the interactions in meaning across words and larger syntactic forms -- i.e., phenomena at the intersection of syntax and semantics.

DataFinder: Scientific Dataset Recommendation from Natural Language Descriptions

1 code implementation 26 May 2023 Vijay Viswanathan, Luyu Gao, Tongshuang Wu, Pengfei Liu, Graham Neubig

Using this data, we compare various information retrieval algorithms on our test set and present a superior bi-encoder retriever for text-based dataset recommendation.

Information Retrieval Retrieval

Solving NLP Problems through Human-System Collaboration: A Discussion-based Approach

1 code implementation 19 May 2023 Masahiro Kaneko, Graham Neubig, Naoaki Okazaki

Humans work together to solve common problems by having discussions, explaining, and agreeing or disagreeing with each other.

Natural Language Inference

Active Retrieval Augmented Generation

1 code implementation 11 May 2023 Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, Graham Neubig

In this work, we provide a generalized view of active retrieval augmented generation, methods that actively decide when and what to retrieve across the course of the generation.

Retrieval Sentence
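
One way to realize "when and what to retrieve" is a confidence-triggered loop: draft a sentence, and if confidence in it is low, retrieve with a query built from the draft and regenerate. This is a minimal sketch; draft_next, confidence, retrieve, and the threshold are invented stand-ins, not the paper's interface.

    def generate_with_active_retrieval(question, draft_next, confidence, retrieve,
                                       threshold=0.6, max_steps=8):
        """Generate sentence by sentence, retrieving only on low confidence."""
        context, answer = [], []
        for _ in range(max_steps):
            sentence = draft_next(question, context, answer)
            if sentence is None:                  # generator signals completion
                break
            if confidence(sentence) < threshold:  # low confidence: look it up
                context = retrieve(sentence)      # the draft itself forms the query
                sentence = draft_next(question, context, answer)
            answer.append(sentence)
        return " ".join(answer)

    # Toy stand-ins so the sketch runs end to end.
    drafts = iter(["Paris is the capital of France.", None])
    print(generate_with_active_retrieval(
        "What is the capital of France?",
        draft_next=lambda q, ctx, ans: next(drafts),
        confidence=lambda s: 0.9,                 # always confident in this toy
        retrieve=lambda s: [s]))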

Unlimiformer: Long-Range Transformers with Unlimited Length Input

1 code implementation NeurIPS 2023 Amanda Bertsch, Uri Alon, Graham Neubig, Matthew R. Gormley

This kNN index can be kept on either the GPU or CPU memory and queried in sub-linear time; this way, we can index practically unlimited input sequences, while every attention head in every decoder layer retrieves its top-k keys, instead of attending to every key.

Book summarization
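
A rough numpy sketch of that retrieval step, with brute-force search standing in for the kNN index and toy dimensions throughout: each attention query scores only its top-k keys rather than every key.

    import numpy as np

    def topk_attention(query, keys, values, k=4):
        """Attend over only the k keys with the highest dot-product score."""
        scores = keys @ query                     # (num_keys,)
        top = np.argpartition(scores, -k)[-k:]    # indices of the top-k keys
        w = np.exp(scores[top] - scores[top].max())
        w /= w.sum()                              # softmax over retrieved keys only
        return w @ values[top]                    # weighted sum of the top-k values

    rng = np.random.default_rng(0)
    keys = rng.normal(size=(10_000, 64))          # a very long input: many keys
    values = rng.normal(size=(10_000, 64))
    query = rng.normal(size=64)
    print(topk_attention(query, keys, values).shape)  # (64,)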

A Gold Standard Dataset for the Reviewer Assignment Problem

2 code implementations 23 Mar 2023 Ivan Stelmakh, John Wieting, Graham Neubig, Nihar B. Shah

We address this challenge by collecting a novel dataset of similarity scores that we release to the research community.

Computational Language Acquisition with Theory of Mind

1 code implementation 2 Mar 2023 Andy Liu, Hao Zhu, Emmy Liu, Yonatan Bisk, Graham Neubig

We also find some evidence that increasing task difficulty in the training process results in more fluent and precise utterances in evaluation.

Language Acquisition

User-Centric Evaluation of OCR Systems for Kwak'wala

no code implementations 26 Feb 2023 Shruti Rijhwani, Daisy Rosenblum, Michayla King, Antonios Anastasopoulos, Graham Neubig

There has been recent interest in improving optical character recognition (OCR) for endangered languages, particularly because a large number of documents and books in these languages are not in machine-readable formats.

Optical Character Recognition Optical Character Recognition (OCR)

Learning Performance-Improving Code Edits

2 code implementations 15 Feb 2023 Alexander Shypula, Aman Madaan, Yimeng Zeng, Uri Alon, Jacob Gardner, Milad Hashemi, Graham Neubig, Parthasarathy Ranganathan, Osbert Bastani, Amir Yazdanbakhsh

Next, we propose a broad range of adaptation strategies for code optimization; for prompting, these include retrieval-based few-shot prompting and chain-of-thought, and for finetuning, these include performance-conditioned generation and synthetic data augmentation based on self-play.

Code Generation Code Repair +2

Cross-Modal Fine-Tuning: Align then Refine

1 code implementation 11 Feb 2023 Junhong Shen, Liam Li, Lucio M. Dery, Corey Staten, Mikhail Khodak, Graham Neubig, Ameet Talwalkar

Fine-tuning large-scale pretrained models has led to tremendous progress in well-studied modalities such as vision and NLP.

AutoML

CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code

1 code implementation 10 Feb 2023 Shuyan Zhou, Uri Alon, Sumit Agarwal, Graham Neubig

We release five language-specific pretrained models to use with our publicly available code.

Code Generation

Why do Nearest Neighbor Language Models Work?

1 code implementation 7 Jan 2023 Frank F. Xu, Uri Alon, Graham Neubig

Language models (LMs) compute the probability of a text by sequentially computing a representation of an already-seen context and using this representation to predict the next word.

Retrieval
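
For readers unfamiliar with the nearest-neighbor LMs under analysis, the sketch below shows the standard kNN-LM interpolation: mix the base LM's next-word distribution with one built from the words that followed retrieved contexts. Real kNN-LMs weight neighbors by a softmax over distances; the uniform counts and all numbers here are simplifications.

    import numpy as np

    def knn_lm_prob(p_lm, neighbor_next_words, vocab_size, lam=0.25):
        """Mix the LM distribution with an empirical neighbor distribution."""
        p_knn = np.zeros(vocab_size)
        for w in neighbor_next_words:       # words that followed retrieved contexts
            p_knn[w] += 1.0
        p_knn /= max(len(neighbor_next_words), 1)
        return lam * p_knn + (1 - lam) * p_lm   # the kNN-LM mixture

    p_lm = np.array([0.1, 0.5, 0.2, 0.1, 0.1])  # toy base LM distribution
    print(knn_lm_prob(p_lm, neighbor_next_words=[1, 1, 3], vocab_size=5))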

EXCALIBUR: Encouraging and Evaluating Embodied Exploration

no code implementations CVPR 2023 Hao Zhu, Raghav Kapoor, So Yeon Min, Winson Han, Jiatai Li, Kaiwen Geng, Graham Neubig, Yonatan Bisk, Aniruddha Kembhavi, Luca Weihs

Humans constantly explore and learn about their environment out of curiosity, gather information, and update their models of the world.

Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval

1 code implementation 21 Dec 2022 John Wieting, Jonathan H. Clark, William W. Cohen, Graham Neubig, Taylor Berg-Kirkpatrick

Contrastive learning has been successfully used for retrieval of semantically aligned sentences, but it often requires large batch sizes or careful engineering to work well.

Contrastive Learning Open-Domain Question Answering +4

Execution-Based Evaluation for Open-Domain Code Generation

1 code implementation 20 Dec 2022 Zhiruo Wang, Shuyan Zhou, Daniel Fried, Graham Neubig

To extend the scope of coding queries to more realistic settings, we propose ODEX, the first Open-Domain EXecution-based natural language (NL) to Python code generation dataset.

Code Generation Memorization

Searching for Effective Multilingual Fine-Tuning Methods: A Case Study in Summarization

no code implementations 12 Dec 2022 Yiwei Qin, Graham Neubig, Pengfei Liu

Recently, a large number of tuning strategies have been proposed to adapt pre-trained language models to downstream tasks.

Text Summarization

T5Score: Discriminative Fine-tuning of Generative Evaluation Metrics

1 code implementation 12 Dec 2022 Yiwei Qin, Weizhe Yuan, Graham Neubig, Pengfei Liu

Both have their advantages; discriminative metrics are able to directly optimize for the problem of distinguishing between good and bad outputs, while generative metrics can be trained using abundant raw text.

Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer

1 code implementation 5 Dec 2022 Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, Graham Neubig

Systems for knowledge-intensive tasks such as open-domain question answering (QA) usually consist of two stages: efficient retrieval of relevant documents from a large corpus and detailed reading of the selected documents to generate answers.

Open-Domain Question Answering Passage Retrieval +1

PAL: Program-aided Language Models

2 code implementations 18 Nov 2022 Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig

Much of this success can be attributed to prompting methods such as "chain-of-thought'', which employ LLMs for both understanding the problem description by decomposing it into steps, as well as solving each step of the problem.

Arithmetic Reasoning GSM8K +2
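
The program-aided alternative is easy to show in miniature: the LM writes a short program for the word problem, and the Python interpreter, not the LM, computes the final answer. The generated program below is hard-coded so the sketch is self-contained.

    import textwrap

    problem = ("Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
               "How many tennis balls does he have now?")

    # In PAL, a language model prompted with `problem` would generate this code.
    generated_program = """
    tennis_balls = 5
    bought_balls = 2 * 3
    answer = tennis_balls + bought_balls
    """

    namespace = {}
    exec(textwrap.dedent(generated_program), namespace)  # interpreter does the math
    print(namespace["answer"])  # -> 11, computed by Python rather than the LM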

DiffusER: Discrete Diffusion via Edit-based Reconstruction

no code implementations 30 Oct 2022 Machel Reid, Vincent J. Hellendoorn, Graham Neubig

In text generation, models that generate text from scratch one token at a time are currently the dominant paradigm.

Denoising Machine Translation +2

He Said, She Said: Style Transfer for Shifting the Perspective of Dialogues

1 code implementation 27 Oct 2022 Amanda Bertsch, Graham Neubig, Matthew R. Gormley

As a sample application, we demonstrate that applying perspective shifting to a dialogue summarization dataset (SAMSum) substantially improves the zero-shot performance of extractive news summarization models on this data.

coreference-resolution News Summarization +1

Language Models of Code are Few-Shot Commonsense Learners

1 code implementation 13 Oct 2022 Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig

In all these natural language tasks, we show that using our approach, a code generation LM (CODEX) outperforms natural-language LMs that are fine-tuned on the target task (e.g., T5) and other strong LMs such as GPT-3 in the few-shot setting.

Code Generation

A Multi-dimensional Evaluation of Tokenizer-free Multilingual Pretrained Models

no code implementations 13 Oct 2022 Jimin Sun, Patrick Fernandes, Xinyi Wang, Graham Neubig

Recent work on tokenizer-free multilingual pretrained models shows promising results in improving cross-lingual transfer and reducing engineering overhead (Clark et al., 2022; Xue et al., 2022).

Cross-Lingual Transfer

CTC Alignments Improve Autoregressive Translation

no code implementations 11 Oct 2022 Brian Yan, Siddharth Dalmia, Yosuke Higuchi, Graham Neubig, Florian Metze, Alan W Black, Shinji Watanabe

Connectionist Temporal Classification (CTC) is a widely used approach for automatic speech recognition (ASR) that performs conditionally independent monotonic alignment.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering

no code implementations COLING 2022 Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig

In sum, these results demonstrate that multi-hop reasoning does not emerge naturally in generative QA models, but can be encouraged by advances in training or modeling techniques.

Generative Question Answering

Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models

1 code implementation 7 Oct 2022 Emmy Liu, Graham Neubig

We find that the representation of a parent phrase can be predicted with some accuracy given an affine transformation of its children.

Open-Ended Question Answering

Mega: Moving Average Equipped Gated Attention

5 code implementations 21 Sep 2022 Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, Luke Zettlemoyer

The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences.

Image Classification Inductive Bias +3

Building African Voices

1 code implementation 1 Jul 2022 Perez Ogayo, Graham Neubig, Alan W Black

This paper focuses on speech synthesis for low-resourced African languages, from corpus creation to sharing and deploying the Text-to-Speech (TTS) systems.

Speech Synthesis

Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning

no code implementations 10 Jun 2022 Aditi Chaudhary, Arun Sampath, Ashwin Sheshadri, Antonios Anastasopoulos, Graham Neubig

This process is challenging because i) it requires that such experts be accessible and have the necessary resources, and ii) even if there are such experts, describing all the intricacies of a language is time-consuming and prone to omission.

AANG: Automating Auxiliary Learning

2 code implementations 27 May 2022 Lucio M. Dery, Paul Michel, Mikhail Khodak, Graham Neubig, Ameet Talwalkar

Auxiliary objectives, supplementary learning signals introduced to aid learning on data-starved or highly complex end-tasks, are commonplace in machine learning.

Auxiliary Learning

Learning to Model Editing Processes

1 code implementation 24 May 2022 Machel Reid, Graham Neubig

We introduce baseline results and metrics on this task, finding that modeling editing processes improves performance on a variety of axes on both our proposed task and related downstream tasks compared to previous single-step models of edits.

Machine Translation Model Editing +2

Table Retrieval May Not Necessitate Table-specific Model Design

1 code implementation NAACL (SUKI) 2022 Zhiruo Wang, Zhengbao Jiang, Eric Nyberg, Graham Neubig

In this work, we focus on the task of table retrieval, and ask: "is table-specific model design necessary for table retrieval, or can a simpler text-based model be effectively used to achieve a similar result?"

Hard Attention Natural Questions +2

Quality-Aware Decoding for Neural Machine Translation

1 code implementation NAACL 2022 Patrick Fernandes, António Farinhas, Ricardo Rei, José G. C. de Souza, Perez Ogayo, Graham Neubig, André F. T. Martins

Despite the progress in machine translation quality estimation and evaluation in recent years, decoding in neural machine translation (NMT) is mostly oblivious to these advances and centers around finding the most probable translation according to the model (MAP decoding), approximated with beam search.

Machine Translation NMT +1

Prompt Consistency for Zero-Shot Task Generalization

1 code implementation 29 Apr 2022 Chunting Zhou, Junxian He, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig

One of the most impressive results of recent NLP history is the ability of pre-trained language models to solve new tasks in a zero-shot setting.

Testing the Ability of Language Models to Interpret Figurative Language

2 code implementations NAACL 2022 Emmy Liu, Chen Cui, Kenneth Zheng, Graham Neubig

Figurative and metaphorical language are commonplace in discourse, and figurative expressions play an important role in communication and cognition.

Open-Ended Question Answering

Learning to Scaffold: Optimizing Model Explanations for Teaching

1 code implementation 22 Apr 2022 Patrick Fernandes, Marcos Treviso, Danish Pruthi, André F. T. Martins, Graham Neubig

In this work, leveraging meta-learning techniques, we extend this idea to improve the quality of the explanations themselves, specifically by optimizing explanations such that student models more effectively learn to simulate the original model.

Meta-Learning

Distributionally Robust Models with Parametric Likelihood Ratios

1 code implementation ICLR 2022 Paul Michel, Tatsunori Hashimoto, Graham Neubig

As machine learning models are deployed ever more broadly, it becomes increasingly important that they are not only able to perform well on their training distribution, but also yield accurate predictions when confronted with distribution shift.

text-classification Text Classification

BRIO: Bringing Order to Abstractive Summarization

3 code implementations ACL 2022 Yixin Liu, Pengfei Liu, Dragomir Radev, Graham Neubig

Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal model will assign all the probability mass to the reference summary.

Abstractive Text Summarization

AUTOLEX: An Automatic Framework for Linguistic Exploration

no code implementations 25 Mar 2022 Aditi Chaudhary, Zaid Sheikh, David R Mortensen, Antonios Anastasopoulos, Graham Neubig

Each language has its own complex systems of word, phrase, and sentence construction, the guiding principles of which are often summarized in grammar descriptions for the consumption of linguists or language learners.

Sentence

Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation

1 code implementation ACL 2022 Xinyi Wang, Sebastian Ruder, Graham Neubig

The performance of multilingual pretrained models is highly dependent on the availability of monolingual or parallel text present in a target language.

MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages

1 code implementation 16 Mar 2022 Zhiruo Wang, Grace Cuenca, Shuyan Zhou, Frank F. Xu, Graham Neubig

While there has been a recent burgeoning of applications at the intersection of natural and programming languages, such as code generation and code summarization, these applications are usually English-centric.

Code Generation Code Summarization

Show Me More Details: Discovering Hierarchies of Procedures from Semi-structured Web Data

1 code implementation ACL 2022 Shuyan Zhou, Li Zhang, Yue Yang, Qing Lyu, Pengcheng Yin, Chris Callison-Burch, Graham Neubig

To this end, we develop a simple and efficient method that links steps (e.g., "purchase a camera") in an article to other articles with similar goals (e.g., "how to choose a camera"), recursively constructing the KB.

Retrieval Video Retrieval

A Systematic Evaluation of Large Language Models of Code

3 code implementations 26 Feb 2022 Frank F. Xu, Uri Alon, Graham Neubig, Vincent J. Hellendoorn

We aim to fill in some of these blanks through a systematic evaluation of the largest existing models: Codex, GPT-J, GPT-Neo, GPT-NeoX-20B, and CodeParrot, across various programming languages.

Language Modelling

DataLab: A Platform for Data Analysis and Intervention

no code implementations ACL 2022 Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Graham Neubig, Pengfei Liu

Despite data's crucial role in machine learning, most existing tools and research tend to focus on systems on top of existing data rather than how to interpret and manipulate data.

Interpreting Language Models with Contrastive Explanations

1 code implementation 21 Feb 2022 Kayo Yin, Graham Neubig

Model interpretability methods are often used to explain NLP model decisions on tasks such as text classification, where the output space is relatively small.

Language Modelling text-classification +2

Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval

2 code implementations 28 Jan 2022 Uri Alon, Frank F. Xu, Junxian He, Sudipta Sengupta, Dan Roth, Graham Neubig

Retrieval-based language models (R-LM) model the probability of natural language text by combining a standard language model (LM) with examples retrieved from an external datastore at test time.

Language Modelling Retrieval

Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations

1 code implementation 17 Dec 2021 Siddhant Arora, Danish Pruthi, Norman Sadeh, William W. Cohen, Zachary C. Lipton, Graham Neubig

Through our evaluation, we observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.

Deception Detection

VarCLR: Variable Semantic Representation Pre-training via Contrastive Learning

1 code implementation 5 Dec 2021 Qibin Chen, Jeremy Lacomis, Edward J. Schwartz, Graham Neubig, Bogdan Vasilescu, Claire Le Goues

Machine learning-based program analysis methods use variable name representations for a wide range of tasks, such as suggesting new variable names and bug detection.

Contrastive Learning Learning Semantic Representations +1

DEEP: DEnoising Entity Pre-training for Neural Machine Translation

no code implementations ACL 2022 Junjie Hu, Hiroaki Hayashi, Kyunghyun Cho, Graham Neubig

It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus.

Denoising Multi-Task Learning +3

Lexically Aware Semi-Supervised Learning for OCR Post-Correction

1 code implementation 4 Nov 2021 Shruti Rijhwani, Daisy Rosenblum, Antonios Anastasopoulos, Graham Neubig

In addition, to enforce consistency in the recognized vocabulary, we introduce a lexically-aware decoding method that augments the neural post-correction model with a count-based language model constructed from the recognized texts, implemented using weighted finite-state automata (WFSA) for efficient and effective decoding.

Language Modelling Optical Character Recognition +1

Breaking Down Multilingual Machine Translation

no code implementations Findings (ACL) 2022 Ting-Rui Chiang, Yi-Pei Chen, Yi-Ting Yeh, Graham Neubig

While multilingual training is now an essential ingredient in machine translation (MT) systems, recent work has demonstrated that it has different effects in different multilingual settings, such as many-to-one, one-to-many, and many-to-many learning.

Machine Translation Translation

Systematic Inequalities in Language Technology Performance across the World's Languages

2 code implementations 13 Oct 2021 Damián Blasi, Antonios Anastasopoulos, Graham Neubig

Natural language processing (NLP) systems have become a central technology in communication, education, medicine, artificial intelligence, and many other domains of research and development.

Dependency Parsing Machine Translation +5

Towards a Unified View of Parameter-Efficient Transfer Learning

1 code implementation ICLR 2022 Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig

Furthermore, our unified framework enables the transfer of design elements across different approaches, and as a result we are able to instantiate new parameter-efficient fine-tuning methods that tune fewer parameters than previous methods while being more effective, achieving comparable results to fine-tuning all parameters on all four tasks.

Machine Translation text-classification +3

Capturing Structural Locality in Non-parametric Language Models

no code implementations ICLR 2022 Frank F. Xu, Junxian He, Graham Neubig, Vincent J. Hellendoorn

Structural locality is a ubiquitous feature of real-world datasets, wherein data points are organized into local hierarchies.

Symmetric Machine Theory of Mind

no code implementations 29 Sep 2021 Melanie Sclar, Graham Neubig, Yonatan Bisk

Theory of mind (ToM), the ability to understand others' thoughts and desires, is a cornerstone of human intelligence.

Learning to Superoptimize Real-world Programs

no code implementations 28 Sep 2021 Alex Shypula, Pengcheng Yin, Jeremy Lacomis, Claire Le Goues, Edward Schwartz, Graham Neubig

We also report that SILO's rate of superoptimization on our test set is over five times that of a standard policy gradient approach and a model pre-trained on compiler optimization demonstration.

Compiler Optimization Imitation Learning

Dependency Induction Through the Lens of Visual Perception

1 code implementation CoNLL (EMNLP) 2021 Ruisi Su, Shruti Rijhwani, Hao Zhu, Junxian He, Xinyu Wang, Yonatan Bisk, Graham Neubig

Our experiments find that concreteness is a strong indicator for learning dependency grammars, improving the direct attachment score (DAS) by over 50% as compared to state-of-the-art models trained on pure text.

Constituency Grammar Induction Dependency Parsing

Procedures as Programs: Hierarchical Control of Situated Agents through Natural Language

no code implementations NAACL (SUKI) 2022 Shuyan Zhou, Pengcheng Yin, Graham Neubig

When humans conceive how to perform a particular task, they do so hierarchically: splitting higher-level tasks into smaller sub-tasks.

Instruction Following

When Does Translation Require Context? A Data-driven, Multilingual Exploration

no code implementations 15 Sep 2021 Patrick Fernandes, Kayo Yin, Emmy Liu, André F. T. Martins, Graham Neubig

Although proper handling of discourse significantly contributes to the quality of machine translation (MT), these improvements are not adequately measured in common translation quality metrics.

Machine Translation Translation

Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative

2 code implementations ICLR 2022 Lucio M. Dery, Paul Michel, Ameet Talwalkar, Graham Neubig

In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks.

Meta-Learning

When is Wall a Pared and when a Muro? -- Extracting Rules Governing Lexical Selection

1 code implementation 13 Sep 2021 Aditi Chaudhary, Kayo Yin, Antonios Anastasopoulos, Graham Neubig

Learning fine-grained distinctions between vocabulary items is a key challenge in learning a new language.

Distributionally Robust Multilingual Machine Translation

1 code implementation EMNLP 2021 Chunting Zhou, Daniel Levy, Xian Li, Marjan Ghazvininejad, Graham Neubig

Multilingual neural machine translation (MNMT) learns to translate multiple language pairs with a single model, potentially improving both the accuracy and the memory-efficiency of deployed models.

Machine Translation Translation

Efficient Nearest Neighbor Language Models

2 code implementations EMNLP 2021 Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick

Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore, which allows them to learn through explicitly memorizing the training datapoints.

Domain Adaptation Language Modelling +1

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing

1 code implementation 28 Jul 2021 Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig

This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning".

Language Modelling Zero-Shot Learning
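
As one concrete instance of the paradigm the survey organizes, the snippet below recasts sentiment classification as filling a slot in a textual template, so a pretrained LM can score label words directly; the template and label words are invented for illustration.

    template = "{text} Overall, it was a [MASK] movie."
    label_words = {"positive": "great", "negative": "terrible"}

    def make_prompt(text):
        """Wrap an input in the cloze template the LM will complete."""
        return template.format(text=text)

    print(make_prompt("The acting was superb and the plot gripping."))
    # A masked LM then compares the scores of "great" vs. "terrible" at [MASK]
    # to pick the label, with no task-specific classification head.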

Few-shot Language Coordination by Modeling Theory of Mind

no code implementations 12 Jul 2021 Hao Zhu, Graham Neubig, Yonatan Bisk

Positive results from our experiments hint at the importance of explicitly modeling communication as a socio-pragmatic process.

BARTScore: Evaluating Generated Text as Text Generation

1 code implementation NeurIPS 2021 Weizhe Yuan, Graham Neubig, Pengfei Liu

In this work, we conceptualize the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models.

Informativeness Machine Translation +3

Examining and Combating Spurious Features under Distribution Shift

1 code implementation 14 Jun 2021 Chunting Zhou, Xuezhe Ma, Paul Michel, Graham Neubig

Group distributionally robust optimization (DRO) provides an effective tool to alleviate covariate shift by minimizing the worst-case training loss over a set of pre-defined groups.
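
In its simplest form, the group DRO objective referenced here swaps the average training loss for the worst group's average loss. A toy sketch with made-up per-example losses and group labels:

    import numpy as np

    def group_dro_loss(losses, groups):
        """Highest per-group mean loss: the quantity group DRO minimizes."""
        losses, groups = np.asarray(losses), np.asarray(groups)
        return max(losses[groups == g].mean() for g in np.unique(groups))

    losses = [0.2, 0.3, 1.5, 1.2]   # per-example training losses
    groups = [0, 0, 1, 1]           # pre-defined group of each example
    print(group_dro_loss(losses, groups))  # -> 1.35, the worst group's mean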

CitationIE: Leveraging the Citation Graph for Scientific Information Extraction

1 code implementation ACL 2021 Vijay Viswanathan, Graham Neubig, Pengfei Liu

Automatically extracting key information from scientific documents has the potential to help scientists work more efficiently and accelerate the pace of scientific progress.

Data Augmentation for Sign Language Gloss Translation

no code implementations MTSummit 2021 Amit Moryossef, Kayo Yin, Graham Neubig, Yoav Goldberg

Sign language translation (SLT) is often decomposed into video-to-gloss recognition and gloss-to-text translation, where a gloss is a sequence of transcribed spoken-language words in the order in which they are signed.

Data Augmentation Low-Resource Neural Machine Translation +3

Measuring and Increasing Context Usage in Context-Aware Machine Translation

1 code implementation ACL 2021 Patrick Fernandes, Kayo Yin, Graham Neubig, André F. T. Martins

Recent work in neural machine translation has demonstrated both the necessity and feasibility of using inter-sentential context -- context from sentences other than those currently being translated.

Document Level Machine Translation Machine Translation +1

Paraphrastic Representations at Scale

1 code implementation 30 Apr 2021 John Wieting, Kevin Gimpel, Graham Neubig, Taylor Berg-Kirkpatrick

We train these models on large amounts of data, achieving significantly improved performance from the original papers proposing the methods on a suite of monolingual semantic similarity, cross-lingual semantic similarity, and bitext mining tasks.

Semantic Similarity Semantic Textual Similarity +1

MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning

2 code implementations NAACL 2021 Mengzhou Xia, Guoqing Zheng, Subhabrata Mukherjee, Milad Shokouhi, Graham Neubig, Ahmed Hassan Awadallah

Extensive experiments on real-world low-resource languages - without access to large-scale monolingual corpora or large amounts of labeled data - for tasks like cross-lingual sentiment analysis and named entity recognition show the effectiveness of our approach.

Cross-Lingual Transfer Meta-Learning +5

ExplainaBoard: An Explainable Leaderboard for NLP

1 code implementation ACL 2021 Pengfei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaicheng Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, Zi-Yi Dou, Graham Neubig

In this paper, we present a new conceptualization and implementation of NLP evaluation: the ExplainaBoard, which in addition to inheriting the functionality of the standard leaderboard, also allows researchers to (i) diagnose strengths and weaknesses of a single system (e.g., what is the best-performing system bad at?)

Machine Translation

MasakhaNER: Named Entity Recognition for African Languages

2 code implementations 22 Mar 2021 David Ifeoluwa Adelani, Jade Abbott, Graham Neubig, Daniel D'souza, Julia Kreutzer, Constantine Lignos, Chester Palen-Michel, Happy Buzaaba, Shruti Rijhwani, Sebastian Ruder, Stephen Mayhew, Israel Abebe Azime, Shamsuddeen Muhammad, Chris Chinenye Emezue, Joyce Nakatumba-Nabende, Perez Ogayo, Anuoluwapo Aremu, Catherine Gitau, Derguene Mbaye, Jesujoba Alabi, Seid Muhie Yimam, Tajuddeen Gwadabe, Ignatius Ezeani, Rubungo Andre Niyongabo, Jonathan Mukiibi, Verrah Otiende, Iroro Orife, Davis David, Samba Ngom, Tosin Adewumi, Paul Rayson, Mofetoluwa Adeyemi, Gerald Muriuki, Emmanuel Anebi, Chiamaka Chukwuneke, Nkiruka Odu, Eric Peter Wairagala, Samuel Oyerinde, Clemencia Siro, Tobius Saul Bateesa, Temilola Oloyede, Yvonne Wambui, Victor Akinode, Deborah Nabagereka, Maurice Katusiime, Ayodele Awokoya, Mouhamadane MBOUP, Dibora Gebreyohannes, Henok Tilaye, Kelechi Nwaike, Degaga Wolde, Abdoulaye Faye, Blessing Sibanda, Orevaoghene Ahia, Bonaventure F. P. Dossou, Kelechi Ogueji, Thierno Ibrahima DIOP, Abdoulaye Diallo, Adewale Akinfaderin, Tendai Marengereke, Salomey Osei

We take a step towards addressing the under-representation of the African continent in NLP research by creating the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages, bringing together a variety of stakeholders.

named-entity-recognition Named Entity Recognition +2

Modeling the Second Player in Distributionally Robust Optimization

1 code implementation ICLR 2021 Paul Michel, Tatsunori Hashimoto, Graham Neubig

Distributionally robust optimization (DRO) provides a framework for training machine learning models that are able to perform well on a collection of related data distributions (the "uncertainty set").

Model Selection

Multi-view Subword Regularization

1 code implementation NAACL 2021 Xinyi Wang, Sebastian Ruder, Graham Neubig

Multilingual pretrained representations generally rely on subword segmentation algorithms to create a shared multilingual vocabulary.

Cross-Lingual Transfer Segmentation

Meta Back-translation

1 code implementation ICLR 2021 Hieu Pham, Xinyi Wang, Yiming Yang, Graham Neubig

Back-translation is an effective strategy to improve the performance of Neural Machine Translation~(NMT) by generating pseudo-parallel data.

Machine Translation Meta-Learning +2

Towards More Fine-grained and Reliable NLP Performance Prediction

1 code implementation EACL 2021 Zihuiwen Ye, Pengfei Liu, Jinlan Fu, Graham Neubig

We perform an analysis of four types of NLP tasks, and both demonstrate the feasibility of fine-grained performance prediction and the necessity to perform reliability analysis for performance prediction methods in the future.

Can We Automate Scientific Reviewing?

1 code implementation 30 Jan 2021 Weizhe Yuan, Pengfei Liu, Graham Neubig

The rapid development of science and technology has been accompanied by an exponential growth in peer-reviewed scientific publications.

Review Generation

Learning Structural Edits via Incremental Tree Transformations

1 code implementation ICLR 2021 Ziyu Yao, Frank F. Xu, Pengcheng Yin, Huan Sun, Graham Neubig

To show the unique benefits of modeling tree edits directly, we further propose a novel edit encoder for learning to represent edits, as well as an imitation learning method that allows the editor to be more robust.

Imitation Learning

In-IDE Code Generation from Natural Language: Promise and Challenges

no code implementations 27 Jan 2021 Frank F. Xu, Bogdan Vasilescu, Graham Neubig

A great part of software development involves conceptualizing or communicating the underlying procedures and logic that need to be expressed in programs.

Code Generation Data Visualization Software Engineering

Word Alignment by Fine-tuning Embeddings on Parallel Corpora

3 code implementations EACL 2021 Zi-Yi Dou, Graham Neubig

In addition, we demonstrate that we are able to train multilingual word aligners that can obtain robust performance on different language pairs.

Cross-Lingual Transfer Translation +2

How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering

1 code implementation 2 Dec 2020 Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig

We examine this question from the point of view of calibration, the property of a probabilistic model's predicted probabilities actually being well correlated with the probabilities of correctness.

Common Sense Reasoning Question Answering
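
To make the notion of calibration concrete, one common way to quantify it is expected calibration error: bin predictions by confidence and compare each bin's average confidence to its empirical accuracy. The numbers below are made up, and this is not necessarily the paper's exact protocol.

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        """Size-weighted average of |confidence - accuracy| across bins."""
        confidences = np.asarray(confidences)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                gap = abs(confidences[mask].mean() - correct[mask].mean())
                ece += mask.mean() * gap    # weight by the bin's share of examples
        return ece

    conf = [0.9, 0.8, 0.95, 0.6, 0.55]  # model's predicted probabilities
    hits = [1, 1, 0, 1, 0]              # whether each answer was correct
    print(expected_calibration_error(conf, hits))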

Endangered Languages meet Modern NLP

no code implementations COLING 2020 Antonios Anastasopoulos, Christopher Cox, Graham Neubig, Hilaria Cruz

This tutorial will focus on NLP for endangered languages documentation and revitalization.

Evaluating Explanations: How much do explanations from the teacher aid students?

1 code implementation 1 Dec 2020 Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, William W. Cohen

While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they ought to be evaluated often go unstated.

Question Answering text-classification +1

Automatic Interlinear Glossing for Under-Resourced Languages Leveraging Translations

no code implementations COLING 2020 Xingyuan Zhao, Satoru Ozaki, Antonios Anastasopoulos, Graham Neubig, Lori Levin

Interlinear Glossed Text (IGT) is a widely used format for encoding linguistic information in language documentation projects and scholarly papers.

Cross-Lingual Transfer LEMMA +1

Decoding and Diversity in Machine Translation

no code implementations 26 Nov 2020 Nicholas Roberts, Davis Liang, Graham Neubig, Zachary C. Lipton

This makes human-level BLEU a misleading benchmark in that modern MT systems cannot approach human-level BLEU while simultaneously maintaining human-level translation diversity.

Machine Translation NMT +1

WikiAsp: A Dataset for Multi-domain Aspect-based Summarization

1 code implementation 16 Nov 2020 Hiroaki Hayashi, Prashant Budania, Peng Wang, Chris Ackerson, Raj Neervannan, Graham Neubig

In this paper, we propose WikiAsp, a large-scale dataset for multi-domain aspect-based summarization that attempts to spur research in the direction of open-domain aspect-based summarization.

Interpretable Multi-dataset Evaluation for Named Entity Recognition

2 code implementations EMNLP 2020 Jinlan Fu, Pengfei Liu, Graham Neubig

With the proliferation of models for natural language processing tasks, it is even harder to understand the differences between models and their relative merits.

named-entity-recognition Named Entity Recognition +1

OCR Post Correction for Endangered Language Texts

1 code implementation EMNLP 2020 Shruti Rijhwani, Antonios Anastasopoulos, Graham Neubig

There is little to no data available to build natural language processing models for most endangered languages.

Optical Character Recognition (OCR)

Detecting Hallucinated Content in Conditional Neural Sequence Generation

2 code implementations Findings (ACL) 2021 Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer, Marjan Ghazvininejad

Neural sequence models can generate highly fluent sentences, but recent studies have shown that they are also prone to hallucinate additional content not supported by the input.

Abstractive Text Summarization Hallucination +1

Weakly- and Semi-supervised Evidence Extraction

1 code implementation Findings of the Association for Computational Linguistics 2020 Danish Pruthi, Bhuwan Dhingra, Graham Neubig, Zachary C. Lipton

For many prediction tasks, stakeholders desire not only predictions but also supporting evidence that a human can use to verify its correctness.

Reducing Confusion in Active Learning for Part-Of-Speech Tagging

no code implementations 2 Nov 2020 Aditi Chaudhary, Antonios Anastasopoulos, Zaid Sheikh, Graham Neubig

Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost.

Active Learning Part-Of-Speech Tagging +1

On Learning Text Style Transfer with Direct Rewards

1 code implementation NAACL 2021 Yixin Liu, Graham Neubig, John Wieting

In most cases, the lack of parallel corpora makes it impossible to directly train supervised models for the text style transfer task.

Machine Translation Semantic Similarity +4

GSum: A General Framework for Guided Neural Abstractive Summarization

1 code implementation NAACL 2021 Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig

Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control.

Abstractive Text Summarization

Explicit Alignment Objectives for Multilingual Bidirectional Encoders

no code implementations NAACL 2021 Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, Graham Neubig

Pre-trained cross-lingual encoders such as mBERT (Devlin et al., 2019) and XLMR (Conneau et al., 2020) have proven to be impressively effective at enabling transfer-learning of NLP systems from high-resource languages to low-resource languages.

Retrieval Sentence +3

Re-evaluating Evaluation in Text Summarization

1 code implementation EMNLP 2020 Manik Bhandari, Pranav Gour, Atabak Ashfaq, Pengfei Liu, Graham Neubig

Automated evaluation metrics as a stand-in for manual evaluation are an essential part of the development of text-generation tasks such as text summarization.

Text Generation Text Summarization

X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models

1 code implementation EMNLP 2020 Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, Graham Neubig

We further propose a code-switching-based method to improve the ability of multilingual LMs to access knowledge, and verify its effectiveness on several benchmark languages.

Retrieval

Improving Target-side Lexical Transfer in Multilingual Neural Machine Translation

no code implementations Findings of the Association for Computational Linguistics 2020 Luyu Gao, Xinyi Wang, Graham Neubig

To improve the performance of Neural Machine Translation (NMT) for low-resource languages (LRL), one effective strategy is to leverage parallel data from a related high-resource language (HRL).

Machine Translation NMT +1

Automatic Extraction of Rules Governing Morphological Agreement

1 code implementation EMNLP 2020 Aditi Chaudhary, Antonios Anastasopoulos, Adithya Pratapa, David R. Mortensen, Zaid Sheikh, Yulia Tsvetkov, Graham Neubig

Using cross-lingual transfer, even with no expert annotations in the language of interest, our framework extracts a grammatical specification which is nearly equivalent to those created with large amounts of gold-standard annotated data.

Cross-Lingual Transfer Descriptive

The Return of Lexical Dependencies: Neural Lexicalized PCFGs

3 code implementations 29 Jul 2020 Hao Zhu, Yonatan Bisk, Graham Neubig

In this paper we demonstrate that context-free grammar (CFG) based methods for grammar induction benefit from modeling lexical dependencies.

Transliteration for Cross-Lingual Morphological Inflection

no code implementations WS 2020 Nikitha Murikinati, Antonios Anastasopoulos, Graham Neubig

Cross-lingual transfer between typologically related languages has been proven successful for the task of morphological inflection.

Cross-Lingual Transfer Morphological Inflection +1

Findings of the Fourth Workshop on Neural Generation and Translation

no code implementations WS 2020 Kenneth Heafield, Hiroaki Hayashi, Yusuke Oda, Ioannis Konstas, Andrew Finch, Graham Neubig, Xian Li, Alexandra Birch

We describe the findings of the Fourth Workshop on Neural Generation and Translation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2020).

Machine Translation NMT +1

Learning Sparse Prototypes for Text Generation

1 code implementation NeurIPS 2020 Junxian He, Taylor Berg-Kirkpatrick, Graham Neubig

While effective, these methods are inefficient at test time as a result of needing to store and index the entire training corpus.

Language Modelling Prototype Selection +4
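
The test-time cost mentioned above comes from matching every query against an index of the full training corpus; learning a small prototype set shrinks that index. The sketch below uses generic TF-IDF retrieval over toy sentences as a stand-in for the paper's model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the food was great and the staff were friendly",
    "terrible service and a long wait",
    "lovely atmosphere, will come again",
]
vec = TfidfVectorizer()
index = vec.fit_transform(corpus)  # naive: index over the entire corpus

query = "friendly staff and great food"
sims = cosine_similarity(vec.transform([query]), index)[0]
prototype = corpus[sims.argmax()]  # retrieved prototype, to be edited
print(prototype)
```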

TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data

1 code implementation ACL 2020 Pengcheng Yin, Graham Neubig, Wen-tau Yih, Sebastian Riedel

Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks.

Ranked #10 on Text-To-SQL on Spider (Exact Match Accuracy (Dev) metric)

Semantic Parsing Text-To-SQL

Soft Gazetteers for Low-Resource Named Entity Recognition

1 code implementation ACL 2020 Shruti Rijhwani, Shuyan Zhou, Graham Neubig, Jaime Carbonell

However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages.

Cross-Lingual Entity Linking Entity Linking +4
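
A minimal sketch of the "soft gazetteer" idea, assuming a (hypothetical) cross-lingual entity linker has already produced scored candidates: instead of a binary in-gazetteer flag, each span gets one continuous score per entity type.

```python
from collections import defaultdict

# Hypothetical linker output: span -> [(entity type, score), ...]
CANDIDATES = {
    "lima": [("LOC", 0.71), ("PER", 0.12)],
}

def soft_gazetteer_features(span, types=("PER", "LOC", "ORG")):
    """One continuous score per entity type for a candidate span."""
    scores = defaultdict(float)
    for ent_type, score in CANDIDATES.get(span.lower(), []):
        scores[ent_type] = max(scores[ent_type], score)
    return [scores[t] for t in types]

print(soft_gazetteer_features("Lima"))  # [0.12, 0.71, 0.0]
```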

Predicting Performance for Natural Language Processing Tasks

1 code implementation ACL 2020 Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, Graham Neubig

Given the complexity of combinations of tasks, languages, and domains in natural language processing (NLP) research, it is computationally prohibitive to exhaustively test newly proposed models on each possible experimental setting.
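
One way to read this framing is as a regression problem: predict the score of an untried setting from features of the experiment. The snippet below is a toy illustration with invented features and scores, not the paper's feature set or data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Features: [log10(training size), language-similarity value] (hypothetical)
X = np.array([[4.0, 0.9], [5.0, 0.9], [4.0, 0.3], [5.5, 0.5], [4.5, 0.7]])
y = np.array([21.0, 29.0, 12.0, 25.0, 20.0])  # e.g., BLEU scores (invented)

model = GradientBoostingRegressor().fit(X, y)
print(model.predict(np.array([[5.0, 0.6]])))  # score for an untested setting
```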

Politeness Transfer: A Tag and Generate Approach

2 code implementations ACL 2020 Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W. Black, Shrimai Prabhumoye

This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning.

Sentence Style Transfer +1

Practical Comparable Data Collection for Low-Resource Languages via Images

1 code implementation24 Apr 2020 Aman Madaan, Shruti Rijhwani, Antonios Anastasopoulos, Yiming Yang, Graham Neubig

We propose a method of curating high-quality comparable training data for low-resource languages with monolingual annotators.

Machine Translation Translation

AlloVera: A Multilingual Allophone Database

no code implementations LREC 2020 David R. Mortensen, Xinjian Li, Patrick Littell, Alexis Michaud, Shruti Rijhwani, Antonios Anastasopoulos, Alan W. Black, Florian Metze, Graham Neubig

While phonemic representations are language specific, phonetic representations (stated in terms of (allo)phones) are much closer to a universal (language-independent) transcription.

speech-recognition Speech Recognition

Balancing Training for Multilingual Neural Machine Translation

2 code implementations ACL 2020 Xinyi Wang, Yulia Tsvetkov, Graham Neubig

When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others.

Machine Translation Translation
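
The paper learns these per-language weights automatically; the common static baseline it improves on is temperature-based sampling, sketched below: a language with n examples is sampled proportionally to n^(1/T), so larger T flattens the distribution toward uniform.

```python
def sampling_probs(sizes, temperature=5.0):
    """Temperature-scaled sampling probabilities over languages."""
    scaled = [n ** (1.0 / temperature) for n in sizes]
    total = sum(scaled)
    return [s / total for s in scaled]

corpus_sizes = [1_000_000, 50_000, 2_000]  # hypothetical per-language sizes
print(sampling_probs(corpus_sizes, temperature=1.0))  # follows the raw data
print(sampling_probs(corpus_sizes, temperature=5.0))  # much closer to uniform
```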

Weight Poisoning Attacks on Pre-trained Models

2 code implementations14 Apr 2020 Keita Kurita, Paul Michel, Graham Neubig

We show that by applying a regularization method, which we call RIPPLe, and an initialization procedure, which we call Embedding Surgery, such attacks are possible even with limited knowledge of the dataset and fine-tuning procedure.

Sentiment Analysis Sentiment Classification +1

Dynamic Data Selection and Weighting for Iterative Back-Translation

1 code implementation EMNLP 2020 Zi-Yi Dou, Antonios Anastasopoulos, Graham Neubig

Back-translation has proven to be an effective method to utilize monolingual data in neural machine translation (NMT), and iteratively conducting back-translation can further improve the model performance.

Domain Adaptation Machine Translation +3
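
For orientation, the skeleton below shows where iterative back-translation sits and where the paper's dynamic selection and weighting would plug in. `train` and `translate` are hypothetical placeholders, not a real NMT toolkit.

```python
def train(pairs):                  # placeholder for real NMT training
    return {"data": list(pairs)}

def translate(model, sentences):   # placeholder for real decoding
    return [f"<bt: {s}>" for s in sentences]

parallel = [("hallo welt", "hello world")]
mono_tgt = ["good morning", "see you soon"]  # target-side monolingual data

model_fwd = train(parallel)
for _ in range(2):  # back-translation iterations
    model_bwd = train([(t, s) for s, t in parallel])
    synthetic = list(zip(translate(model_bwd, mono_tgt), mono_tgt))
    # Naive variant: keep every synthetic pair. The paper instead
    # selects and weights these pairs dynamically at each iteration.
    model_fwd = train(parallel + synthetic)
```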

A Set of Recommendations for Assessing Human-Machine Parity in Language Translation

1 code implementation3 Apr 2020 Samuel Läubli, Sheila Castilho, Graham Neubig, Rico Sennrich, Qinlan Shen, Antonio Toral

The quality of machine translation has increased remarkably over the past years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations.

Machine Translation Translation

XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization

4 code implementations24 Mar 2020 Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson

However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing.

Cross-Lingual Transfer Retrieval +1

Improving Candidate Generation for Low-resource Cross-lingual Entity Linking

1 code implementation TACL 2020 Shuyan Zhou, Shruti Rijhwani, John Wieting, Jaime Carbonell, Graham Neubig

Cross-lingual entity linking (XEL) is the task of finding referents in a target-language knowledge base (KB) for mentions extracted from source-language texts.

Cross-Lingual Entity Linking Entity Linking +1

Differentiable Reasoning over a Virtual Knowledge Base

1 code implementation ICLR 2020 Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, William W. Cohen

In particular, we describe a neural module, DrKIT, that traverses textual data like a KB, softly following paths of relations between mentions of entities in the corpus.

Re-Ranking

A Probabilistic Formulation of Unsupervised Text Style Transfer

5 code implementations ICLR 2020 Junxian He, Xinyi Wang, Graham Neubig, Taylor Berg-Kirkpatrick

Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes.

Decipherment Language Modelling +6

Merging Weak and Active Supervision for Semantic Parsing

1 code implementation29 Nov 2019 Ansong Ni, Pengcheng Yin, Graham Neubig

Experiments on WikiTableQuestions with human annotators show that our method can improve the performance with only 100 active queries, especially for weakly-supervised parsers learnt from a cold start.

Active Learning Semantic Parsing

How Can We Know What Language Models Know?

1 code implementation TACL 2020 Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig

Recent work has presented intriguing results examining the knowledge contained in language models (LM) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession".
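
The fill-in-the-blank probing described above can be reproduced with an off-the-shelf masked LM; the snippet below is a generic illustration (the paper's contribution is mining and ensembling better-phrased prompts, not this basic query).

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("Obama is a [MASK] by profession.", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```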

Optimizing Data Usage via Differentiable Rewards

1 code implementation ICML 2020 Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Jaime Carbonell, Graham Neubig

To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems.

Image Classification Machine Translation
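
At the core of such differentiable data-selection methods is a reward that scores a training example by how well its gradient aligns with the gradient of a trusted dev set. A bare-bones version with a toy model and random data, assuming PyTorch:

```python
import torch

model = torch.nn.Linear(4, 2)
loss_fn = torch.nn.CrossEntropyLoss()

def flat_grad(loss):
    """Gradient of `loss` w.r.t. all model parameters, flattened."""
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

x_tr, y_tr = torch.randn(8, 4), torch.randint(0, 2, (8,))
x_dev, y_dev = torch.randn(8, 4), torch.randint(0, 2, (8,))

dev_grad = flat_grad(loss_fn(model(x_dev), y_dev))
for i in range(len(x_tr)):
    g = flat_grad(loss_fn(model(x_tr[i : i + 1]), y_tr[i : i + 1]))
    reward = torch.nn.functional.cosine_similarity(g, dev_grad, dim=0)
    print(f"example {i}: reward {reward.item():+.3f}")  # higher = more useful
```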

A Bilingual Generative Transformer for Semantic Sentence Embedding

2 code implementations EMNLP 2020 John Wieting, Graham Neubig, Taylor Berg-Kirkpatrick

Semantic sentence embedding models encode natural language sentences into vectors, such that closeness in embedding space indicates closeness in the semantics between the sentences.

Semantic Similarity Semantic Textual Similarity +3
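
The property in the sentence above, that closeness in embedding space tracks closeness in meaning, can be checked with any off-the-shelf sentence encoder; the snippet uses sentence-transformers as a generic substitute for the paper's bilingual generative model.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
sents = [
    "A man is playing a guitar.",
    "Someone is strumming a guitar.",
    "The stock market fell sharply.",
]
emb = model.encode(sents, convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]).item())  # high: paraphrases
print(util.cos_sim(emb[0], emb[2]).item())  # low: unrelated
```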

Generalizing Natural Language Analysis through Span-relation Representations

3 code implementations ACL 2020 Zhengbao Jiang, Wei Xu, Jun Araki, Graham Neubig

Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +8

Understanding Knowledge Distillation in Non-autoregressive Machine Translation

no code implementations ICLR 2020 Chunting Zhou, Graham Neubig, Jiatao Gu

We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data.

Knowledge Distillation Machine Translation +1
