Search Results for author: David Chiang

Found 60 papers, 21 papers with code

Transformers as Recognizers of Formal Languages: A Survey on Expressivity

no code implementations • 1 Nov 2023 • Lena Strobl, William Merrill, Gail Weiss, David Chiang, Dana Angluin

As transformers have gained prominence in natural language processing, some researchers have investigated theoretically what problems they can and cannot solve, by treating problems as formal languages.

BERTwich: Extending BERT's Capabilities to Model Dialectal and Noisy Text

no code implementations • 31 Oct 2023 • Aarohi Srivastava, David Chiang

Real-world NLP applications often deal with nonstandard text (e.g., dialectal, informal, or misspelled text).

Language Modelling • Masked Language Modeling

Efficient Algorithms for Recognizing Weighted Tree-Adjoining Languages

no code implementations • 23 Oct 2023 • Alexandra Butoi, Tim Vieira, Ryan Cotterell, David Chiang

From these, we also immediately obtain stringsum and allsum algorithms for TAG, LIG, PAA, and EPDA.

TAG

Masked Hard-Attention Transformers and Boolean RASP Recognize Exactly the Star-Free Languages

no code implementations • 21 Oct 2023 • Dana Angluin, David Chiang, Andy Yang

We consider transformer encoders with hard attention (in which all attention is focused on exactly one position) and strict future masking (in which each position only attends to positions strictly to its left), and prove that the class of languages recognized by these networks is exactly the star-free languages.

Hard Attention • Position
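The star-free languages are the regular languages definable by expressions using union, concatenation, and complement, but no Kleene star. A minimal membership sketch for one such language (an illustrative example, not code from the paper):

```python
# Illustrative example: over the alphabet {a, b}, the language of strings
# containing no "aa" substring is star-free, since it can be written
# without Kleene star as the complement of  Sigma* a a Sigma*.
def in_no_aa(s: str) -> bool:
    """Membership test for the star-free language 'no two consecutive a's'."""
    return "aa" not in s
```

By contrast, PARITY (bit strings with an odd number of 1s) is regular but not star-free, so by the paper's result these masked hard-attention transformers cannot recognize it.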

Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns

1 code implementation • 3 Oct 2023 • Brian DuSell, David Chiang

Attention, specifically scaled dot-product attention, has proven effective for natural language, but it does not have a mechanism for handling hierarchical patterns of arbitrary nesting depth, which limits its ability to recognize certain syntactic structures.

Language Modelling • Machine Translation

Universal Automatic Phonetic Transcription into the International Phonetic Alphabet

1 code implementation • 7 Aug 2023 • Chihiro Taguchi, Yusuke Sakai, Parisa Haghani, David Chiang

This paper presents a state-of-the-art model for transcribing speech in any language into the International Phonetic Alphabet (IPA).

Convergence and Diversity in the Control Hierarchy

no code implementations • 6 Jun 2023 • Alexandra Butoi, Ryan Cotterell, David Chiang

Furthermore, using an even stricter notion of equivalence called d-strong equivalence, we make precise the intuition that a CFG controlling a CFG is a TAG, a PDA controlling a PDA is an embedded PDA, and a PDA controlling a CFG is a LIG.

TAG

Fine-Tuning BERT with Character-Level Noise for Zero-Shot Transfer to Dialects and Closely-Related Languages

no code implementations • 30 Mar 2023 • Aarohi Srivastava, David Chiang

In this work, we induce character-level noise in various forms when fine-tuning BERT to enable zero-shot cross-lingual transfer to unseen dialects and languages.

Sentence • Zero-Shot Cross-Lingual Transfer
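A minimal sketch of character-level noising of the kind described above. The specific noise operations, rate, and alphabet here are illustrative assumptions, not the paper's exact recipe:

```python
import random

def add_char_noise(text: str, rate: float = 0.1,
                   alphabet: str = "abcdefghijklmnopqrstuvwxyz",
                   seed: int = 0) -> str:
    """Randomly delete, substitute, or insert characters at the given rate."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if rng.random() < rate:
            op = rng.choice(["delete", "substitute", "insert"])
            if op == "delete":
                continue                      # drop the character
            elif op == "substitute":
                out.append(rng.choice(alphabet))
            else:                             # insert before the original
                out.append(rng.choice(alphabet))
                out.append(ch)
        else:
            out.append(ch)
    return "".join(out)
```

Fine-tuning on text perturbed this way exposes the model to the kinds of character-level variation found in dialectal and noisy input.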

Tighter Bounds on the Expressivity of Transformer Encoders

no code implementations • 25 Jan 2023 • David Chiang, Peter Cholak, Anand Pillay

Characterizing neural networks in terms of better-understood formal systems has the potential to yield new insights into the power and limitations of these networks.

Bridging Graph Position Encodings for Transformers with Weighted Graph-Walking Automata

no code implementations • 13 Dec 2022 • Patrick Soga, David Chiang

A current goal in the graph neural network literature is to enable transformers to operate on graph-structured data, given their success on language and vision tasks.

Machine Translation • Position

Algorithms for Weighted Pushdown Automata

1 code implementation • 13 Oct 2022 • Alexandra Butoi, Brian DuSell, Tim Vieira, Ryan Cotterell, David Chiang

Weighted pushdown automata (WPDAs) are at the core of many natural language processing tasks, like syntax-based statistical machine translation and transition-based dependency parsing.

Machine Translation • Transition-Based Dependency Parsing

The Surprising Computational Power of Nondeterministic Stack RNNs

2 code implementations • 4 Oct 2022 • Brian DuSell, David Chiang

Second, it can recognize languages with much larger alphabet sizes than one might expect given the size of its stack alphabet.

Language Modelling

Overcoming a Theoretical Limitation of Self-Attention

1 code implementation • ACL 2022 • David Chiang, Peter Cholak

We examine this limitation using two languages: PARITY, the language of bit strings with an odd number of 1s, and FIRST, the language of bit strings starting with a 1.

LEMMA • Machine Translation +1
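The two languages studied above are simple to state as membership tests (a sketch for reference; the definitions follow the abstract):

```python
# PARITY: bit strings with an odd number of 1s.
def in_parity(s: str) -> bool:
    return s.count("1") % 2 == 1

# FIRST: bit strings starting with a 1.
def in_first(s: str) -> bool:
    return s.startswith("1")
```

FIRST depends only on a single fixed position, while PARITY depends on every position at once, which is what makes the pair a useful probe of self-attention's limitations.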

Learning Hierarchical Structures with Differentiable Nondeterministic Stacks

1 code implementation • ICLR 2022 • Brian DuSell, David Chiang

Learning hierarchical structures in sequential data -- from simple algorithmic patterns to natural language -- in a reliable, generalizable way remains a challenging problem for neural language models.

Inductive Bias • Language Modelling

Data Augmentation by Concatenation for Low-Resource Translation: A Mystery and a Solution

no code implementations • ACL (IWSLT) 2021 • Toan Q. Nguyen, Kenton Murray, David Chiang

In this paper, we investigate the driving factors behind concatenation, a simple but effective data augmentation method for low-resource neural machine translation.

Data Augmentation • Low-Resource Neural Machine Translation +2

Named Tensor Notation

1 code implementation • 25 Feb 2021 • David Chiang, Alexander M. Rush, Boaz Barak

We propose a notation for tensors with named axes, which relieves the author, reader, and future implementers of machine learning models from the burden of keeping track of the order of axes and the purpose of each.
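A toy illustration of the idea, addressing axes by name rather than by position. This wrapper is a made-up sketch for a 2-D case, not the paper's notation or any library API:

```python
class Named2D:
    """Toy 2-D tensor with named axes (illustrative, pure Python)."""
    def __init__(self, rows, names):
        self.rows = rows            # list of lists
        self.names = tuple(names)   # (row_axis_name, column_axis_name)

    def sum(self, name):
        """Reduce over the axis identified by name rather than position."""
        if name == self.names[0]:   # sum down the rows
            return [sum(col) for col in zip(*self.rows)]
        if name == self.names[1]:   # sum across the columns
            return [sum(row) for row in self.rows]
        raise KeyError(name)

x = Named2D([[1, 2, 3], [4, 5, 6]], names=("batch", "feature"))
per_example = x.sum("feature")   # one total per batch element
per_feature = x.sum("batch")     # one total per feature
```

With named axes, the reader never has to remember whether "batch" is axis 0 or axis 1, which is the burden the notation is designed to remove.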

Factor Graph Grammars

1 code implementation • NeurIPS 2020 • David Chiang, Darcey Riley

We propose the use of hyperedge replacement graph grammars for factor graphs, or factor graph grammars (FGGs) for short.

Translating Recursive Probabilistic Programs to Factor Graph Grammars

no code implementations • 22 Oct 2020 • David Chiang, Chung-chieh Shan

It is natural for probabilistic programs to use conditionals to express alternative substructures in models, and loops (recursion) to express repeated substructures in models.

Translation

Learning Context-Free Languages with Nondeterministic Stack RNNs

1 code implementation • CoNLL 2020 • Brian DuSell, David Chiang

We present a differentiable stack data structure that simultaneously and tractably encodes an exponential number of stack configurations, based on Lang's algorithm for simulating nondeterministic pushdown automata.
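The core idea of tracking many stack configurations at once can be illustrated with a brute-force set-of-configurations NPDA simulation. The automaton below (for even-length palindromes) is a made-up example, and the paper's structure uses Lang's dynamic-programming algorithm rather than explicit enumeration:

```python
def accepts_even_palindrome(s: str) -> bool:
    """Simulate an NPDA for { w w^R } by tracking all (phase, stack) configs."""
    if not s:
        return True  # "" = "" + reverse("")
    # A configuration is (phase, stack): phase 0 = pushing, phase 1 = popping.
    configs = {(0, ())}
    for ch in s:
        nxt = set()
        for phase, stack in configs:
            if phase == 0:
                nxt.add((0, stack + (ch,)))       # keep pushing
                # Nondeterministically guess the midpoint was reached:
                # treat ch as the first symbol of the reversed half.
                if stack and stack[-1] == ch:
                    nxt.add((1, stack[:-1]))
            else:
                if stack and stack[-1] == ch:     # pop matching symbol
                    nxt.add((1, stack[:-1]))
        configs = nxt
    return any(phase == 1 and not stack for phase, stack in configs)
```

The explicit configuration set can grow exponentially in the input length; the point of Lang's algorithm (and of the differentiable stack built on it) is to encode the same set of possibilities tractably.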

Representing Unordered Data Using Complex-Weighted Multiset Automata

no code implementations • 2 Jan 2020 • Justin DeBenedetto, David Chiang

Unordered, variable-sized inputs arise in many settings across multiple fields.

Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT 2019 Efficiency Task

no code implementations • WS 2019 • Kenton Murray, Brian DuSell, David Chiang

We investigated the impact of auto-sizing (Murray and Chiang, 2015; Murray et al., 2019) on the Transformer network (Vaswani et al., 2017), with the goal of substantially reducing the number of parameters in the model.

Accelerating Sparse Matrix Operations in Neural Networks on Graphics Processing Units

no code implementations • ACL 2019 • Arturo Argueta, David Chiang

Operations using sparse structures are common in natural language models at the input and output layers, because these models operate on sequences over discrete alphabets.

Measuring Human Perception to Improve Handwritten Document Transcription

no code implementations • 7 Apr 2019 • Samuel Grieggs, Bingyu Shen, Greta Rauch, Pei Li, Jiaqi Ma, David Chiang, Brian Price, Walter J. Scheirer

The subtleties of human perception, as measured by vision scientists through the use of psychophysics, are important clues to the internal workings of visual recognition.

Neural Machine Translation of Text from Non-Native Speakers

2 code implementations • NAACL 2019 • Antonios Anastasopoulos, Alison Lui, Toan Nguyen, David Chiang

Neural Machine Translation (NMT) systems are known to degrade when confronted with noisy data, especially when the system is trained only on clean data.

Machine Translation • NMT +1

Part-of-Speech Tagging on an Endangered Language: a Parallel Griko-Italian Resource

1 code implementation • COLING 2018 • Antonis Anastasopoulos, Marika Lekakou, Josep Quer, Eleni Zimianiti, Justin DeBenedetto, David Chiang

Most work on part-of-speech (POS) tagging is focused on high resource languages, or examines low-resource and active learning settings through simulated studies.

Active Learning • Cross-Lingual Transfer +4

Composing Finite State Transducers on GPUs

no code implementations • ACL 2018 • Arturo Argueta, David Chiang

Weighted finite-state transducers (FSTs) are frequently used in language processing to handle tasks such as part-of-speech tagging and speech recognition.

Part-Of-Speech Tagging • Speech Recognition +1

Leveraging translations for speech transcription in low-resource settings

1 code implementation • 23 Mar 2018 • Antonis Anastasopoulos, David Chiang

Recently proposed data collection frameworks for endangered language documentation aim not only to collect speech in the language of interest, but also to collect translations into a high-resource language that will render the collected resource interpretable.

Weighted DAG Automata for Semantic Graphs

no code implementations • CL 2018 • David Chiang, Frank Drewes, Daniel Gildea, Adam Lopez, Giorgio Satta

Graphs have a variety of uses in natural language processing, particularly as representations of linguistic meaning.

Tied Multitask Learning for Neural Speech Translation

no code implementations • NAACL 2018 • Antonios Anastasopoulos, David Chiang

We explore multitask models for neural translation of speech, augmenting them in order to reflect two intuitive notions.

Translation

Spoken Term Discovery for Language Documentation using Translations

no code implementations • WS 2017 • Antonios Anastasopoulos, Sameer Bansal, David Chiang, Sharon Goldwater, Adam Lopez

Vast amounts of speech data collected for language documentation and research remain untranscribed and unsearchable, but often a small amount of speech may have text translations available.

Translation

Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation

no code implementations • IJCNLP 2017 • Toan Q. Nguyen, David Chiang

We present a simple method to improve neural translation of a low-resource language pair using parallel data from a related, also low-resource, language pair.

Machine Translation • Transfer Learning +2

Improved Neural Machine Translation with a Syntax-Aware Encoder and Decoder

1 code implementation • ACL 2017 • Huadong Chen, Shu-Jian Huang, David Chiang, Jia-Jun Chen

Most neural machine translation (NMT) models are based on the sequential encoder-decoder framework, which makes no use of syntactic information.

Machine Translation • NMT +1

A case study on using speech-to-translation alignments for language documentation

no code implementations • WS 2017 • Antonios Anastasopoulos, David Chiang

For many low-resource or endangered languages, spoken language resources are more likely to be annotated with translations than with transcriptions.

Speech Recognition +1

DyNet: The Dynamic Neural Network Toolkit

4 code implementations • 15 Jan 2017 • Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, Pengcheng Yin

In the static declaration strategy that is used in toolkits like Theano, CNTK, and TensorFlow, the user first defines a computation graph (a symbolic representation of the computation), and then examples are fed into an engine that executes this computation and computes its derivatives.

graph construction

Decoding with Finite-State Transducers on GPUs

no code implementations • EACL 2017 • Arturo Argueta, David Chiang

Weighted finite automata and transducers (including hidden Markov models and conditional random fields) are widely used in natural language processing (NLP) to perform tasks such as morphological analysis, part-of-speech tagging, chunking, named entity recognition, speech recognition, and others.

Chunking • Morphological Analysis +6

Growing Graphs with Hyperedge Replacement Graph Grammars

1 code implementation • 10 Aug 2016 • Salvador Aguiñaga, Rodrigo Palacios, David Chiang, Tim Weninger

In experiments on large real world networks, we show that random graphs, generated from extracted graph grammars, exhibit a wide range of properties that are very similar to the original graphs.
