3 code implementations • 15 Jun 2021 • Jiquan Ngiam, Benjamin Caine, Vijay Vasudevan, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David Weiss, Ben Sapp, Zhifeng Chen, Jonathon Shlens
In this work, we formulate a model for predicting the behavior of all agents jointly, producing consistent futures that account for interactions between agents.
no code implementations • 11 Jan 2020 • Jeffrey Ling, Nicholas FitzGerald, Zifei Shan, Livio Baldini Soares, Thibault Févry, David Weiss, Tom Kwiatkowski
Language modeling tasks, in which words or word-pieces are predicted on the basis of a local context, have been very effective for learning word embeddings and context-dependent representations of phrases.
Ranked #1 on Entity Linking on CoNLL-Aida
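The local-context prediction objective described in the entry above can be sketched as a tiny CBOW-style model: average the embeddings of the surrounding words and train a softmax to predict the center word. This is a minimal NumPy illustration of the general idea, not the paper's model; the corpus, dimensions, and learning rate are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus (hypothetical example data, not from the paper).
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
word_to_id = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8  # vocabulary size, embedding dimension

E = rng.normal(scale=0.1, size=(V, D))  # input word embeddings
W = rng.normal(scale=0.1, size=(D, V))  # output (softmax) weights

def training_pairs(tokens, window=2):
    """Yield (context word ids, target word id) pairs from a token list."""
    for i in range(len(tokens)):
        ctx = [word_to_id[tokens[j]]
               for j in range(max(0, i - window), min(len(tokens), i + window + 1))
               if j != i]
        yield ctx, word_to_id[tokens[i]]

def step(E, W, ctx_ids, target_id, lr=0.1):
    """One CBOW-style update: average the context embeddings, softmax over
    the vocabulary, then a plain SGD step. Returns the cross-entropy loss."""
    h = E[ctx_ids].mean(axis=0)          # averaged context representation
    logits = h @ W
    p = np.exp(logits - logits.max())    # numerically stable softmax
    p /= p.sum()
    loss = -np.log(p[target_id])
    grad = p.copy()
    grad[target_id] -= 1.0               # d(loss)/d(logits)
    dh = W @ grad                        # gradient w.r.t. the averaged context
    W -= lr * np.outer(h, grad)          # in-place update of output weights
    E[ctx_ids] -= lr * dh / len(ctx_ids)
    return loss

losses = []
for epoch in range(100):
    losses.append(sum(step(E, W, ctx, tgt)
                      for ctx, tgt in training_pairs(corpus)))
```

After training, rows of `E` serve as word embeddings; the loss should drop steadily on this tiny corpus.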
no code implementations • ICLR Workshop LLD 2019 • Jeffrey Ling, Nicholas FitzGerald, Livio Baldini Soares, David Weiss, Tom Kwiatkowski
Language modeling tasks, in which words are predicted on the basis of a local context, have been very effective for learning word embeddings and context-dependent representations of phrases.
no code implementations • EMNLP 2018 • Yuan Zhang, Jason Riesa, Daniel Gillick, Anton Bakalov, Jason Baldridge, David Weiss
We address fine-grained multilingual language identification: providing a language code for every token in a sentence, including codemixed text containing multiple languages.
1 code implementation • EMNLP 2018 • Ji Ma, Kuzman Ganchev, David Weiss
A wide variety of neural-network architectures have been proposed for the task of Chinese word segmentation.
no code implementations • 14 Aug 2018 • Heike Adel, Anton Bryl, David Weiss, Aliaksei Severyn
We study cross-lingual sequence tagging with little or no labeled data in the target language.
1 code implementation • EMNLP 2018 • Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, Andrew McCallum
Unlike previous models which require significant pre-processing to prepare linguistic features, LISA can incorporate syntax using merely raw tokens as input, encoding the sequence only once to simultaneously perform parsing, predicate detection and role labeling for all predicates.
1 code implementation • EMNLP 2017 • Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan McDonald, Slav Petrov
We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models.
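To make the memory claim above concrete, one can compare rough parameter counts for a one-hidden-layer feed-forward model over a small window of word embeddings against a single recurrent (LSTM) layer. The sizes below are hypothetical illustrative choices, not the paper's configurations, and embedding tables are excluded from both counts.

```python
def ff_params(window, emb_dim, hidden, n_labels):
    """Parameter count of a one-hidden-layer net over a concatenated
    window of word embeddings (weights plus biases)."""
    inp = window * emb_dim
    return (inp * hidden + hidden) + (hidden * n_labels + n_labels)

def lstm_params(emb_dim, hidden, n_labels):
    """Parameter count of a standard LSTM layer (4 gates, each with input
    and recurrent weights plus a bias) and an output projection."""
    gates = 4 * (emb_dim * hidden + hidden * hidden + hidden)
    return gates + (hidden * n_labels + n_labels)

small_ff = ff_params(window=3, emb_dim=64, hidden=128, n_labels=40)
lstm = lstm_params(emb_dim=64, hidden=256, n_labels=40)
```

Under these toy sizes the feed-forward model is roughly an order of magnitude smaller, which is the flavor of savings the paper targets.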
no code implementations • 15 Mar 2017 • Chris Alberti, Daniel Andor, Ivan Bogatyy, Michael Collins, Dan Gillick, Lingpeng Kong, Terry Koo, Ji Ma, Mark Omernick, Slav Petrov, Chayut Thanapirom, Zora Tung, David Weiss
We describe a baseline dependency parsing system for the CoNLL 2017 Shared Task.
1 code implementation • 13 Mar 2017 • Lingpeng Kong, Chris Alberti, Daniel Andor, Ivan Bogatyy, David Weiss
In this work, we present a compact, modular framework for constructing novel recurrent neural architectures.
no code implementations • ACL 2016 • Yuan Zhang, David Weiss
Traditional syntax models typically leverage part-of-speech (POS) information by constructing features from hand-tuned templates.
1 code implementation • ACL 2016 • Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, Michael Collins
Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves accuracies comparable to or better than those of recurrent models.
Ranked #16 on Dependency Parsing on Penn Treebank
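The transition-system formulation mentioned above can be made concrete with an arc-standard dependency parser: a stack, a buffer, and arcs built by applying SHIFT / LEFT-ARC / RIGHT-ARC actions. In the paper, a feed-forward network scores these actions from features of the current state; the minimal sketch below only executes a given action sequence, and the toy sentence is illustrative.

```python
def apply_actions(n_tokens, actions):
    """Run arc-standard transitions over token ids 0..n_tokens-1.
    Returns the set of (head, dependent) arcs that were built."""
    stack, buffer = [], list(range(n_tokens))
    arcs = set()
    for act in actions:
        if act == "SHIFT":        # move the next buffer token onto the stack
            stack.append(buffer.pop(0))
        elif act == "LEFT-ARC":   # second-top of stack becomes dependent of top
            dep = stack.pop(-2)
            arcs.add((stack[-1], dep))
        elif act == "RIGHT-ARC":  # top of stack becomes dependent of second-top
            dep = stack.pop()
            arcs.add((stack[-1], dep))
    return arcs

# "the(0) cat(1) sleeps(2)": 'the' attaches to 'cat', 'cat' to 'sleeps'.
arcs = apply_actions(3, ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "LEFT-ARC"])
```

A trained model replaces the fixed action list with a per-state classifier, choosing the highest-scoring legal action at each step.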
no code implementations • IJCNLP 2015 • David Weiss, Chris Alberti, Michael Collins, Slav Petrov
We present structured perceptron training for neural network transition-based dependency parsing.
Ranked #17 on Dependency Parsing on Penn Treebank
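The core of structured perceptron training, as used in the entry above, is the update rule: when the model's best-scoring structure differs from the gold one, add the gold structure's feature vector to the weights and subtract the predicted structure's. The paper applies this with beam search over parser transitions; the toy sketch below instead enumerates all label sequences for a tiny tagging task (the labels, features, and data are hypothetical).

```python
from collections import Counter
from itertools import product

LABELS = ["N", "V"]  # toy tag set

def features(words, tags):
    """Emission and transition indicator features for a tagged sentence."""
    f = Counter()
    prev = "<s>"
    for w, t in zip(words, tags):
        f[f"emit:{w}:{t}"] += 1
        f[f"trans:{prev}:{t}"] += 1
        prev = t
    return f

def score(weights, feats):
    return sum(weights.get(k, 0.0) * v for k, v in feats.items())

def predict(weights, words):
    """Exhaustive argmax over label sequences (a stand-in for beam search)."""
    return max(product(LABELS, repeat=len(words)),
               key=lambda tags: score(weights, features(words, tags)))

def train(data, epochs=5):
    weights = {}
    for _ in range(epochs):
        for words, gold in data:
            pred = predict(weights, words)
            if list(pred) != list(gold):
                # Perceptron update: reward gold features, penalize predicted.
                for k, v in features(words, gold).items():
                    weights[k] = weights.get(k, 0.0) + v
                for k, v in features(words, pred).items():
                    weights[k] = weights.get(k, 0.0) - v
    return weights

data = [(["dogs", "bark"], ["N", "V"]),
        (["cats", "sleep"], ["N", "V"])]
weights = train(data)
```

Because updates only fire on structures the model actually mispredicts, the learned weights directly target the decoder's mistakes, which is the appeal of this training scheme for transition-based parsing.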
no code implementations • CVPR 2014 • Andrea Vedaldi, Siddharth Mahendran, Stavros Tsogkas, Subhransu Maji, Ross Girshick, Juho Kannala, Esa Rahtu, Iasonas Kokkinos, Matthew B. Blaschko, David Weiss, Ben Taskar, Karen Simonyan, Naomi Saphra, Sammy Mohamed
We show that the collected data can be used to study the relation between part detection and attribute prediction by diagnosing the performance of classifiers that pool information from different parts of an object.
no code implementations • CVPR 2013 • David Weiss, Ben Taskar
We propose SCALPEL, a flexible method for object segmentation that integrates rich region-merging cues with mid- and high-level information about object layout, class, and scale into the segmentation process.
no code implementations • NeurIPS 2010 • David Weiss, Benjamin Sapp, Ben Taskar
For many structured prediction problems, complex models often require adopting approximate inference techniques such as variational methods or sampling, which generally provide no satisfactory accuracy guarantees.