Search Results for author: David Belanger

Found 24 papers, 8 papers with code

The Next Decade of Telecommunications Artificial Intelligence

no code implementations19 Jan 2021 Ye Ouyang, Lilei Wang, Aidong Yang, Maulik Shah, David Belanger, Tongqing Gao, Leping Wei, Yaqin Zhang

It has been an exciting journey since mobile communications and artificial intelligence were conceived 37 and 64 years ago, respectively.

Management

Is Transfer Learning Necessary for Protein Landscape Prediction?

no code implementations31 Oct 2020 Amir Shanehsazzadeh, David Belanger, David Dohan

In this paper, we show that CNN models trained solely using supervised learning both compete with and sometimes outperform the best models from TAPE that leverage expensive pretraining on large protein datasets.

Benchmarking Representation Learning +1

Fixed-Length Protein Embeddings using Contextual Lenses

1 code implementation15 Oct 2020 Amir Shanehsazzadeh, David Belanger, David Dohan

We consider transformer (BERT) protein language models that are pretrained on the TrEMBL data set and learn fixed-length embeddings on top of them with contextual lenses.
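The "contextual lens" idea, reducing a variable-length matrix of per-token features to one fixed-length vector, can be illustrated with simple unparameterized pooling reductions. The paper's lenses are learned on top of a pretrained transformer; the functions below are a hypothetical minimal sketch of the reduction step only.

```python
import numpy as np

def mean_pool_lens(token_embeddings):
    # token_embeddings: (seq_len, d) per-token features from a
    # pretrained encoder. Averaging yields one fixed-length vector
    # regardless of sequence length.
    return token_embeddings.mean(axis=0)

def max_pool_lens(token_embeddings):
    # Elementwise max over positions: another length-invariant lens.
    return token_embeddings.max(axis=0)
```

Either lens maps a (seq_len, d) input to a d-dimensional embedding, so proteins of different lengths become comparable vectors.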

Rethinking Attention with Performers

12 code implementations ICLR 2021 Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller

We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness.

D4RL Image Generation +2
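The linear-complexity attention described above rests on approximating the softmax kernel exp(q . k) with positive random features, so attention can be computed as phi(Q) (phi(K)^T V) without ever forming the n x n attention matrix. Below is a minimal NumPy sketch of that idea, not the paper's full FAVOR+ implementation (no orthogonal features, no renormalization tricks); function names are illustrative.

```python
import numpy as np

def positive_random_features(x, w):
    # phi(x)_i = exp(w_i . x - ||x||^2 / 2) / sqrt(m): positive features
    # whose inner product approximates exp(q . k) in expectation.
    m = w.shape[0]
    proj = x @ w.T                                   # (n, m)
    norm = 0.5 * np.sum(x**2, axis=-1, keepdims=True)
    return np.exp(proj - norm) / np.sqrt(m)

def performer_attention(Q, K, V, m=256, seed=0):
    # Approximate softmax attention in O(n m d) time and memory,
    # linear in sequence length n.
    d = Q.shape[-1]
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((m, d))
    # Scaling Q and K by d**-0.25 reproduces the usual 1/sqrt(d)
    # softmax temperature inside the kernel.
    q = positive_random_features(Q / d**0.25, w)
    k = positive_random_features(K / d**0.25, w)
    kv = k.T @ V                                     # (m, d)
    num = q @ kv                                     # (n, d)
    den = q @ k.sum(axis=0, keepdims=True).T         # (n, 1)
    return num / den
```

Note that the n x n matrix never appears: everything routes through the (m, d) summary `kv`, which is what makes the cost linear in sequence length.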

Population-Based Black-Box Optimization for Biological Sequence Design

no code implementations ICML 2020 Christof Angermueller, David Belanger, Andreea Gane, Zelda Mariet, David Dohan, Kevin Murphy, Lucy Colwell, D. Sculley

The cost and latency of wet-lab experiments require methods that find good sequences in a few experimental rounds of large batches of sequences, a setting that off-the-shelf black-box optimization methods are ill-equipped to handle.

Rapid Prediction of Electron-Ionization Mass Spectrometry using Neural Networks

no code implementations21 Nov 2018 Jennifer N. Wei, David Belanger, Ryan P. Adams, D. Sculley

When confronted with a substance of unknown identity, researchers often perform mass spectrometry on the sample and compare the observed spectrum to a library of previously-collected spectra to identify the molecule.

BIG-bench Machine Learning
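The library-matching step the abstract describes, comparing an observed spectrum to previously collected spectra, is commonly done with cosine similarity over binned m/z intensities. A minimal sketch of that lookup (hypothetical function; the paper's contribution is predicting spectra with a neural network, not the matching itself):

```python
import numpy as np

def best_library_match(observed, library):
    # observed: vector of binned m/z intensities for the unknown sample.
    # library: dict mapping compound name -> reference spectrum vector.
    # Returns the compound whose spectrum is most similar by cosine score.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(library, key=lambda name: cos(observed, library[name]))
```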

Learning Latent Permutations with Gumbel-Sinkhorn Networks

2 code implementations ICLR 2018 Gonzalo Mena, David Belanger, Scott Linderman, Jasper Snoek

Permutations and matchings are core building blocks in a variety of latent variable models, as they allow us to align, canonicalize, and sort data.
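The core operation behind Gumbel-Sinkhorn is Sinkhorn normalization: alternately normalizing the rows and columns of a positive matrix, which converges to a doubly stochastic matrix (a relaxation of a permutation). Adding Gumbel noise and a temperature gives differentiable samples of near-permutations. A minimal NumPy sketch, with illustrative names and no learned parameters:

```python
import numpy as np

def sinkhorn(log_alpha, n_iters=20):
    # Alternate row and column normalization in log space; the result
    # approaches a doubly stochastic matrix.
    for _ in range(n_iters):
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=1, keepdims=True)
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=0, keepdims=True)
    return np.exp(log_alpha)

def gumbel_sinkhorn(log_alpha, tau=1.0, n_iters=20, seed=0):
    # Perturb scores with Gumbel noise, then Sinkhorn-normalize.
    # As tau -> 0 the output concentrates near a hard permutation.
    rng = np.random.default_rng(seed)
    gumbel = -np.log(-np.log(rng.uniform(size=log_alpha.shape)))
    return sinkhorn((log_alpha + gumbel) / tau, n_iters)
```

Because every step is differentiable in `log_alpha`, the relaxation can sit inside a latent-variable model trained end to end by gradient descent.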

Low-Rank Hidden State Embeddings for Viterbi Sequence Labeling

no code implementations2 Aug 2017 Dung Thai, Shikhar Murty, Trapit Bansal, Luke Vilnis, David Belanger, Andrew McCallum

In textual information extraction and other sequence labeling tasks it is now common to use recurrent neural networks (such as LSTM) to form rich embedded representations of long-term input co-occurrence patterns.

Named Entity Recognition +1

End-to-End Learning for Structured Prediction Energy Networks

no code implementations ICML 2017 David Belanger, Bishan Yang, Andrew McCallum

Structured Prediction Energy Networks (SPENs) are a simple, yet expressive family of structured prediction models (Belanger and McCallum, 2016).

Image Denoising Semantic Role Labeling +1

Synthesizing Normalized Faces from Facial Identity Features

1 code implementation CVPR 2017 Forrester Cole, David Belanger, Dilip Krishnan, Aaron Sarna, Inbar Mosseri, William T. Freeman

We present a method for synthesizing a frontal, neutral-expression image of a person's face given an input face photograph.

Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural Networks

2 code implementations EACL 2017 Rajarshi Das, Arvind Neelakantan, David Belanger, Andrew McCallum

Our goal is to combine the rich multistep inference of symbolic logical reasoning with the generalization capabilities of neural networks.

Logical Reasoning

Structured Prediction Energy Networks

no code implementations19 Nov 2015 David Belanger, Andrew McCallum

This deep architecture captures dependencies between labels that would lead to intractable graphical models, and performs structure learning by automatically learning discriminative features of the structured output.

General Classification Multi-Label Classification +1
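SPEN inference replaces combinatorial search over structured outputs with gradient descent on an energy function defined over a continuous relaxation of the labels. The toy sketch below uses a hand-written quadratic energy for multi-label prediction; the paper's energy is a learned deep network, and all names here are illustrative.

```python
import numpy as np

def energy(y, scores, W):
    # Toy SPEN-style energy over relaxed labels y in [0, 1]^L:
    # a local term from per-label scores plus a global quadratic
    # term coupling labels through W.
    return -scores @ y + y @ W @ y

def predict(scores, W, steps=100, lr=0.1):
    # Inference = projected gradient descent on the energy,
    # then thresholding the relaxed labels.
    y = np.full_like(scores, 0.5)
    for _ in range(steps):
        grad = -scores + (W + W.T) @ y
        y = np.clip(y - lr * grad, 0.0, 1.0)
    return (y > 0.5).astype(int)
```

With W = 0 the energy decomposes and inference reduces to independent per-label thresholding; a nonzero W is what lets the model express dependencies between labels.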

Multilingual Relation Extraction using Compositional Universal Schema

1 code implementation NAACL 2016 Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, Andrew McCallum

In response, this paper introduces significant further improvements to the coverage and flexibility of universal schema relation extraction: predictions for entities unseen in training and multilingual transfer learning to domains with no annotation.

Relation Extraction +4

Bethe Projections for Non-Local Inference

no code implementations4 Mar 2015 Luke Vilnis, David Belanger, Daniel Sheldon, Andrew McCallum

Many inference problems in structured prediction are naturally solved by augmenting a tractable dependency structure with complex, non-local auxiliary objectives.

Handwriting Recognition Structured Prediction +1

A Linear Dynamical System Model for Text

no code implementations13 Feb 2015 David Belanger, Sham Kakade

Finally, the Kalman filter updates can be seen as a linear recurrent neural network.

Language Modelling Word Embeddings
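The abstract's closing observation, that Kalman filter updates can be seen as a linear recurrent neural network, can be shown directly: with a fixed (steady-state) gain, the filtered state is a linear recurrence h_t = A h_{t-1} + B x_t over the input sequence. The matrices A and B below are placeholders for the model's parameters:

```python
import numpy as np

def linear_recurrence(A, B, xs, h0):
    # Steady-state Kalman filtering of a sequence: each update is the
    # linear map h_t = A @ h_{t-1} + B @ x_t, i.e. a linear RNN cell
    # applied across time.
    h = h0
    states = []
    for x in xs:
        h = A @ h + B @ x
        states.append(h)
    return np.stack(states)
```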
