Search Results for author: David Berthelot

Found 16 papers, 11 papers with code

AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation

5 code implementations ICLR 2022 David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alex Kurakin

We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different one.

Semi-supervised Domain Adaptation · Unsupervised Domain Adaptation

Assessing Post-Disaster Damage from Satellite Imagery using Semi-Supervised Learning Techniques

no code implementations 24 Nov 2020 Jihyeon Lee, Joseph Z. Xu, Kihyuk Sohn, Wenhan Lu, David Berthelot, Izzeddin Gur, Pranav Khaitan, Ke-Wei Huang, Kyriacos Koupparis, Bernhard Kowatsch

To respond to disasters such as earthquakes, wildfires, and armed conflicts, humanitarian organizations require accurate and timely data in the form of damage assessments, which indicate what buildings and population centers have been most affected.

BIG-bench Machine Learning · Disaster Response +1

ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring

1 code implementation ICLR 2020 David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel

We improve the recently proposed MixMatch semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation anchoring.
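The distribution-alignment idea named above can be sketched in a few lines of NumPy: scale a model's class prediction by the ratio of the labeled-data class marginal to a running average of the model's predictions, then renormalize. The function name and the example numbers below are illustrative, not taken from the paper.

```python
import numpy as np

def distribution_align(pred, target_dist, running_avg):
    """Scale a prediction by target_dist / running_avg, then renormalize
    so the result is a valid probability distribution again."""
    aligned = pred * (target_dist / running_avg)
    return aligned / aligned.sum()

# Example: a 3-class prediction biased toward class 0, while the labeled
# data is uniform over the classes.
pred = np.array([0.6, 0.3, 0.1])            # raw model prediction
target_dist = np.full(3, 1 / 3)             # marginal distribution of labels
running_avg = np.array([0.5, 0.3, 0.2])     # running mean of model outputs
aligned = distribution_align(pred, target_dist, running_avg)
```

Because the model over-predicts class 0 relative to the label marginal, alignment shifts probability mass away from it.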

Creating High Resolution Images with a Latent Adversarial Generator

1 code implementation 4 Mar 2020 David Berthelot, Peyman Milanfar, Ian Goodfellow

That is to say, instead of generating an arbitrary image as a sample from the manifold of natural images, we propose to sample images from a particular "subspace" of natural images, directed by a low-resolution image from the same subspace.

Image Super-Resolution · Vocal Bursts Intensity Prediction

Semi-Supervised Class Discovery

no code implementations 10 Feb 2020 Jeremy Nixon, Jeremiah Liu, David Berthelot

One promising approach to dealing with datapoints that are outside of the initial training distribution (OOD) is to create new classes that capture similarities in the datapoints previously rejected as uncategorizable.

Combining MixMatch and Active Learning for Better Accuracy with Fewer Labels

1 code implementation 2 Dec 2019 Shuang Song, David Berthelot, Afshin Rostamizadeh

This analysis can be used to measure the relative value of labeled/unlabeled data at different points of the learning curve, where we find that although the incremental value of labeled data can be as much as 20x that of unlabeled, it quickly diminishes to less than 3x once more than 2,000 labeled examples are observed.

Active Learning

High Accuracy and High Fidelity Extraction of Neural Networks

no code implementations 3 Sep 2019 Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot

In a model extraction attack, an adversary steals a copy of a remotely deployed machine learning model, given oracle prediction access.

Model extraction · Vocal Bursts Intensity Prediction
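The oracle-access setting described above can be illustrated with a toy NumPy sketch (the secret model, query distribution, and least-squares surrogate are hypothetical choices for illustration, not the paper's setup): the attacker labels self-chosen inputs through the prediction API, then fits a local copy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical remote model: a secret linear classifier the attacker
# cannot inspect -- only its predictions are exposed.
_secret_w = np.array([2.0, -1.0, 0.5])

def oracle(X):
    """Prediction access only: returns 0/1 labels, never the weights."""
    return (X @ _secret_w > 0).astype(float)

# Attack: label attacker-chosen queries through the oracle, then fit a
# local copy (here, least squares on the centered 0/1 labels).
X_query = rng.normal(size=(1000, 3))
y_query = oracle(X_query)
w_stolen, *_ = np.linalg.lstsq(X_query, y_query - 0.5, rcond=None)

# Measure how often the stolen copy agrees with the remote model
# on fresh inputs it was never queried on.
X_test = rng.normal(size=(500, 3))
agreement = ((X_test @ w_stolen > 0) == (oracle(X_test) > 0)).mean()
```

With enough queries the surrogate's decision boundary closely tracks the oracle's, which is the accuracy/fidelity notion the paper studies.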

Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer

7 code implementations ICLR 2019 David Berthelot, Colin Raffel, Aurko Roy, Ian Goodfellow

Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code.
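As a minimal, concrete instance of that framework, the sketch below trains a linear autoencoder in NumPy by gradient descent on the mean squared reconstruction error; the dimensions, learning rate, and iteration count are arbitrary illustrative choices, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
X -= X.mean(axis=0)                        # center the data

d, k = X.shape[1], 2                       # data dim, latent-code dim
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))

def mse():
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial_error = mse()
lr = 0.02
for _ in range(1000):
    Z = X @ W_enc                          # encode: map data to latent codes
    err = Z @ W_dec - X                    # decode and compare to the input
    # gradients of the mean squared reconstruction error
    gW_dec = Z.T @ err * (2 / len(X))
    gW_enc = X.T @ (err @ W_dec.T) * (2 / len(X))
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc
final_error = mse()
```

The 2-dimensional code cannot hold all the information in the 8-dimensional input, so the model learns the compressed representation that reconstructs the data best.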

BEGAN: Boundary Equilibrium Generative Adversarial Networks

18 code implementations 31 Mar 2017 David Berthelot, Thomas Schumm, Luke Metz

We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks.

Ranked #68 on Image Generation on CIFAR-10 (Inception score metric)

Image Generation
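The equilibrium-enforcing term can be sketched as follows, based on the update described in the BEGAN paper; the hyperparameter values (`gamma`, `lambda_k`) are illustrative defaults, not prescriptions.

```python
def began_losses(k, loss_real, loss_fake, gamma=0.5, lambda_k=0.001):
    """One step of BEGAN's equilibrium control.

    loss_real and loss_fake are the auto-encoder discriminator's
    reconstruction losses on real and generated samples.  The variable k
    balances how hard the discriminator pushes on generated samples and
    is driven toward the diversity ratio gamma = loss_fake / loss_real.
    """
    d_loss = loss_real - k * loss_fake         # discriminator objective
    g_loss = loss_fake                         # generator objective
    k = k + lambda_k * (gamma * loss_real - loss_fake)
    k = min(max(k, 0.0), 1.0)                  # keep k in [0, 1]
    return d_loss, g_loss, k
```

When generated samples reconstruct too easily (loss_fake below gamma * loss_real), k grows and the discriminator attends to them more, keeping the two players in balance.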

WikiReading: A Novel Large-scale Language Understanding Task over Wikipedia

2 code implementations ACL 2016 Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, David Berthelot

The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs).

Document Classification · General Classification +2
