Search Results for author: Michael Mathieu

Found 14 papers, 10 papers with code

Disentangling factors of variation in deep representations using adversarial training

3 code implementations • 10 Nov 2016 • Michael Mathieu, Junbo Zhao, Pablo Sprechmann, Aditya Ramesh, Yann LeCun

During training, the only available source of supervision comes from our ability to distinguish among different observations belonging to the same class.

Disentanglement

Energy-based Generative Adversarial Network

3 code implementations • 11 Sep 2016 • Junbo Zhao, Michael Mathieu, Yann LeCun

We introduce the "Energy-based Generative Adversarial Network" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions.

Generative Adversarial Network
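
As a reading aid for the entry above: EBGAN trains the discriminator as an energy function with a margin loss, so real samples receive low energy while generated samples are pushed up to at least a margin m, and the generator tries to lower the energy of its own samples. The sketch below is a minimal, hedged illustration of those two objectives; the autoencoder-style energy and the margin value are assumptions for illustration, not the paper's exact architecture or code.

```python
# Minimal sketch of EBGAN-style margin objectives (illustrative, not the paper's code).
# The discriminator assigns an energy to each sample; here the energy is the
# reconstruction error of an autoencoder-style discriminator (an assumption).
import torch
import torch.nn.functional as F

def energy(reconstruction, x):
    # Per-sample energy = mean squared reconstruction error.
    return ((reconstruction - x) ** 2).flatten(1).mean(dim=1)

def ebgan_losses(energy_real, energy_fake, margin=10.0):
    # Discriminator: push real energy down, fake energy up to at least `margin`.
    #   L_D = D(x) + [margin - D(G(z))]^+
    # Generator: lower the energy assigned to its own samples.
    #   L_G = D(G(z))
    loss_d = energy_real.mean() + F.relu(margin - energy_fake).mean()
    loss_g = energy_fake.mean()
    return loss_d, loss_g
```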

Deep multi-scale video prediction beyond mean square error

5 code implementations • 17 Nov 2015 • Michael Mathieu, Camille Couprie, Yann LeCun

Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics.

Optical Flow Estimation • Video Prediction
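
One way to go "beyond mean square error", as the title of the entry above suggests, is to also penalize mismatches between the spatial gradients of predicted and target frames (a gradient difference loss), which discourages blurry predictions. The sketch below is a hedged illustration of such a loss for (N, C, H, W) tensors; the exponent and the mean reduction are my assumptions, not the paper's exact settings.

```python
# Hedged sketch of a gradient difference loss (GDL) for frame prediction:
# compare absolute horizontal/vertical image gradients of prediction and target.
import torch

def gdl_loss(pred, target, alpha=1.0):
    def grads(x):
        gx = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs()  # horizontal gradients
        gy = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs()  # vertical gradients
        return gx, gy

    pgx, pgy = grads(pred)
    tgx, tgy = grads(target)
    return ((pgx - tgx).abs() ** alpha).mean() + ((pgy - tgy).abs() ** alpha).mean()
```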

Learning to Linearize Under Uncertainty

no code implementations • NeurIPS 2015 • Ross Goroshin, Michael Mathieu, Yann LeCun

Training deep feature hierarchies to solve supervised learning tasks has achieved state-of-the-art performance on many problems in computer vision.

Stacked What-Where Auto-encoders

2 code implementations • 8 Jun 2015 • Junbo Zhao, Michael Mathieu, Ross Goroshin, Yann LeCun

The objective function includes reconstruction terms that induce the hidden states in the Deconvnet to be similar to those of the Convnet.

Semi-Supervised Image Classification
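
A way to picture the "what-where" idea and the reconstruction terms mentioned above: max-pooling in the Convnet forwards the pooled values ("what") upward and the argmax switches ("where") laterally to the Deconvnet, which unpools with those switches; reconstruction losses then tie the Deconvnet's states back to the Convnet's. The sketch below illustrates this with standard PyTorch pooling/unpooling; layer sizes and the loss weight are placeholders, not the paper's settings.

```python
# Hedged illustration of "what-where" pooling/unpooling with a reconstruction objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(1, 8, 3, padding=1)             # encoder (Convnet) layer
deconv = nn.ConvTranspose2d(8, 1, 3, padding=1)  # decoder (Deconvnet) layer

x = torch.randn(4, 1, 28, 28)
h = F.relu(conv(x))
what, where = F.max_pool2d(h, 2, return_indices=True)  # pooled values + argmax switches

h_rec = F.max_unpool2d(what, where, 2)  # decoder unpools using the "where" switches
x_rec = deconv(h_rec)

# Input reconstruction term plus an intermediate term tying decoder and encoder
# states together (the 0.1 weight is a placeholder).
loss = F.mse_loss(x_rec, x) + 0.1 * F.mse_loss(h_rec, h)
```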

Learning Longer Memory in Recurrent Neural Networks

5 code implementations • 24 Dec 2014 • Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, Marc'Aurelio Ranzato

In this paper, we show that learning longer term patterns in real data, such as in natural language, is perfectly possible using gradient descent.

Language Modelling
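
As context for the claim above: the model behind this line of work keeps, alongside the ordinary hidden state, a slowly changing "context" state whose self-recurrence is a fixed scalar close to 1, so information decays slowly and longer-range patterns remain learnable with plain gradient descent. The step function below is a hedged sketch of that structure; the exact update equations, nonlinearity, and dimensions are my assumptions for illustration, not the paper's parameterization.

```python
# Hedged sketch of a recurrent step with an extra slow "context" state
# (fixed leak alpha close to 1), illustrating how longer-term information can persist.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_ctx = 32, 64, 16
A = rng.standard_normal((d_hid, d_in)) * 0.1   # input -> hidden
R = rng.standard_normal((d_hid, d_hid)) * 0.1  # hidden -> hidden
P = rng.standard_normal((d_hid, d_ctx)) * 0.1  # context -> hidden
B = rng.standard_normal((d_ctx, d_in)) * 0.1   # input -> context
alpha = 0.95                                   # fixed, close to 1: slow context decay

def step(x, h, s):
    s_new = (1.0 - alpha) * (B @ x) + alpha * s   # slow context state
    h_new = np.tanh(A @ x + R @ h + P @ s_new)    # fast hidden state
    return h_new, s_new

h, s = np.zeros(d_hid), np.zeros(d_ctx)
for _ in range(100):
    h, s = step(rng.standard_normal(d_in), h, s)
```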

Fast Convolutional Nets With fbfft: A GPU Performance Evaluation

2 code implementations • 24 Dec 2014 • Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun

We examine the performance profile of Convolutional Neural Network training on the current generation of NVIDIA Graphics Processing Units.

The Loss Surfaces of Multilayer Networks

1 code implementation • 30 Nov 2014 • Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, Yann LeCun

We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and they are located in a well-defined band lower-bounded by the global minimum.

Fast Approximation of Rotations and Hessians matrices

no code implementations • 29 Apr 2014 • Michael Mathieu, Yann LeCun

A new method to represent and approximate rotation matrices is introduced.

OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks

4 code implementations • 21 Dec 2013 • Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun

This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classification tasks.

General Classification • Image Classification • +2

Fast Training of Convolutional Networks through FFTs

no code implementations • 20 Dec 2013 • Michael Mathieu, Mikael Henaff, Yann LeCun

Convolutional networks are one of the most widely employed architectures in computer vision and machine learning.
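
The entry above concerns training convolutional networks with FFTs; the underlying fact is the convolution theorem, which turns a spatial convolution into a pointwise product in the Fourier domain, so transforms of inputs and filters can be reused across many feature-map pairs. Below is a minimal, hedged single-channel NumPy sketch of "valid" convolution via the FFT; the padding and cropping choices are mine and not the paper's implementation (a conv layer's cross-correlation corresponds to using the flipped kernel).

```python
# Hedged sketch: "valid" 2D convolution via the FFT (convolution theorem).
import numpy as np
from scipy.signal import convolve2d  # used only as a reference check

def fft_conv2d_valid(x, k):
    H, W = x.shape
    kh, kw = k.shape
    Fx = np.fft.rfft2(x, s=(H, W))
    Fk = np.fft.rfft2(k, s=(H, W))           # kernel zero-padded to the image size
    full = np.fft.irfft2(Fx * Fk, s=(H, W))  # circular convolution
    # For indices >= kernel_size - 1 the circular result equals the linear ("valid") one.
    return full[kh - 1:, kw - 1:]

x = np.random.rand(32, 32)
k = np.random.rand(5, 5)
assert np.allclose(fft_conv2d_valid(x, k), convolve2d(x, k, mode="valid"))
```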
