Search Results for author: Julien Mairal

Found 85 papers, 43 papers with code

Functional Bilevel Optimization for Machine Learning

no code implementations29 Mar 2024 Ieva Petrulionyte, Julien Mairal, Michael Arbel

In this paper, we introduce a new functional point of view on bilevel optimization problems for machine learning, where the inner objective is minimized over a function space.

Bilevel Optimization
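
For context on the entry above: the classical (parametric) bilevel problem can be written as below; in the paper's functional view, the inner variable ranges over a function space such as an RKHS rather than a finite-dimensional parameter vector. The notation here is generic, not necessarily the paper's.

```latex
\min_{x \in \mathcal{X}} \; F(x) := f\bigl(x, h^{*}_{x}\bigr)
\qquad \text{where} \qquad
h^{*}_{x} \in \operatorname*{arg\,min}_{h \in \mathcal{H}} \; g(x, h).
```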

Fast Semi-supervised Unmixing using Non-convex Optimization

1 code implementation23 Jan 2024 Behnood Rasti, Alexandre Zouaoui, Julien Mairal, Jocelyn Chanussot

Our experimental results validate that enforcing the convexity constraint outperforms the sparsity prior for the endmember library.
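
As a rough illustration of the constrained least-squares problems that arise in linear unmixing, here is a minimal projected-gradient sketch in NumPy with abundances constrained to the probability simplex. The variable names, step size, and iteration count are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def unmix(Y, E, n_iters=500):
    """Estimate abundances A such that Y ~ E @ A, with each column of A on the
    simplex (non-negative, sum-to-one), by projected gradient descent."""
    p, n = E.shape[1], Y.shape[1]
    A = np.full((p, n), 1.0 / p)
    step = 1.0 / np.linalg.norm(E, 2) ** 2        # 1 / Lipschitz constant of the gradient
    for _ in range(n_iters):
        G = E.T @ (E @ A - Y)                     # gradient of 0.5 * ||Y - E A||_F^2
        A = np.apply_along_axis(project_simplex, 0, A - step * G)
    return A
```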

Towards Real-World Focus Stacking with Deep Learning

1 code implementation29 Nov 2023 Alexandre Araujo, Jean Ponce, Julien Mairal

Focus stacking is widely used in micro, macro, and landscape photography to reconstruct all-in-focus images from multiple frames obtained with focus bracketing, that is, with shallow depth of field and different focus planes.

Vision Transformers Need Registers

3 code implementations28 Sep 2023 Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski

Transformers have recently emerged as a powerful tool for learning visual representations.

Object Discovery

SUnAA: Sparse Unmixing using Archetypal Analysis

1 code implementation9 Aug 2023 Behnood Rasti, Alexandre Zouaoui, Julien Mairal, Jocelyn Chanussot

Unlike most conventional sparse unmixing methods, here the minimization problem is non-convex.

GloptiNets: Scalable Non-Convex Optimization with Certificates

1 code implementation NeurIPS 2023 Gaspard Beugnot, Julien Mairal, Alessandro Rudi

We present a novel approach to non-convex optimization with certificates, which handles smooth functions on the hypercube or on the torus.

Combining multi-spectral data with statistical and deep-learning models for improved exoplanet detection in direct imaging at high contrast

no code implementations21 Jun 2023 Olivier Flasseur, Théo Bodrito, Julien Mairal, Jean Ponce, Maud Langlois, Anne-Marie Lagrange

Exoplanet detection by direct imaging is a difficult task: the faint signals from the objects of interest are buried under a spatially structured nuisance component induced by the host star.

SLACK: Stable Learning of Augmentations with Cold-start and KL regularization

no code implementations CVPR 2023 Juliette Marrie, Michael Arbel, Diane Larlus, Julien Mairal

Data augmentation is known to improve the generalization capabilities of neural networks, provided that the set of transformations is chosen with care, a selection often performed manually.

Bilevel Optimization Data Augmentation

Self-Attention in Colors: Another Take on Encoding Graph Structure in Transformers

1 code implementation21 Apr 2023 Romain Menegaux, Emmanuel Jehanno, Margot Selosse, Julien Mairal

We introduce a novel self-attention mechanism, Chromatic Self-Attention (CSA), which extends the notion of attention scores to attention _filters_ that independently modulate the feature channels.

Graph Regression
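
To make the "attention filters" idea above concrete, here is a minimal single-head sketch in PyTorch in which each query-key pair produces one weight per feature channel instead of a single scalar score. This is only a schematic reading of the abstract, not the paper's exact model.

```python
import torch

def chromatic_self_attention(X, Wq, Wk, Wv):
    """Single-head attention with per-channel attention filters.

    X: (n, d) node features; Wq, Wk, Wv: (d, d) projections.
    Standard attention sums the elementwise product Q_i * K_j into one scalar
    score per pair; here the per-channel products are kept.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = X.shape[1]
    filters = Q.unsqueeze(1) * K.unsqueeze(0) / d ** 0.5   # (n, n, d)
    filters = torch.softmax(filters, dim=1)                # normalize over keys, per channel
    return (filters * V.unsqueeze(0)).sum(dim=1)           # (n, d) output
```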

Sequential Counterfactual Risk Minimization

1 code implementation23 Feb 2023 Houssam Zenati, Eustache Diemert, Matthieu Martin, Julien Mairal, Pierre Gaillard

Counterfactual Risk Minimization (CRM) is a framework for dealing with the logged bandit feedback problem, where the goal is to improve a logging policy using offline data.

counterfactual
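
For background on the entry above, the basic counterfactual (off-policy) risk estimate used in CRM is the clipped inverse-propensity-score average sketched below; the clipping constant and names are illustrative, and the paper's contribution is the sequential procedure built on top of this kind of objective.

```python
import numpy as np

def ips_risk(losses, new_policy_probs, logging_probs, clip=10.0):
    """Clipped inverse-propensity-score estimate of a new policy's risk
    from logged bandit feedback.

    losses:           loss observed for each logged action, shape (n,)
    new_policy_probs: probability the new policy gives to the logged action, (n,)
    logging_probs:    propensity of the logging policy for that action, (n,)
    """
    weights = np.minimum(new_policy_probs / logging_probs, clip)
    return float(np.mean(losses * weights))
```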

Learning Reward Functions for Robotic Manipulation by Observing Humans

no code implementations16 Nov 2022 Minttu Alakuijala, Gabriel Dulac-Arnold, Julien Mairal, Jean Ponce, Cordelia Schmid

Unlike prior work on leveraging human videos to teach robots, our method, Human Offline Learned Distances (HOLD), requires neither a priori data from the robot environment, nor a set of task-specific human demonstrations, nor a predefined notion of correspondence across morphologies; yet it accelerates training of several manipulation tasks on a simulated robot arm compared to using only a sparse reward obtained from task completion.

Contrastive Learning

Entropic Descent Archetypal Analysis for Blind Hyperspectral Unmixing

1 code implementation22 Sep 2022 Alexandre Zouaoui, Gedeon Muhawenayo, Behnood Rasti, Jocelyn Chanussot, Julien Mairal

In this paper, we introduce a new algorithm based on archetypal analysis for blind hyperspectral unmixing, assuming linear mixing of endmembers.

Hyperspectral Unmixing Model Selection

High Dynamic Range and Super-Resolution from Raw Image Bursts

no code implementations29 Jul 2022 Bruno Lecouat, Thomas Eboli, Jean Ponce, Julien Mairal

Photographs captured by smartphones and mid-range cameras have limited spatial resolution and dynamic range, with noisy response in underexposed regions and color artefacts in saturated areas.

Image Restoration Super-Resolution +1

On the Benefits of Large Learning Rates for Kernel Methods

no code implementations28 Feb 2022 Gaspard Beugnot, Julien Mairal, Alessandro Rudi

This paper studies an intriguing phenomenon related to the good generalization performance of estimators obtained by using large learning rates within gradient descent algorithms.

Efficient Kernel UCB for Contextual Bandits

1 code implementation11 Feb 2022 Houssam Zenati, Alberto Bietti, Eustache Diemert, Julien Mairal, Matthieu Martin, Pierre Gaillard

While standard methods require O(CT^3) complexity, where T is the horizon and the constant C is related to optimizing the UCB rule, we propose an efficient contextual algorithm for large-scale problems.

Computational Efficiency Multi-Armed Bandits
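
For context, a plain KernelUCB step looks like the sketch below; its cubic cost in the horizon is the baseline complexity that the paper's efficient algorithm avoids. The regularization and exploration parameters here are illustrative.

```python
import numpy as np

def kernel_ucb_scores(K_hist, K_cand, k_cand_diag, rewards, lam=1.0, beta=1.0):
    """UCB scores from kernel ridge regression (naive O(t^3) baseline).

    K_hist:      (t, t) kernel matrix of past context-action pairs
    K_cand:      (m, t) kernel between the m candidate actions and the history
    k_cand_diag: (m,)   k(x, x) for each candidate
    rewards:     (t,)   observed rewards
    """
    A = K_hist + lam * np.eye(len(rewards))
    mean = K_cand @ np.linalg.solve(A, rewards)
    solved = np.linalg.solve(A, K_cand.T)                     # (t, m)
    var = k_cand_diag - np.einsum('ij,ji->i', K_cand, solved)  # posterior variances
    return mean + beta * np.sqrt(np.maximum(var, 0.0))
```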

Self-Supervised Models are Continual Learners

1 code implementation CVPR 2022 Enrico Fini, Victor G. Turrisi da Costa, Xavier Alameda-Pineda, Elisa Ricci, Karteek Alahari, Julien Mairal

Self-supervised models have been shown to produce comparable or better visual representations than their supervised counterparts when trained offline on unlabeled data at scale.

Continual Learning Representation Learning

Amortized Implicit Differentiation for Stochastic Bilevel Optimization

no code implementations ICLR 2022 Michael Arbel, Julien Mairal

We study a class of algorithms for solving bilevel optimization problems in both stochastic and deterministic settings when the inner-level objective is strongly convex.

Bilevel Optimization
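
For context on the entry above: when the inner objective g(x, .) is strongly convex, the implicit function theorem gives the classical expression for the gradient of the outer objective F(x) = f(x, y*(x)), which implicit-differentiation methods approximate (notation generic):

```latex
\nabla F(x) \;=\; \nabla_x f\bigl(x, y^{*}(x)\bigr)
\;-\; \nabla^{2}_{xy} g\bigl(x, y^{*}(x)\bigr)\,
\bigl[\nabla^{2}_{yy} g\bigl(x, y^{*}(x)\bigr)\bigr]^{-1}
\nabla_y f\bigl(x, y^{*}(x)\bigr).
```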

A Trainable Spectral-Spatial Sparse Coding Model for Hyperspectral Image Restoration

2 code implementations NeurIPS 2021 Théo Bodrito, Alexandre Zouaoui, Jocelyn Chanussot, Julien Mairal

Hyperspectral imaging offers new perspectives for diverse applications, ranging from environmental monitoring with airborne or satellite remote sensing to precision farming, food safety, planetary exploration, and astrophysics.

Denoising Hyperspectral Image Denoising +1

Beyond Tikhonov: Faster Learning with Self-Concordant Losses via Iterative Regularization

no code implementations NeurIPS 2021 Gaspard Beugnot, Julien Mairal, Alessandro Rudi

The theory of spectral filtering is a remarkable tool to understand the statistical properties of learning with kernels.

Residual Reinforcement Learning from Demonstrations

no code implementations15 Jun 2021 Minttu Alakuijala, Gabriel Dulac-Arnold, Julien Mairal, Jean Ponce, Cordelia Schmid

Residual reinforcement learning (RL) has been proposed as a way to solve challenging robotic tasks by adapting control actions from a conventional feedback controller to maximize a reward signal.

reinforcement-learning Reinforcement Learning (RL)
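
The core idea described above can be summarized in one line: the executed action is the sum of a conventional controller's action and a learned correction. This is the generic residual-RL pattern; scaling and exploration details vary by method.

```python
def residual_action(state, base_controller, residual_policy, scale=1.0):
    """Residual RL: a hand-designed feedback controller supplies a base action
    and a learned policy adds a correction on top of it."""
    return base_controller(state) + scale * residual_policy(state)
```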

GraphiT: Encoding Graph Structure in Transformers

1 code implementation10 Jun 2021 Grégoire Mialon, Dexiong Chen, Margot Selosse, Julien Mairal

We show that viewing graphs as sets of node features and incorporating structural and positional information into a transformer architecture yields representations that outperform those learned with classical graph neural networks (GNNs).
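
One common way to inject graph structure into a transformer, in the spirit of the entry above, is to build pairwise positional information from a kernel on the graph, for example the diffusion kernel of the Laplacian. The sketch below is a generic recipe, not necessarily the paper's exact one.

```python
import numpy as np

def diffusion_kernel(adj, beta=1.0):
    """Graph diffusion kernel exp(-beta * L): a dense matrix of pairwise
    structural proximities that can modulate attention scores.

    adj: (n, n) symmetric adjacency matrix.
    """
    L = np.diag(adj.sum(axis=1)) - adj          # combinatorial graph Laplacian
    w, U = np.linalg.eigh(L)
    return U @ np.diag(np.exp(-beta * w)) @ U.T

# Schematic use inside attention:
#   scores = softmax(Q @ K.T / sqrt(d)) * diffusion_kernel(adj)
```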

Emerging Properties in Self-Supervised Vision Transformers

26 code implementations ICCV 2021 Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin

In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets).

Copy Detection Image Retrieval +7

Lucas-Kanade Reloaded: End-to-End Super-Resolution from Raw Image Bursts

no code implementations ICCV 2021 Bruno Lecouat, Jean Ponce, Julien Mairal

This work addresses the problem of reconstructing a high-resolution image from multiple lower-resolution snapshots captured from slightly different viewpoints in space and time.

Super-Resolution

A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding

1 code implementation NeurIPS 2020 Bruno Lecouat, Jean Ponce, Julien Mairal

We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems, and whose architectures are derived from an optimization algorithm.

Image Denoising Stereo Matching
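
As a minimal illustration of layers whose forward pass solves a non-smooth convex problem, the sketch below unrolls a few ISTA iterations for the Lasso; making the dictionary and threshold learnable turns it into a trainable layer. This is a generic unrolling sketch, not the paper's architecture.

```python
import torch

def soft_threshold(x, lam):
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)

def unrolled_ista_layer(y, D, lam=0.1, n_steps=10):
    """Forward pass running ISTA on min_z 0.5*||y - D z||^2 + lam*||z||_1.

    y: (m,) input signal, D: (m, p) dictionary (could be a learnable parameter).
    """
    step = 1.0 / torch.linalg.matrix_norm(D, ord=2) ** 2   # 1 / Lipschitz constant
    z = torch.zeros(D.shape[1])
    for _ in range(n_steps):
        grad = D.t() @ (D @ z - y)
        z = soft_threshold(z - step * grad, step * lam)
    return z
```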

A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention

1 code implementation ICLR 2021 Grégoire Mialon, Dexiong Chen, Alexandre d'Aspremont, Julien Mairal

We address the problem of learning on sets of features, motivated by the need of performing pooling operations in long biological sequences of varying sizes, with long-range dependencies, and possibly few labeled data.

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

16 code implementations NeurIPS 2020 Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, Armand Joulin

In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements much.

Contrastive Learning Data Augmentation +2
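
For context on the multi-crop strategy mentioned above, here is a minimal torchvision sketch: a few large, high-resolution views plus several small, low-resolution views of the same image. The crop sizes and scale ranges below are common choices, not necessarily the paper's exact settings.

```python
from torchvision import transforms

def multi_crop(image, n_global=2, n_local=6):
    """Return a mix of global (large) and local (small) random crops of one image."""
    global_crop = transforms.RandomResizedCrop(224, scale=(0.4, 1.0))
    local_crop = transforms.RandomResizedCrop(96, scale=(0.05, 0.4))
    return ([global_crop(image) for _ in range(n_global)] +
            [local_crop(image) for _ in range(n_local)])
```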

Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification

1 code implementation ECCV 2020 Nikita Dvornik, Cordelia Schmid, Julien Mairal

Popular approaches for few-shot classification consist of first learning a generic data representation based on a large annotated dataset, before adapting the representation to new classes given only a few labeled samples.

feature selection Few-Shot Image Classification +2

Convolutional Kernel Networks for Graph-Structured Data

1 code implementation ICML 2020 Dexiong Chen, Laurent Jacob, Julien Mairal

On the other hand, our model can also be trained end-to-end on large-scale data, leading to new types of graph convolutional neural networks.

Graph Classification

Pruning Convolutional Neural Networks with Self-Supervision

no code implementations10 Jan 2020 Mathilde Caron, Ari Morcos, Piotr Bojanowski, Julien Mairal, Armand Joulin

In this work, we investigate the use of standard pruning methods, developed primarily for supervised learning, for networks trained without labels (i.e., on self-supervised tasks).

Fully Trainable and Interpretable Non-Local Sparse Models for Image Restoration

1 code implementation ECCV 2020 Bruno Lecouat, Jean Ponce, Julien Mairal

Non-local self-similarity and sparsity principles have proven to be powerful priors for natural image modeling.

Demosaicking Denoising

Screening Data Points in Empirical Risk Minimization via Ellipsoidal Regions and Safe Loss Functions

1 code implementation5 Dec 2019 Grégoire Mialon, Alexandre d'Aspremont, Julien Mairal

We design simple screening tests to automatically discard data samples in empirical risk minimization without losing optimization guarantees.

regression

Finding Winning Tickets with Limited (or No) Supervision

no code implementations25 Sep 2019 Mathilde Caron, Ari Morcos, Piotr Bojanowski, Julien Mairal, Armand Joulin

The lottery ticket hypothesis argues that neural networks contain sparse subnetworks, which, if appropriately initialized (the winning tickets), are capable of matching the accuracy of the full network when trained in isolation.
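
For background on the hypothesis stated above, the classic procedure for finding such subnetworks is magnitude pruning followed by rewinding the surviving weights to their initial values. The sketch below is that generic recipe (the paper studies how far it can be carried out with limited or no supervision); `init_state`, the sparsity level, and the name filter are assumptions made here for illustration.

```python
import torch

def magnitude_prune_and_rewind(model, init_state, sparsity=0.8):
    """One round of the lottery-ticket recipe: keep the largest weights,
    zero out the rest, and rewind the survivors to their initial values.

    init_state: dict mapping parameter names to their values at initialization.
    """
    all_weights = torch.cat([p.detach().abs().flatten()
                             for name, p in model.named_parameters()
                             if 'weight' in name])
    threshold = torch.quantile(all_weights, sparsity)
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if 'weight' in name:
                masks[name] = (p.abs() > threshold).float()
                p.copy_(init_state[name] * masks[name])    # rewind + apply mask
    return masks
```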

Recurrent Kernel Networks

1 code implementation NeurIPS 2019 Dexiong Chen, Laurent Jacob, Julien Mairal

Substring kernels are classical tools for representing biological sequences or text.
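
One of the simplest substring kernels, shown here for background, is the k-mer spectrum kernel: the inner product of substring-count vectors of two sequences. The paper considers richer kernels (with gaps) and their relationship to recurrent networks; the sketch below is only the basic object.

```python
from collections import Counter

def spectrum_kernel(s1, s2, k=3):
    """k-mer spectrum kernel: inner product of substring-count vectors."""
    c1 = Counter(s1[i:i + k] for i in range(len(s1) - k + 1))
    c2 = Counter(s2[i:i + k] for i in range(len(s2) - k + 1))
    return sum(c1[kmer] * c2[kmer] for kmer in c1.keys() & c2.keys())

# Example: spectrum_kernel("GATTACA", "ATTAC", k=3) counts shared 3-mers.
```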

A Generic Acceleration Framework for Stochastic Composite Optimization

1 code implementation NeurIPS 2019 Andrei Kulunchakov, Julien Mairal

In this paper, we introduce various mechanisms to obtain accelerated first-order stochastic optimization algorithms when the objective function is convex or strongly convex.

Stochastic Optimization

On the Inductive Bias of Neural Tangent Kernels

1 code implementation NeurIPS 2019 Alberto Bietti, Julien Mairal

State-of-the-art neural networks are heavily over-parameterized, making the optimization algorithm a crucial ingredient for learning predictive models with good generalization properties.

Inductive Bias
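
For readers unfamiliar with the object named in the title above: the empirical (finite-width) neural tangent kernel between two inputs is the inner product of the parameter gradients of the network output, as in the sketch below. The paper analyzes the infinite-width limit; this finite-width analogue is only for intuition.

```python
import torch

def empirical_ntk(model, x1, x2):
    """Empirical neural tangent kernel entry between two inputs
    (output summed to a scalar if the network is vector-valued)."""
    def flat_grad(x):
        out = model(x).sum()
        grads = torch.autograd.grad(out, list(model.parameters()))
        return torch.cat([g.reshape(-1) for g in grads])
    return torch.dot(flat_grad(x1), flat_grad(x2))
```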

Estimate Sequences for Variance-Reduced Stochastic Composite Optimization

no code implementations7 May 2019 Andrei Kulunchakov, Julien Mairal

In this paper, we propose a unified view of gradient-based algorithms for stochastic convex composite optimization by extending the concept of estimate sequence introduced by Nesterov.

Unsupervised Pre-Training of Image Features on Non-Curated Data

2 code implementations ICCV 2019 Mathilde Caron, Piotr Bojanowski, Julien Mairal, Armand Joulin

Our goal is to bridge the performance gap between unsupervised methods trained on curated data, which are costly to obtain, and massive raw datasets that are easily available.

Clustering Self-Supervised Image Classification +1

Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise

no code implementations25 Jan 2019 Andrei Kulunchakov, Julien Mairal

In this paper, we propose a unified view of gradient-based algorithms for stochastic convex composite optimization by extending the concept of estimate sequence introduced by Nesterov.

Stochastic Optimization

A Kernel Perspective for Regularizing Deep Neural Networks

1 code implementation30 Sep 2018 Alberto Bietti, Grégoire Mialon, Dexiong Chen, Julien Mairal

We propose a new point of view for regularizing deep neural networks by using the norm of a reproducing kernel Hilbert space (RKHS).

On Regularization and Robustness of Deep Neural Networks

no code implementations27 Sep 2018 Alberto Bietti*, Grégoire Mialon*, Julien Mairal

In this work, we study the connection between regularization and robustness of deep neural networks by viewing them as elements of a reproducing kernel Hilbert space (RKHS) of functions and by regularizing them using the RKHS norm.

Extracting representations of cognition across neuroimaging studies improves brain decoding

1 code implementation17 Sep 2018 Arthur Mensch, Julien Mairal, Bertrand Thirion, Gaël Varoquaux

Analyzing data across studies could bring more statistical power; yet the current brain-imaging analytic framework cannot be used at scale as it requires casting all cognitive tasks in a unified theoretical framework.

Brain Decoding

On the Importance of Visual Context for Data Augmentation in Scene Understanding

no code implementations6 Sep 2018 Nikita Dvornik, Julien Mairal, Cordelia Schmid

In this work, we consider object detection as well as semantic and instance segmentation, and we augment the training images by blending objects into existing scenes, using instance segmentation annotations.

Data Augmentation Instance Segmentation +7
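
For context on the entry above, the blending primitive itself is plain copy-paste of a segmented object into a scene, as in the NumPy sketch below; the paper's point is that the surrounding visual context must be modeled when deciding where (and what) to paste for the augmentation to help.

```python
import numpy as np

def paste_object(scene, obj_rgb, obj_mask, top, left):
    """Blend a segmented object into a scene at a given location (plain copy-paste).

    scene: (H, W, 3) image, obj_rgb: (h, w, 3) object crop,
    obj_mask: (h, w) binary mask from an instance-segmentation annotation.
    """
    out = scene.astype(np.float32).copy()
    h, w = obj_mask.shape
    mask = obj_mask.astype(np.float32)[..., None]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = mask * obj_rgb + (1.0 - mask) * region
    return out.astype(scene.dtype)
```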

Modeling Visual Context is Key to Augmenting Object Detection Datasets

2 code implementations ECCV 2018 Nikita Dvornik, Julien Mairal, Cordelia Schmid

For this approach to be successful, we show that appropriately modeling the visual context surrounding objects is crucial to placing them in the right environment.

Data Augmentation object-detection +1

Unsupervised Learning of Artistic Styles with Archetypal Style Analysis

no code implementations NeurIPS 2018 Daan Wynen, Cordelia Schmid, Julien Mairal

In this paper, we introduce an unsupervised learning approach to automatically discover, summarize, and manipulate artistic styles from large collections of paintings.

Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice

1 code implementation15 Dec 2017 Hongzhou Lin, Julien Mairal, Zaid Harchaoui

One of the keys to achieving acceleration in theory and in practice is to solve these sub-problems with appropriate accuracy, by using the right stopping criterion and the right warm-start strategy.
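
For context, the outer loop of Catalyst-style acceleration repeatedly (and inexactly) minimizes the objective plus a proximal term centered at an extrapolated point. The skeleton below is a simplified sketch: the fixed momentum coefficient and the user-supplied inner solver are illustrative assumptions, whereas the paper's analysis prescribes the momentum schedule together with the inner stopping criterion.

```python
def catalyst(inexact_prox_solver, x0, kappa, n_outer=100, beta=0.9):
    """Skeleton of Catalyst acceleration.

    inexact_prox_solver(center, kappa, warm_start) should approximately solve
        min_x f(x) + (kappa / 2) * ||x - center||^2,
    warm-started at `warm_start` and stopped once accurate enough.
    """
    x_prev = x0
    y = x0
    for _ in range(n_outer):
        x = inexact_prox_solver(y, kappa, warm_start=x_prev)
        y = x + beta * (x - x_prev)      # extrapolation (momentum) step
        x_prev = x
    return x_prev
```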

Invariance and Stability of Deep Convolutional Representations

no code implementations NeurIPS 2017 Alberto Bietti, Julien Mairal

In this paper, we study deep signal representations that are near-invariant to groups of transformations and stable to the action of diffeomorphisms without losing signal information.

Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations

1 code implementation9 Jun 2017 Alberto Bietti, Julien Mairal

The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals.

Generalization Bounds

Catalyst Acceleration for Gradient-Based Non-Convex Optimization

no code implementations31 Mar 2017 Courtney Paquette, Hongzhou Lin, Dmitriy Drusvyatskiy, Julien Mairal, Zaid Harchaoui

We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorithms originally designed for minimizing convex functions.

Stochastic Subsampling for Factorizing Huge Matrices

1 code implementation19 Jan 2017 Arthur Mensch, Julien Mairal, Bertrand Thirion, Gael Varoquaux

We present a matrix-factorization algorithm that scales to input matrices with both huge number of rows and columns.

Dictionary Learning

Subsampled online matrix factorization with convergence guarantees

1 code implementation30 Nov 2016 Arthur Mensch, Julien Mairal, Gaël Varoquaux, Bertrand Thirion

We present a matrix factorization algorithm that scales to input matrices that are large in both dimensions (i.e., that contain more than 1TB of data).

An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration

1 code implementation4 Oct 2016 Hongzhou Lin, Julien Mairal, Zaid Harchaoui

We propose an inexact variable-metric proximal point algorithm to accelerate gradient-based optimization algorithms.

Dictionary Learning for Massive Matrix Factorization

1 code implementation3 May 2016 Arthur Mensch, Julien Mairal, Bertrand Thirion, Gaël Varoquaux

Sparse matrix factorization is a popular tool to obtain interpretable data decompositions, which are also effective to perform data completion or denoising.

Collaborative Filtering Dictionary Learning +2
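
For background, the standard sparse matrix factorization / dictionary learning objective that this line of work scales up can be written as below (notation generic):

```latex
\min_{D \in \mathcal{C},\, A} \;\; \frac{1}{2}\,\|X - D A\|_F^2 \;+\; \lambda \,\|A\|_1,
\qquad
\mathcal{C} = \{\, D : \|d_j\|_2 \le 1 \ \text{for each column } d_j \,\}.
```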

Convolutional Patch Representations for Image Retrieval: an Unsupervised Approach

no code implementations1 Mar 2016 Mattis Paulin, Julien Mairal, Matthijs Douze, Zaid Harchaoui, Florent Perronnin, Cordelia Schmid

Convolutional neural networks (CNNs) have recently received a lot of attention due to their ability to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision.

Image Classification Image Retrieval +1

DOLPHIn - Dictionary Learning for Phase Retrieval

no code implementations6 Feb 2016 Andreas M. Tillmann, Yonina C. Eldar, Julien Mairal

We propose a new algorithm to learn a dictionary for reconstructing and sparsely encoding signals from measurements without phase.

Dictionary Learning Retrieval

Sparse Modeling for Image and Vision Processing

no code implementations12 Nov 2014 Julien Mairal, Francis Bach, Jean Ponce

In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications.

Model Selection

Convolutional Kernel Networks

no code implementations NeurIPS 2014 Julien Mairal, Piotr Koniusz, Zaid Harchaoui, Cordelia Schmid

An important goal in visual recognition is to devise image representations that are invariant to particular transformations.

Image Classification

Fast and Robust Archetypal Analysis for Representation Learning

1 code implementation CVPR 2014 Yuansi Chen, Julien Mairal, Zaid Harchaoui

We revisit a pioneer unsupervised learning technique called archetypal analysis, which is related to successful data analysis methods such as sparse coding and non-negative matrix factorization.

General Classification Representation Learning
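
For background on the entry above, archetypal analysis factorizes the data with two simplex-constrained factors: archetypes are convex combinations of data points, and each data point is approximated by a convex combination of archetypes. The classical formulation, up to notation, is:

```latex
\min_{A,\, B} \;\; \|X - X B A\|_F^2
\qquad \text{s.t.} \quad
\text{each column of } A \text{ and of } B \text{ lies on the probability simplex.}
```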

On learning to localize objects with minimal supervision

no code implementations5 Mar 2014 Hyun Oh Song, Ross Girshick, Stefanie Jegelka, Julien Mairal, Zaid Harchaoui, Trevor Darrell

Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain.

Weakly Supervised Object Detection

Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning

no code implementations18 Feb 2014 Julien Mairal

We present convergence guarantees for non-convex and convex optimization when the upper bounds approximate the objective up to a smooth error; we call such upper bounds "first-order surrogate functions".

BIG-bench Machine Learning

Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization

no code implementations NeurIPS 2013 Julien Mairal

Majorization-minimization algorithms consist of iteratively minimizing a majorizing surrogate of an objective function.
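
To make the majorization-minimization template above concrete (generic notation):

```latex
x_{k+1} \in \operatorname*{arg\,min}_{x} \; g_k(x),
\qquad \text{where} \quad g_k \ge f \ \text{and} \ g_k(x_k) = f(x_k).
```

For instance, when the gradient of f is L-Lipschitz, the quadratic surrogate f(x_k) + <grad f(x_k), x - x_k> + (L/2)||x - x_k||^2 is a valid majorizer, and minimizing it recovers gradient descent with step size 1/L.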

Optimization with First-Order Surrogate Functions

no code implementations14 May 2013 Julien Mairal

In this paper, we study optimization methods consisting of iteratively minimizing surrogates of an objective function.

BIG-bench Machine Learning

Supervised Feature Selection in Graphs with Path Coding Penalties and Network Flows

no code implementations20 Apr 2012 Julien Mairal, Bin Yu

We consider supervised learning problems where the features are embedded in a graph, such as gene expressions in a gene network.

feature selection

Network Flow Algorithms for Structured Sparsity

no code implementations NeurIPS 2010 Julien Mairal, Rodolphe Jenatton, Francis R. Bach, Guillaume R. Obozinski

Our algorithm scales up to millions of groups and variables, and opens up a whole new range of applications for structured sparse models.

Task-Driven Dictionary Learning

no code implementations27 Sep 2010 Julien Mairal, Francis Bach, Jean Ponce

Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience and signal processing.

Classification Dictionary Learning +2

Supervised Dictionary Learning

no code implementations NeurIPS 2008 Julien Mairal, Jean Ponce, Guillermo Sapiro, Andrew Zisserman, Francis R. Bach

It is now well established that sparse signal models are well suited to restoration tasks and can effectively be learned from audio, image, and video data.

Dictionary Learning General Classification +1
