Search Results for author: Benjamin D. Haeffele

Found 19 papers, 5 papers with code

Wave Physics-informed Matrix Factorizations

no code implementations • 21 Dec 2023 • Harsha Vardhan Tetali, Joel B. Harley, Benjamin D. Haeffele

With the recent success of representation learning methods, which include deep learning as a special case, there has been considerable interest in developing techniques that incorporate known physical constraints into the learned representation.
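
As a rough illustration of the general idea (not the paper's actual formulation or algorithm), the sketch below factorizes a data matrix X ≈ D @ W while softly penalizing dictionary atoms that violate a discretized wave-equation-style condition. The penalty form, the fixed wavenumbers, and the plain gradient solver are assumptions made for the example.

```python
# Illustrative sketch only: physics-regularized matrix factorization
# X ≈ D @ W, where each dictionary atom d_j is softly encouraged to
# satisfy a discretized Helmholtz-type condition L d_j ≈ -k_j^2 d_j.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 128, 200, 4                    # signal length, samples, rank
X = rng.standard_normal((n, m))          # placeholder data matrix

L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # 1-D Laplacian
k = np.linspace(0.1, 0.4, r)             # assumed wavenumbers, one per atom

D = rng.standard_normal((n, r))
W = rng.standard_normal((r, m))
lam, gamma, step = 0.1, 1.0, 1e-4        # sparsity weight, physics weight, step size

for _ in range(1000):
    R = D @ W - X                        # reconstruction residual
    P = L @ D + D * (k ** 2)             # physics residual, one column per atom
    grad_D = R @ W.T + gamma * (L.T @ P + P * (k ** 2))
    grad_W = D.T @ R + lam * np.sign(W)  # l1-style sparsity on the coefficients
    D -= step * grad_D
    W -= step * grad_W
```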

Representation Learning

White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?

1 code implementation • 22 Nov 2023 • Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Hao Bai, Yuexiang Zhai, Benjamin D. Haeffele, Yi Ma

This leads to a family of white-box transformer-like deep network architectures, named CRATE, which are mathematically fully interpretable.

Data Compression • Denoising +1

White-Box Transformers via Sparse Rate Reduction

1 code implementation • NeurIPS 2023 • Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Benjamin D. Haeffele, Yi Ma

Particularly, we show that the standard transformer block can be derived from alternating optimization on complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step to compress the token sets by minimizing their lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the representation of the tokens.
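
A toy PyTorch sketch of the two-step structure described above: an attention-like update that pulls tokens toward a compressed configuration, followed by one ISTA-style proximal step that sparsifies codes against a dictionary. The dimensions, the single-head similarity, and the specific objectives are assumptions for illustration; this is not the actual CRATE layer.

```python
# Toy "compress, then sparsify" block (illustration only, not CRATE itself).
import torch
import torch.nn.functional as F

def compress_step(Z, U, step=0.5):
    """Attention-like update: pull each token toward a similarity-weighted
    average of the tokens, with similarities computed in a learned subspace U."""
    Q = Z @ U                                             # (n, d) -> (n, p)
    A = F.softmax(Q @ Q.T / Q.shape[-1] ** 0.5, dim=-1)   # token-token affinities
    return Z + step * (A @ Z - Z)

def soft_threshold(X, t):
    return torch.sign(X) * torch.clamp(X.abs() - t, min=0.0)

def sparsify_step(Z, D, C, step=0.1, lam=0.05):
    """One ISTA step on 0.5*||Z - C @ D||_F^2 + lam*||C||_1 over the codes C."""
    grad = (C @ D - Z) @ D.T
    return soft_threshold(C - step * grad, step * lam)

n, d, p = 16, 64, 8
Z = torch.randn(n, d)                     # token representations
U = torch.randn(d, p) / d ** 0.5          # assumed projection for the attention step
D = torch.randn(d, d) / d ** 0.5          # assumed (square) dictionary
Z = compress_step(Z, U)
C = sparsify_step(Z, D, C=torch.zeros(n, d))   # sparse codes after one step
```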

Representation Learning

Unsupervised Manifold Linearizing and Clustering

no code implementations • ICCV 2023 • Tianjiao Ding, Shengbang Tong, Kwan Ho Ryan Chan, Xili Dai, Yi Ma, Benjamin D. Haeffele

We consider the problem of simultaneously clustering and learning a linear representation of data lying close to a union of low-dimensional manifolds, a fundamental task in machine learning and computer vision.

Clustering • Deep Clustering

Learning Globally Smooth Functions on Manifolds

no code implementations • 1 Oct 2022 • Juan Cervino, Luiz F. O. Chamon, Benjamin D. Haeffele, Rene Vidal, Alejandro Ribeiro

To do so, the paper shows that, under typical conditions, the problem of learning a Lipschitz continuous function on a manifold is equivalent to a dynamically weighted manifold regularization problem.
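
Schematically, with notation assumed here rather than taken from the paper, the connection is between a Lipschitz-constrained problem on a manifold and its Lagrangian relaxation, in which a per-point dual weight multiplies the gradient penalty:

```latex
% Hedged sketch (notation and constraint form are assumptions).
% Lipschitz-constrained learning on a manifold M:
\min_{f}\; \mathbb{E}_{(x,y)}\!\left[\ell\big(f(x),y\big)\right]
\quad \text{s.t.} \quad \|\nabla_{\mathcal{M}} f(x)\|^{2} \le L^{2}
\;\;\text{for all } x \in \mathcal{M}.
% Its Lagrangian relaxation is a "dynamically weighted" manifold-regularized
% problem, with a pointwise weight \lambda(x) \ge 0 on the gradient penalty:
\min_{f}\,\max_{\lambda(\cdot)\ge 0}\;
\mathbb{E}\!\left[\ell\big(f(x),y\big)\right]
+ \mathbb{E}\!\left[\lambda(x)\big(\|\nabla_{\mathcal{M}} f(x)\|^{2}-L^{2}\big)\right].
```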

Interpretable by Design: Learning Predictors by Composing Interpretable Queries

1 code implementation • 3 Jul 2022 • Aditya Chattopadhyay, Stewart Slocum, Benjamin D. Haeffele, Rene Vidal, Donald Geman

There is growing concern about the typically opaque decision-making of high-performance machine learning algorithms.

Decision Making

Efficient Maximal Coding Rate Reduction by Variational Forms

no code implementations • CVPR 2022 • Christina Baek, Ziyang Wu, Kwan Ho Ryan Chan, Tianjiao Ding, Yi Ma, Benjamin D. Haeffele

The principle of Maximal Coding Rate Reduction (MCR$^2$) has recently been proposed as a training objective for learning discriminative low-dimensional structures intrinsic to high-dimensional data, allowing for more robust training than standard approaches such as cross-entropy minimization.
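
For reference, and up to notational conventions, the MCR$^2$ objective for representations $Z \in \mathbb{R}^{d \times n}$ with class membership matrices $\Pi = \{\Pi_j\}_{j=1}^{k}$ is roughly

```latex
\Delta R(Z,\Pi,\epsilon) \;=\;
\underbrace{\tfrac{1}{2}\log\det\!\Big(I + \tfrac{d}{n\epsilon^{2}}\, Z Z^{\top}\Big)}_{\text{expand all features}}
\;-\;
\underbrace{\sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2n}\,
\log\det\!\Big(I + \tfrac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^{2}}\, Z \Pi_j Z^{\top}\Big)}_{\text{compress each class}} .
```

Maximizing $\Delta R$ expands the volume spanned by all features while compressing the features of each class, which is the sense in which it learns discriminative low-dimensional structures.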

Image Classification

Wave-Informed Matrix Factorization with Global Optimality Guarantees

no code implementations • 19 Jul 2021 • Harsha Vardhan Tetali, Joel B. Harley, Benjamin D. Haeffele

With the recent success of representation learning methods, which include deep learning as a special case, there has been considerable interest in developing representation learning techniques that can incorporate known physical constraints into the learned representation.

Dictionary Learning • Representation Learning

Doubly Stochastic Subspace Clustering

1 code implementation • 30 Nov 2020 • Derek Lim, René Vidal, Benjamin D. Haeffele

Many state-of-the-art subspace clustering methods follow a two-step process by first constructing an affinity matrix between data points and then applying spectral clustering to this affinity.
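
The generic two-step pipeline referred to above can be sketched as follows; the affinity construction here is a simple placeholder, not the paper's doubly stochastic formulation.

```python
# Generic two-step subspace-clustering pipeline: build an affinity matrix,
# then apply spectral clustering to it (placeholder affinity, for illustration).
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))          # 200 points in 30 dimensions

# Step 1: build a symmetric, non-negative affinity between data points.
# Placeholder: cosine-similarity magnitudes with the diagonal removed.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
A = np.abs(Xn @ Xn.T)
np.fill_diagonal(A, 0.0)

# Step 2: spectral clustering on the precomputed affinity.
labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(A)
```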

Clustering • Image Clustering

A Critique of Self-Expressive Deep Subspace Clustering

no code implementations • ICLR 2021 • Benjamin D. Haeffele, Chong You, René Vidal

To extend this approach to data supported on a union of non-linear manifolds, numerous studies have proposed learning an embedding of the original data with a neural network that is regularized by a self-expressive loss in the embedded space, so as to encourage a union-of-linear-subspaces prior on the embedded data.
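
Schematically, and with assumed notation, the self-expressive regularization referred to above takes the form

```latex
\min_{\theta,\,C}\;\;
\big\| Z_{\theta}(X) - Z_{\theta}(X)\, C \big\|_{F}^{2}
\;+\; \lambda\, R(C)
\quad \text{s.t.} \quad \operatorname{diag}(C) = 0,
```

where $Z_{\theta}(X)$ denotes the network embedding of the data and $R$ is a sparsity- or low-rank-promoting penalty such as $\|C\|_{1}$ or $\|C\|_{F}^{2}$.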

Clustering

On the Regularization Properties of Structured Dropout

no code implementations • CVPR 2020 • Ambar Pal, Connor Lane, René Vidal, Benjamin D. Haeffele

We also show that the global minimizer for DropBlock can be computed in closed form, and that DropConnect is equivalent to Dropout.

Global Optimality in Separable Dictionary Learning with Applications to the Analysis of Diffusion MRI

no code implementations • 15 Jul 2018 • Evan Schwab, Benjamin D. Haeffele, René Vidal, Nicolas Charon

In the classical setting, signals are represented as vectors and the dictionary learning problem is posed as a matrix factorization problem where the data matrix is approximately factorized into a dictionary matrix and a sparse matrix of coefficients.
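
In its standard (non-separable) form, and with assumed notation, this classical problem reads

```latex
\min_{D,\,C}\;\; \tfrac{1}{2}\,\| X - D C \|_{F}^{2} \;+\; \lambda \|C\|_{1}
\quad \text{s.t.} \quad \|d_{j}\|_{2} \le 1 \;\;\text{for each column } d_{j} \text{ of } D,
```

where $X$ is the data matrix, $D$ the dictionary, and $C$ the sparse coefficient matrix; the separable setting of the paper, roughly, instead structures $D$ as a Kronecker-type product of smaller dictionaries adapted to multidimensional signals.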

Denoising • Dictionary Learning

Multi-Cell Detection and Classification Using a Generative Convolutional Model

no code implementations • CVPR 2018 • Florence Yellin, Benjamin D. Haeffele, Sophie Roth, René Vidal

This paper proposes a new approach to detecting, counting and classifying white blood cell populations in holographic images, which capitalizes on the fact that the variability in a mixture of blood cells is constrained by physiology.

Cell Detection • Classification +2

An Analysis of Dropout for Matrix Factorization

no code implementations • 10 Oct 2017 • Jacopo Cavazza, Connor Lane, Benjamin D. Haeffele, Vittorio Murino, René Vidal

While the resulting regularizer is closely related to a variational form of the nuclear norm, suggesting that dropout may limit the size of the factorization, we show that it is possible to trivially lower the objective value by doubling the size of the factorization.
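
For context, the variational form of the nuclear norm referred to above is the standard characterization (stated here as background, not quoted from the paper):

```latex
\|X\|_{*} \;=\; \min_{U,V:\; U V^{\top} = X} \; \tfrac{1}{2}\big( \|U\|_{F}^{2} + \|V\|_{F}^{2} \big)
\;=\; \min_{U,V:\; U V^{\top} = X} \; \sum_{i} \|u_{i}\|_{2}\, \|v_{i}\|_{2}.
```

A product-of-norms penalty of the form $\sum_i \|u_i\|^2 \|v_i\|^2$ can indeed be lowered by enlarging the factorization: splitting a rank-one term $u_i v_i^{\top}$ into two copies $u_i (v_i/2)^{\top}$ replaces its contribution $\|u_i\|^2\|v_i\|^2$ with $2\cdot\|u_i\|^2\|v_i/2\|^2 = \tfrac{1}{2}\|u_i\|^2\|v_i\|^2$ without changing the product $UV^{\top}$.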

Structured Low-Rank Matrix Factorization: Global Optimality, Algorithms, and Applications

no code implementations • 25 Aug 2017 • Benjamin D. Haeffele, Rene Vidal

Recently, convex formulations of low-rank matrix factorization problems have received considerable attention in machine learning.

Video Segmentation • Video Semantic Segmentation

Global Optimality in Neural Network Training

no code implementations • CVPR 2017 • Benjamin D. Haeffele, Rene Vidal

The past few years have seen a dramatic increase in the performance of recognition systems thanks to the introduction of deep networks for representation learning.

Representation Learning

Global Optimality in Tensor Factorization, Deep Learning, and Beyond

no code implementations • 24 Jun 2015 • Benjamin D. Haeffele, Rene Vidal

Techniques involving factorization are found in a wide range of applications and have enjoyed significant empirical success in many fields.
