no code implementations • 21 Dec 2023 • Harsha Vardhan Tetali, Joel B. Harley, Benjamin D. Haeffele
With the recent success of representation learning methods, which include deep learning as a special case, there has been considerable interest in developing techniques that incorporate known physical constraints into the learned representation.
1 code implementation • 22 Nov 2023 • Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Hao Bai, Yuexiang Zhai, Benjamin D. Haeffele, Yi Ma
This leads to a family of white-box transformer-like deep network architectures, named CRATE, which are mathematically fully interpretable.
1 code implementation • NeurIPS 2023 • Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Benjamin D. Haeffele, Yi Ma
Particularly, we show that the standard transformer block can be derived from alternating optimization on complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step to compress the token sets by minimizing their lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the representation of the tokens.
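For concreteness, here is a minimal numpy sketch of that alternation under simplifying assumptions (single head, square learned matrices, illustrative step sizes and names); the actual CRATE operators differ in detail:

```python
import numpy as np

def softmax(A, axis=0):
    A = A - A.max(axis=axis, keepdims=True)
    E = np.exp(A)
    return E / E.sum(axis=axis, keepdims=True)

def compress_step(Z, U, eta=0.1):
    # Attention-like step: nudge each token toward a similarity-weighted
    # combination of the others, in the spirit of a gradient step that
    # reduces the lossy coding rate of the token set.
    V = U @ Z                        # project tokens (d x n)
    A = softmax(V.T @ V, axis=0)     # attention weights from similarities
    return Z + eta * (U.T @ (V @ A))

def sparsify_step(Z, D, eta=0.1, lam=0.05):
    # MLP-like step: one ISTA iteration that sparsifies the token codes
    # with respect to a dictionary D.
    grad = D.T @ (D @ Z - Z)         # gradient of 0.5*||Z - D W||_F^2 at W = Z
    Zs = Z - eta * grad
    return np.sign(Zs) * np.maximum(np.abs(Zs) - eta * lam, 0.0)  # soft-threshold

# One "white-box" layer: compression followed by sparsification.
rng = np.random.default_rng(0)
Z = rng.standard_normal((16, 8))     # 8 tokens in 16 dimensions
U = rng.standard_normal((16, 16))
D = rng.standard_normal((16, 16))
Z = sparsify_step(compress_step(Z, U), D)
```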
1 code implementation • 6 Feb 2023 • Aditya Chattopadhyay, Kwan Ho Ryan Chan, Benjamin D. Haeffele, Donald Geman, René Vidal
We then demonstrate that the Information Pursuit (IP) strategy is the optimal solution to this problem.
no code implementations • ICCV 2023 • Tianjiao Ding, Shengbang Tong, Kwan Ho Ryan Chan, Xili Dai, Yi Ma, Benjamin D. Haeffele
We consider the problem of simultaneously clustering and learning a linear representation of data lying close to a union of low-dimensional manifolds, a fundamental task in machine learning and computer vision.
no code implementations • 1 Oct 2022 • Juan Cervino, Luiz F. O. Chamon, Benjamin D. Haeffele, Rene Vidal, Alejandro Ribeiro
To do so, the paper shows that under typical conditions the problem of learning a Lipschitz continuous function on a manifold is equivalent to a dynamically weighted manifold regularization problem.
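One way to write the constrained problem and its penalized counterpart (notation ours, not necessarily the paper's):

$$ \min_{f} \; \mathbb{E}_{(x,y)}\big[\ell(f(x),y)\big] \quad \text{s.t.} \quad \|\nabla_{\mathcal{M}} f(x)\| \le L \;\; \forall x \in \mathcal{M}, $$

with the stated equivalence replacing the hard Lipschitz constraint by a manifold-regularization penalty of the form $\int_{\mathcal{M}} w(x)\,\|\nabla_{\mathcal{M}} f(x)\|^2\, d\mu(x)$, whose weights $w$ are updated dynamically during training.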
1 code implementation • 3 Jul 2022 • Aditya Chattopadhyay, Stewart Slocum, Benjamin D. Haeffele, Rene Vidal, Donald Geman
There is growing concern about the typically opaque decision-making of high-performance machine learning algorithms.
no code implementations • CVPR 2022 • Christina Baek, Ziyang Wu, Kwan Ho Ryan Chan, Tianjiao Ding, Yi Ma, Benjamin D. Haeffele
The principle of Maximal Coding Rate Reduction (MCR$^2$) has recently been proposed as a training objective for learning discriminative low-dimensional structures intrinsic to high-dimensional data, allowing for more robust training than standard approaches such as cross-entropy minimization.
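For reference, the MCR$^2$ objective trades off the coding rate of all features $Z \in \mathbb{R}^{d \times m}$ against the coding rates of the $k$ classes encoded by diagonal membership matrices $\Pi_j$ (standard form from the MCR$^2$ literature; notation may differ slightly from the paper):

$$ \Delta R(Z,\Pi,\epsilon) \;=\; \frac{1}{2}\log\det\!\Big(I + \frac{d}{m\epsilon^2}\,ZZ^{\top}\Big) \;-\; \sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2m}\log\det\!\Big(I + \frac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^2}\,Z\Pi_j Z^{\top}\Big), $$

where $\epsilon$ is the allowed distortion; maximizing $\Delta R$ expands the features globally while compressing each class.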
no code implementations • 22 Jan 2022 • Paris V. Giampouras, Benjamin D. Haeffele, René Vidal
Robust subspace recovery (RSR) is a fundamental problem in robust representation learning.
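A common way to pose RSR is via least absolute deviations over subspaces (a standard formulation; notation ours): find an orthonormal basis $U \in \mathbb{R}^{D \times d}$, $U^{\top}U = I$, minimizing

$$ \sum_{i=1}^{n} \big\|x_i - UU^{\top}x_i\big\|_2, $$

where the unsquared residual norms, unlike PCA's squared ones, reduce the influence of outliers.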
no code implementations • 19 Jul 2021 • Harsha Vardhan Tetali, Joel B. Harley, Benjamin D. Haeffele
With the recent success of representation learning methods, which include deep learning as a special case, there has been considerable interest in developing representation learning techniques that can incorporate known physical constraints into the learned representation.
1 code implementation • 30 Nov 2020 • Derek Lim, René Vidal, Benjamin D. Haeffele
Many state-of-the-art subspace clustering methods follow a two-step process by first constructing an affinity matrix between data points and then applying spectral clustering to this affinity.
Ranked #1 on Image Clustering on UMist
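A minimal sketch of that two-step pipeline, assuming a simple ridge-regression (least-squares) self-expression for step one and scikit-learn for step two; real methods substitute more sophisticated affinity constructions:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def two_step_subspace_clustering(X, n_clusters, lam=1e-2):
    # X: (n_features, n_samples) data matrix.
    n = X.shape[1]
    # Step 1: self-expressive coefficients, min_C ||X - XC||_F^2 + lam ||C||_F^2,
    # which has the closed form C = (X^T X + lam I)^{-1} X^T X.
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)            # forbid self-representation (post hoc here)
    W = np.abs(C) + np.abs(C).T         # symmetrize into an affinity matrix
    # Step 2: spectral clustering on the affinity.
    return SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                              random_state=0).fit_predict(W)
```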
no code implementations • ICLR 2021 • Benjamin D. Haeffele, Chong You, René Vidal
To extend this approach to data supported on a union of non-linear manifolds, numerous studies have proposed learning an embedding of the original data with a neural network, regularized by a self-expressive loss in the embedded space so as to encourage a union-of-linear-subspaces prior on the embedded data.
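In a typical instance of this approach, with $Z_\theta = \phi_\theta(X)$ the network embedding, one minimizes something like (schematic form; the cited methods vary in the penalty and constraints):

$$ \min_{\theta,\,C} \; \|Z_\theta - Z_\theta C\|_F^2 + \lambda\,\|C\|_1 \quad \text{s.t.} \quad \operatorname{diag}(C) = 0, $$

so that each embedded point is reconstructed from a few others, which is the signature of a union of linear subspaces.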
no code implementations • CVPR 2020 • Ambar Pal, Connor Lane, René Vidal, Benjamin D. Haeffele
We also show that the global minimizer for DropBlock can be computed in closed form, and that DropConnect is equivalent to Dropout.
no code implementations • 15 Jul 2018 • Evan Schwab, Benjamin D. Haeffele, René Vidal, Nicolas Charon
In the classical setting, signals are represented as vectors and the dictionary learning problem is posed as a matrix factorization problem where the data matrix is approximately factorized into a dictionary matrix and a sparse matrix of coefficients.
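In symbols, the classical problem is (standard formulation): given data $Y$, solve

$$ \min_{D,\,X} \; \tfrac{1}{2}\|Y - DX\|_F^2 + \lambda\|X\|_1 \quad \text{s.t.} \quad \|d_k\|_2 \le 1 \;\; \forall k, $$

where the columns $d_k$ of the dictionary $D$ are the atoms and $X$ holds the sparse coefficients; the norm constraint removes the scaling ambiguity between $D$ and $X$.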
no code implementations • CVPR 2018 • Florence Yellin, Benjamin D. Haeffele, Sophie Roth, René Vidal
This paper proposes a new approach to detecting, counting and classifying white blood cell populations in holographic images, which capitalizes on the fact that the variability in a mixture of blood cells is constrained by physiology.
no code implementations • 10 Oct 2017 • Jacopo Cavazza, Connor Lane, Benjamin D. Haeffele, Vittorio Murino, René Vidal
While the resulting regularizer is closely related to a variational form of the nuclear norm, suggesting that dropout may limit the size of the factorization, we show that it is possible to trivially lower the objective value by doubling the size of the factorization.
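To see the doubling argument, recall the variational form of the nuclear norm, $\|X\|_* = \min_{UV^{\top}=X} \tfrac{1}{2}(\|U\|_F^2 + \|V\|_F^2)$, whereas the dropout-induced regularizer has (roughly) the product form $\sum_i \|u_i\|_2^2\|v_i\|_2^2$ over rank-one factors. Replacing each term $u_i v_i^{\top}$ with the two copies $(u_i/2)v_i^{\top} + (u_i/2)v_i^{\top}$ leaves $UV^{\top} = X$ unchanged but halves that term's penalty, since $2\,\|u_i/2\|_2^2\|v_i\|_2^2 = \tfrac{1}{2}\|u_i\|_2^2\|v_i\|_2^2$; hence a fixed dropout rate cannot control the size of the factorization.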
no code implementations • 25 Aug 2017 • Benjamin D. Haeffele, Rene Vidal
Recently, convex formulations of low-rank matrix factorization problems have received considerable attention in machine learning.
no code implementations • CVPR 2017 • Benjamin D. Haeffele, Rene Vidal
The past few years have seen a dramatic increase in the performance of recognition systems thanks to the introduction of deep networks for representation learning.
no code implementations • 24 Jun 2015 • Benjamin D. Haeffele, Rene Vidal
Techniques involving factorization are found in a wide range of applications and have enjoyed significant empirical success in many fields.