Search Results for author: Benjamin David Haeffele

Found 5 papers, 1 paper with code

Variational Information Pursuit with Large Language and Multimodal Models for Interpretable Predictions

no code implementations • 24 Aug 2023 • Kwan Ho Ryan Chan, Aditya Chattopadhyay, Benjamin David Haeffele, Rene Vidal

Variational Information Pursuit (V-IP) is a framework for making predictions that are interpretable by design: it sequentially selects a short chain of task-relevant, user-defined, interpretable queries about the data that are most informative for the task.

Semantic Similarity • Semantic Textual Similarity
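The query-selection idea behind V-IP can be illustrated with a toy greedy criterion (my sketch, not the paper's method, which trains a variational querier network): among the unasked binary queries, pick the one whose answer most reduces the entropy of the label distribution.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def next_query(answers, labels, asked):
    """Greedily pick the most informative unasked binary query.

    answers: (n_samples, n_queries) 0/1 matrix of query answers
    labels:  (n_samples,) integer class labels
    asked:   set of query indices already asked
    """
    base = entropy(np.bincount(labels) / len(labels))
    best_q, best_gain = None, -1.0
    for q in range(answers.shape[1]):
        if q in asked:
            continue
        gain = base
        for a in (0, 1):
            mask = answers[:, q] == a
            if mask.any():
                p = np.bincount(labels[mask], minlength=labels.max() + 1) / mask.sum()
                gain -= mask.mean() * entropy(p)  # expected conditional entropy
        if gain > best_gain:
            best_q, best_gain = q, gain
    return best_q
```

On a toy dataset where one query determines the label and the other is noise, the informative query is selected first.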

Image Clustering via the Principle of Rate Reduction in the Age of Pretrained Models

1 code implementation • 8 Jun 2023 • Tianzhe Chu, Shengbang Tong, Tianjiao Ding, Xili Dai, Benjamin David Haeffele, René Vidal, Yi Ma

In this paper, we propose a novel image clustering pipeline that leverages the powerful feature representations of large pre-trained models such as CLIP to cluster images effectively and efficiently at scale.

Clustering • Image Clustering +1
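The feature-then-cluster pipeline can be sketched as follows (a toy stand-in only: random Gaussian blobs play the role of CLIP embeddings, and plain k-means with farthest-point initialization stands in for the paper's rate-reduction-based clustering):

```python
import numpy as np

def kmeans(X, k, iters=20):
    # farthest-point initialization keeps this sketch deterministic
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):  # standard Lloyd iterations
        assign = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return assign

# two well-separated blobs standing in for pretrained-model embeddings
rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(0.0, 0.1, (20, 8)),
                        rng.normal(5.0, 0.1, (20, 8))])
clusters = kmeans(feats, 2)
```

The design point is that, with strong pretrained features, even very simple clustering in feature space separates the groups cleanly.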

Implicit Bias of Projected Subgradient Method Gives Provable Robust Recovery of Subspaces of Unknown Codimension

no code implementations • ICLR 2022 • Paris Giampouras, Benjamin David Haeffele, Rene Vidal

In particular, we show that 1) all of the problem instances will converge to a vector in the null space of the subspace and 2) the ensemble of problem instance solutions will be sufficiently diverse to fully span the null space of the subspace (and thus reveal the true codimension of the subspace) even when the true subspace dimension is unknown.

Representation Learning
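The recovery setup can be illustrated numerically (my sketch, with illustrative data and step sizes rather than the paper's exact algorithm): a projected subgradient method on f(b) = ‖XᵀB‖₁ over the unit sphere, where the columns of X lie on a low-dimensional subspace, drives b into the subspace's null space.

```python
import numpy as np

rng = np.random.default_rng(0)
U = np.linalg.qr(rng.normal(size=(3, 2)))[0]   # orthonormal basis of a 2-D subspace of R^3
X = U @ rng.normal(size=(2, 200))              # 200 points lying exactly on the subspace

b = rng.normal(size=3)
b /= np.linalg.norm(b)
for t in range(500):
    g = X @ np.sign(X.T @ b)                   # subgradient of ||X^T b||_1
    g /= np.linalg.norm(g)                     # normalized step (a choice for this sketch)
    b -= 0.1 * 0.98 ** t * g                   # geometrically decaying step size
    b /= np.linalg.norm(b)                     # project back onto the unit sphere

residual = float(np.linalg.norm(U.T @ b))      # component of b inside the subspace
```

After the iterations, b is nearly orthogonal to the subspace; running several instances from different initializations would, as the abstract describes, span the full null space.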

Quantifying Task Complexity Through Generalized Information Measures

no code implementations • 1 Jan 2021 • Aditya Chattopadhyay, Benjamin David Haeffele, Donald Geman, Rene Vidal

In this paper, we propose to measure the complexity of a learning task by the minimum expected number of questions that need to be answered to solve the task.

Classification • General Classification +1
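For the special case of identifying a class label with adaptive yes/no questions, the minimum expected number of questions equals the expected Huffman code length, which Shannon entropy bounds from below. This is my toy illustration of the "expected number of questions" idea, not the paper's generalized information measures:

```python
import heapq
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_expected_questions(probs):
    """Expected Huffman code length = minimum expected number of
    adaptive yes/no questions needed to pin down the label."""
    heap = [(p, i) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    total, node = 0.0, len(probs)
    while len(heap) > 1:
        a, _ = heapq.heappop(heap)
        b, _ = heapq.heappop(heap)
        total += a + b               # each symbol pays one question per merge above it
        heapq.heappush(heap, (a + b, node))
        node += 1
    return total
```

For the label distribution (1/2, 1/4, 1/4), both the entropy and the minimum expected number of questions are 1.5, so the bound is tight here.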

Implicit Acceleration of Gradient Flow in Overparameterized Linear Models

no code implementations • 1 Jan 2021 • Salma Tarmoun, Guilherme França, Benjamin David Haeffele, Rene Vidal

More precisely, gradient flow preserves the difference of the Gramian matrices of the input and output weights, and we show that the amount of acceleration depends on both the magnitude of that difference (which is fixed at initialization) and the spectrum of the data.
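The conservation law is easy to check numerically. Below is a quick sketch (made-up dimensions, data, and step size, not the paper's experiments): gradient descent with a small step on the two-layer linear model y = W2 W1 x leaves the Gramian difference W1 W1ᵀ − W2ᵀ W2 nearly unchanged while the loss decreases.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, o, n = 4, 6, 3, 50                      # input, hidden, output dims; sample count
X = rng.normal(size=(d, n))
Y = rng.normal(size=(o, n))
W1 = rng.normal(size=(h, d)) * 0.1
W2 = rng.normal(size=(o, h)) * 0.1

def invariant(W1, W2):
    # conserved exactly by gradient flow; approximately by small-step GD
    return W1 @ W1.T - W2.T @ W2

init_inv = invariant(W1, W2)
loss0 = np.linalg.norm(W2 @ W1 @ X - Y)
lr = 1e-3
for _ in range(2000):
    G = (W2 @ W1 @ X - Y) @ X.T / n           # gradient of the squared loss w.r.t. W2 W1
    W1, W2 = W1 - lr * (W2.T @ G), W2 - lr * (G @ W1.T)
loss1 = np.linalg.norm(W2 @ W1 @ X - Y)
drift = float(np.linalg.norm(invariant(W1, W2) - init_inv))
```

The first-order terms in the discrete update cancel exactly, so the drift in the invariant is O(lr²) per step.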
