Search Results for author: Romain Cosentino

Found 12 papers, 3 papers with code

Learnable Group Transform For Time-Series

1 code implementation • ICML 2020 • Romain Cosentino, Behnaam Aazhang

This framework allows us to generalize classical time-frequency transformations such as the Wavelet Transform, and to efficiently learn the representation of signals.

Time Series • Time Series Analysis
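A minimal NumPy sketch of the idea: a filterbank built by warping a single mother filter, where choosing dilations as the warpings recovers a classical wavelet filterbank. All names, sizes, and the Morlet-like mother filter are illustrative assumptions, not the paper's implementation (which learns the warpings).

```python
import numpy as np

def mother_wavelet(t):
    # Morlet-like mother filter: a modulated Gaussian (illustrative choice)
    return np.cos(5.0 * t) * np.exp(-t**2 / 2.0)

def group_transform(signal, warpings, width=64):
    """Filter the signal with one warped copy of the mother wavelet per
    warping function. In the learnable setting the warpings would be
    trainable; here they are fixed dilations for illustration."""
    t = np.linspace(-4, 4, width)
    out = []
    for warp in warpings:
        atom = mother_wavelet(warp(t))
        atom /= np.linalg.norm(atom) + 1e-12   # unit-norm atoms
        out.append(np.convolve(signal, atom, mode="same"))
    return np.stack(out)                        # (num_filters, len(signal))

# Dilations as warpings recover a classical wavelet filterbank
scales = [0.5, 1.0, 2.0]
warpings = [lambda t, s=s: s * t for s in scales]
x = np.sin(2 * np.pi * 0.05 * np.arange(256))
coeffs = group_transform(x, warpings)
print(coeffs.shape)  # (3, 256)
```

Replacing the dilations with a richer family of warpings is what lets the transform go beyond the wavelet case.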

Characterizing Large Language Model Geometry Solves Toxicity Detection and Generation

1 code implementation • 4 Dec 2023 • Randall Balestriero, Romain Cosentino, Sarath Shekkizhar

We obtain in closed form (i) the intrinsic dimension in which the Multi-Head Attention embeddings are constrained to exist and (ii) the partition and per-region affine mappings of the per-layer feedforward networks.

Language Modelling • Large Language Model
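A toy NumPy sketch of one such geometric constraint: a single attention head whose value dimension d_v is much smaller than the model dimension produces outputs confined to a d_v-dimensional subspace, so the output's rank (a crude proxy for intrinsic dimension) is bounded by d_v. Weights and sizes are random and illustrative, not the paper's closed-form analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

n, d_model, d_v = 64, 32, 4          # tokens, model dim, head value dim
X = rng.normal(size=(n, d_model))    # stand-in token embeddings
W_q = rng.normal(size=(d_model, d_v))
W_k = rng.normal(size=(d_model, d_v))
W_v = rng.normal(size=(d_model, d_v))
W_o = rng.normal(size=(d_v, d_model))

# Scaled dot-product attention for one head
scores = (X @ W_q) @ (X @ W_k).T / np.sqrt(d_v)
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)    # row-stochastic attention matrix

out = A @ (X @ W_v) @ W_o            # head output, shape (n, d_model)

# The output factors through a (n, d_v) matrix, so its rank is at most d_v
rank = np.linalg.matrix_rank(out)
print(rank, "<=", d_v, "<<", d_model)
```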

The Geometry of Self-supervised Learning Models and its Impact on Transfer Learning

no code implementations • 18 Sep 2022 • Romain Cosentino, Sarath Shekkizhar, Mahdi Soltanolkotabi, Salman Avestimehr, Antonio Ortega

Self-supervised learning (SSL) has emerged as a desirable paradigm in computer vision due to the inability of supervised models to learn representations that can generalize in domains with limited labels.

Data Augmentation • Self-Supervised Learning • +1

Toward a Geometrical Understanding of Self-supervised Contrastive Learning

no code implementations • 13 May 2022 • Romain Cosentino, Anirvan Sengupta, Salman Avestimehr, Mahdi Soltanolkotabi, Antonio Ortega, Ted Willke, Mariano Tepper

When used for transfer learning, the projector is discarded since empirical results show that its representation generalizes more poorly than the encoder's.

Contrastive Learning • Data Augmentation • +2
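A minimal NumPy sketch of this encoder/projector split, with random weights standing in for a trained model (all sizes are illustrative): during pre-training the projector output z feeds the contrastive loss, while transfer learning fits a probe on the encoder representation h and discards the projector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative SSL model: encoder f followed by projector g.
W_enc = rng.normal(size=(32, 16))    # encoder: input -> representation h
W_proj = rng.normal(size=(16, 8))    # projector: h -> z, fed to the loss

def encoder(x):
    return np.maximum(x @ W_enc, 0.0)    # ReLU representation

def projector(h):
    return h @ W_proj                    # used only during pre-training

x = rng.normal(size=(4, 32))
h = encoder(x)       # kept for transfer learning
z = projector(h)     # discarded after pre-training

# Downstream task: fit a linear probe on h, not on z
W_probe = rng.normal(size=(16, 10))
logits = h @ W_probe
print(h.shape, z.shape, logits.shape)  # (4, 16) (4, 8) (4, 10)
```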

Spatial Transformer K-Means

no code implementations • 16 Feb 2022 • Romain Cosentino, Randall Balestriero, Yanis Bahroun, Anirvan Sengupta, Richard Baraniuk, Behnaam Aazhang

This enables (i) the reduction of intrinsic nuisances in the data, which simplifies the clustering task, improves performance, and produces state-of-the-art results; (ii) clustering in the input space of the data, yielding a fully interpretable clustering algorithm; and (iii) convergence guarantees.

Clustering
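An illustrative sketch of the idea with circular translation as the nuisance: each sample is aligned to its nearest centroid before distances are computed, so clustering happens in the input space and is invariant to shifts. The paper learns richer spatial transforms; this toy uses exhaustive shifts and a hand-picked initialization, all of which are assumptions for the demo.

```python
import numpy as np

def aligned_dist(x, c):
    """Distance from x to centroid c, minimized over circular shifts."""
    shifted = [np.roll(x, s) for s in range(len(x))]
    d = [np.linalg.norm(v - c) for v in shifted]
    k = int(np.argmin(d))
    return d[k], shifted[k]

def transform_kmeans(X, k, n_iter=10, init=None):
    idx = list(range(k)) if init is None else list(init)
    C = X[idx].astype(float).copy()
    for _ in range(n_iter):
        aligned, labels = [], []
        for x in X:
            results = [aligned_dist(x, c) for c in C]
            j = int(np.argmin([d for d, _ in results]))
            labels.append(j)
            aligned.append(results[j][1])      # best-aligned copy of x
        aligned, labels = np.asarray(aligned), np.asarray(labels)
        for j in range(k):
            if np.any(labels == j):            # update with aligned samples
                C[j] = aligned[labels == j].mean(axis=0)
    return labels, C

# Two shift-invariant clusters: single spikes vs. double spikes
base1 = np.zeros(16); base1[0] = 1.0
base2 = np.zeros(16); base2[0] = 1.0; base2[8] = 1.0
X = np.array([np.roll(base1, s) for s in range(8)]
             + [np.roll(base2, s) for s in range(8)])
labels, C = transform_kmeans(X, k=2, init=[0, 8])
print(labels)  # [0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1]
```

Plain K-means would scatter the shifted copies across clusters; aligning first makes each class collapse to a single centroid.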

Sparse Multi-Family Deep Scattering Network

no code implementations • 14 Dec 2020 • Romain Cosentino, Randall Balestriero

The SMF-DSN enhances the DSN by (i) increasing the diversity of the scattering coefficients and (ii) improving its robustness to non-stationary noise.

Translation

Deep Autoencoders: From Understanding to Generalization Guarantees

no code implementations • 20 Sep 2020 • Romain Cosentino, Randall Balestriero, Richard Baraniuk, Behnaam Aazhang

Our regularizations leverage recent advances in transformation-group learning to enable AEs to better approximate the data manifold without explicitly defining the group underlying the manifold.

Denoising

The Geometry of Deep Networks: Power Diagram Subdivision

1 code implementation • NeurIPS 2019 • Randall Balestriero, Romain Cosentino, Behnaam Aazhang, Richard Baraniuk

The subdivision process constrains the affine maps on the (exponentially many) power diagram regions to greatly reduce their complexity.
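This partition can be probed empirically: each ReLU activation pattern indexes one region on which the network computes a single affine map. A toy NumPy sketch with random weights and illustrative sizes (not the paper's power-diagram construction) counts the distinct regions a grid of inputs falls into:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny one-hidden-layer ReLU net: R^2 -> R, random weights for illustration
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def activation_pattern(x):
    """Binary code of which ReLUs are active: one code per affine region."""
    return tuple((W1 @ x + b1 > 0).astype(int))

# Sample a grid of inputs and count the distinct regions the net carves out
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 50),
                            np.linspace(-2, 2, 50)), axis=-1).reshape(-1, 2)
patterns = {activation_pattern(x) for x in grid}
print(len(patterns))  # number of distinct affine regions hit by the grid
```

With 8 hidden units the count is bounded by 2**8 codes, but the geometry admits far fewer regions; the paper's analysis makes this subdivision exact.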

Spline Filters For End-to-End Deep Learning

no code implementations • ICML 2018 • Randall Balestriero, Romain Cosentino, Herve Glotin, Richard Baraniuk

We propose to tackle the problem of end-to-end learning for raw waveform signals by introducing learnable continuous time-frequency atoms.
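A hedged NumPy sketch of such an atom: a Gabor filter (Gaussian-windowed sinusoid) whose center frequency and bandwidth would be the learnable parameters in an end-to-end setting. The paper's atoms are spline-parametrized; the Gabor family, names, and values here are illustrative stand-ins.

```python
import numpy as np

def gabor_atom(t, f0, sigma):
    """Continuous time-frequency atom: Gaussian-windowed sinusoid.
    f0 (center frequency) and sigma (bandwidth) are the parameters
    that would be learned from raw waveforms."""
    g = np.cos(2 * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))
    return g / (np.linalg.norm(g) + 1e-12)

t = np.linspace(-0.05, 0.05, 101)          # 101-tap filter, ~1 kHz sampling
bank = np.stack([gabor_atom(t, f0, 0.01) for f0 in (50, 100, 200)])

x = np.sin(2 * np.pi * 100 * np.linspace(0, 1, 1000))   # 100 Hz tone
resp = np.array([np.abs(np.convolve(x, a, mode="same")).mean()
                 for a in bank])
print(resp.argmax())  # the 100 Hz atom responds most strongly -> index 1
```

Making f0 and sigma trainable turns the fixed filterbank into a front end learned jointly with the rest of the network.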
