no code implementations • 27 Jul 2023 • Rémi Delogne, Vincent Schellekens, Laurent Daudet, Laurent Jacques
In this context, performing data processing (such as pattern detection or classification) directly in the sketched domain, without access to the original data, was previously achieved for linear random sketching methods and compressive sensing.
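As a minimal illustration of why compressed-domain processing works for linear random sketches (a generic demo, not code from the paper): i.i.d. Gaussian projections approximately preserve inner products, so a matched-filter correlation can be evaluated on the sketches alone.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 2048, 256                     # ambient dimension, sketch size

# Linear random sketch y = A x, with A scaled so E[A^T A] = I.
A = rng.standard_normal((m, d)) / np.sqrt(m)

pattern = rng.standard_normal(d)
signal = 0.8 * pattern + 0.3 * rng.standard_normal(d)

# Inner products are approximately preserved (Johnson-Lindenstrauss-type
# concentration), so detection can run on the sketches alone.
print(np.dot(A @ signal, A @ pattern))   # compressed-domain correlation
print(np.dot(signal, pattern))           # original-domain correlation
```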
no code implementations • 25 Nov 2022 • Florimond Houssiau, Vincent Schellekens, Antoine Chatalic, Shreyas Kumar Annamraju, Yves-Alexandre de Montjoye
In this paper, we introduce the generic moment-to-moment (M$^2$M) method to perform a wide range of data exploration tasks from a single private sketch.
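To fix ideas about the kind of object a "private sketch" is (a generic illustration, not the M$^2$M method itself): a sketch built by averaging bounded random features has low sensitivity to any single record, so calibrated Laplace noise makes its release differentially private. The noise calibration below is an assumption made for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 10_000, 10, 200
X = rng.standard_normal((n, d))

# Sketch: average of random Fourier features (each entry has modulus 1).
Omega = rng.standard_normal((d, m))
sketch = np.exp(1j * X @ Omega).mean(axis=0)

# Replacing one record moves each complex coordinate by at most 2/n in
# modulus, giving an L1 sensitivity of at most 2*sqrt(2)*m/n over the
# real and imaginary parts; Laplace noise at that scale over epsilon
# yields an epsilon-DP release. Illustrative calibration only.
epsilon = 1.0
scale = 2 * np.sqrt(2) * m / (n * epsilon)
private_sketch = sketch + (rng.laplace(0, scale, m)
                           + 1j * rng.laplace(0, scale, m))
```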
no code implementations • 17 May 2022 • Rémi Delogne, Vincent Schellekens, Laurent Jacques
In a nutshell, the sign product embedding (SPE) shows that the scalar product of a signal's sketch with the "sign" of the sketch of a given pattern approximates the squared projection of that signal onto this pattern.
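A hedged numerical check of an SPE-style identity, using an asymmetric rank-one-projection sketch A(x)_i = (a_i^T x)(b_i^T x) as a stand-in for the paper's sketch operator: correlating a signal's sketch with the sign of a pattern's sketch concentrates around (2/pi) times the squared projection.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 30, 100_000                   # large m so the average concentrates

# Quadratic rank-one-projection-style sketch (illustrative choice).
a = rng.standard_normal((m, d))
b = rng.standard_normal((m, d))
def sketch(x):
    return (a @ x) * (b @ x)

pattern = rng.standard_normal(d)
pattern /= np.linalg.norm(pattern)
signal = pattern + 0.3 * rng.standard_normal(d)

# SPE-style estimate: the signal sketch correlated with the *sign* of the
# pattern sketch concentrates around (2/pi) * <pattern, signal>^2.
est = np.mean(np.sign(sketch(pattern)) * sketch(signal)) * np.pi / 2
print(est, np.dot(pattern, signal) ** 2)   # the two values nearly agree
```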
no code implementations • 20 Apr 2021 • Vincent Schellekens, Laurent Jacques
The compressive learning framework reduces the computational cost of training on large-scale datasets.
no code implementations • 14 Sep 2020 • Vincent Schellekens, Laurent Jacques
In compressive learning, a mixture model (a set of centroids or a Gaussian mixture) is learned from a sketch vector that serves as a highly compressed representation of the dataset.
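A minimal sketch computation in this spirit, assuming (as in much of the compressive learning literature) random Fourier features with Gaussian frequencies; practical sketches tune the frequency distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 100_000, 2, 100

# Dataset drawn from a 2-component Gaussian mixture.
X = np.concatenate([rng.normal(-2, 0.5, (n // 2, d)),
                    rng.normal(+2, 0.5, (n // 2, d))])

# Sketch: empirical average of random Fourier features. Everything the
# learning stage sees is this single m-dimensional complex vector.
Omega = rng.standard_normal((d, m))
z = np.exp(1j * X @ Omega).mean(axis=0)

print(X.nbytes, z.nbytes)   # dataset size vs. sketch size, in bytes
```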
no code implementations • 4 Aug 2020 • Rémi Gribonval, Antoine Chatalic, Nicolas Keriven, Vincent Schellekens, Laurent Jacques, Philip Schniter
This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed.
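A toy end-to-end illustration of the pipeline, under the same random-Fourier-feature assumption as above; a generic optimizer stands in for the dedicated decoders used in the literature (such as CL-OMPR), so convergence may require restarts.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, d, m, k = 50_000, 2, 60, 2

X = np.concatenate([rng.normal(-2, 0.3, (n // 2, d)),
                    rng.normal(+2, 0.3, (n // 2, d))])
Omega = 0.3 * rng.standard_normal((d, m))   # low frequencies: smoother loss
z = np.exp(1j * X @ Omega).mean(axis=0)     # the only data the learner sees

# "Sketch matching": find k centroids whose sketch reproduces z.
def loss(c_flat):
    C = c_flat.reshape(k, d)
    z_model = np.exp(1j * C @ Omega).mean(axis=0)
    return np.sum(np.abs(z - z_model) ** 2)

res = minimize(loss, rng.normal(0, 2, k * d), method="Nelder-Mead")
print(res.x.reshape(k, d))                  # approximately the two means
```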
no code implementations • 14 Apr 2020 • Vincent Schellekens, Laurent Jacques
Concretely, we introduce the general framework of asymmetric random periodic features, where the two signals of interest are observed through random periodic features: random projections followed by a general periodic map, which may differ between the two signals.
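One verifiable instance of this asymmetry (an illustration, not the paper's exact construction): pairing a complex-exponential feature map with a one-bit square-wave map, under a shared random dither, still estimates a Gaussian kernel up to a known 2/pi factor.

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 5, 200_000

# Shared random projections and dithers, but a *different* periodic map
# for each signal: complex exponential vs. one-bit square wave.
Omega = rng.standard_normal((d, m))
xi = rng.uniform(0, 2 * np.pi, m)

x = rng.standard_normal(d)
y = x + 0.2 * rng.standard_normal(d)

phi = np.exp(1j * (x @ Omega + xi))       # features of the first signal
psi = np.sign(np.cos(y @ Omega + xi))     # one-bit features of the second

# Their correlation estimates (2/pi) times the Gaussian kernel k(x, y):
# the square wave's first Fourier harmonic (coefficient 2/pi) survives
# the averaging over the uniform dither.
est = np.real(np.mean(phi * psi)) * np.pi / 2
print(est, np.exp(-np.linalg.norm(x - y) ** 2 / 2))
```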
1 code implementation • 12 Feb 2020 • Vincent Schellekens, Laurent Jacques
Generative networks implicitly approximate complex densities from samples with impressive accuracy.
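A minimal sketch of how a generator could be trained against a dataset sketch rather than the dataset itself (illustrative loss only; the generator G and its optimizer are omitted, and the frequency scale is an assumption).

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, m = 50_000, 2, 100

X = rng.standard_normal((n, d)) @ np.array([[1.0, 0.5], [0.0, 0.8]])

Omega = 0.5 * rng.standard_normal((d, m))
def sketch(S):
    return np.exp(1j * S @ Omega).mean(axis=0)

z_data = sketch(X)      # computed once; training never revisits X

# Training signal for a generator G: penalize the distance between the
# sketch of generated samples and the fixed data sketch (a sketched MMD).
def sketched_loss(generated_samples):
    return np.sum(np.abs(sketch(generated_samples) - z_data) ** 2)

fake = rng.standard_normal((4096, d))     # stand-in for G(noise)
print(sketched_loss(fake))
```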
no code implementations • 4 Dec 2018 • Vincent Schellekens, Laurent Jacques
Compressive learning is a framework where (so far unsupervised) learning tasks use a compressed summary (sketch) of the dataset rather than the dataset itself.
no code implementations • 26 Apr 2018 • Vincent Schellekens, Laurent Jacques
The recent framework of compressive statistical learning aims at designing tractable learning algorithms that use only a heavily compressed representation, or sketch, of massive datasets.
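A small illustration (not the paper's exact quantizer) that even one-bit-per-feature sketches remain discriminative: averaged sign bits separate a shifted distribution from a matched one.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, m = 20_000, 2, 300

# Shared random frequencies and dithers define the sketch operator.
Omega = rng.standard_normal((d, m))
xi = rng.uniform(0, 2 * np.pi, m)

def one_bit_sketch(S):
    # Each sample contributes only sign bits, yet the *average* of those
    # bits still separates distributions (illustrative quantizer).
    return np.sign(np.cos(S @ Omega + xi)).mean(axis=0)

A = rng.normal(0.0, 1.0, (n, d))
B = rng.normal(0.0, 1.0, (n, d))      # same distribution as A
C = rng.normal(1.5, 1.0, (n, d))      # shifted distribution

zA, zB, zC = map(one_bit_sketch, (A, B, C))
print(np.linalg.norm(zA - zB), np.linalg.norm(zA - zC))  # small vs. large
```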