Search Results for author: Illia Horenko

Found 6 papers, 0 papers with code

Towards Generalized Entropic Sparsification for Convolutional Neural Networks

no code implementations • 6 Apr 2024 • Tin Barisin, Illia Horenko

Convolutional neural networks (CNNs) are reported to be overparametrized.

Gauge-optimal approximate learning for small data classification problems

no code implementations • 29 Oct 2023 • Edoardo Vecchi, Davide Bassetti, Fabio Graziato, Lukas Pospisil, Illia Horenko

As a potential solution to this problem, here we exploit the idea of reducing and rotating the feature space in a lower-dimensional gauge and propose the Gauge-Optimal Approximate Learning (GOAL) algorithm, which provides an analytically tractable joint solution to the dimension reduction, feature segmentation and classification problems for small data learning problems.

Classification • Dimensionality Reduction

On existence, uniqueness and scalability of adversarial robustness measures for AI classifiers

no code implementations • 19 Oct 2023 • Illia Horenko

Simply-verifiable mathematical conditions for existence, uniqueness and explicit analytical computation of minimal adversarial paths (MAP) and minimal adversarial distances (MAD) for (locally) uniquely-invertible classifiers, for generalized linear models (GLM), and for entropic AI (EAI) are formulated and proven.

Adversarial Robustness
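
For (binary) generalized linear models, the minimal adversarial distance has a well-known closed form that matches the kind of explicit analytical computation the abstract describes: for a decision function $f(x) = w \cdot x + b$, the Euclidean MAD is $|f(x)| / \lVert w \rVert$, and the minimal adversarial path is the straight line to the nearest boundary point. The sketch below illustrates only this standard linear-model special case, not the paper's general EAI conditions; the function names are hypothetical.

```python
import numpy as np

def minimal_adversarial_distance(w, b, x):
    """Closed-form Euclidean MAD for a linear classifier f(x) = w.x + b:
    the distance |f(x)| / ||w|| from x to the decision hyperplane."""
    f = np.dot(w, x) + b
    return abs(f) / np.linalg.norm(w)

def minimal_adversarial_path_endpoint(w, b, x):
    """Endpoint of the minimal adversarial path: the orthogonal
    projection of x onto the decision hyperplane w.x + b = 0."""
    f = np.dot(w, x) + b
    return x - f * w / np.dot(w, w)

w = np.array([3.0, 4.0])   # ||w|| = 5
b = -1.0
x = np.array([2.0, 1.0])   # f(x) = 6 + 4 - 1 = 9
print(minimal_adversarial_distance(w, b, x))       # → 1.8
x_b = minimal_adversarial_path_endpoint(w, b, x)
print(np.dot(w, x_b) + b)                          # ≈ 0: x_b lies on the boundary
```

For this linear case the MAP/MAD solution is unique whenever $w \neq 0$; the paper's contribution is extending existence and uniqueness conditions beyond this simple setting.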

Linearly-scalable learning of smooth low-dimensional patterns with permutation-aided entropic dimension reduction

no code implementations • 17 Jun 2023 • Illia Horenko, Lukas Pospisil

In many data science applications, the objective is to extract appropriately-ordered smooth low-dimensional data patterns from high-dimensional data sets.

Dimensionality Reduction

Robust learning of data anomalies with analytically-solvable entropic outlier sparsification

no code implementations • 22 Dec 2021 • Illia Horenko

Entropic Outlier Sparsification (EOS) is proposed as a robust computational strategy for the detection of data anomalies in a broad class of learning methods, including unsupervised problems (e.g., detection of non-Gaussian outliers in mostly-Gaussian data) and supervised learning with mislabeled data.

On a scalable entropic breaching of the overfitting barrier in machine learning

no code implementations • 8 Feb 2020 • Illia Horenko

Overfitting and the treatment of "small data" are among the most challenging problems in machine learning (ML), arising when a relatively small data statistics size $T$ is not enough to provide a robust ML fit for a relatively large data feature dimension $D$.

BIG-bench Machine Learning • Classification • +3
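
The small-data regime $T \ll D$ the abstract refers to is easy to reproduce numerically: an unregularized least-squares fit with more features than samples interpolates the training data exactly while generalizing poorly. The sketch below is a generic illustration of this overfitting barrier, not the paper's entropic method; all data here are synthetic.

```python
import numpy as np

# Small-data regime: statistics size T far below feature dimension D.
rng = np.random.default_rng(0)
T, D = 20, 100
w_true = rng.normal(size=D) / np.sqrt(D)      # ground-truth pattern, ||w_true|| ≈ 1

X_train = rng.normal(size=(T, D))
y_train = X_train @ w_true + 0.1 * rng.normal(size=T)
X_test = rng.normal(size=(1000, D))
y_test = X_test @ w_true + 0.1 * rng.normal(size=1000)

# With T < D the linear system is underdetermined, so lstsq returns the
# minimum-norm solution that fits the training data exactly ...
w_fit = np.linalg.lstsq(X_train, y_train, rcond=None)[0]
train_mse = np.mean((X_train @ w_fit - y_train) ** 2)
test_mse = np.mean((X_test @ w_fit - y_test) ** 2)

# ... giving numerically zero training error but a large test error:
# the fit memorized the T samples instead of the D-dimensional pattern.
print(f"train MSE: {train_mse:.2e}, test MSE: {test_mse:.2f}")
```

Any robust method for this regime must break the gap between the two errors, which is the "overfitting barrier" the title refers to.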
