Search Results for author: Alexander N. Gorban

Found 33 papers, 7 papers with code

Weakly Supervised Learners for Correction of AI Errors with Provable Performance Guarantees

no code implementations • 31 Jan 2024 • Ivan Y. Tyukin, Tatiana Tyukina, Daniel van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Penelope Allison

A key technical focus of the work is in providing performance guarantees for these new AI correctors through bounds on the probabilities of incorrect decisions.

Relative intrinsic dimensionality is intrinsic to learning

no code implementations • 10 Oct 2023 • Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Ivan Y. Tyukin

High dimensional data can have a surprising property: pairs of data points may be easily separated from each other, or even from arbitrary subsets, with high probability using just simple linear classifiers.

Binary Classification
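
A minimal numerical sketch (not from the paper) of the property quoted above: for points drawn uniformly from the unit ball, the fraction of points that can be separated from all other points by the simple linear rule <y, x> < alpha * <x, x> grows rapidly with the dimension. The sample size, the threshold alpha and the choice of the unit ball are arbitrary illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sample_ball(n, d):
    # Uniform sample from the d-dimensional unit ball.
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return g * (rng.random((n, 1)) ** (1.0 / d))

def fraction_separable(n=1000, d=2, alpha=0.8):
    # Fraction of points x_i separable from every other point x_j by
    # the linear rule <x_j, x_i> < alpha * <x_i, x_i>.
    X = sample_ball(n, d)
    G = X @ X.T                               # pairwise inner products
    cond = G < alpha * np.diag(G)[None, :]
    np.fill_diagonal(cond, True)              # ignore the i == j comparison
    return cond.all(axis=0).mean()

for d in (2, 10, 100, 1000):
    print(f"d={d:5d}  separable fraction ~ {fraction_separable(d=d):.3f}")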

The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning

no code implementations • 13 Sep 2023 • Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, Desmond J. Higham, Danil Prokhorov, Oliver Sutton, Ivan Y. Tyukin, Qinghua Zhou

We consider the classical distribution-agnostic framework and algorithms minimising empirical risk, potentially subject to some weight regularisation.

How adversarial attacks can disrupt seemingly stable accurate classifiers

no code implementations • 7 Sep 2023 • Oliver J. Sutton, Qinghua Zhou, Ivan Y. Tyukin, Alexander N. Gorban, Alexander Bastounis, Desmond J. Higham

We introduce a simple generic and generalisable framework for which key behaviours observed in practical systems arise with high probability -- notably the simultaneous susceptibility of the (otherwise accurate) model to easily constructed adversarial attacks, and robustness to random perturbations of the input data.

Image Classification

Towards a mathematical understanding of learning from few examples with nonlinear feature maps

no code implementations • 7 Nov 2022 • Oliver J. Sutton, Alexander N. Gorban, Ivan Y. Tyukin

We consider the problem of data classification where the training set consists of just a few data points.

Domain Adaptation Principal Component Analysis: base linear method for learning with out-of-distribution data

1 code implementation • 28 Aug 2022 • Evgeny M. Mirkes, Jonathan Bac, Aziz Fouché, Sergey V. Stasenko, Andrei Zinovyev, Alexander N. Gorban

Domain adaptation is a popular paradigm in modern machine learning which aims at tackling the problem of divergence (or shift) between the labeled training and validation datasets (source domain) and a potentially large unlabeled dataset (target domain).

Domain Adaptation
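
The paper's DAPCA method has a listed code implementation; the snippet below is only a rough sketch of the problem setting it addresses (a labeled source domain, an unlabeled shifted target domain, one shared linear projection), built from ordinary PCA fitted on the pooled data. It is not the paper's algorithm, and all data, shapes and parameters are synthetic assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Labeled source domain and a shifted, unlabeled target domain (synthetic).
X_src = rng.standard_normal((500, 20))
y_src = (X_src[:, 0] + 0.5 * X_src[:, 1] > 0).astype(int)
X_tgt = rng.standard_normal((400, 20)) + 0.8          # covariate shift

# One linear map computed on the pooled data; DAPCA instead optimises a weighted
# combination of supervised (source) and unsupervised (target) terms, see the paper.
pca = PCA(n_components=5).fit(np.vstack([X_src, X_tgt]))
Z_src, Z_tgt = pca.transform(X_src), pca.transform(X_tgt)

clf = LogisticRegression(max_iter=1000).fit(Z_src, y_src)
pred_tgt = clf.predict(Z_tgt)                          # labels for the target domain
print(pred_tgt[:10])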

An Informational Space Based Semantic Analysis for Scientific Texts

no code implementations • 31 May 2022 • Neslihan Suzen, Alexander N. Gorban, Jeremy Levesley, Evgeny M. Mirkes

This paper introduces computational methods for semantic analysis and for quantifying the meaning of short scientific texts.

Common Sense Reasoning

Learning from few examples with nonlinear feature maps

no code implementations • 31 Mar 2022 • Ivan Y. Tyukin, Oliver Sutton, Alexander N. Gorban

In this work we consider the problem of data classification in post-classical settings where the number of training examples consists of merely a few data points.

Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation

no code implementations • 30 Mar 2022 • Qinghua Zhou, Alexander N. Gorban, Evgeny M. Mirkes, Jonathan Bac, Andrei Zinovyev, Ivan Y. Tyukin

Recent work by Mellor et al. (2021) showed that there may exist correlations between the accuracies of trained networks and the values of some easily computable measures defined on randomly initialised networks, which may make it possible to search tens of thousands of neural architectures without training.

Neural Architecture Search

Situation-based memory in spiking neuron-astrocyte network

no code implementations • 15 Feb 2022 • Susanna Gordleeva, Yuliya A. Tsybina, Mikhail I. Krivonosov, Ivan Y. Tyukin, Victor B. Kazantsev, Alexey A. Zaikin, Alexander N. Gorban

Three pools of stimulus patterns are considered: external patterns, patterns from the situation associative pool regularly presented to and learned by the network, and patterns already learned and remembered by astrocytes.

Retrieval

Scikit-dimension: a Python package for intrinsic dimension estimation

1 code implementation • 6 Sep 2021 • Jonathan Bac, Evgeny M. Mirkes, Alexander N. Gorban, Ivan Tyukin, Andrei Zinovyev

Dealing with uncertainty in applications of machine learning to real-life data critically depends on the knowledge of intrinsic dimensionality (ID).

Benchmarking
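
A minimal usage sketch for the package described above, assuming the estimator interface documented for scikit-dimension (import name skdim; fit(X) followed by the dimension_ attribute). The choice of estimators and the synthetic data are illustrative assumptions.

import numpy as np
import skdim

rng = np.random.default_rng(0)

# Synthetic data: a 5-dimensional Gaussian cloud embedded in 50 ambient dimensions.
latent = rng.standard_normal((2000, 5))
embedding = rng.standard_normal((5, 50))
X = latent @ embedding

# Three global intrinsic-dimension estimators from the package.
for estimator in (skdim.id.lPCA(), skdim.id.TwoNN(), skdim.id.MLE()):
    estimator.fit(X)
    print(type(estimator).__name__, estimator.dimension_)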

The Feasibility and Inevitability of Stealth Attacks

no code implementations • 26 Jun 2021 • Ivan Y. Tyukin, Desmond J. Higham, Alexander Bastounis, Eliyas Woldegeorgis, Alexander N. Gorban

Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team.

Demystification of Few-shot and One-shot Learning

no code implementations • 25 Apr 2021 • Ivan Y. Tyukin, Alexander N. Gorban, Muhammad H. Alkhudaydi, Qinghua Zhou

Few-shot and one-shot learning have been the subject of active and intensive research in recent years, with mounting evidence pointing to successful implementation and exploitation of few-shot learning algorithms in practice.

One-Shot Learning

General stochastic separation theorems with optimal bounds

no code implementations • 11 Oct 2020 • Bogdan Grechuk, Alexander N. Gorban, Ivan Y. Tyukin

To manage errors and analyze vulnerabilities, the stochastic separation theorems should evaluate the probability that the dataset will be Fisher separable in a given dimensionality and for a given class of distributions.
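
For context, the notion referred to in the snippet can be stated roughly as follows (after the earlier Gorban-Tyukin papers on stochastic separation; the exact constants and normalisations vary between papers):

% Fisher separability (informal statement, following earlier papers in this line of work).
A point $x \in \mathbb{R}^d$ is Fisher-separable from a finite set $Y \subset \mathbb{R}^d$
with threshold $\alpha \in [0,1)$ if
\[
  \langle x, y \rangle \le \alpha \, \langle x, x \rangle \quad \text{for all } y \in Y,
\]
and a dataset is Fisher-separable if each of its points is Fisher-separable from the set of
all remaining points. The theorems referred to above bound the probability of this event in
terms of the dimension $d$, the number of points and the assumed class of distributions.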

Pruning coupled with learning, ensembles of minimal neural networks, and future of XAI

no code implementations • 13 May 2020 • Alexander N. Gorban, Evgeny M. Mirkes

This principle is expected to work both for artificial neural networks and for the selection and modification of important synaptic contacts in the brain.

Explainable Artificial Intelligence (XAI), Face Recognition +1

Fractional norms and quasinorms do not help to overcome the curse of dimensionality

no code implementations • 29 Apr 2020 • Evgeny M. Mirkes, Jeza Allohibi, Alexander N. Gorban

The curse of dimensionality causes the well-known and widely discussed problems for machine learning methods.

General Classification
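
A quick numerical sketch (not the paper's experiments) of the distance-concentration effect behind this discussion: the relative contrast of l_p distances from a random query point shrinks as the dimension grows, for a fractional quasinorm (p = 0.5) just as for p = 1 and p = 2. The sample size and the use of the unit cube are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(d, n=1000, p=2.0):
    # (max distance - min distance) / min distance from a random query point,
    # using the l_p norm (a quasinorm when p < 1).
    X = rng.random((n, d))          # uniform points in the unit cube
    q = rng.random(d)               # random query point
    dist = (np.abs(X - q) ** p).sum(axis=1) ** (1.0 / p)
    return (dist.max() - dist.min()) / dist.min()

for d in (2, 10, 100, 1000):
    row = [round(relative_contrast(d, p=p), 3) for p in (0.5, 1.0, 2.0)]
    print(f"d={d:5d}  contrast for p=0.5, 1, 2 -> {row}")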

Informational Space of Meaning for Scientific Texts

no code implementations • 28 Apr 2020 • Neslihan Suzen, Evgeny M. Mirkes, Alexander N. Gorban

The LSC is a scientific corpus of 1,673,350 abstracts, and the LScDC is a scientific dictionary whose words are extracted from the LSC.

On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems

no code implementations • 9 Apr 2020 • Ivan Y. Tyukin, Desmond J. Higham, Alexander N. Gorban

We show that in both cases, i.e., in the case of an attack based on adversarial examples and in the case of a stealth attack, the dimensionality of the AI's decision-making space is a major contributor to the AI's susceptibility.

Decision Making, Small Data Image Classification

High-Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality

no code implementations • 14 Jan 2020 • Alexander N. Gorban, Valery A. Makarov, Ivan Y. Tyukin

High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning.

BIG-bench Machine Learning, Vocal Bursts Intensity Prediction

LScDC-new large scientific dictionary

1 code implementation • 14 Dec 2019 • Neslihan Suzen, Evgeny M. Mirkes, Alexander N. Gorban

In this paper, we present a scientific corpus of abstracts of academic papers in English -- Leicester Scientific Corpus (LSC).

Blessing of dimensionality at the edge

no code implementations • 30 Sep 2019 • Ivan Y. Tyukin, Alexander N. Gorban, Alistair A. McEwan, Sepehr Meshkinfamfard, Lixin Tang

Another feature of this approach is that, in the supervised setting, the computational complexity of training is linear in the number of training samples.

General Classification

Symphony of high-dimensional brain

no code implementations • 27 Jun 2019 • Alexander N. Gorban, Valeri A. Makarov, Ivan Y. Tyukin

This paper is the final part of the scientific discussion organised by the journal "Physics of Life Reviews" about the simplicity revolution in neuroscience and AI.

BIG-bench Machine Learning, Learning Theory +1

Fast Construction of Correcting Ensembles for Legacy Artificial Intelligence Systems: Algorithms and a Case Study

no code implementations • 12 Oct 2018 • Ivan Y. Tyukin, Alexander N. Gorban, Stephen Green, Danil Prokhorov

This paper presents a technology for simple and computationally efficient improvements of a generic Artificial Intelligence (AI) system, including Multilayer and Deep Learning neural networks.

Robust And Scalable Learning Of Complex Dataset Topologies Via Elpigraph

2 code implementations • 20 Apr 2018 • Luca Albergante, Evgeny M. Mirkes, Huidong Chen, Alexis Martin, Louis Faure, Emmanuel Barillot, Luca Pinello, Alexander N. Gorban, Andrei Zinovyev

Large datasets represented by multidimensional data point clouds often possess non-trivial distributions with branching trajectories and excluded regions, with the recent single-cell transcriptomic studies of developing embryo being notable examples.

Astronomy
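
A minimal usage sketch for the ElPiGraph method above, assuming the elpigraph-python package interface (computeElasticPrincipalTree with a NumNodes argument, returning a structure with NodePositions). The branching toy data and the node count are illustrative assumptions.

import numpy as np
import elpigraph

rng = np.random.default_rng(0)

# Synthetic 2-D branching cloud: two noisy segments joined at the origin.
t = rng.random((300, 1))
branch1 = t * np.array([1.0, 0.0]) + 0.03 * rng.standard_normal((300, 2))
branch2 = t * np.array([0.0, 1.0]) + 0.03 * rng.standard_normal((300, 2))
X = np.vstack([branch1, branch2])

# Fit an elastic principal tree with 20 nodes and inspect the fitted node positions.
tree = elpigraph.computeElasticPrincipalTree(X, NumNodes=20)[0]
print(tree["NodePositions"].shape)   # expected (20, 2) under this interface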

Augmented Artificial Intelligence: a Conceptual Framework

no code implementations • 6 Feb 2018 • Alexander N. Gorban, Bogdan Grechuk, Ivan Y. Tyukin

We combine some ideas of learning in heterogeneous multiagent systems with new and original mathematical approaches for non-iterative corrections of errors of legacy AI systems.

Knowledge Transfer Between Artificial Intelligence Systems

no code implementations • 5 Sep 2017 • Ivan Y. Tyukin, Alexander N. Gorban, Konstantin Sofeikov, Ilya Romanenko

We consider the fundamental question: how a legacy "student" Artificial Intelligence (AI) system could learn from a legacy "teacher" AI system or a human expert without complete re-training and, most importantly, without requiring significant computational resources.

Transfer Learning

Multivariate Gaussian and Student-t Process Regression for Multi-output Prediction

1 code implementation • 13 Mar 2017 • Zexun Chen, Bo Wang, Alexander N. Gorban

The Gaussian process model for vector-valued functions has been shown to be useful for multi-output prediction.

GPR, regression
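
As a sketch of the multi-output regression setting only, the snippet below uses ordinary Gaussian process regression from scikit-learn on synthetic data; it is not the paper's multivariate Gaussian or Student-t process model, and all kernels and data are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

X = rng.uniform(0, 5, size=(80, 1))
# Two correlated outputs of the same scalar input, with observation noise.
Y = np.hstack([np.sin(X), np.sin(X) + 0.3 * np.cos(2 * X)])
Y += 0.05 * rng.standard_normal(Y.shape)

# scikit-learn's GPR accepts 2-D targets and fits each output with a shared kernel.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, Y)

X_test = np.linspace(0, 5, 5)[:, None]
mean = gpr.predict(X_test)
print(mean.shape)                   # (5, 2): one prediction column per output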

One-Trial Correction of Legacy AI Systems and Stochastic Separation Theorems

no code implementations • 3 Oct 2016 • Alexander N. Gorban, Ilya Romanenko, Richard Burton, Ivan Y. Tyukin

The tuning method that we propose enables dealing with errors without the need to re-train the system.

Basic and simple mathematical model of coupled transcription, translation and degradation

1 code implementation • 26 Apr 2012 • Alexander N. Gorban, Annick Harel-Bellan, Nadya Morozova, Andrei Zinovyev

Synthesis of proteins is one of the most fundamental biological processes, which consumes a significant amount of cellular resources.

Molecular Networks
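
A sketch of the classical minimal transcription/translation/degradation system (one ODE for mRNA, one for protein, each with first-order degradation), integrated with SciPy. This is the textbook baseline form rather than necessarily the exact coupled model of the paper, and all rate constants are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

k_tx, k_tl = 2.0, 5.0      # transcription and translation rates (arbitrary units)
d_M, d_P = 0.2, 0.05       # mRNA and protein degradation rates

def rhs(t, y):
    # dM/dt = k_tx - d_M * M ;  dP/dt = k_tl * M - d_P * P
    M, P = y
    return [k_tx - d_M * M, k_tl * M - d_P * P]

sol = solve_ivp(rhs, (0.0, 100.0), y0=[0.0, 0.0], t_eval=np.linspace(0, 100, 11))
for t, M, P in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:6.1f}  mRNA={M:6.2f}  protein={P:8.2f}")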
