Search Results for author: Ivan Y. Tyukin

Found 23 papers, 0 papers with code

Weakly Supervised Learners for Correction of AI Errors with Provable Performance Guarantees

no code implementations31 Jan 2024 Ivan Y. Tyukin, Tatiana Tyukina, Daniel van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Penelope Allison

A key technical focus of the work is in providing performance guarantees for these new AI correctors through bounds on the probabilities of incorrect decisions.
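The corrector idea can be illustrated with a toy numpy sketch. This is not the paper's construction: the Gaussian feature clouds, the 0.8 coordinate shift, and the single-hyperplane rule are all illustrative assumptions. The sketch flags inputs whose features resemble past errors using one linear discriminant in the legacy model's feature space.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 200
# hypothetical feature vectors of inputs the legacy model got right vs wrong;
# the coordinate shift of the error cloud is an illustrative assumption
correct = rng.standard_normal((500, d))
errors = rng.standard_normal((40, d)) + 0.8

# one-hyperplane corrector: project onto the direction between class means
w = errors.mean(axis=0) - correct.mean(axis=0)
theta = 0.5 * (w @ errors.mean(axis=0) + w @ correct.mean(axis=0))

def flags_error(X):
    """Flag inputs whose features look like past errors."""
    return X @ w > theta

fp = flags_error(correct).mean()  # false-alarm rate on correct inputs
tp = flags_error(errors).mean()   # detection rate on error inputs
print(fp, tp)
```

The paper's guarantees bound the probability of incorrect corrector decisions; the toy above only measures empirical false-alarm and detection rates, and for simplicity on the same data the corrector was built from.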

Relative intrinsic dimensionality is intrinsic to learning

no code implementations10 Oct 2023 Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Ivan Y. Tyukin

High dimensional data can have a surprising property: pairs of data points may be easily separated from each other, or even from arbitrary subsets, with high probability using just simple linear classifiers.

Binary Classification
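The separability phenomenon described in the abstract is easy to reproduce numerically. The sketch below is our illustration, not the paper's setting: points are drawn uniformly from the unit sphere, and the 0.5 threshold is the usual Fisher-separability choice. One point is separated from a thousand others by a single linear functional.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 1000, 1000
# n points drawn uniformly from the unit sphere in R^d
pts = rng.standard_normal((n, d))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

x, others = pts[0], pts[1:]
# the hyperplane {z : <x, z> = 0.5 <x, x>} separates x from every y
# satisfying <x, y> < 0.5 <x, x>; in high dimension, inner products of
# independent points concentrate near zero, so this holds for all of them
separated = others @ x < 0.5 * (x @ x)
print(separated.mean())
```

In dimension 1000 the fraction of separated points comes out as 1.0; rerunning with small `d` (say 3) shows the property collapse in low dimension.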

The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning

no code implementations13 Sep 2023 Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, Desmond J. Higham, Danil Prokhorov, Oliver Sutton, Ivan Y. Tyukin, Qinghua Zhou

We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk, potentially subject to some weight regularisation.

How adversarial attacks can disrupt seemingly stable accurate classifiers

no code implementations7 Sep 2023 Oliver J. Sutton, Qinghua Zhou, Ivan Y. Tyukin, Alexander N. Gorban, Alexander Bastounis, Desmond J. Higham

We introduce a simple, generic and generalisable framework in which key behaviours observed in practical systems arise with high probability -- notably, the simultaneous susceptibility of the (otherwise accurate) model to easily constructed adversarial attacks, and its robustness to random perturbations of the input data.

Image Classification
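A minimal numpy illustration of this dichotomy (our toy, not the paper's framework): for a linear classifier in high dimension, a gradient-aligned perturbation just above the margin flips the label, while random perturbations of the same Euclidean norm essentially never do, because a random direction is nearly orthogonal to the decision boundary's normal.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2000
w = rng.standard_normal(d)
w /= np.linalg.norm(w)             # unit-norm linear classifier: sign(w @ x)
x = rng.standard_normal(d)
x += (0.1 - w @ x) * w             # place x at classification margin 0.1

eps = 0.11                         # perturbation budget, just above the margin
flips_adv = w @ (x - eps * w) < 0  # worst-case, gradient-aligned perturbation

rand_flips = 0                     # random perturbations of the same norm
for _ in range(1000):
    r = rng.standard_normal(d)
    r *= eps / np.linalg.norm(r)
    rand_flips += int(w @ (x + r) < 0)
print(flips_adv, rand_flips)
```

The adversarial perturbation flips the label while none of the thousand random perturbations do: a random unit direction projects onto `w` with magnitude on the order of 1/sqrt(d), far too small to cross the margin.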

Agile gesture recognition for capacitive sensing devices: adapting on-the-job

no code implementations12 May 2023 Ying Liu, Liucheng Guo, Valeri A. Makarov, Yuxiang Huang, Alexander Gorban, Evgeny Mirkes, Ivan Y. Tyukin

However, there is growing demand for gesture recognition technology that can be implemented on low-power devices using limited sensor data instead of high-dimensional inputs like hand images.

Dimensionality Reduction · Hand Gesture Recognition +1

Towards a mathematical understanding of learning from few examples with nonlinear feature maps

no code implementations7 Nov 2022 Oliver J. Sutton, Alexander N. Gorban, Ivan Y. Tyukin

We consider the problem of data classification where the training set consists of just a few data points.

Learning from few examples with nonlinear feature maps

no code implementations31 Mar 2022 Ivan Y. Tyukin, Oliver Sutton, Alexander N. Gorban

In this work we consider the problem of data classification in post-classical settings, where the training set consists of just a few data points.

Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation

no code implementations30 Mar 2022 Qinghua Zhou, Alexander N. Gorban, Evgeny M. Mirkes, Jonathan Bac, Andrei Zinovyev, Ivan Y. Tyukin

Recent work by Mellor et al. (2021) showed that there may exist correlations between the accuracies of trained networks and the values of some easily computable measures defined on randomly initialised networks, which may make it possible to search tens of thousands of neural architectures without training.

Neural Architecture Search

Situation-based memory in spiking neuron-astrocyte network

no code implementations15 Feb 2022 Susanna Gordleeva, Yuliya A. Tsybina, Mikhail I. Krivonosov, Ivan Y. Tyukin, Victor B. Kazantsev, Alexey A. Zaikin, Alexander N. Gorban

Three pools of stimuli patterns are considered: external patterns, patterns from the situation associative pool regularly presented to the network and learned by the network, and patterns already learned and remembered by astrocytes.

Retrieval

Learning from scarce information: using synthetic data to classify Roman fine ware pottery

no code implementations3 Jul 2021 Santos J. Núñez Jareño, Daniël P. van Helden, Evgeny M. Mirkes, Ivan Y. Tyukin, Penelope M. Allison

To address the challenge we propose to use a transfer learning approach whereby the model is first trained on a synthetic dataset replicating features of the original objects.

Transfer Learning

The Feasibility and Inevitability of Stealth Attacks

no code implementations26 Jun 2021 Ivan Y. Tyukin, Desmond J. Higham, Alexander Bastounis, Eliyas Woldegeorgis, Alexander N. Gorban

Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team.

Demystification of Few-shot and One-shot Learning

no code implementations25 Apr 2021 Ivan Y. Tyukin, Alexander N. Gorban, Muhammad H. Alkhudaydi, Qinghua Zhou

Few-shot and one-shot learning have been the subject of active and intensive research in recent years, with mounting evidence pointing to successful implementation and exploitation of few-shot learning algorithms in practice.

One-Shot Learning

General stochastic separation theorems with optimal bounds

no code implementations11 Oct 2020 Bogdan Grechuk, Alexander N. Gorban, Ivan Y. Tyukin

To manage errors and analyze vulnerabilities, the stochastic separation theorems should evaluate the probability that the dataset will be Fisher separable in given dimensionality and for a given class of distributions.
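A Monte Carlo sketch of this quantity (our illustration: points uniform on the unit sphere, threshold 0.5·<x, x> as in the standard Fisher-separability definition, not the paper's optimal bounds) shows how the probability that a whole dataset is Fisher-separable grows with dimension:

```python
import numpy as np

rng = np.random.default_rng(2)

def fisher_separable_fraction(d, n=100, trials=100):
    """Monte Carlo estimate of the probability that every point x in a
    random dataset satisfies <x, y> < 0.5 <x, x> for all other points y."""
    hits = 0
    for _ in range(trials):
        pts = rng.standard_normal((n, d))
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        g = pts @ pts.T                # Gram matrix of pairwise inner products
        np.fill_diagonal(g, -np.inf)   # exclude <x, x> itself
        hits += int(g.max() < 0.5)     # on the unit sphere, 0.5 <x, x> = 0.5
    return hits / trials

fracs = {d: fisher_separable_fraction(d) for d in (5, 50, 500)}
print(fracs)
```

The estimated probability is essentially 0 at d = 5 and essentially 1 at d = 500, with the transition region in between; the theorems the abstract refers to make this dependence precise for given dimensionality and classes of distributions.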

On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems

no code implementations9 Apr 2020 Ivan Y. Tyukin, Desmond J. Higham, Alexander N. Gorban

We show that in both cases, i.e., in the case of an attack based on adversarial examples and in the case of a stealth attack, the dimensionality of the AI's decision-making space is a major contributor to the AI's susceptibility.

Decision Making · Small Data Image Classification

High-Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality

no code implementations14 Jan 2020 Alexander N. Gorban, Valery A. Makarov, Ivan Y. Tyukin

High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning.

BIG-bench Machine Learning · Vocal Bursts Intensity Prediction

Blessing of dimensionality at the edge

no code implementations30 Sep 2019 Ivan Y. Tyukin, Alexander N. Gorban, Alistair A. McEwan, Sepehr Meshkinfamfard, Lixin Tang

Another feature of this approach is that, in the supervised setting, the computational complexity of training is linear in the number of training samples.

General Classification

Symphony of high-dimensional brain

no code implementations27 Jun 2019 Alexander N. Gorban, Valeri A. Makarov, Ivan Y. Tyukin

This paper is the final part of the scientific discussion organised by the journal "Physics of Life Reviews" about the simplicity revolution in neuroscience and AI.

BIG-bench Machine Learning · Learning Theory +1

Fast Construction of Correcting Ensembles for Legacy Artificial Intelligence Systems: Algorithms and a Case Study

no code implementations12 Oct 2018 Ivan Y. Tyukin, Alexander N. Gorban, Stephen Green, Danil Prokhorov

This paper presents a technology for simple and computationally efficient improvements of a generic Artificial Intelligence (AI) system, including Multilayer and Deep Learning neural networks.

Augmented Artificial Intelligence: a Conceptual Framework

no code implementations6 Feb 2018 Alexander N. Gorban, Bogdan Grechuk, Ivan Y. Tyukin

We combine some ideas of learning in heterogeneous multiagent systems with new and original mathematical approaches for non-iterative corrections of errors of legacy AI systems.

Knowledge Transfer Between Artificial Intelligence Systems

no code implementations5 Sep 2017 Ivan Y. Tyukin, Alexander N. Gorban, Konstantin Sofeikov, Ilya Romanenko

We consider the fundamental question: how could a legacy "student" Artificial Intelligence (AI) system learn from a legacy "teacher" AI system or a human expert without complete re-training and, most importantly, without requiring significant computational resources?

Transfer Learning

One-Trial Correction of Legacy AI Systems and Stochastic Separation Theorems

no code implementations3 Oct 2016 Alexander N. Gorban, Ilya Romanenko, Richard Burton, Ivan Y. Tyukin

The tuning method that we propose enables dealing with errors without the need to re-train the system.
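The flavour of such one-trial corrections can be sketched in numpy (an illustrative toy, not the paper's algorithm; the 0.9 threshold is our assumption): a single hyperplane aligned with the misclassified input fires on that input and, with high probability in high dimension, on no other data point, so the legacy system's behaviour elsewhere is untouched.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 400, 1000
# a dataset of unit-norm inputs in R^d
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

x_err = X[0]                   # the one input the legacy system got wrong
theta = 0.9 * (x_err @ x_err)  # 0.9 is an illustrative threshold
fires = X @ x_err >= theta     # corrector hyperplane: <x_err, x> >= theta
print(bool(fires[0]), int(fires[1:].sum()))
```

The corrector fires on the erroneous input and on none of the other 999 points: inner products between independent high-dimensional unit vectors concentrate near zero, far below the threshold. Appending such a rule overrides the legacy decision only where the error occurred, with no re-training.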
