no code implementations • 12 Mar 2024 • Ying Liu, Liucheng Guo, Valeri A. Makarov, Alexander Gorban, Evgeny Mirkes, Ivan Y. Tyukin
Automated hand gesture recognition has long been a focal point in the AI community.
no code implementations • 31 Jan 2024 • Ivan Y. Tyukin, Tatiana Tyukina, Daniel van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Penelope Allison
A key technical focus of the work is providing performance guarantees for these new AI correctors through bounds on the probabilities of incorrect decisions.
no code implementations • 10 Oct 2023 • Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Ivan Y. Tyukin
High dimensional data can have a surprising property: pairs of data points may be easily separated from each other, or even from arbitrary subsets, with high probability using just simple linear classifiers.
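This separability phenomenon can be illustrated numerically. The sketch below is not the paper's construction, just a minimal demonstration under illustrative assumptions: points drawn uniformly from the unit ball in high dimension, and the simple linear functional l(y) = ⟨y, x_0⟩ used to separate one point from all the others.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5_000, 1_000  # many points, high dimension

# Sample points uniformly from the unit ball: Gaussian direction, radial rescale.
x = rng.standard_normal((n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)
x *= rng.random((n, 1)) ** (1.0 / d)

# Test the simple linear functional l(y) = <y, x_0>: the point x_0 is
# separated from the rest whenever <x_j, x_0> < <x_0, x_0> for all j > 0.
dots = x @ x[0]
separated = bool(np.all(dots[1:] < dots[0]))
print(separated)
```

By concentration of measure, ⟨x_j, x_0⟩ is of order 1/√d while ⟨x_0, x_0⟩ stays close to 1, so the single hyperplane almost always succeeds despite the large sample size.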
no code implementations • 13 Sep 2023 • Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, Desmond J. Higham, Danil Prokhorov, Oliver Sutton, Ivan Y. Tyukin, Qinghua Zhou
We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk and are potentially subject to some weight regularisation.
no code implementations • 7 Sep 2023 • Oliver J. Sutton, Qinghua Zhou, Ivan Y. Tyukin, Alexander N. Gorban, Alexander Bastounis, Desmond J. Higham
We introduce a simple, generic and generalisable framework in which key behaviours observed in practical systems arise with high probability -- notably the simultaneous susceptibility of an (otherwise accurate) model to easily constructed adversarial attacks and its robustness to random perturbations of the input data.
no code implementations • 12 May 2023 • Ying Liu, Liucheng Guo, Valeri A. Makarov, Yuxiang Huang, Alexander Gorban, Evgeny Mirkes, Ivan Y. Tyukin
However, there is growing demand for gesture recognition technology that can be implemented on low-power devices using limited sensor data instead of high-dimensional inputs like hand images.
no code implementations • 7 Nov 2022 • Oliver J. Sutton, Alexander N. Gorban, Ivan Y. Tyukin
We consider the problem of data classification where the training set consists of just a few data points.
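A toy example of what classification from very few points can look like: a nearest-prototype classifier built from a single labelled example per class. This is an illustrative sketch, not the classifier analysed in the paper; the synthetic data and all names are assumptions.

```python
import numpy as np

def one_shot_classifier(prototypes):
    """Nearest-prototype classifier built from one labelled example per class."""
    protos = np.asarray(prototypes, dtype=float)
    def predict(x):
        # Assign x to the class of the closest prototype (Euclidean distance).
        return int(np.argmin(np.linalg.norm(protos - x, axis=1)))
    return predict

rng = np.random.default_rng(3)
protos = rng.standard_normal((3, 64))  # one synthetic exemplar per class
predict = one_shot_classifier(protos)

# In high dimension, a mildly perturbed copy of an exemplar stays much closer
# to its own class prototype than to the others.
label = predict(protos[1] + 0.1 * rng.standard_normal(64))
print(label)
```

In 64 dimensions the perturbed query sits at distance roughly 0.1·√64 ≈ 0.8 from its own prototype, against roughly √128 ≈ 11 from the others, so such rules can be surprisingly reliable with one example per class.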
no code implementations • 31 Mar 2022 • Ivan Y. Tyukin, Oliver Sutton, Alexander N. Gorban
In this work we consider the problem of data classification in post-classical settings where the training set consists of just a few data points.
no code implementations • 30 Mar 2022 • Qinghua Zhou, Alexander N. Gorban, Evgeny M. Mirkes, Jonathan Bac, Andrei Zinovyev, Ivan Y. Tyukin
Recent work by Mellor et al. (2021) showed that there may exist correlations between the accuracies of trained networks and the values of some easily computable measures defined on randomly initialised networks, which may make it possible to search tens of thousands of neural architectures without training.
no code implementations • 15 Feb 2022 • Susanna Gordleeva, Yuliya A. Tsybina, Mikhail I. Krivonosov, Ivan Y. Tyukin, Victor B. Kazantsev, Alexey A. Zaikin, Alexander N. Gorban
Three pools of stimuli patterns are considered: external patterns, patterns from the situation associative pool regularly presented to the network and learned by the network, and patterns already learned and remembered by astrocytes.
no code implementations • 3 Jul 2021 • Santos J. Núñez Jareño, Daniël P. van Helden, Evgeny M. Mirkes, Ivan Y. Tyukin, Penelope M. Allison
To address this challenge, we propose a transfer learning approach whereby the model is first trained on a synthetic dataset replicating features of the original objects.
no code implementations • 28 Jun 2021 • Alexander N. Gorban, Bogdan Grechuk, Evgeny M. Mirkes, Sergey V. Stasenko, Ivan Y. Tyukin
New stochastic separation theorems for data with fine-grained structure are formulated and proved.
no code implementations • 26 Jun 2021 • Ivan Y. Tyukin, Desmond J. Higham, Alexander Bastounis, Eliyas Woldegeorgis, Alexander N. Gorban
Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team.
no code implementations • 25 Apr 2021 • Ivan Y. Tyukin, Alexander N. Gorban, Muhammad H. Alkhudaydi, Qinghua Zhou
Few-shot and one-shot learning have been the subject of active and intensive research in recent years, with mounting evidence pointing to successful implementation and exploitation of few-shot learning algorithms in practice.
no code implementations • 11 Oct 2020 • Bogdan Grechuk, Alexander N. Gorban, Ivan Y. Tyukin
To manage errors and analyse vulnerabilities, stochastic separation theorems should evaluate the probability that a dataset is Fisher-separable in a given dimensionality and for a given class of distributions.
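Such probabilities can be estimated empirically. The sketch below is a Monte Carlo estimate, not a theorem from the paper: it checks Fisher separability (every point x_i satisfies ⟨x_j, x_i⟩ < ⟨x_i, x_i⟩ for all j ≠ i) for i.i.d. standard Gaussian samples, a stand-in distribution chosen for illustration.

```python
import numpy as np

def fisher_separable(points: np.ndarray) -> bool:
    """True if each point x_i satisfies <x_j, x_i> < <x_i, x_i> for all j != i."""
    gram = points @ points.T
    diag = np.diag(gram).copy()
    np.fill_diagonal(gram, -np.inf)  # ignore self-comparisons
    return bool(np.all(gram < diag[None, :]))

rng = np.random.default_rng(1)
n, trials = 100, 200
prob = {}
for d in (5, 50, 500):
    hits = sum(fisher_separable(rng.standard_normal((n, d))) for _ in range(trials))
    prob[d] = hits / trials
    print(f"d={d:3d}: estimated P(sample Fisher-separable) = {prob[d]:.2f}")
```

Running this shows the estimated probability climbing towards 1 as the dimensionality grows, which is the qualitative content of the separation theorems.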
no code implementations • 9 Apr 2020 • Ivan Y. Tyukin, Desmond J. Higham, Alexander N. Gorban
We show that in both cases, i.e., in the case of an attack based on adversarial examples and in the case of a stealth attack, the dimensionality of the AI's decision-making space is a major contributor to the AI's susceptibility.
no code implementations • 14 Jan 2020 • Alexander N. Gorban, Valery A. Makarov, Ivan Y. Tyukin
High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning.
no code implementations • 30 Sep 2019 • Ivan Y. Tyukin, Alexander N. Gorban, Alistair A. McEwan, Sepehr Meshkinfamfard, Lixin Tang
Another feature of this approach is that, in the supervised setting, the computational complexity of training is linear in the number of training samples.
no code implementations • 27 Jun 2019 • Alexander N. Gorban, Valeri A. Makarov, Ivan Y. Tyukin
This paper is the final part of the scientific discussion organised by the journal Physics of Life Reviews about the simplicity revolution in neuroscience and AI.
no code implementations • 12 Oct 2018 • Ivan Y. Tyukin, Alexander N. Gorban, Stephen Green, Danil Prokhorov
This paper presents a technology for simple and computationally efficient improvements of a generic Artificial Intelligence (AI) system, including Multilayer and Deep Learning neural networks.
no code implementations • 6 Feb 2018 • Alexander N. Gorban, Bogdan Grechuk, Ivan Y. Tyukin
We combine some ideas of learning in heterogeneous multiagent systems with new and original mathematical approaches for non-iterative corrections of errors of legacy AI systems.
no code implementations • 5 Sep 2017 • Ivan Y. Tyukin, Alexander N. Gorban, Konstantin Sofeikov, Ilya Romanenko
We consider the fundamental question: how a legacy "student" Artificial Intelligence (AI) system could learn from a legacy "teacher" AI system or a human expert without complete re-training and, most importantly, without requiring significant computational resources.
no code implementations • 3 Oct 2016 • Alexander N. Gorban, Ilya Romanenko, Richard Burton, Ivan Y. Tyukin
The tuning method that we propose enables dealing with errors without the need to re-train the system.
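One way such retraining-free tuning can be realised is with a one-shot linear corrector: a Fisher-discriminant-style functional, fitted in closed form on the system's feature vectors, that flags error cases without touching the base model. The sketch below is illustrative only — the function name, the synthetic "error" cluster, and the regularisation constant are all assumptions, not the paper's construction.

```python
import numpy as np

def fit_fisher_corrector(errors, correct, reg=1e-3):
    """One-shot linear discriminant separating error samples from correct ones.

    Returns (w, b): flag an input z as an error when w @ z > b.
    Non-iterative: a single linear solve, no retraining of the base model.
    """
    mu_e, mu_c = errors.mean(axis=0), correct.mean(axis=0)
    cov = np.cov(np.vstack([errors, correct]).T) + reg * np.eye(errors.shape[1])
    w = np.linalg.solve(cov, mu_e - mu_c)
    b = 0.5 * (w @ mu_e + w @ mu_c)  # threshold midway between projected means
    return w, b

# Toy demo on synthetic high-dimensional feature vectors (illustrative only):
rng = np.random.default_rng(2)
d = 200
correct = rng.standard_normal((500, d))
errors = rng.standard_normal((20, d)) + 0.8  # shifted cluster of error cases
w, b = fit_fisher_corrector(errors, correct)
flagged = (np.vstack([errors, correct]) @ w) > b
print(f"errors caught: {flagged[:20].mean():.2f}, "
      f"false alarms: {flagged[20:].mean():.2f}")
```

Because the corrector is a single hyperplane fitted in closed form, the cost of adding it is negligible next to retraining the underlying network, which is the practical appeal of this family of methods.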