no code implementations • 9 Feb 2024 • Neslihan Suzen, Evgeny M. Mirkes, Damian Roland, Jeremy Levesley, Alexander N. Gorban, Tim J. Coats
Electronic patient records (EPRs) produce a wealth of data but contain significant missing information.
no code implementations • 31 Jan 2024 • Ivan Y. Tyukin, Tatiana Tyukina, Daniel van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Penelope Allison
A key technical focus of the work is in providing performance guarantees for these new AI correctors through bounds on the probabilities of incorrect decisions.
no code implementations • 23 Nov 2023 • Innokentiy Kastalskiy, Andrei Zinovyev, Evgeny Mirkes, Victor Kazantsev, Alexander N. Gorban
In the context of natural disasters, human responses inevitably intertwine with natural factors.
no code implementations • 10 Oct 2023 • Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Ivan Y. Tyukin
High dimensional data can have a surprising property: pairs of data points may be easily separated from each other, or even from arbitrary subsets, with high probability using just simple linear classifiers.
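This separability property is easy to observe numerically. The sketch below (an illustrative experiment, not code from the paper; the sampler, the threshold alpha and the dimensions are arbitrary choices) samples points uniformly from a unit ball and counts how many can be split off from all the others by the simple linear rule <x, y> <= alpha * <x, x>:

```python
import math
import random

def sample_ball(d, rng):
    # Uniform point in the unit d-ball: Gaussian direction, radius ~ U^(1/d).
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in g))
    r = rng.random() ** (1.0 / d)
    return [r * v / norm for v in g]

def separable(x, others, alpha=0.9):
    # x is cut off from every other point by the hyperplane with normal x
    # whenever <x, y> <= alpha * <x, x> holds for all y in the rest of the set.
    xx = sum(v * v for v in x)
    return all(sum(a * b for a, b in zip(x, y)) <= alpha * xx for y in others)

rng = random.Random(0)
d, n = 200, 100
pts = [sample_ball(d, rng) for _ in range(n)]
sep = sum(separable(p, [q for q in pts if q is not p]) for p in pts)
print(f"{sep} of {n} points are linearly separable from the rest in d={d}")
```

In dimension 200 virtually every sampled point passes the test; rerunning with d = 3 shows the effect disappear in low dimensions.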
no code implementations • 13 Sep 2023 • Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, Desmond J. Higham, Danil Prokhorov, Oliver Sutton, Ivan Y. Tyukin, Qinghua Zhou
We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk, potentially subject to some weight regularisation.
no code implementations • 7 Sep 2023 • Oliver J. Sutton, Qinghua Zhou, Ivan Y. Tyukin, Alexander N. Gorban, Alexander Bastounis, Desmond J. Higham
We introduce a simple, generic and generalisable framework for which key behaviours observed in practical systems arise with high probability -- notably the simultaneous susceptibility of the (otherwise accurate) model to easily constructed adversarial attacks, and its robustness to random perturbations of the input data.
no code implementations • 7 Nov 2022 • Oliver J. Sutton, Alexander N. Gorban, Ivan Y. Tyukin
We consider the problem of data classification where the training set consists of just a few data points.
1 code implementation • 28 Aug 2022 • Evgeny M Mirkes, Jonathan Bac, Aziz Fouché, Sergey V. Stasenko, Andrei Zinovyev, Alexander N. Gorban
Domain adaptation is a popular paradigm in modern machine learning which aims at tackling the problem of divergence (or shift) between the labeled training and validation datasets (source domain) and a potentially large unlabeled dataset (target domain).
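As a toy illustration of correcting such a shift (a generic mean-alignment trick, not the method proposed in the paper), one can translate the labeled source data so that its feature-wise mean matches that of the unlabeled target sample:

```python
import random

def align_means(source, target):
    # Crude domain-shift correction: translate the source points so that
    # the source mean matches the target mean, feature by feature.
    d = len(source[0])
    mu_s = [sum(p[k] for p in source) / len(source) for k in range(d)]
    mu_t = [sum(p[k] for p in target) / len(target) for k in range(d)]
    shift = [mt - ms for ms, mt in zip(mu_s, mu_t)]
    return [[v + s for v, s in zip(p, shift)] for p in source]

rng = random.Random(3)
src = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(200)]
tgt = [[rng.gauss(2, 1), rng.gauss(-1, 1)] for _ in range(200)]
aligned = align_means(src, tgt)
mu = [sum(p[k] for p in aligned) / len(aligned) for k in range(2)]
print("aligned source mean:", [round(m, 2) for m in mu])
```

Real domain-adaptation methods also align higher-order statistics or learn a shared subspace, but the mean shift already removes the simplest form of divergence.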
no code implementations • 31 May 2022 • Neslihan Suzen, Alexander N. Gorban, Jeremy Levesley, Evgeny M. Mirkes
This paper introduces computational methods for semantic analysis and for quantifying the meaning of short scientific texts.
no code implementations • 31 Mar 2022 • Ivan Y. Tyukin, Oliver Sutton, Alexander N. Gorban
In this work we consider the problem of data classification in post-classical settings where the training set consists of merely a few data points.
no code implementations • 30 Mar 2022 • Qinghua Zhou, Alexander N. Gorban, Evgeny M. Mirkes, Jonathan Bac, Andrei Zinovyev, Ivan Y. Tyukin
Recent work by Mellor et al. (2021) showed that there may exist correlations between the accuracies of trained networks and the values of some easily computable measures defined on randomly initialised networks, which may make it possible to search tens of thousands of neural architectures without training.
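A minimal sketch of the underlying idea (inspired by, but deliberately much simpler than, Mellor et al.'s score; the network size, batch and scoring rule here are illustrative assumptions): score an untrained ReLU layer by how many distinct on/off activation patterns it assigns to a batch of inputs, with no training involved.

```python
import random

def activation_pattern_score(inputs, d_hidden, rng):
    # Training-free proxy score: count the distinct ReLU on/off patterns a
    # randomly initialised hidden layer assigns to a batch of inputs. More
    # distinct patterns suggests a more expressive initialisation.
    d_in = len(inputs[0])
    W = [[rng.gauss(0.0, 1.0) for _ in range(d_in)] for _ in range(d_hidden)]
    patterns = set()
    for x in inputs:
        pre = [sum(w * v for w, v in zip(row, x)) for row in W]
        patterns.add(tuple(p > 0 for p in pre))
    return len(patterns)

rng = random.Random(0)
batch = [[rng.gauss(0.0, 1.0) for _ in range(8)] for _ in range(32)]
score = activation_pattern_score(batch, d_hidden=16, rng=rng)
print("distinct activation patterns:", score)
```

Ranking thousands of candidate architectures by such a cheap statistic is what makes training-free architecture search feasible.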
no code implementations • 15 Feb 2022 • Susanna Gordleeva, Yuliya A. Tsybina, Mikhail I. Krivonosov, Ivan Y. Tyukin, Victor B. Kazantsev, Alexey A. Zaikin, Alexander N. Gorban
Three pools of stimuli patterns are considered: external patterns, patterns from the situation associative pool regularly presented to the network and learned by the network, and patterns already learned and remembered by astrocytes.
1 code implementation • 6 Sep 2021 • Jonathan Bac, Evgeny M. Mirkes, Alexander N. Gorban, Ivan Tyukin, Andrei Zinovyev
Dealing with uncertainty in applications of machine learning to real-life data critically depends on the knowledge of intrinsic dimensionality (ID).
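A compact example of estimating ID (a generic two-nearest-neighbour estimator in the spirit of Facco et al.; it is not necessarily one of the estimators studied in the paper, and the embedded data set is a made-up illustration):

```python
import math
import random

def two_nn_id(points):
    # Two-nearest-neighbour ID estimate: d_hat = n / sum_i log(r2_i / r1_i),
    # where r1_i, r2_i are the distances from point i to its two nearest
    # neighbours. Only ratios of small distances are used, so the estimate
    # reflects the intrinsic, not the ambient, dimension.
    n = len(points)
    s = 0.0
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        s += math.log(dists[1] / dists[0])
    return n / s

rng = random.Random(1)
# 3-D data embedded in 10-D by zero-padding: the intrinsic dimension is 3.
data = [[rng.random() for _ in range(3)] + [0.0] * 7 for _ in range(500)]
est = two_nn_id(data)
print(f"ambient dimension 10, estimated intrinsic dimension {est:.2f}")
```

The estimate lands near 3 despite the 10-dimensional ambient space, which is the behaviour an ID estimator must have to be useful for uncertainty quantification.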
no code implementations • 28 Jun 2021 • Alexander N. Gorban, Bogdan Grechuk, Evgeny M. Mirkes, Sergey V. Stasenko, Ivan Y. Tyukin
New stochastic separation theorems for data with fine-grained structure are formulated and proved.
no code implementations • 26 Jun 2021 • Ivan Y. Tyukin, Desmond J. Higham, Alexander Bastounis, Eliyas Woldegeorgis, Alexander N. Gorban
Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team.
no code implementations • 25 Apr 2021 • Ivan Y. Tyukin, Alexander N. Gorban, Muhammad H. Alkhudaydi, Qinghua Zhou
Few-shot and one-shot learning have been the subject of active and intensive research in recent years, with mounting evidence pointing to successful implementation and exploitation of few-shot learning algorithms in practice.
no code implementations • 11 Oct 2020 • Bogdan Grechuk, Alexander N. Gorban, Ivan Y. Tyukin
To manage errors and analyze vulnerabilities, the stochastic separation theorems should evaluate the probability that the dataset will be Fisher-separable in a given dimensionality and for a given class of distributions.
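The dependence of this probability on dimensionality is visible in a quick Monte-Carlo experiment (an illustrative sketch with standard Gaussian data and an arbitrary threshold alpha; it is a numerical companion to, not a computation from, the theorems):

```python
import random

def frac_separable(d, n, trials, rng, alpha=0.8):
    # Empirical probability that a fresh point x is Fisher-separable
    # (<x, y> <= alpha * <x, x> for every y) from n i.i.d. standard
    # Gaussian points in R^d.
    hits = 0
    for _ in range(trials):
        x = [rng.gauss(0.0, 1.0) for _ in range(d)]
        xx = sum(v * v for v in x)
        hits += all(
            sum(a * rng.gauss(0.0, 1.0) for a in x) <= alpha * xx
            for _ in range(n)
        )
    return hits / trials

rng = random.Random(7)
probs = {d: frac_separable(d, n=50, trials=40, rng=rng) for d in (5, 20, 100)}
print(probs)  # the separability probability grows towards 1 as d increases
```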
1 code implementation • 7 Jul 2020 • Sergey E. Golovenkin, Jonathan Bac, Alexander Chervov, Evgeny M. Mirkes, Yuliya V. Orlova, Emmanuel Barillot, Alexander N. Gorban, Andrei Zinovyev
Large observational clinical datasets become increasingly available for mining associations between various disease traits and administered therapy.
no code implementations • 13 May 2020 • Alexander N. Gorban, Evgeny M. Mirkes
This principle is expected to work both for artificial neural networks and for the selection and modification of important synaptic contacts in the brain.
no code implementations • 29 Apr 2020 • Evgeny M. Mirkes, Jeza Allohibi, Alexander N. Gorban
The curse of dimensionality causes the well-known and widely discussed problems for machine learning methods.
no code implementations • 28 Apr 2020 • Neslihan Suzen, Evgeny M. Mirkes, Alexander N. Gorban
The LSC is a scientific corpus of 1,673,350 abstracts, and the LScDC is a scientific dictionary whose words are extracted from the LSC.
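The corpus-to-dictionary step can be mimicked in a few lines (a toy analogue with made-up sentences; the actual LScDC construction applies its own tokenisation and filtering rules):

```python
import re
from collections import Counter

def build_dictionary(abstracts, min_count=2):
    # Toy dictionary construction: lowercase each abstract, tokenise on
    # alphabetic runs, count word occurrences across the whole corpus,
    # and keep only words seen at least min_count times.
    counts = Counter(
        w for text in abstracts for w in re.findall(r"[a-z]+", text.lower())
    )
    return {w: c for w, c in counts.items() if c >= min_count}

corpus = [
    "Electronic patient records produce a wealth of data.",
    "Records contain significant missing data and patient information.",
]
vocab = build_dictionary(corpus)
print(vocab)
```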
no code implementations • 9 Apr 2020 • Ivan Y. Tyukin, Desmond J. Higham, Alexander N. Gorban
We show that in both cases, i.e., in the case of an attack based on adversarial examples and in the case of a stealth attack, the dimensionality of the AI's decision-making space is a major contributor to the AI's susceptibility.
no code implementations • 14 Jan 2020 • Alexander N. Gorban, Valery A. Makarov, Ivan Y. Tyukin
High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning.
1 code implementation • 14 Dec 2019 • Neslihan Suzen, Evgeny M. Mirkes, Alexander N. Gorban
In this paper, we present a scientific corpus of abstracts of academic papers in English -- Leicester Scientific Corpus (LSC).
no code implementations • 30 Sep 2019 • Ivan Y. Tyukin, Alexander N. Gorban, Alistair A. McEwan, Sepehr Meshkinfamfard, Lixin Tang
Another feature of this approach is that, in the supervised setting, the computational complexity of training is linear in the number of training samples.
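One classical family of classifiers with this property is the nearest-centroid rule, whose training is a single pass over the data (a generic illustration of linear-time training; the paper's algorithm itself is more involved):

```python
def train_centroids(X, y):
    # Nearest-centroid classifier: one pass over the data, O(n * d) training.
    sums, counts = {}, {}
    for x, label in zip(X, y):
        if label not in sums:
            sums[label] = [0.0] * len(x)
            counts[label] = 0
        sums[label] = [s + v for s, v in zip(sums[label], x)]
        counts[label] += 1
    return {lbl: [s / counts[lbl] for s in sums[lbl]] for lbl in sums}

def predict(centroids, x):
    # Assign x to the class with the nearest centroid (squared distance).
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])),
    )

cents = train_centroids([[0, 0], [1, 1], [9, 9], [10, 10]], ["a", "a", "b", "b"])
print(predict(cents, [0.2, 0.4]))  # closest to the class "a" centroid
```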
no code implementations • 27 Jun 2019 • Alexander N. Gorban, Valeri A. Makarov, Ivan Y. Tyukin
This paper is the final part of the scientific discussion organised by the journal "Physics of Life Reviews" about the simplicity revolution in neuroscience and AI.
no code implementations • 12 Oct 2018 • Ivan Y. Tyukin, Alexander N. Gorban, Stephen Green, Danil Prokhorov
This paper presents a technology for simple and computationally efficient improvements of a generic Artificial Intelligence (AI) system, including Multilayer and Deep Learning neural networks.
2 code implementations • 20 Apr 2018 • Luca Albergante, Evgeny M. Mirkes, Huidong Chen, Alexis Martin, Louis Faure, Emmanuel Barillot, Luca Pinello, Alexander N. Gorban, Andrei Zinovyev
Large datasets represented by multidimensional data point clouds often possess non-trivial distributions with branching trajectories and excluded regions, with the recent single-cell transcriptomic studies of the developing embryo being notable examples.
no code implementations • 6 Feb 2018 • Alexander N. Gorban, Bogdan Grechuk, Ivan Y. Tyukin
We combine some ideas of learning in heterogeneous multiagent systems with new and original mathematical approaches for non-iterative corrections of errors of legacy AI systems.
no code implementations • 5 Sep 2017 • Ivan Y. Tyukin, Alexander N. Gorban, Konstantin Sofeikov, Ilya Romanenko
We consider the fundamental question: how could a legacy "student" Artificial Intelligence (AI) system learn from a legacy "teacher" AI system or a human expert without complete re-training and, most importantly, without requiring significant computational resources?
1 code implementation • 13 Mar 2017 • Zexun Chen, Bo Wang, Alexander N. Gorban
The Gaussian process model for vector-valued functions has been shown to be useful for multi-output prediction.
no code implementations • 3 Oct 2016 • Alexander N. Gorban, Ilya Romanenko, Richard Burton, Ivan Y. Tyukin
The tuning method that we propose enables dealing with errors without the need to re-train the system.
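In the spirit of such non-iterative correctors (a simplified sketch, not the authors' exact construction; the data, dimensions and threshold rule are illustrative assumptions): given one misclassified input and a sample of correctly handled ones, a separating hyperplane can be fitted in closed form and used to flag, and then override, similar inputs.

```python
import random

def fit_corrector(error_x, correct_xs):
    # One-shot linear corrector: take the error example itself as the
    # hyperplane normal and put the threshold halfway between its own
    # projection and the largest projection among the correct data.
    w = error_x
    proj_err = sum(a * b for a, b in zip(w, error_x))
    proj_max = max(sum(a * b for a, b in zip(w, x)) for x in correct_xs)
    theta = 0.5 * (proj_err + proj_max)
    return lambda x: sum(a * b for a, b in zip(w, x)) > theta  # True => correct

rng = random.Random(5)
d = 100
correct = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(200)]
err = [rng.gauss(0.0, 1.0) for _ in range(d)]
flag = fit_corrector(err, correct)
false_flags = sum(flag(x) for x in correct)
print("error flagged:", flag(err), "| correct points flagged:", false_flags)
```

In high dimension the error example projects onto itself far more strongly than unrelated points do, so a single hyperplane isolates it without disturbing the rest of the data; this is why no re-training of the underlying system is needed.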
1 code implementation • 26 Apr 2012 • Alexander N. Gorban, Annick Harel-Bellan, Nadya Morozova, Andrei Zinovyev
Synthesis of proteins is one of the most fundamental biological processes, which consumes a significant amount of cellular resources.