Search Results for author: Qinghua Zhou

Found 7 papers, 1 paper with code

Weakly Supervised Learners for Correction of AI Errors with Provable Performance Guarantees

no code implementations • 31 Jan 2024 • Ivan Y. Tyukin, Tatiana Tyukina, Daniel van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Penelope Allison

A key technical focus of the work is providing performance guarantees for these new AI correctors through bounds on the probabilities of incorrect decisions.
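
The abstract above mentions bounds on the probability of an incorrect decision but does not reproduce them. As a purely illustrative sketch (not the paper's result), the snippet below computes a standard one-sided Hoeffding confidence bound on an error probability from a held-out evaluation sample; the sample size, error count, and confidence level are hypothetical.

```python
import math

def hoeffding_upper_bound(k: int, n: int, delta: float) -> float:
    """Upper confidence bound on an error probability.

    Given k observed errors in n independent held-out trials, with
    probability at least 1 - delta the true error probability p satisfies
    p <= k/n + sqrt(ln(1/delta) / (2n)).
    """
    return min(1.0, k / n + math.sqrt(math.log(1.0 / delta) / (2.0 * n)))

# Hypothetical numbers: 3 incorrect decisions in 500 held-out cases,
# bound required to hold with 95% confidence.
print(hoeffding_upper_bound(k=3, n=500, delta=0.05))  # ~0.061
```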

Relative intrinsic dimensionality is intrinsic to learning

no code implementations • 10 Oct 2023 • Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Ivan Y. Tyukin

High dimensional data can have a surprising property: pairs of data points may be easily separated from each other, or even from arbitrary subsets, with high probability using just simple linear classifiers.

Binary Classification
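
The separability phenomenon described in the abstract can be seen numerically. A minimal sketch, assuming Gaussian data and a plain projection-based linear functional (not the paper's construction): a freshly sampled high-dimensional point is separated from a random subset by a simple linear classifier in essentially every trial.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 1000, 200, 100        # dimension, subset size, repetitions (hypothetical)

separated = 0
for _ in range(trials):
    X = rng.standard_normal((n, d)) / np.sqrt(d)   # a random subset of n points
    x = rng.standard_normal(d) / np.sqrt(d)        # a new random point
    # Linear functional z -> <z, x>: x is separated from the whole subset
    # if its own score exceeds every score in X, i.e. a separating
    # hyperplane orthogonal to x exists.
    if x @ x > (X @ x).max():
        separated += 1

print(f"separated from the subset in {separated}/{trials} trials")
```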

The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning

no code implementations • 13 Sep 2023 • Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, Desmond J. Higham, Danil Prokhorov, Oliver Sutton, Ivan Y. Tyukin, Qinghua Zhou

We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk, potentially subject to some weight regularisation.
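
For concreteness, the setting named in the abstract (empirical risk minimisation with weight regularisation) can be written down in a few lines. The sketch below is a generic example of L2-regularised logistic-loss minimisation on synthetic data, with hypothetical sizes and hyperparameters; it is not the construction analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 500, 20, 1e-2                    # sample size, dimension, L2 weight (hypothetical)
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))    # labels in {-1, +1}

def empirical_risk(w):
    # Mean logistic loss plus L2 weight regularisation.
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins)) + lam * np.dot(w, w)

def gradient(w):
    margins = y * (X @ w)
    sigma = 0.5 * (1.0 - np.tanh(margins / 2.0))           # = 1 / (1 + exp(margins))
    return -(X * (y * sigma)[:, None]).mean(axis=0) + 2.0 * lam * w

w = np.zeros(d)
for _ in range(2000):                                      # plain gradient descent
    w -= 0.5 * gradient(w)

print("regularised empirical risk:", empirical_risk(w))
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```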

How adversarial attacks can disrupt seemingly stable accurate classifiers

no code implementations • 7 Sep 2023 • Oliver J. Sutton, Qinghua Zhou, Ivan Y. Tyukin, Alexander N. Gorban, Alexander Bastounis, Desmond J. Higham

We introduce a simple, generic, and generalisable framework for which key behaviours observed in practical systems arise with high probability -- notably the simultaneous susceptibility of the (otherwise accurate) model to easily constructed adversarial attacks and its robustness to random perturbations of the input data.

Image Classification
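
The contrast described in the abstract can be reproduced even for a plain linear classifier in high dimension. The sketch below is an illustrative toy example (not the paper's framework), with a hypothetical dimension and perturbation budget: a step that is small relative to the data norm flips the great majority of predictions when aimed along the weight vector, while a random step of the same size flips only a small fraction.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 500, 2000                      # input dimension and sample size (hypothetical)
w = rng.standard_normal(d)
w /= np.linalg.norm(w)                # a fixed linear classifier: predict sign(w . x)

X = rng.standard_normal((n, d))
correct = X[X @ w > 0]                # points currently classified as +1
eps = 2.0                             # perturbation budget, small next to ||x|| ~ sqrt(d)

# Adversarial perturbation: a step of size eps against the weight direction.
adv_flipped = np.mean((correct - eps * w) @ w <= 0)

# Random perturbation: a step of size eps in a uniformly random direction.
U = rng.standard_normal(correct.shape)
U /= np.linalg.norm(U, axis=1, keepdims=True)
rand_flipped = np.mean((correct + eps * U) @ w <= 0)

print("perturbation size / typical ||x||:", eps / np.median(np.linalg.norm(X, axis=1)))
print(f"flipped by adversarial step: {adv_flipped:.1%}")
print(f"flipped by random step:      {rand_flipped:.1%}")
```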

Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation

no code implementations • 30 Mar 2022 • Qinghua Zhou, Alexander N. Gorban, Evgeny M. Mirkes, Jonathan Bac, Andrei Zinovyev, Ivan Y. Tyukin

Recent work by Mellor et al. (2021) showed that there may exist correlations between the accuracies of trained networks and the values of some easily computable measures defined on randomly initialised networks, which may make it possible to search tens of thousands of neural architectures without training them.

Neural Architecture Search
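
The paper's title names quasi-orthogonality as one such easily computable, training-free measure. As a hedged illustration of the underlying effect (not the paper's definition of the measure), the sketch below shows that the weight vectors of a randomly initialised layer become nearly pairwise orthogonal as the input dimension grows: their mean absolute cosine similarity shrinks roughly like 1/sqrt(dim).

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_abs_cosine(num_vectors: int, dim: int) -> float:
    """Mean absolute pairwise cosine similarity of random weight vectors."""
    W = rng.standard_normal((num_vectors, dim))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    cosines = np.abs(W @ W.T)[~np.eye(num_vectors, dtype=bool)]
    return float(cosines.mean())

# Randomly initialised "layers" become increasingly quasi-orthogonal
# as the input dimension grows (hypothetical layer width of 64 units).
for dim in (10, 100, 1000, 10000):
    print(f"dim={dim:6d}  mean |cos| = {mean_abs_cosine(64, dim):.4f}")
```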

Demystification of Few-shot and One-shot Learning

no code implementations • 25 Apr 2021 • Ivan Y. Tyukin, Alexander N. Gorban, Muhammad H. Alkhudaydi, Qinghua Zhou

Few-shot and one-shot learning have been the subject of active and intensive research in recent years, with mounting evidence pointing to successful implementation and exploitation of few-shot learning algorithms in practice.

One-Shot Learning
