1 code implementation • 15 Nov 2023 • Nataliia Molchanova, Vatsal Raina, Andrey Malinin, Francesco La Rosa, Adrien Depeursinge, Mark Gales, Cristina Granziera, Henning Muller, Mara Graziani, Meritxell Bach Cuadra
The results from a multi-centric MRI dataset of 334 patients demonstrate that our proposed measures more effectively capture model errors at the lesion and patient scales compared to measures that average voxel-scale uncertainty values.
1 code implementation • 10 Feb 2023 • Vatsal Raina, Nataliia Molchanova, Mara Graziani, Andrey Malinin, Henning Muller, Meritxell Bach Cuadra, Mark Gales
This work presents a detailed analysis of the recently proposed normalised Dice Similarity Coefficient (nDSC) for binary segmentation tasks: an adaptation of DSC that scales the precision at a fixed recall rate to counter the bias that class imbalance (lesion load) introduces into DSC.
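As an illustrative sketch (not necessarily the paper's exact formulation), the idea can be expressed by rescaling the false positives with a factor κ so that precision is effectively evaluated at a reference positive-class rate r̄ rather than the subject's own rate r; the functional form of κ below is an assumption:

```python
import numpy as np

def ndsc(pred, target, r_bar=0.001):
    """Sketch of a normalised DSC: rescale false positives so that
    precision is evaluated at a reference positive-class rate r_bar
    instead of the subject's own positive-class rate r (assumed form)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    tp = np.sum(pred & target)
    fp = np.sum(pred & ~target)
    fn = np.sum(~pred & target)
    r = target.mean()  # subject's positive-class (lesion load) rate
    # Assumed scaling factor: maps the subject's FP count to the
    # count expected at the reference rate r_bar.
    if r in (0.0, 1.0):
        kappa = 1.0
    else:
        kappa = (1.0 - r_bar) * r / (r_bar * (1.0 - r))
    return 2 * tp / (2 * tp + kappa * fp + fn)
```

When the subject's rate equals the reference rate, κ = 1 and nDSC reduces to the ordinary DSC.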
1 code implementation • 9 Nov 2022 • Nataliia Molchanova, Vatsal Raina, Andrey Malinin, Francesco La Rosa, Henning Muller, Mark Gales, Cristina Granziera, Mara Graziani, Meritxell Bach Cuadra
This paper focuses on the uncertainty estimation for white matter lesions (WML) segmentation in magnetic resonance imaging (MRI).
2 code implementations • 30 Jun 2022 • Andrey Malinin, Andreas Athanasopoulos, Muhamed Barakovic, Meritxell Bach Cuadra, Mark J. F. Gales, Cristina Granziera, Mara Graziani, Nikolay Kartashev, Konstantinos Kyriakopoulos, Po-Jui Lu, Nataliia Molchanova, Antonis Nikitakis, Vatsal Raina, Francesco La Rosa, Eli Sivena, Vasileios Tsarsitalidis, Efi Tsompopoulou, Elena Volf
This creates a need to be able to assess how robustly ML models generalize as well as the quality of their uncertainty estimates.
1 code implementation • EMNLP 2021 • Ivan Provilkov, Andrey Malinin
We demonstrate that MSR significantly reduces degradation with growing beam size and improves final translation quality on the IWSLT'15 En-Vi, IWSLT'17 En-Fr, and WMT'14 En-De datasets.
no code implementations • EMNLP 2021 • Carel van Niekerk, Andrey Malinin, Christian Geishauser, Michael Heck, Hsien-Chin Lin, Nurul Lubis, Shutong Feng, Milica Gašić
This highlights the importance of developing neural dialogue belief trackers that take uncertainty into account.
3 code implementations • 15 Jul 2021 • Andrey Malinin, Neil Band, Alexander Ganshin, German Chesnokov, Yarin Gal, Mark J. F. Gales, Alexey Noskov, Andrey Ploskonosov, Liudmila Prokhorenkova, Ivan Provilkov, Vatsal Raina, Vyas Raina, Denis Roginskiy, Mariya Shmatova, Panos Tigas, Boris Yangel
However, many tasks of practical interest have different modalities, such as tabular data, audio, text, or sensor data, which offer significant challenges involving regression and discrete or continuous structured prediction.
Ranked #2 on Weather Forecasting on the Shifts benchmark
1 code implementation • NeurIPS 2021 • Ekaterina Lobacheva, Maxim Kodryan, Nadezhda Chirkova, Andrey Malinin, Dmitry Vetrov
Training neural networks with batch normalization and weight decay has become a common practice in recent years.
1 code implementation • NeurIPS 2021 • Max Ryabinin, Andrey Malinin, Mark Gales
\emph{Ensemble Distribution Distillation} is an approach that allows a single model to efficiently capture both the predictive performance and uncertainty estimates of an ensemble.
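The training criterion can be sketched as maximising the likelihood of each ensemble member's categorical output under a Dirichlet distribution predicted by the single distilled model; the minimal NumPy version below (function name and shapes are illustrative) computes that negative log-likelihood for one input:

```python
import numpy as np
from math import lgamma

def end2_loss(alphas, member_probs, eps=1e-8):
    """Negative log-likelihood of ensemble members' categorical
    distributions under the predicted Dirichlet(alphas) -- a sketch
    of the Ensemble Distribution Distillation criterion.

    alphas:       (K,)   predicted Dirichlet concentration parameters
    member_probs: (M, K) probability vectors from M ensemble members
    """
    alphas = np.asarray(alphas, dtype=float)
    member_probs = np.asarray(member_probs, dtype=float)
    # Log normalising constant of the Dirichlet density.
    log_norm = lgamma(alphas.sum()) - sum(lgamma(a) for a in alphas)
    # Log density evaluated at each member's probability vector.
    log_dens = log_norm + ((alphas - 1.0) * np.log(member_probs + eps)).sum(axis=1)
    return -log_dens.mean()
```

Sharp concentrations (large α) reward members that agree; a flat Dirichlet (α = 1) assigns constant density, so the loss is the same for any member outputs.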
no code implementations • 24 Nov 2020 • Yassir Fathullah, Mark Gales, Andrey Malinin
It is, however, more challenging than the standard tasks investigated for distillation, as the prediction of any grammatical correction to a word is highly dependent on both the input sequence and the generated output history for that word.
1 code implementation • 20 Jun 2020 • Andrey Malinin, Sergey Chervontsev, Ivan Provilkov, Mark Gales
Prior Networks are a recently developed class of models which yield interpretable measures of uncertainty and have been shown to outperform state-of-the-art ensemble approaches on a range of tasks.
no code implementations • ICLR 2021 • Andrey Malinin, Liudmila Prokhorenkova, Aleksei Ustimenko
For many practical, high-risk applications, it is essential to quantify uncertainty in a model's predictions to avoid costly mistakes.
no code implementations • ICLR 2021 • Andrey Malinin, Mark Gales
Uncertainty estimation is important for ensuring safety and robustness of AI systems.
1 code implementation • NeurIPS 2019 • Andrey Malinin, Mark Gales
Second, taking advantage of this new training criterion, this paper investigates using Prior Networks to detect adversarial attacks and proposes a generalized form of adversarial training.
1 code implementation • ICLR 2020 • Andrey Malinin, Bruno Mlodozeniec, Mark Gales
The properties of EnD$^2$ are investigated on both an artificial dataset, and on the CIFAR-10, CIFAR-100 and TinyImageNet datasets, where it is shown that EnD$^2$ can approach the classification performance of an ensemble, and outperforms both standard DNNs and Ensemble Distillation on the tasks of misclassification and out-of-distribution input detection.
no code implementations • 6 Dec 2018 • Andrey Malinin, Mark Gales
In this work, Prior Networks are applied to adversarial attack detection using measures of uncertainty in a similar fashion to Monte-Carlo Dropout.
1 code implementation • NeurIPS 2018 • Andrey Malinin, Mark Gales
Experiments on synthetic data as well as on MNIST and CIFAR-10 show that, unlike previous non-Bayesian methods, PNs are able to distinguish between data and distributional uncertainty.
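The distinction rests on the standard information-theoretic decomposition: total uncertainty (entropy of the mean prediction) splits into expected data uncertainty plus distributional uncertainty (the mutual information). A minimal sketch over sampled categorical distributions (names illustrative):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy along the last axis (nats)."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def decompose_uncertainty(sampled_probs):
    """sampled_probs: (S, K) categorical distributions, e.g. drawn
    from a Prior Network's Dirichlet or taken from ensemble members.
    Returns (total, data, distributional) uncertainty."""
    mean_p = sampled_probs.mean(axis=0)
    total = entropy(mean_p)               # H[E[p]]  -- total uncertainty
    data = entropy(sampled_probs).mean()  # E[H[p]]  -- data uncertainty
    return total, data, total - data      # MI       -- distributional
```

Samples that confidently agree yield low values throughout; samples that confidently disagree (the out-of-distribution signature) yield high distributional uncertainty with low data uncertainty.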
no code implementations • ACL 2017 • Andrey Malinin, Anton Ragni, Kate Knill, Mark Gales
In experiments on data from the Business Language Testing Service (BULATS), the proposed approach outperforms GPs and DNNs with MCD in uncertainty-based rejection while achieving comparable grading performance.