no code implementations • 25 Feb 2022 • Bang Xiang Yong, Alexandra Brintrup
Despite numerous studies of deep autoencoders (AEs) for unsupervised anomaly detection, AEs still lack a way to express uncertainty in their predictions, which is crucial for ensuring safe and trustworthy machine learning systems in high-stakes applications.
no code implementations • 25 Feb 2022 • Bang Xiang Yong, Alexandra Brintrup
Learning the identity function renders AEs useless for anomaly detection.
no code implementations • 19 Oct 2021 • Bang Xiang Yong, Alexandra Brintrup
This paper aims to improve the explainability of autoencoder (AE) predictions by proposing two explanation methods based on the mean and epistemic uncertainty of the log-likelihood estimate, which naturally arise from the probabilistic formulation of the AE called the Bayesian Autoencoder (BAE).
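The two explanation signals described above can be sketched with a toy stand-in: a bootstrapped ensemble of linear (PCA-style) autoencoders approximating the BAE posterior. This is an assumption for illustration only; the paper's BAE may use a different posterior approximation (e.g. MCMC or variational inference), and the fixed observation noise `sigma` is a hypothetical choice. Per input feature, the ensemble mean of the Gaussian log-likelihood gives one explanation map and the across-ensemble variance gives the epistemic-uncertainty map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "normal" data: correlated 2-D samples.
X = rng.normal(size=(200, 2)) @ np.array([[1.0, 0.5], [0.5, 1.0]])

def train_linear_ae(X, k=1, seed=0):
    """PCA-style linear autoencoder fitted on a bootstrap resample."""
    r = np.random.default_rng(seed)
    Xb = X[r.integers(0, len(X), len(X))]
    mu = Xb.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xb - mu, full_matrices=False)
    return mu, Vt[:k]  # tied encoder/decoder weights

def per_feature_loglik(x, mu, W, sigma=0.5):
    """Gaussian log-likelihood of each input feature under its reconstruction.

    sigma is an assumed, fixed observation noise (illustrative choice).
    """
    x_hat = (x - mu) @ W.T @ W + mu
    return -0.5 * ((x - x_hat) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Ensemble of bootstrapped AEs as a crude posterior approximation.
models = [train_linear_ae(X, seed=s) for s in range(10)]

x_test = np.array([3.0, -3.0])  # off-manifold (anomalous) point
lls = np.stack([per_feature_loglik(x_test, mu, W) for mu, W in models])

mean_ll = lls.mean(axis=0)   # explanation 1: mean log-likelihood per feature
epistemic = lls.var(axis=0)  # explanation 2: epistemic uncertainty per feature
```

Features with low mean log-likelihood are the ones the model reconstructs poorly; features with high across-ensemble variance are the ones the model is epistemically unsure about, and the two maps need not agree.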
1 code implementation • 28 Jul 2021 • Bang Xiang Yong, Tim Pearce, Alexandra Brintrup
After an autoencoder (AE) has learnt to reconstruct one dataset, it might be expected that the likelihood on an out-of-distribution (OOD) input would be low.
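The failure mode hinted at above can be reproduced with a minimal stand-in: a linear autoencoder (again an illustrative substitute for the paper's deep AEs) trained on data near the line y = x within a bounded range. A point far outside the training range but on the same line is out-of-distribution, yet it reconstructs almost perfectly, so a reconstruction-based likelihood score fails to flag it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data lies near the line y = x, restricted to [-1, 1].
t = rng.uniform(-1, 1, size=(500, 1))
X = np.hstack([t, t]) + 0.01 * rng.normal(size=(500, 2))

mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:1]  # 1-D bottleneck

def recon_error(x):
    """Squared reconstruction error; low error ~ high likelihood."""
    x_hat = (x - mu) @ W.T @ W + mu
    return float(np.sum((x - x_hat) ** 2))

ood = np.array([100.0, 100.0])       # far outside the training range
off_manifold = np.array([1.0, -1.0])  # violates the learnt correlation

# recon_error(ood) is near zero: the OOD point lies in the span of the
# learnt subspace, so its likelihood stays high despite being OOD.
# recon_error(off_manifold) is large, even though it is closer to the data.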
no code implementations • 28 Jul 2021 • Bang Xiang Yong, Alexandra Brintrup
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty in a cyber-physical manufacturing system (CPMS) scenario.
1 code implementation • 28 Jul 2021 • Bang Xiang Yong, Yasmin Fathy, Alexandra Brintrup
Autoencoders are unsupervised models that have been used for detecting anomalies in multi-sensor environments.
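A common multi-sensor recipe of this kind can be sketched as follows, again using a linear autoencoder as a hypothetical stand-in for the paper's models: fit the AE on normal sensor readings, calibrate an anomaly threshold on the training reconstruction errors (the 99th percentile here is an arbitrary illustrative choice), and flag readings whose error exceeds it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated multi-sensor data: 3 channels driven by one shared latent factor.
latent = rng.normal(size=(1000, 1))
S = latent @ np.array([[1.0, 0.8, -0.5]]) + 0.05 * rng.normal(size=(1000, 3))

mu = S.mean(axis=0)
_, _, Vt = np.linalg.svd(S - mu, full_matrices=False)
W = Vt[:1]  # bottleneck matching the single latent factor

def score(x):
    """Squared reconstruction error (anomaly score)."""
    x_hat = (x - mu) @ W.T @ W + mu
    return np.sum((x - x_hat) ** 2, axis=-1)

# Calibrate the threshold on training data (illustrative 99th percentile).
threshold = np.quantile(score(S), 0.99)

normal_reading = np.array([1.0, 0.8, -0.5])  # consistent with sensor correlations
faulty_reading = np.array([1.0, -0.8, 0.5])  # sensors 2 and 3 contradict sensor 1

is_anomaly = score(faulty_reading) > threshold
```

The faulty reading is anomalous not because any single sensor value is extreme, but because the readings jointly violate the correlation structure the AE has learnt, which is what makes AEs attractive in multi-sensor settings.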