no code implementations • 17 Feb 2024 • Hadi M. Dolatabadi, Sarah M. Erfani, Christopher Leckie
Our analysis of these two failure cases of DNNs reveals that finding a unified solution for shortcut learning in DNNs is not out of reach, and TDA can play a significant role in forming such a framework.
no code implementations • 20 Sep 2023 • Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin I. P. Rubinstein
Certified robustness circumvents the fragility of defences against adversarial attacks, by endowing model predictions with guarantees of class invariance for attacks up to a calculated size.
no code implementations • 15 Aug 2023 • Shijie Liu, Andrew C. Cullen, Paul Montague, Sarah M. Erfani, Benjamin I. P. Rubinstein
Poisoning attacks can disproportionately influence model behaviour by making small changes to the training corpus.
no code implementations • 22 Jun 2023 • Maxwell T. West, Shu-Lok Tsang, Jia S. Low, Charles D. Hill, Christopher Leckie, Lloyd C. L. Hollenberg, Sarah M. Erfani, Muhammad Usman
Machine learning algorithms are powerful tools for data-driven tasks such as image classification and feature detection; however, their vulnerability to adversarial examples (input samples manipulated to fool the algorithm) remains a serious challenge.
no code implementations • 9 Feb 2023 • Andrew C. Cullen, Shijie Liu, Paul Montague, Sarah M. Erfani, Benjamin I. P. Rubinstein
In guaranteeing the absence of adversarial examples in an instance's neighbourhood, certification mechanisms play an important role in demonstrating the robustness of neural networks.
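To make the idea of certification concrete, the sketch below implements one widely used certification mechanism, randomized smoothing (Cohen et al., 2019), which guarantees class invariance within an l2 radius around an input. This is an illustrative example of the family of mechanisms discussed in these papers, not the specific method any of them proposes; the classifier `f`, noise level `sigma`, and sample count `n` are all assumptions for the sketch.

```python
import numpy as np
from scipy.stats import norm

def certified_radius(f, x, sigma=0.25, n=1000, seed=0):
    """Monte-Carlo sketch of a randomized-smoothing certificate
    (Cohen et al., 2019) -- an example certification mechanism, not the
    one studied in the papers above.

    f: classifier mapping a batch of inputs to integer labels.
    Returns (top_class, l2_radius): the smoothed classifier's prediction
    is constant within the radius, given the true top-class probability.
    """
    rng = np.random.default_rng(seed)
    # sample Gaussian perturbations of the input and classify each one
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    labels = f(x[None, :] + noise)
    counts = np.bincount(labels)
    top = counts.argmax()
    # estimated probability of the most frequent class under noise
    p_top = min(counts[top] / n, 1 - 1e-9)
    # certified l2 radius: sigma * inverse-normal-CDF of p_top
    radius = sigma * norm.ppf(p_top)
    return top, radius
```

In practice the class probability is replaced by a lower confidence bound so the certificate holds with high probability; the point estimate here keeps the sketch short.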
no code implementations • 22 Dec 2022 • Shu Lok Tsang, Maxwell T. West, Sarah M. Erfani, Muhammad Usman
A subclass of QML methods is quantum generative adversarial networks (QGANs), which have been studied as a quantum counterpart of the classical GANs widely used in image manipulation and generation tasks.
no code implementations • 23 Nov 2022 • Maxwell T. West, Sarah M. Erfani, Christopher Leckie, Martin Sevior, Lloyd C. L. Hollenberg, Muhammad Usman
Machine learning (ML) methods such as artificial neural networks are rapidly becoming ubiquitous in modern science, technology and industry.
1 code implementation • 12 Oct 2022 • Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin I. P. Rubinstein
In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution.
no code implementations • 16 Jun 2022 • Fanzhe Qu, Sarah M. Erfani, Muhammad Usman
However, the impact of coreset selection on the performance of quantum K-Means clustering has not been explored.
1 code implementation • 24 Sep 2021 • Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie, Benjamin I. P. Rubinstein
In this paper, we derive a lower bound and an upper bound for the LID value of a perturbed data point and demonstrate that these bounds, in particular the lower bound, have a positive correlation with the magnitude of the perturbation.
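For readers unfamiliar with LID, the sketch below shows the standard maximum-likelihood estimator of Local Intrinsic Dimensionality from nearest-neighbour distances (Levina–Bickel / Amsaleg et al.), the quantity whose bounds the paper analyses. It is a baseline estimator, not the paper's bounds; the neighbourhood size `k` is an assumption.

```python
import numpy as np

def lid_mle(x, reference, k=20):
    """MLE estimate of the Local Intrinsic Dimensionality (LID) of a
    point x with respect to a reference sample.

    LID ~ -( (1/k) * sum_i log(r_i / r_k) )^{-1}, where r_1..r_k are
    the k smallest non-zero distances from x to the reference points.
    """
    dists = np.linalg.norm(reference - x, axis=1)
    dists = np.sort(dists)
    dists = dists[dists > 0][:k]   # k nearest non-zero distances
    r_k = dists[-1]                # distance to the k-th neighbour
    return -1.0 / np.mean(np.log(dists / r_k))
```

On data drawn uniformly from a low-dimensional region, the estimate is close to that region's dimension, which is what makes LID useful for flagging perturbed points lying off the data manifold.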
1 code implementation • 15 Feb 2021 • Farbod Taymouri, Marcello La Rosa, Sarah M. Erfani
The results show improvements of up to four times over the state of the art in suffix and remaining-time prediction of event sequences, specifically in the realm of business process executions.
no code implementations • 1 Jan 2021 • Hanxun Huang, Xingjun Ma, Sarah M. Erfani, James Bailey
NAS can be performed via policy gradient, evolutionary algorithms, differentiable architecture search or tree-search methods.
no code implementations • 16 Nov 2020 • Elaheh AlipourChavary, Sarah M. Erfani, Christopher Leckie
In addition, as an application of CPs, we demonstrate that CPM is a highly effective method for detecting meaningful changes in network traffic.
1 code implementation • 21 Aug 2020 • Sandamal Weerasinghe, Sarah M. Erfani, Tansu Alpcan, Christopher Leckie, Justin Kopacz
Regression models, which are widely used in applications from engineering to financial forecasting, are vulnerable to targeted malicious attacks such as training data poisoning, through which adversaries can manipulate their predictions.
1 code implementation • 21 Aug 2020 • Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie
We introduce a weighted SVM against such attacks using K-LID as a distinguishing characteristic that de-emphasizes the effect of suspicious data samples on the SVM decision boundary.
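The weighting idea can be sketched with scikit-learn's per-sample weights: samples with higher suspicion scores are down-weighted so they contribute less to the decision boundary. Note the suspicion scores and the `1/(1+s)` weighting rule here are illustrative assumptions; in the paper the scores come from K-LID estimates with a scheme of its own.

```python
import numpy as np
from sklearn.svm import SVC

def fit_weighted_svm(X, y, suspicion):
    """Fit an SVM that de-emphasizes suspicious samples.

    suspicion: per-sample scores, higher = more likely poisoned
    (hypothetical stand-in for the paper's K-LID-based scores).
    """
    # simple illustrative weighting: clean samples get weight ~1,
    # highly suspicious samples get weight -> 0
    weights = 1.0 / (1.0 + np.asarray(suspicion))
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X, y, sample_weight=weights)
    return clf
```

Because `sample_weight` scales each sample's slack penalty, a near-zero weight effectively removes a poisoned point's influence on the support vectors without discarding it outright.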
2 code implementations • ICML 2018 • Xingjun Ma, Yisen Wang, Michael E. Houle, Shuo Zhou, Sarah M. Erfani, Shu-Tao Xia, Sudanthi Wijewickrema, James Bailey
Datasets with significant proportions of noisy (incorrect) class labels present challenges for training accurate Deep Neural Networks (DNNs).
Ranked #39 on Image Classification on mini WebVision 1.0
1 code implementation • ICLR 2018 • Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, James Bailey
Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction.
no code implementations • 8 Jan 2018 • Masud Moshtaghi, James C. Bezdek, Sarah M. Erfani, Christopher Leckie, James Bailey
An important part of cluster analysis is validating the quality of computationally obtained clusters.
no code implementations • 28 Jul 2017 • Tansu Alpcan, Sarah M. Erfani, Christopher Leckie
After many hype cycles and lessons from AI history, it is clear that a big conceptual leap is needed for crossing the starting line to kick-start mainstream AGI research.
no code implementations • 3 Aug 2016 • Fateme Fahiman, James C. Bezdek, Sarah M. Erfani, Christopher Leckie, Marimuthu Palaniswami
The two new algorithms are heuristic derivatives of fuzzy c-means (FCM).
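For reference, the baseline that the two new algorithms derive from can be sketched as the standard fuzzy c-means (FCM) alternating update of memberships and centres (Bezdek). This is plain FCM, not the paper's heuristic variants; cluster count `c`, fuzzifier `m`, and iteration budget are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: each point holds a soft membership in every
    cluster, and centres are membership-weighted means."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        # centres: weighted mean of the data under fuzzified memberships
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # distances from every centre to every point, shape (c, n)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)               # avoid division by zero
        # membership update: u_ij proportional to d_ij^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)
    return centers, U
```

With the fuzzifier `m -> 1` the memberships harden and FCM reduces to ordinary k-means, which is why FCM variants are a natural starting point for heuristic derivatives.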