no code implementations • 18 Mar 2024 • Anh Bui, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung
Generative models have demonstrated remarkable potential in generating visually impressive content from textual descriptions.
no code implementations • 6 Mar 2024 • Sean Lamont, Michael Norrish, Amir Dezfouli, Christian Walder, Paul Montague
We also provide a qualitative analysis, illustrating that improved performance is associated with more semantically-aware embeddings.
no code implementations • 15 Dec 2023 • Rollin Omari, Junae Kim, Paul Montague
In this paper we explore the challenges and strategies for enhancing the robustness of $k$-means clustering algorithms against adversarial manipulations.
no code implementations • 20 Sep 2023 • Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin I. P. Rubinstein
Certified robustness circumvents the fragility of defences against adversarial attacks by endowing model predictions with guarantees of class invariance for attacks up to a calculated size.
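One widely used mechanism for producing such guarantees is randomized smoothing, which certifies an L2 radius from the vote statistics of a noise-perturbed base classifier. The sketch below, in the style of Cohen et al., is illustrative only: the base classifier `f`, the noise level `sigma`, and the sample counts are assumptions, not details from this paper.

```python
import numpy as np
from scipy.stats import norm, binomtest

def certify_smoothed(f, x, sigma=0.25, n=1000, alpha=0.001):
    """Certify an L2 robustness radius for a Gaussian-smoothed classifier.

    f     : base classifier; maps a batch of inputs to integer labels
    x     : a single input as a numpy array
    sigma : standard deviation of the isotropic Gaussian noise
    n     : number of noise samples used for estimation
    alpha : allowed failure probability of the certificate
    """
    # Vote over Gaussian perturbations of x.
    noise = np.random.randn(n, *x.shape) * sigma
    votes = f(x[None, ...] + noise)                  # shape (n,)
    top_class = int(np.bincount(votes).argmax())
    n_top = int((votes == top_class).sum())

    # Conservative lower confidence bound on the top-class probability.
    p_lower = binomtest(n_top, n).proportion_ci(1 - alpha).low
    if p_lower <= 0.5:
        return None, 0.0                             # abstain: no certificate
    # The certified radius grows with sigma and with confidence
    # in the top class.
    return top_class, sigma * norm.ppf(p_lower)
```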
no code implementations • 15 Aug 2023 • Shijie Liu, Andrew C. Cullen, Paul Montague, Sarah M. Erfani, Benjamin I. P. Rubinstein
Poisoning attacks can disproportionately influence model behaviour by making small changes to the training corpus.
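As a toy illustration of the idea (not the attack studied in the paper), the sketch below flips a small random fraction of labels in a binary training set; even a few-percent corruption can visibly shift a learned decision boundary. The function name, the 3% fraction, and the binary-label assumption are all hypothetical.

```python
import numpy as np

def label_flip_poison(y, frac=0.03, seed=0):
    """Flip the labels of a small random fraction of a binary training
    set: a toy stand-in for a poisoning attack on the corpus."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]            # binary labels assumed
    return y_poisoned

# Hypothetical usage: train on clean vs. poisoned labels and compare, e.g.
#   clf_clean = LogisticRegression().fit(X, y)
#   clf_pois  = LogisticRegression().fit(X, label_flip_poison(y))
```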
1 code implementation • 26 Apr 2023 • Anh Bui, Trung Le, He Zhao, Quan Tran, Paul Montague, Dinh Phung
The key factor in the success of adversarial training is the capability to generate qualified and divergent adversarial examples that satisfy certain objectives/goals (e.g., finding adversarial examples that maximize the model losses for simultaneously attacking multiple models).
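A standard way to generate such loss-maximizing adversarial examples is projected gradient descent (PGD). The sketch below is a generic single-model PGD loop, not the paper's specific multi-model objective; to attack several models at once, one would sum their losses at the marked line. All hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    """Projected gradient descent: iteratively perturb x to maximize the
    classification loss while staying in an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        # For a multi-model attack, sum the losses of all ensemble
        # members here instead of using a single model's loss.
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)                    # keep valid pixel range
    return x_adv.detach()
```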
no code implementations • 9 Feb 2023 • Andrew C. Cullen, Shijie Liu, Paul Montague, Sarah M. Erfani, Benjamin I. P. Rubinstein
By guaranteeing the absence of adversarial examples in an instance's neighbourhood, certification mechanisms play an important role in demonstrating the robustness of neural networks.
1 code implementation • 12 Oct 2022 • Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin I. P. Rubinstein
In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution.
no code implementations • 21 Jun 2022 • Shuiqiao Yang, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, Salil S. Kanhere
In this paper, we disclose the TRAP attack, a Transferable GRAPh backdoor attack.
no code implementations • 29 Sep 2021 • Siqi Xia, Shijie Liu, Trung Le, Dinh Phung, Sarah Erfani, Benjamin I. P. Rubinstein, Christopher Leckie, Paul Montague
More specifically, by minimizing the Wasserstein (WS) distance of interest, an adversarial example is pushed toward the cluster of benign examples sharing the same label in the latent space, which helps strengthen the classifier's generalization to adversarial examples.
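One practical way to compute such a distance is sketched below: the sliced Wasserstein distance between adversarial and same-label benign latent codes, approximated with random 1-D projections. It assumes equal batch sizes and is not necessarily the authors' exact formulation.

```python
import torch

def sliced_wasserstein(z_adv, z_benign, n_proj=64):
    """Approximate the squared Wasserstein-2 distance between two
    equal-size batches of latent codes via random 1-D projections."""
    dim = z_adv.size(1)
    # Random unit directions in latent space.
    theta = torch.randn(n_proj, dim, device=z_adv.device)
    theta = theta / theta.norm(dim=1, keepdim=True)
    # Project both batches, sort each 1-D empirical distribution,
    # and compare them coordinate-wise.
    proj_a = (z_adv @ theta.T).sort(dim=0).values    # (batch, n_proj)
    proj_b = (z_benign @ theta.T).sort(dim=0).values
    return (proj_a - proj_b).pow(2).mean()
```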
1 code implementation • 25 Jan 2021 • Anh Bui, Trung Le, He Zhao, Paul Montague, Seyit Camtepe, Dinh Phung
Central to this approach is the selection of positive (similar) and negative (dissimilar) sets, which give the model the opportunity to 'contrast' between data and class representations in the latent space.
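As an illustration of how such positive and negative sets typically enter a training objective, the sketch below implements a generic InfoNCE-style contrastive loss; the tensor shapes and the temperature `tau` are assumptions, and this is not claimed to be the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss: pull the anchor toward the positive (similar)
    set and push it away from the negative (dissimilar) set.

    anchor    : (dim,) latent code
    positives : (n_pos, dim) similar representations
    negatives : (n_neg, dim) dissimilar representations
    """
    a = F.normalize(anchor, dim=0)
    pos = F.normalize(positives, dim=1)
    neg = F.normalize(negatives, dim=1)
    # Temperature-scaled cosine similarities.
    sim_pos = pos @ a / tau                          # (n_pos,)
    sim_neg = neg @ a / tau                          # (n_neg,)
    # Each positive is contrasted against every negative; column 0 of
    # each row of logits holds the positive pair.
    logits = torch.cat([sim_pos.unsqueeze(1),
                        sim_neg.expand(sim_pos.size(0), -1)], dim=1)
    labels = torch.zeros(sim_pos.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```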
no code implementations • 13 Oct 2020 • He Zhao, Thanh Nguyen, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung
Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.
1 code implementation • 21 Sep 2020 • Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung
An important technique of this approach is to control the transferability of adversarial examples among ensemble members.
1 code implementation • ECCV 2020 • Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung
The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application.
no code implementations • 3 Oct 2019 • He Zhao, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung
Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.
no code implementations • ICLR 2019 • Tue Le, Tuan Nguyen, Trung Le, Dinh Phung, Paul Montague, Olivier De Vel, Lizhen Qu
Due to the sharp increase in the severity of the threat posed by software vulnerabilities, detecting vulnerabilities in binary code has become an important concern in the software industry (for example, in embedded systems) and in the field of computer security.
no code implementations • 25 Feb 2019 • Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani
Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised learning setting.
no code implementations • 17 Aug 2018 • Yi Han, Benjamin I. P. Rubinstein, Tamas Abraham, Tansu Alpcan, Olivier De Vel, Sarah Erfani, David Hubczenko, Christopher Leckie, Paul Montague
Despite the successful application of machine learning (ML) in a wide range of domains, adaptability, the very property that makes machine learning desirable, can be exploited by adversaries to contaminate training and evade classification.