no code implementations • ALTA 2021 • Narjes Askarian, Ehsan Abbasnejad, Ingrid Zukerman, Wray Buntine, Gholamreza Haffari
In this paper, we propose a curriculum-based learning (CL) regime to increase the accuracy of VQA models trained on small datasets.
no code implementations • 8 Apr 2024 • Viet Quoc Vo, Ehsan Abbasnejad, Damith C. Ranasinghe
We study the unique, less well-understood problem of generating sparse adversarial samples simply by observing the score-based replies to model queries.
1 code implementation • 12 Mar 2024 • Ankit Sonthalia, Alexander Rubinstein, Ehsan Abbasnejad, Seong Joon Oh
This means that two independent solutions can be connected by a linear path with low loss, given one of them is appropriately permuted.
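A toy NumPy sketch of the linear-path check described above (illustrative only, not the paper's procedure; the network, data, and permutation here are invented): two copies of a small network differing only by a hidden-unit permutation are functionally identical, so the endpoints of the linear path have equal loss, and the question is what happens to the loss in between.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(W1, W2, X, Y):
    """MSE of a tiny one-hidden-layer ReLU network: y = W2 @ relu(W1 @ X)."""
    hidden = np.maximum(W1 @ X, 0.0)
    return float(np.mean((W2 @ hidden - Y) ** 2))

X = rng.normal(size=(3, 64))    # 3 input features, 64 samples
Y = rng.normal(size=(1, 64))

W1 = rng.normal(size=(5, 3))    # 5 hidden units
W2 = rng.normal(size=(1, 5))

# A second "solution": the same network with its hidden units permuted.
perm = rng.permutation(5)
W1b, W2b = W1[perm], W2[:, perm]

# Loss along the linear path between the two solutions in weight space.
barrier = [loss((1 - t) * W1 + t * W1b,
                (1 - t) * W2 + t * W2b, X, Y)
           for t in np.linspace(0.0, 1.0, 11)]

# The endpoints are functionally identical, so their losses match;
# whether the interior of the path also stays low is what linear
# mode connectivity asks.
print(barrier[0], barrier[-1])
```

Here the permutation is applied by construction; the papers in this line of work instead search for the permutation that connects two independently trained solutions.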
1 code implementation • 12 Mar 2024 • Mark D. McDonnell, Dong Gong, Ehsan Abbasnejad, Anton Van Den Hengel
We show here that the combination of a large language model and an image generation model can similarly provide useful premonitions as to how a continual learning challenge might develop over time.
no code implementations • 4 Mar 2024 • Damien Teney, Armand Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad
Prevailing explanations are based on implicit biases of gradient descent (GD) but they cannot account for the capabilities of models from gradient-free methods nor the simplicity bias recently observed in untrained networks.
no code implementations • 22 Dec 2023 • Cristian Rodriguez-Opazo, Edison Marrese-Taylor, Ehsan Abbasnejad, Hamed Damirchi, Ignacio M. Jara, Felipe Bravo-Marquez, Anton Van Den Hengel
Contrastive Language-Image Pretraining (CLIP) stands out as a prominent method for image representation learning.
no code implementations • 29 Nov 2023 • Hamed Damirchi, Cristian Rodríguez-Opazo, Ehsan Abbasnejad, Damien Teney, Javen Qinfeng Shi, Stephen Gould, Anton Van Den Hengel
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
no code implementations • 7 Nov 2023 • Iman Abbasnejad, Fabio Zambetta, Flora Salim, Timothy Wiley, Jeffrey Chan, Russell Gallagher, Ehsan Abbasnejad
SCONE-GAN presents an end-to-end image translation method that is shown to be effective for learning to generate realistic and diverse scenery images.
no code implementations • 9 Sep 2023 • Hai-Ming Xu, Lingqiao Liu, Hao Chen, Ehsan Abbasnejad, Rafael Felix
As an effective way to alleviate the burden of data annotation, semi-supervised learning (SSL) provides an attractive solution due to its ability to leverage both labeled and unlabeled data to build a predictive model.
1 code implementation • NeurIPS 2023 • Mark D. McDonnell, Dong Gong, Amin Parvaneh, Ehsan Abbasnejad, Anton Van Den Hengel
In this paper, we propose a concise and effective approach for CL with pre-trained models.
1 code implementation • 29 May 2023 • Jinan Zou, Maihao Guo, Yu Tian, YuHao Lin, Haiyao Cao, Lingqiao Liu, Ehsan Abbasnejad, Javen Qinfeng Shi
Identifying unexpected domain-shifted instances in natural language processing is crucial in real-world applications.
no code implementations • 26 May 2023 • Damien Teney, Jindong Wang, Ehsan Abbasnejad
We have found a new equivalence between two successful methods: selective mixup and resampling.
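A minimal sketch of "selective" mixup (all details here are illustrative assumptions, not the paper's exact pairing rule): each example is mixed with a partner drawn from a different group, and it is this non-uniform choice of partners that implicitly resamples the group distribution, which is the connection to resampling the excerpt refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def selective_mixup(x, y, group, alpha=0.4):
    """Mix each example with a partner from a *different* group.

    A sketch of selective mixup; real variants differ in how
    partners are chosen (by domain, by class, etc.).
    """
    lam = rng.beta(alpha, alpha)
    partner = np.empty(len(x), dtype=int)
    for i, g in enumerate(group):
        candidates = np.flatnonzero(group != g)  # cross-group only
        partner[i] = rng.choice(candidates)
    x_mix = lam * x + (1 - lam) * x[partner]
    y_mix = lam * y + (1 - lam) * y[partner]
    return x_mix, y_mix

# Toy data: two "domains" (groups) with 4 examples each.
x = rng.normal(size=(8, 5))
y = np.eye(2)[np.array([0, 0, 1, 1, 0, 1, 0, 1])]
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

x_mix, y_mix = selective_mixup(x, y, group)
# Forcing cross-group partners changes the effective sampling
# distribution over groups -- the resampling effect in question.
```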
no code implementations • CVPR 2023 • Islam Nassar, Munawar Hayat, Ehsan Abbasnejad, Hamid Rezatofighi, Gholamreza Haffari
Finally, ProtoCon addresses the poor training signal in the initial phase of training (due to fewer confident predictions) by introducing an auxiliary self-supervised loss.
no code implementations • ICCV 2023 • Samitha Herath, Basura Fernando, Ehsan Abbasnejad, Munawar Hayat, Shahram Khadivi, Mehrtash Harandi, Hamid Rezatofighi, Gholamreza Haffari
1. EBL can be used to improve the instance selection for a self-training task on the unlabelled target domain, and 2. aligning and normalizing energy scores can learn domain-invariant representations.
no code implementations • 24 Dec 2022 • Jinan Zou, Qingying Zhao, Yang Jiao, Haiyao Cao, Yanxi Liu, Qingsen Yan, Ehsan Abbasnejad, Lingqiao Liu, Javen Qinfeng Shi
Existing surveys on stock market prediction often focus on traditional machine learning methods instead of deep learning methods.
1 code implementation • 5 Dec 2022 • Bao Gia Doan, Ehsan Abbasnejad, Javen Qinfeng Shi, Damith C. Ranasinghe
We recognize that the adversarial learning approach for approximating the multi-modal posterior distribution of a Bayesian model can lead to mode collapse; consequently, the model's achievements in robustness and performance are sub-optimal.
1 code implementation • 19 Oct 2022 • Islam Nassar, Munawar Hayat, Ehsan Abbasnejad, Hamid Rezatofighi, Mehrtash Harandi, Gholamreza Haffari
We present LAVA, a simple yet effective method for multi-domain visual transfer learning with limited data.
no code implementations • 6 Jul 2022 • Damien Teney, Maxime Peyrard, Ehsan Abbasnejad
Underspecification refers to the existence of multiple models that are indistinguishable in their in-domain accuracy, even though they differ in other desirable properties such as out-of-distribution (OOD) performance.
no code implementations • 29 Jun 2022 • Violetta Shevchenko, Ehsan Abbasnejad, Anthony Dick, Anton Van Den Hengel, Damien Teney
In a simple setting similar to CLEVR, we find that CL representations also improve systematic generalization, and even match the performance of representations from a larger, supervised, ImageNet-pretrained model.
1 code implementation • 14 Jun 2022 • Jinan Zou, Haiyao Cao, Lingqiao Liu, YuHao Lin, Ehsan Abbasnejad, Javen Qinfeng Shi
In addition, we propose a self-supervised learning strategy based on SRLP to enhance the out-of-distribution generalization performance of our system.
Ranked #1 on Stock Price Prediction on Astock
1 code implementation • NAACL 2022 • Hai-Ming Xu, Lingqiao Liu, Ehsan Abbasnejad
Semi-supervised learning is a promising way to reduce the annotation cost for text classification.
3 code implementations • CVPR 2022 • Amin Parvaneh, Ehsan Abbasnejad, Damien Teney, Reza Haffari, Anton Van Den Hengel, Javen Qinfeng Shi
We identify unlabelled instances with sufficiently-distinct features by seeking inconsistencies in predictions resulting from interventions on their representations.
1 code implementation • 31 Jan 2022 • Viet Quoc Vo, Ehsan Abbasnejad, Damith C. Ranasinghe
The ability to extract information from solely the output of a machine learning model to craft adversarial perturbations to black-box models is a practical threat against real-world systems, such as autonomous cars or machine learning models exposed as a service (MLaaS).
1 code implementation • 10 Dec 2021 • Viet Quoc Vo, Ehsan Abbasnejad, Damith C. Ranasinghe
In our study, we first take a deep dive into recent state-of-the-art decision-based attacks from ICLR and S&P to highlight the costly nature of discovering low-distortion adversarial examples employing gradient estimation methods.
no code implementations • 19 Nov 2021 • Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe
Now, an adversary can arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, highly effective (achieving high attack success rates), and universal.
1 code implementation • 21 Jun 2021 • Miao Zhang, Steven Su, Shirui Pan, Xiaojun Chang, Ehsan Abbasnejad, Reza Haffari
A key challenge to the scalability and quality of the learned architectures is the need for differentiating through the inner-loop optimisation.
Ranked #22 on Neural Architecture Search on NAS-Bench-201, CIFAR-10
1 code implementation • CVPR 2022 • Damien Teney, Ehsan Abbasnejad, Simon Lucey, Anton Van Den Hengel
The method - the first to evade the simplicity bias - highlights the need for a better understanding and control of inductive biases in deep learning.
1 code implementation • CVPR 2021 • Islam Nassar, Samitha Herath, Ehsan Abbasnejad, Wray Buntine, Gholamreza Haffari
We train two classifiers with two different views of the class labels: one classifier uses the one-hot view of the labels and disregards any potential similarity among the classes, while the other uses a distributed view of the labels and groups potentially similar classes together.
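The two label views can be sketched as follows (a toy illustration; the class grouping below is a made-up assumption, whereas the paper derives class similarity rather than hard-coding it):

```python
import numpy as np

n_classes = 4
labels = np.array([0, 2, 3, 1])

# View 1: one-hot -- every class is treated as equally distinct.
one_hot = np.eye(n_classes)[labels]

# View 2: distributed -- similar classes share probability mass.
# Hypothetical grouping: classes {0, 1} and {2, 3} are semantically close.
groups = {0: [0, 1], 1: [0, 1], 2: [2, 3], 3: [2, 3]}
distributed = np.zeros((len(labels), n_classes))
for i, c in enumerate(labels):
    distributed[i, c] = 0.7                # most mass on the true class
    for sib in groups[c]:
        if sib != c:
            distributed[i, sib] = 0.3      # rest shared within the group

# Each classifier would minimise cross-entropy against its own view;
# the one-hot view sharpens decisions while the distributed view
# preserves inter-class structure.
```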
no code implementations • 28 Feb 2021 • Mahdi Kazemi Moghaddam, Ehsan Abbasnejad, Qi Wu, Javen Shi, Anton Van Den Hengel
ForeSIT is trained to imagine the recurrent latent representation of a future state that leads to success, e.g. either a sub-goal state that is important to reach before the target, or the goal state itself.
no code implementations • ICCV 2021 • Damien Teney, Ehsan Abbasnejad, Anton Van Den Hengel
subsets treated as multiple training environments can guide the learning of models with better out-of-distribution generalization.
no code implementations • NeurIPS 2020 • Amin Parvaneh, Ehsan Abbasnejad, Damien Teney, Qinfeng Shi, Anton Van Den Hengel
The task of vision-and-language navigation (VLN) requires an agent to follow text instructions to find its way through simulated household environments.
no code implementations • CVPR 2020 • Ehsan Abbasnejad, Damien Teney, Amin Parvaneh, Javen Shi, Anton van den Hengel
It is particularly remarkable that this success has been achieved on the basis of comparatively small datasets, given the scale of the problem.
no code implementations • NeurIPS 2020 • Damien Teney, Kushal Kafle, Robik Shrestha, Ehsan Abbasnejad, Christopher Kanan, Anton Van Den Hengel
Out-of-distribution (OOD) testing is increasingly popular for evaluating a machine learning system's ability to generalize beyond the biases of a training set.
no code implementations • 7 Apr 2020 • Mahdi Kazemi Moghaddam, Qi Wu, Ehsan Abbasnejad, Javen Qinfeng Shi
Through empirical studies, we show that our agent, dubbed the optimistic agent, has a more realistic estimate of the state value during a navigation episode, which leads to a higher success rate.
no code implementations • 2 Apr 2020 • Mehdi Neshat, Meysam Majidi Nezhad, Ehsan Abbasnejad, Daniele Groppi, Azim Heydari, Lina Bertling Tjernberg, Davide Astiaso Garcia, Bradley Alexander, Markus Wagner
Reliable wind turbine power prediction is imperative to the planning, scheduling and control of wind energy farms for stable power production.
no code implementations • 27 Feb 2020 • Damien Teney, Ehsan Abbasnejad, Anton Van Den Hengel
subsets treated as multiple training environments can guide the learning of models with better out-of-distribution generalization.
no code implementations • 21 Feb 2020 • Mehdi Neshat, Meysam Majidi Nezhad, Ehsan Abbasnejad, Lina Bertling Tjernberg, Davide Astiaso Garcia, Bradley Alexander, Markus Wagner
Accurate short-term wind speed forecasting is essential for large-scale integration of wind power generation.
no code implementations • 30 Sep 2019 • Damien Teney, Ehsan Abbasnejad, Anton Van Den Hengel
We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently from the amount of supervised data used.
no code implementations • 25 Sep 2019 • Damien Teney, Ehsan Abbasnejad, Anton Van Den Hengel
We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently from the amount of supervised data used.
1 code implementation • 9 Aug 2019 • Bao Gia Doan, Ehsan Abbasnejad, Damith C. Ranasinghe
Notably, in contrast to existing approaches, our approach removes the need for ground-truth labelled data, anomaly detection methods, model retraining, or prior knowledge of an attack for Trojan detection.
no code implementations • 6 Jul 2019 • Mehdi Neshat, Ehsan Abbasnejad, Qinfeng Shi, Bradley Alexander, Markus Wagner
The installed amount of renewable energy has expanded massively in recent years.
no code implementations • 22 Jun 2019 • Yinglong Wang, Qinfeng Shi, Ehsan Abbasnejad, Chao Ma, Xiaoping Ma, Bing Zeng
Instead of using the estimated atmospheric light directly to learn a network that calculates transmission, we use it as ground truth and design a simple but novel triangle-shaped network structure to learn the atmospheric light for every rainy image; we then fine-tune the network to obtain a better estimate of the atmospheric light while training the transmission network.
no code implementations • 11 May 2019 • Mohammad Mahdi Kazemi Moghaddam, Ehsan Abbasnejad, Javen Shi
We empirically show the capability of our approach by achieving state-of-the-art results on the MERL Shopping dataset.
no code implementations • 7 May 2019 • Amin Parvaneh, Ehsan Abbasnejad, Qi Wu, Javen Qinfeng Shi, Anton Van Den Hengel
Negotiation, as an essential and complicated aspect of online shopping, is still challenging for an intelligent agent.
no code implementations • 6 Apr 2019 • Anthony Manchin, Ehsan Abbasnejad, Anton Van Den Hengel
Attention models have had a significant positive impact on deep learning across a range of tasks.
no code implementations • CVPR 2020 • Ehsan Abbasnejad, Iman Abbasnejad, Qi Wu, Javen Shi, Anton Van Den Hengel
For each potential action a distribution of the expected outcomes is calculated, and the value of the potential information gain assessed.
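The expected-information-gain calculation described above can be sketched as follows (all distributions and action names here are hypothetical placeholders, not the paper's model): for each candidate action, weight the entropy of each possible post-outcome belief by the outcome's probability, and compare against the current belief's entropy.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Hypothetical belief over 3 answer candidates before acting.
prior = np.array([0.5, 0.3, 0.2])

# For each action: (outcome probabilities, posterior belief per outcome).
# Numbers are illustrative only.
actions = {
    "ask_colour": ([0.6, 0.4], [np.array([0.8, 0.1, 0.1]),
                                np.array([0.1, 0.6, 0.3])]),
    "ask_shape":  ([0.5, 0.5], [np.array([0.5, 0.3, 0.2]),
                                np.array([0.5, 0.3, 0.2])]),
}

gains = {}
for name, (p_outcome, posteriors) in actions.items():
    expected_posterior_h = sum(p * entropy(q)
                               for p, q in zip(p_outcome, posteriors))
    # Information gain: expected reduction in uncertainty.
    gains[name] = entropy(prior) - expected_posterior_h

# "ask_shape" leaves the belief unchanged, so its gain is zero;
# an agent would prefer the action with the largest expected gain.
best = max(gains, key=gains.get)
print(best, gains)
```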
no code implementations • CVPR 2019 • Ehsan Abbasnejad, Qi Wu, Javen Shi, Anton Van Den Hengel
We propose a solution to this problem based on a Bayesian model of the uncertainty in the implicit model maintained by the visual dialogue agent, and in the function used to select an appropriate output.
no code implementations • 20 Nov 2018 • Alireza Abedin Varamin, Ehsan Abbasnejad, Qinfeng Shi, Damith Ranasinghe, Hamid Rezatofighi
Automatic recognition of human activities from time-series sensor data (referred to as HAR) is a growing area of research in ubiquitous computing.
1 code implementation • British Machine Vision Conference 2018 • Masoud Abdi, Ehsan Abbasnejad, Chee Peng Lim, Saeid Nahavandi
In this paper, we propose a novel method that seeks to predict the 3d position of the hand using both synthetic and partially-labeled real data.
no code implementations • ICLR 2018 • Ehsan Abbasnejad, Javen Shi, Anton Van Den Hengel
To facilitate this, we develop both theoretical and practical building blocks, using which one can construct different neural networks using a large range of metrics, as well as ensure the Lipschitz condition and sufficient capacity of the networks.
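One common way to enforce a Lipschitz condition on a linear layer is to bound its spectral norm; the sketch below shows this standard device as a stand-in (it is an assumption for illustration, not necessarily the construction used in the paper):

```python
import numpy as np

def lipschitz_normalize(W, target=1.0):
    """Rescale W so its spectral norm (largest singular value) is at
    most `target`; a linear map with such W is `target`-Lipschitz
    with respect to the Euclidean norm."""
    sigma = np.linalg.norm(W, 2)   # spectral norm for a 2-D array
    return W if sigma <= target else W * (target / sigma)

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4)) * 3.0      # deliberately large weights
Wn = lipschitz_normalize(W)
print(np.linalg.norm(Wn, 2))           # bounded by 1.0
```

Composing such layers with 1-Lipschitz activations (e.g. ReLU) keeps the whole network 1-Lipschitz, which is the kind of guarantee the excerpt alludes to.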
no code implementations • ICCV 2017 • S. Hamid Rezatofighi, Vijay Kumar B G, Anton Milan, Ehsan Abbasnejad, Anthony Dick, Ian Reid
This paper addresses the task of set prediction using deep learning.
no code implementations • CVPR 2017 • Ehsan Abbasnejad, Anthony Dick, Anton Van Den Hengel
This paper presents an infinite variational autoencoder (VAE) whose capacity adapts to suit the input data.