Search Results for author: Mahdieh Abbasi

Found 10 papers, 3 papers with code

Self-supervised Robust Object Detectors from Partially Labelled Datasets

no code implementations 23 May 2020 Mahdieh Abbasi, Denis Laurendeau, Christian Gagné

With the goal of training \emph{one integrated robust object detector with high generalization performance}, we propose a training framework to overcome the missing-label challenge of merged datasets.

Object Detection +1

Toward Adversarial Robustness by Diversity in an Ensemble of Specialized Deep Neural Networks

no code implementations 17 May 2020 Mahdieh Abbasi, Arezoo Rajabi, Christian Gagné, Rakesh B. Bobba

Using MNIST and CIFAR-10, we empirically verify the ability of our ensemble to detect a large portion of well-known black-box adversarial examples, which leads to a significant reduction in the risk rate of adversaries, at the expense of a small increase in the risk rate of clean samples.

Adversarial Robustness

Toward Metrics for Differentiating Out-of-Distribution Sets

1 code implementation 18 Oct 2019 Mahdieh Abbasi, Changjian Shui, Arezoo Rajabi, Christian Gagné, Rakesh B. Bobba

We empirically verify that the most protective OOD sets -- selected according to our metrics -- lead to A-CNNs with significantly lower generalization errors than the A-CNNs trained on the least protective ones.

Out of Distribution (OOD) Detection

Controlling Over-generalization and its Effect on Adversarial Examples Detection and Generation

no code implementations ICLR 2019 Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B. Bobba, Christian Gagné

As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples.
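The "interpolated in-distribution samples" idea can be illustrated with a minimal sketch: convex combinations of random pairs of training samples, to be labeled as the extra class. This is only a plausible illustration of the phrase, not the authors' exact recipe; the function name and mixing range are assumptions.

```python
import numpy as np

# Hypothetical sketch: build "interpolated in-distribution samples" as
# convex combinations of random training-sample pairs. The mixing
# coefficients are drawn away from 0 and 1 so the result lies between
# samples rather than on top of one. Illustrative only.
def interpolated_samples(x, n_samples, lo=0.3, hi=0.7, seed=0):
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(x), n_samples)          # first endpoint of each pair
    j = rng.integers(0, len(x), n_samples)          # second endpoint
    lam = rng.uniform(lo, hi, size=(n_samples, 1))  # mixing coefficient per sample
    return lam * x[i] + (1 - lam) * x[j]

# Toy usage: interpolate between two 2-D points.
x = np.array([[0.0, 0.0], [1.0, 1.0]])
print(interpolated_samples(x, 5))
```

Each output row stays inside the convex hull of the chosen pair, so the synthetic points remain close to the data manifold while belonging to no original class.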

A Principled Approach for Learning Task Similarity in Multitask Learning

1 code implementation 21 Mar 2019 Changjian Shui, Mahdieh Abbasi, Louis-Émile Robitaille, Boyu Wang, Christian Gagné

Hence, an important aspect of multitask learning is to understand the similarities within a set of tasks.

Controlling Over-generalization and its Effect on Adversarial Examples Generation and Detection

no code implementations 21 Aug 2018 Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B. Bobba, Christian Gagné

As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples.

Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning

no code implementations 24 Apr 2018 Mahdieh Abbasi, Arezoo Rajabi, Christian Gagné, Rakesh B. Bobba

Detection and rejection of adversarial examples in security-sensitive and safety-critical systems using deep CNNs is essential.

Out-distribution training confers robustness to deep neural networks

1 code implementation 20 Feb 2018 Mahdieh Abbasi, Christian Gagné

The ease with which adversarial instances can be generated in deep neural networks raises fundamental questions about their functioning, and concerns about their use in critical systems.

Robustness to Adversarial Examples through an Ensemble of Specialists

no code implementations 22 Feb 2017 Mahdieh Abbasi, Christian Gagné

We propose using an ensemble of diverse specialists, where speciality is defined according to the confusion matrix.
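One plausible reading of "speciality defined by the confusion matrix" can be sketched as follows: for each class, form a specialist subset containing that class and the classes it is most often confused with. This is a hedged illustration only, not the authors' exact construction; the function name and the choice of k are assumptions.

```python
import numpy as np

# Hypothetical sketch: derive specialist class subsets from a confusion
# matrix (rows = true class, cols = predicted class). Each specialist
# covers a class plus the k classes it is most frequently mistaken for.
def specialist_subsets(conf_mat, k=2):
    n = conf_mat.shape[0]
    subsets = []
    for i in range(n):
        row = conf_mat[i].astype(float).copy()
        row[i] = -1.0                          # ignore correct predictions
        confused = np.argsort(row)[::-1][:k]   # k most-confused classes
        subsets.append(sorted({i, *confused.tolist()}))
    return subsets

# Toy 4-class confusion matrix.
cm = np.array([[50, 3, 1, 0],
               [4, 48, 2, 0],
               [0, 1, 52, 5],
               [0, 0, 6, 49]])
print(specialist_subsets(cm, k=2))  # → [[0, 1, 2], [0, 1, 2], [1, 2, 3], [1, 2, 3]]
```

Training one classifier per such subset yields specialists whose disagreements on ambiguous inputs can then be exploited by the ensemble.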

Alternating Direction Method of Multipliers for Sparse Convolutional Neural Networks

no code implementations 5 Nov 2016 Farkhondeh Kiaee, Christian Gagné, Mahdieh Abbasi

This method alternates between promoting the sparsity of the network and optimizing the recognition performance, which allows us to exploit the two-part structure of the corresponding objective functions.
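The alternation between fitting performance and promoting sparsity can be illustrated with a generic ADMM loop on an ℓ1-regularized least-squares problem, used here as a simplified stand-in for a network's objective. All names and parameter values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Generic ADMM sketch for: min_w 0.5*||A w - b||^2 + lam*||z||_1, s.t. w = z.
# The w-update fits the data (a quadratic solve); the z-update promotes
# sparsity via soft-thresholding; u is the scaled dual variable.
def admm_sparse(A, b, lam=0.5, rho=1.0, iters=100):
    n = A.shape[1]
    w = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # precompute quadratic solve
    for _ in range(iters):
        w = M @ (A.T @ b + rho * (z - u))          # optimize fit
        v = w + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # sparsify
        u = u + w - z                              # dual update
    return z

# Toy recovery of a 2-sparse weight vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
true_w = np.zeros(10); true_w[[1, 4]] = [2.0, -3.0]
b = A @ true_w
w_hat = admm_sparse(A, b)
```

The same two-phase structure carries over when the quadratic fit term is replaced by a network's recognition loss: one step improves accuracy, the other zeroes out small weights.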
