57 papers with code · Adversarial


# Latest papers with code

In this paper, we show that open-set recognition systems are vulnerable to adversarial attacks.

2 · 02 Sep 2020

29 Jul 2020 · Muzammal-Naseer/SAT

In contrast to existing adversarial training methods that use only class-boundary information (e.g., a cross-entropy loss), we propose to exploit additional information from the feature space to craft stronger adversaries, which are in turn used to learn a robust model.

2 · 29 Jul 2020
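The idea of attacking in feature space rather than at the class boundary can be sketched in a few lines. Everything below is illustrative, not from the paper: a toy linear "feature extractor" (so the gradient of the feature distance is analytic) and a signed-gradient ascent step inside an L-inf ball.

```python
import random

# Toy linear "feature extractor": f(x) = W @ x (2 features, 3 inputs).
# W and all shapes are assumptions of this sketch, not the paper's model.
W = [[0.5, -1.0, 0.3],
     [1.2, 0.4, -0.7]]

def features(x):
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

def feature_space_attack(x, eps=0.1, steps=10, lr=0.05):
    """Craft delta maximizing the feature-space distance
    ||f(x + delta) - f(x)||^2, kept inside an L-inf ball of radius eps."""
    random.seed(0)
    delta = [random.uniform(-1e-3, 1e-3) for _ in x]
    for _ in range(steps):
        # For a linear f, the gradient of ||W delta||^2 is 2 * W^T W delta.
        Wd = features(delta)
        grad = [2 * sum(W[i][j] * Wd[i] for i in range(len(W)))
                for j in range(len(x))]
        # Signed-gradient ascent, then project back into the eps-ball.
        delta = [max(-eps, min(eps, d + lr * (1.0 if g >= 0 else -1.0)))
                 for d, g in zip(delta, grad)]
    return delta

x = [1.0, 2.0, -1.0]
delta = feature_space_attack(x)
f_clean = features(x)
f_adv = features([a + b for a, b in zip(x, delta)])
dist2 = sum((a - b) ** 2 for a, b in zip(f_clean, f_adv))
```

A cross-entropy adversary would instead ascend the classification loss; the point of the feature-space variant is that the objective does not depend on the decision boundary at all.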

# A Unified Framework for Analyzing and Detecting Malicious Examples of DNN Models

26 Jun 2020 · kaidi-jin/backdoor_samples_detection

In this paper, we present a unified framework for detecting malicious examples and protecting the inference results of Deep Learning models.

2 · 26 Jun 2020

Hence we propose smooth adversarial training (SAT), in which we replace ReLU with its smooth approximations to strengthen adversarial training.

16 · 25 Jun 2020
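The replacement itself is a one-liner. Softplus (shown here with a sharpness parameter `beta`) is one standard smooth approximation of ReLU; the specific activation and `beta` value below are assumptions of this sketch, not necessarily the paper's choice.

```python
import math

def relu(x):
    return max(0.0, x)

def softplus(x, beta=10.0):
    """Smooth ReLU approximation: (1/beta) * log(1 + exp(beta * x)).
    Larger beta -> closer to ReLU, but unlike ReLU the function is
    differentiable everywhere with a nonzero gradient, which is what
    SAT exploits for the inner maximization of adversarial training."""
    z = beta * x
    if z > 30.0:          # avoid overflow; softplus ~= identity here
        return x
    return math.log1p(math.exp(z)) / beta

def softplus_grad(x, beta=10.0):
    # The derivative is the logistic sigmoid of beta*x: smooth, never 0.
    return 1.0 / (1.0 + math.exp(-beta * x))
```

Away from 0 the two functions agree closely; the difference is the gradient near 0, where ReLU is non-differentiable and flat on the negative side.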

# PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning

PatchAttack induces misclassifications by superimposing small textured patches on the input image.

22 · 12 Apr 2020
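The superimposition step is simple to sketch. In the paper an RL agent chooses the patch texture and placement; here both are fixed toy values, and the image is a bare H×W grid of pixel intensities.

```python
def apply_patch(image, patch, top, left):
    """Superimpose a small textured patch onto an image, returning a new
    image (the original is left untouched). Patch contents and placement
    are toy values here; choosing them is the attack's search problem."""
    out = [row[:] for row in image]
    for i, patch_row in enumerate(patch):
        for j, value in enumerate(patch_row):
            out[top + i][left + j] = value
    return out

image = [[0.0] * 6 for _ in range(6)]          # blank 6x6 "image"
patch = [[1.0, 0.5],
         [0.5, 1.0]]                            # 2x2 textured patch
attacked = apply_patch(image, patch, top=2, left=3)
```

Because the patch occludes only a few pixels, the attack stays black-box-friendly: it needs only the model's predicted label to score a candidate placement, not gradients.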

# Toward Adversarial Robustness via Semi-supervised Robust Training

16 Mar 2020 · THUYimingLi/Semi-supervised_Robust_Training

In this work, we propose a novel defense method, robust training (RT), which jointly minimizes two separate risks ($R_{stand}$ and $R_{rob}$), defined with respect to the benign example and its neighborhood, respectively.

8 · 16 Mar 2020
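The joint objective can be sketched directly from that description. The trade-off weight `lam`, the worst-case-over-sampled-neighbors approximation, and the toy loss below are all assumptions of this sketch, not the paper's exact estimator.

```python
def standard_risk(loss, x, y):
    # R_stand: loss on the benign example itself.
    return loss(x, y)

def robustness_risk(loss, x, y, neighborhood):
    # R_rob (sketch): worst-case loss over sampled neighbors of x.
    return max(loss(x_nbr, y) for x_nbr in neighborhood(x))

def robust_training_objective(loss, x, y, neighborhood, lam=1.0):
    """Joint objective R_stand + lam * R_rob, minimized over the model."""
    return standard_risk(loss, x, y) + lam * robustness_risk(loss, x, y, neighborhood)

# Toy setup: a fixed linear model 2*x scored with squared error,
# and a two-point neighborhood around the benign example.
def loss(x, y):
    return (2.0 * x - y) ** 2

def neighborhood(x):
    return [x - 0.1, x + 0.1]

obj = robust_training_objective(loss, x=1.0, y=2.0, neighborhood=neighborhood)
```

Here the benign example is classified perfectly ($R_{stand} = 0$), so the objective is driven entirely by the neighborhood term, which is exactly the regularizing effect the decomposition is after.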

# Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness

In this study, we introduce Learn2Perturb, an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.

10 · 02 Mar 2020
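The central ingredient, injecting perturbations with learnable scales into intermediate features, can be sketched for a single layer. The layer shape, the per-unit Gaussian noise model, and the parameter names are assumptions of this sketch; the alternating update of noise scales and network weights that makes it "end-to-end" is omitted.

```python
import random

def noisy_layer(x, weights, sigmas, rng):
    """One layer with perturbation injection: each pre-activation gets
    Gaussian noise whose per-unit scale in 'sigmas' would be a trainable
    parameter, learned jointly with the network weights."""
    pre = [sum(w * xi for w, xi in zip(w_row, x)) for w_row in weights]
    return [p + s * rng.gauss(0.0, 1.0) for p, s in zip(pre, sigmas)]

rng = random.Random(0)
x = [1.0, -0.5]
weights = [[0.3, 0.7], [-0.2, 0.9]]
clean = [sum(w * xi for w, xi in zip(w_row, x)) for w_row in weights]
noisy = noisy_layer(x, weights, sigmas=[0.1, 0.1], rng=rng)
zeroed = noisy_layer(x, weights, sigmas=[0.0, 0.0], rng=rng)
```

With the scales at zero the layer reduces to an ordinary linear layer, which is why the noise parameters can be trained like any other weight.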

# PaRoT: A Practical Framework for Robust Deep Neural Network Training

07 Jan 2020 · fiveai/parot

Deep Neural Networks (DNNs) are finding important applications in safety-critical systems such as Autonomous Vehicles (AVs), where perceiving the environment correctly and robustly is necessary for safe operation.

2 · 07 Jan 2020

# Error Correcting Output Codes Improve Probability Estimation and Adversarial Robustness of Deep Neural Networks

Modern machine learning systems are susceptible to adversarial examples: inputs that clearly preserve the characteristic semantics of a given class, but whose classification is (usually confidently) incorrect.

5 · 01 Dec 2019
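The error-correcting-output-codes idea itself is easy to sketch: each class gets a binary codeword, one output head scores each bit, and prediction picks the class with the nearest codeword. The 5-bit codewords and class names below are toy values, not the paper's codes.

```python
# Toy codeword assignment (one binary codeword per class). In an ECOC
# classifier the network has one output head per bit; redundancy between
# codewords is what lets the decoder absorb a few flipped bits.
CODEWORDS = {
    "cat":  [1, 0, 1, 0, 1],
    "dog":  [0, 1, 1, 1, 0],
    "bird": [1, 1, 0, 0, 0],
}

def decode(bit_scores, threshold=0.5):
    """Threshold the per-bit scores, then return the class whose codeword
    is nearest in Hamming distance."""
    bits = [1 if s > threshold else 0 for s in bit_scores]
    return min(CODEWORDS,
               key=lambda c: sum(a != b for a, b in zip(bits, CODEWORDS[c])))

clean_pred = decode([0.9, 0.2, 0.8, 0.1, 0.7])      # matches "cat" exactly
corrupted_pred = decode([0.9, 0.2, 0.8, 0.1, 0.4])  # last bit flipped
```

Even with one bit flipped the decoder still lands on the right class, which is the error-correcting behavior the title refers to.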

# Smoothed Inference for Adversarially-Trained Models

17 Nov 2019 · yanemcovsky/SIAM

In this work, we study the application of randomized smoothing as a way to improve performance on unperturbed data as well as to increase robustness to adversarial attacks.

3 · 17 Nov 2019
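Randomized smoothing at inference time can be sketched with a toy base classifier: classify many Gaussian-noised copies of the input and take a majority vote. The 1-D classifier, `sigma`, and sample count below are assumptions of this sketch, not the paper's setup.

```python
import random
from collections import Counter

def base_classifier(x):
    # Toy 1-D classifier with a hard decision boundary at 0.
    return 1 if x > 0.0 else 0

def smoothed_classify(x, sigma=0.5, n=501, seed=0):
    """Randomized smoothing: majority vote of the base classifier over
    n Gaussian-noised copies of the input. The vote smooths the hard
    decision boundary, which is what yields certifiable robustness."""
    rng = random.Random(seed)
    votes = Counter(base_classifier(x + rng.gauss(0.0, sigma))
                    for _ in range(n))
    return votes.most_common(1)[0][0]

pred = smoothed_classify(1.0)
```

An odd `n` avoids vote ties; in practice `sigma` trades off smoothing strength against accuracy on clean inputs, which is exactly the tension this paper studies.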