Physical Adversarial Examples for Object Detectors

Deep neural networks (DNNs) are vulnerable to adversarial examples: maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, creating perturbations on physical objects that fool image classifiers under a variety of real-world conditions...
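
For readers unfamiliar with the core concept, the sketch below illustrates a digital adversarial example using the fast gradient sign method (FGSM). This is not the physical attack developed in the paper; the model, input, and epsilon bound here are placeholder assumptions chosen only to show how a small, loss-increasing perturbation can flip a classifier's prediction.

```python
# Minimal FGSM sketch (illustrative only; not the paper's physical attack).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return x plus an epsilon-bounded perturbation that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss, then clamp
    # back to the valid image range [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage with a hypothetical stand-in classifier (not from the paper).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # placeholder image batch
y = torch.tensor([0])          # placeholder true label
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude stays within epsilon
```

The paper's physical setting adds constraints a digital sketch like this ignores, such as surviving changes in viewpoint, distance, and lighting when the perturbation is printed onto a real object.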



Methods used in the Paper


METHOD                   TYPE
Average Pooling          Pooling Operations
Global Average Pooling   Pooling Operations
1x1 Convolution          Convolutions
Batch Normalization      Normalization
Max Pooling              Pooling Operations
Darknet-19               Convolutional Neural Networks
YOLOv2                   Object Detection Models
RPN                      Region Proposal
Softmax                  Output Functions
Convolution              Convolutions
RoIPool                  RoI Feature Extractors
Faster R-CNN             Object Detection Models