Dynamic Adversarial Patch for Evading Object Detection Models

Recent research shows that neural network models used for computer vision (e.g., YOLO and Fast R-CNN) are vulnerable to adversarial evasion attacks. Most existing real-world adversarial attacks against object detectors use an adversarial patch that is attached to the target object (e.g., a carefully crafted sticker placed on a stop sign)...

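To make the attack setting concrete, the following is a minimal, hedged sketch of a generic adversarial-patch optimization loop, not this paper's dynamic-patch method. It assumes a differentiable detector whose objectness scores can be minimized by gradient descent; ToyDetector and apply_patch are hypothetical stand-ins for a real detector such as YOLOv2 and the patch-compositing step.

```python
# Sketch only: optimize a patch so the detector's objectness scores drop.
# ToyDetector is a placeholder; in practice a real model (e.g., YOLOv2) would be used.
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Stand-in detector producing a per-cell objectness map in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.backbone(x))

def apply_patch(image, patch, top, left):
    """Paste the patch onto the image at position (top, left)."""
    patched = image.clone()
    ph, pw = patch.shape[-2:]
    patched[..., top:top + ph, left:left + pw] = patch
    return patched

detector = ToyDetector().eval()
image = torch.rand(1, 3, 128, 128)               # placeholder scene
patch = torch.rand(3, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    patched = apply_patch(image, patch, top=48, left=48)
    objectness = detector(patched)
    loss = objectness.max()                       # suppress the strongest detection
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)                    # keep the patch a valid image
```

In a physical-world attack, this loop is typically extended with transformations (scaling, rotation, lighting) and printability constraints so the optimized patch survives being printed and photographed.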

Methods used in the Paper


METHOD                   TYPE
Batch Normalization      Normalization
Average Pooling          Pooling Operations
Max Pooling              Pooling Operations
Softmax                  Output Functions
Convolution              Convolutions
Global Average Pooling   Pooling Operations
1x1 Convolution          Convolutions
Darknet-19               Convolutional Neural Networks
YOLOv2                   Object Detection Models