Adversarial Attack
598 papers with code • 2 benchmarks • 9 datasets
An Adversarial Attack is a technique for finding a perturbation that changes the prediction of a machine learning model. The perturbation can be so small that it is imperceptible to the human eye.
Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
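
For illustration, the fast gradient sign method (FGSM) is one of the simplest attacks of this kind: it nudges the input a small step in the direction of the loss gradient. Below is a minimal sketch assuming a PyTorch classifier that outputs logits for images scaled to [0, 1]; model, image, and label are hypothetical placeholders, not taken from any paper listed here.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    # Track gradients with respect to the input, not the weights.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One signed-gradient step of size epsilon; for small epsilon the change
    # is typically imperceptible yet can flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()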
Latest papers with no code
Multi-granular Adversarial Attacks against Black-box Neural Ranking Models
However, limiting perturbations to a single level of granularity may reduce the flexibility of adversarial examples, thereby diminishing the potential threat of the attack.
Jailbreaking Prompt Attack: A Controllable Adversarial Attack against Diffusion Models
The rapid progress of the image generation community has attracted attention worldwide.
The Double-Edged Sword of Input Perturbations to Robust Accurate Fairness
Deep neural networks (DNNs) are known to be sensitive to adversarial input perturbations, leading to a reduction in either prediction accuracy or individual fairness.
Deep Learning for Robust and Explainable Models in Computer Vision
This thesis presents developments in the robustness and explainability of computer vision models.
CosalPure: Learning Concept from Group Images for Robust Co-Saliency Detection
In this paper, we propose a novel robustness enhancement framework: we first learn the concept of the co-salient objects from the input group images, then use this concept to purify adversarial perturbations before the images are fed to CoSODs.
Uncertainty-Aware SAR ATR: Defending Against Adversarial Attacks via Bayesian Neural Networks
Adversarial attacks have demonstrated the vulnerability of Machine Learning (ML) image classifiers in Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems.
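
As a rough illustration of the "uncertainty-aware" idea (not the paper's implementation), Monte Carlo dropout approximates Bayesian inference by keeping dropout active at test time; adversarial inputs often yield higher predictive entropy than clean ones. The sketch below assumes a PyTorch classifier containing dropout layers; model and x are hypothetical placeholders.

import torch

def predictive_entropy(model, x, n_samples=20):
    # Keep dropout layers active at inference time (MC dropout).
    model.train()
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    # High predictive entropy can flag suspicious (possibly adversarial) inputs.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy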
Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking
In Virtual Reality (VR), adversarial attacks remain a significant security threat.
FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs
Despite the remarkable performance of video-based large language models (LLMs), their vulnerability to adversarial attacks remains unexplored.
DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation
Dataset distillation is an advanced technique aimed at compressing datasets into significantly smaller counterparts while preserving strong training performance.
Capsule Neural Networks as Noise Stabilizer for Time Series Data
In this paper, we investigate the effectiveness of CapsNets in analyzing highly sensitive and noisy time series sensor data.