Hard-label Attack

6 papers with code • 2 benchmarks • 2 datasets

In the hard-label (decision-based) black-box setting, an attacker crafts adversarial examples while observing only the model's predicted label for each query, with no access to gradients, logits, or confidence scores.

Most implemented papers

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack

cmhcbb/attackbox ICLR 2020

We study the most practical problem setup for evaluating adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where only a limited number of model queries are allowed and only the model's decision is returned for each queried input.
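To illustrate the setting, here is a minimal sketch of a hard-label oracle and Sign-OPT's key trick: estimating the sign of a directional derivative of the boundary distance with a single query per random direction. The linear classifier, query budget, and step size below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hard-label oracle (assumption for illustration): a linear classifier
# that returns only its decision, never gradients or confidences.
w = np.array([1.0, 2.0, -1.0])
b = 3.0
x0 = np.zeros(3)          # benign input, label 0 (w @ x0 = 0 < b)
y0 = 0

def oracle(x):
    """Hard-label query: returns only the predicted class."""
    return int(w @ x > b)

def boundary_distance(theta, hi=100.0, tol=1e-6):
    """g(theta): distance from x0 to the decision boundary along theta,
    found by binary search using only hard-label queries."""
    d = theta / np.linalg.norm(theta)
    if oracle(x0 + hi * d) == y0:
        return np.inf          # no boundary within the search radius
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if oracle(x0 + mid * d) == y0:
            lo = mid
        else:
            hi = mid
    return hi

def signopt_grad(theta, q=50, eps=1e-3):
    """Sign-OPT-style gradient estimate: each random direction u costs
    ONE query, which tests whether the boundary moved closer (sign -1)
    or farther (+1) along the slightly perturbed direction."""
    g = boundary_distance(theta)
    grad = np.zeros_like(theta)
    for _ in range(q):
        u = rng.standard_normal(theta.shape)
        pert = theta + eps * u
        pert /= np.linalg.norm(pert)
        # If the label already flips at radius g along the perturbed
        # direction, g decreased: the directional derivative is negative.
        s = -1.0 if oracle(x0 + g * pert) != y0 else 1.0
        grad += s * u
    return grad / q

theta = np.array([1.0, 0.5, 0.2])       # initial adversarial direction
g_before = boundary_distance(theta)
theta_new = theta - 0.5 * signopt_grad(theta)
g_after = boundary_distance(theta_new)  # distortion shrinks after one step
```

One descent step on this toy problem reduces the boundary distance, which is exactly the distortion Sign-OPT minimizes with a query budget.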

RayS: A Ray Searching Method for Hard-label Adversarial Attack

uclaml/RayS 23 Jun 2020

Deep neural networks are vulnerable to adversarial attacks.
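RayS replaces gradient estimation with a search over ray directions, using binary search along each ray for the boundary radius. The sketch below, with an assumed toy linear oracle and a small hierarchical sign-flipping schedule, shows the idea on {-1, +1}^d directions; it is not the paper's full algorithm.

```python
import numpy as np

# Toy hard-label oracle (assumption for illustration): linear classifier.
w = np.array([0.5, -1.0, 2.0, 1.5])
b = 2.0
x0 = np.zeros(4)
y0 = 0   # w @ x0 = 0 < b

def oracle(x):
    """Hard-label query: returns only the predicted class."""
    return int(w @ x > b)

def radius(d, hi=50.0, tol=1e-6):
    """Binary search the smallest radius r with oracle(x0 + r*d) != y0,
    using only hard-label queries."""
    d = d / np.linalg.norm(d)
    if oracle(x0 + hi * d) == y0:
        return np.inf          # this ray never crosses the boundary
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if oracle(x0 + mid * d) == y0:
            lo = mid
        else:
            hi = mid
    return hi

def rays(dim=4, rounds=3):
    """RayS-style hierarchical search over sign directions {-1, +1}^dim:
    flip progressively smaller blocks of signs, keeping a flip only if
    it shrinks the boundary radius. No gradient estimation is needed."""
    d = np.ones(dim)
    best = radius(d)
    block = dim
    for _ in range(rounds):
        for start in range(0, dim, block):
            cand = d.copy()
            cand[start:start + block] *= -1
            r = radius(cand)
            if r < best:
                best, d = r, cand
        block = max(block // 2, 1)
    return d, best

d_star, r_star = rays()   # best sign direction and its boundary radius
```

On this linear toy problem the search settles on the sign pattern of w, the direction with the smallest boundary radius.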

Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks

machanic/tangentattack NeurIPS 2021

In this paper, we propose a novel geometry-based approach called Tangent Attack (TA), which identifies an optimal tangent point of a virtual hemisphere located on the decision boundary to reduce the distortion of the attack.
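The geometric core of this idea can be illustrated in a simplified 2-D analogue: finding the tangent point of a line from an external point (the benign input) to a circle (standing in for the paper's virtual hemisphere centered at a boundary point). The function below is an illustrative sketch, not the paper's high-dimensional construction.

```python
import numpy as np

def tangent_point(p, c, r, sign=1.0):
    """Tangent point of a line from external point p to a circle centered
    at c with radius r (2-D analogue of a tangent to TA's hemisphere).
    In the right triangle p-T-c, the angle at the center c between the
    rays c->p and c->T satisfies cos(angle) = r / |p - c|."""
    u = p - c
    d = np.linalg.norm(u)
    assert d > r, "p must lie outside the circle"
    u /= d
    ang = sign * np.arccos(r / d)   # rotate c->p by this angle to reach T
    rot = np.array([[np.cos(ang), -np.sin(ang)],
                    [np.sin(ang),  np.cos(ang)]])
    return c + r * (rot @ u)

# Example: benign point at the origin, boundary point at (3, 0),
# virtual circle of radius 1 around the boundary point.
p = np.array([0.0, 0.0])
c = np.array([3.0, 0.0])
t = tangent_point(p, c, 1.0)
```

The returned point lies on the circle and the segment p->t is perpendicular to the radius c->t, which is what makes it a tangent point.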

TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack

jhl-hust/texthacker 20 Jan 2022

Existing textual adversarial attacks usually rely on gradients or prediction confidence to generate adversarial examples, making them hard to deploy against real-world applications.

LimeAttack: Local Explainable Method for Textual Hard-Label Adversarial Attack

zhuhai-ustc/limeattack 1 Aug 2023

Natural language processing models are vulnerable to adversarial examples.

HQA-Attack: Toward High Quality Black-Box Hard-Label Adversarial Attack on Text

hqa-attack/hqaattack-demo NeurIPS 2023

Black-box hard-label adversarial attack on text is a practical and challenging task, as the text data space is inherently discrete and non-differentiable, and only the predicted label is accessible.