Neural Network Security

2 papers with code • 0 benchmarks • 1 dataset


Most implemented papers

Hacking Neural Networks: A Short Introduction

Kayzaks/HackingNeuralNetworks 18 Nov 2019

A large chunk of research on the security issues of neural networks is focused on adversarial attacks.
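To make "adversarial attacks" concrete, here is a minimal FGSM-style sketch on a toy logistic-regression model (this is a generic illustration, not code from the repository above; the model, weights, and `eps` value are assumptions). The input is nudged by the sign of the loss gradient with respect to the input, which is often enough to flip the model's prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=1.0):
    """Perturb x by eps * sign(d loss / d x) for a logistic model (FGSM)."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w  # gradient of cross-entropy loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model: predicts positive when w . x + b > 0 (illustrative values)
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])  # w . x = 1.5, so classified positive

x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=1.0)
print(sigmoid(np.dot(w, x) + b) > 0.5)      # True: clean input is positive
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # False: perturbed input flips
```

Even this two-parameter model shows the core mechanic: the perturbation direction is chosen to increase the loss, so a small step can cross the decision boundary.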

Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis

yuweisunn/ADA 22 Mar 2022

To overcome this challenge, we propose the Attacking Distance-aware Attack (ADA) to enhance a poisoning attack by finding the optimized target class in the feature space.
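The idea of "finding the optimized target class in the feature space" can be sketched as choosing the class whose feature-space centroid lies closest to the source class, so poisoned updates need to move decision boundaries the least. This is a simplified illustration under that assumption, not the paper's ADA algorithm; the helper names and toy data are hypothetical.

```python
import numpy as np

def class_centroids(features, labels):
    """Mean feature vector (centroid) per class label."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_target_class(source_class, centroids):
    """Pick the class whose centroid is closest to the source class's centroid."""
    src = centroids[source_class]
    dists = {c: np.linalg.norm(src - mu)
             for c, mu in centroids.items() if c != source_class}
    return min(dists, key=dists.get)

# Toy feature space: class 1 sits near class 0, class 2 is far away
features = np.array([[0.0, 0.0], [0.1, 0.0],
                     [1.0, 1.0], [1.1, 1.0],
                     [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1, 2, 2])

centroids = class_centroids(features, labels)
print(nearest_target_class(0, centroids))  # 1: the closest class to class 0
```

The design intuition is that a semi-targeted attack aimed at a nearby class in feature space should require a smaller, harder-to-detect perturbation of the global model than one aimed at a distant class.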