1 code implementation • Asian Chapter of the Association for Computational Linguistics 2020 • Steffen Eger, Yannik Benz
Adversarial attacks are label-preserving modifications to inputs of machine learning classifiers designed to fool machines but not humans.