no code implementations • EMNLP 2021 • Nathaniel Berger, Stefan Riezler, Sebastian Ebert, Artem Sokolov
Recently, more attention has been given to adversarial attacks on neural networks for natural language processing (NLP).
no code implementations • 17 Jul 2023 • Nathaniel Berger, Miriam Exel, Matthias Huck, Stefan Riezler
Supervised learning in Neural Machine Translation (NMT) typically follows a teacher forcing paradigm, in which reference tokens, rather than the model's own previous predictions, constitute the conditioning context for each prediction.
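The distinction between teacher forcing and free-running decoding can be sketched with a hypothetical toy next-token model (not the paper's method); the only difference is whether the conditioning context is extended with the reference token or with the model's own prediction:

```python
# Minimal sketch (hypothetical toy model) contrasting teacher forcing
# with free-running decoding. The "model" predicts the next token as a
# deterministic function of its conditioning context.
def toy_predict(context):
    # Hypothetical next-token rule: last context token + 1, modulo 10.
    return (context[-1] + 1) % 10

def decode(reference, teacher_forcing):
    preds = []
    context = [reference[0]]          # start from the first reference token
    for t in range(1, len(reference)):
        preds.append(toy_predict(context))
        # Teacher forcing conditions on the reference token;
        # free-running decoding conditions on the model's own prediction.
        context.append(reference[t] if teacher_forcing else preds[-1])
    return preds

ref = [3, 4, 9, 0, 1]
print(decode(ref, teacher_forcing=True))   # → [4, 5, 0, 1]
print(decode(ref, teacher_forcing=False))  # → [4, 5, 6, 7]
```

Under teacher forcing the model recovers after a wrong step because the true context is restored; free-running decoding compounds its own errors, which is the exposure-bias gap this line of work addresses.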
no code implementations • 16 Sep 2021 • Nathaniel Berger, Stefan Riezler, Artem Sokolov, Sebastian Ebert
Recently, more attention has been given to adversarial attacks on neural networks for natural language processing (NLP).
1 code implementation • 2 Jun 2020 • Mayumi Ohta, Nathaniel Berger, Artem Sokolov, Stefan Riezler
Interest in stochastic zeroth-order (SZO) methods has recently been revived in black-box optimization scenarios such as adversarial black-box attacks on deep neural networks.
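The core idea behind SZO methods can be illustrated with a generic two-point gradient estimator (a standard construction, not necessarily the paper's specific algorithm): the gradient of a black-box function is approximated from function evaluations alone, using a finite difference along a random Gaussian direction.

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Generic two-point stochastic zeroth-order (SZO) update: estimate the
# directional derivative of a black-box f along a random direction u,
# then step against that estimate. No gradients of f are ever queried.
def szo_step(f, x, mu=1e-4, lr=0.05):
    u = [random.gauss(0.0, 1.0) for _ in x]            # random direction
    fx = f(x)
    f_pert = f([xi + mu * ui for xi, ui in zip(x, u)]) # perturbed query
    scale = (f_pert - fx) / mu                         # ≈ directional derivative
    return [xi - lr * scale * ui for xi, ui in zip(x, u)]

# Usage: minimize a simple quadratic using only function values.
f = lambda x: sum(xi * xi for xi in x)
x = [1.0, -2.0]
for _ in range(500):
    x = szo_step(f, x)
print(f(x))  # close to the minimum value 0
```

In expectation the estimator equals the true gradient (for standard Gaussian directions), which is what makes such methods viable when only black-box access to the objective is available, as in adversarial attacks.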
1 code implementation • EAMT 2020 • Julia Kreutzer, Nathaniel Berger, Stefan Riezler
Sequence-to-sequence learning involves a trade-off between signal strength and annotation cost of training data.