1 code implementation • 10 Oct 2022 • Mark Niklas Müller, Franziska Eckert, Marc Fischer, Martin Vechev
To obtain deterministic guarantees of adversarial robustness, specialized training methods are used.
no code implementations • 1 Jan 2021 • Julian G. Zilly, Franziska Eckert, Bhairav Mehta, Andrea Censi, Emilio Frazzoli
Negative pretraining is a prominent sequential learning effect in neural networks whereby a pretrained model achieves worse generalization performance on a target task than a model trained on that task from scratch.