Search Results for author: Franziska Eckert

Found 2 papers, 1 paper with code

Certified Training: Small Boxes are All You Need

1 code implementation · 10 Oct 2022 · Mark Niklas Müller, Franziska Eckert, Marc Fischer, Martin Vechev

To obtain deterministic guarantees of adversarial robustness, specialized training methods are used.

Adversarial Robustness
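Certified training methods in this line of work propagate interval ("box") bounds through the network so that robustness can be verified deterministically. The following is a minimal generic sketch of interval bound propagation through an affine layer and a ReLU; it is an illustration of the box arithmetic involved, not the paper's actual training method, and all function names are hypothetical.

```python
def ibp_linear(l, u, W, b):
    """Propagate the input box [l, u] through x -> Wx + b (interval bound propagation)."""
    lo, hi = [], []
    for row, bias in zip(W, b):
        # Center/radius form: output center is W @ center + b,
        # output radius is |W| @ radius.
        c = sum(w * (li + ui) / 2.0 for w, li, ui in zip(row, l, u)) + bias
        r = sum(abs(w) * (ui - li) / 2.0 for w, li, ui in zip(row, l, u))
        lo.append(c - r)
        hi.append(c + r)
    return lo, hi

def ibp_relu(l, u):
    """ReLU is monotone, so bounds pass through elementwise."""
    return [max(x, 0.0) for x in l], [max(x, 0.0) for x in u]

# Propagate the unit box [0,1]^2 through a 1-output affine layer, then ReLU.
l1, u1 = ibp_linear([0.0, 0.0], [1.0, 1.0], [[1.0, -1.0]], [0.0])  # -> [-1.0], [1.0]
l2, u2 = ibp_relu(l1, u1)                                          # -> [0.0], [1.0]
```

Smaller input boxes yield proportionally tighter output bounds, which is what makes box size central to the precision of certified training.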

The Negative Pretraining Effect in Sequential Deep Learning and Three Ways to Fix It

no code implementations · 1 Jan 2021 · Julian G. Zilly, Franziska Eckert, Bhairav Mehta, Andrea Censi, Emilio Frazzoli

Negative pretraining is a prominent sequential-learning effect in neural networks whereby a pretrained model achieves worse generalization on a target task than a model trained from scratch on that task.
