no code implementations • 3 Aug 2023 • Sujan Sai Gannamaneni, Michael Mock, Maram Akila
With the advancement of DNNs into safety-critical applications, testing approaches for such models have gained more attention.
no code implementations • 20 Jun 2023 • Maximilian Poretschkin, Anna Schmitz, Maram Akila, Linara Adilova, Daniel Becker, Armin B. Cremers, Dirk Hecker, Sebastian Houben, Michael Mock, Julia Rosenzweig, Joachim Sicking, Elena Schulz, Angelika Voss, Stefan Wrobel
Artificial Intelligence (AI) has made impressive progress in recent years and represents a key technology that has a crucial impact on the economy and society.
no code implementations • 2 May 2022 • Maximilian Pintz, Joachim Sicking, Maximilian Poretschkin, Maram Akila
The success of deep learning (DL) fostered the creation of unifying frameworks such as TensorFlow or PyTorch as much as it was, in turn, driven by their creation.
no code implementations • 29 Apr 2022 • Joachim Sicking, Maram Akila, Jan David Schneider, Fabian Hüger, Peter Schlicht, Tim Wirtz, Stefan Wrobel
Uncertainty estimation bears the potential to make deep learning (DL) systems more reliable.
no code implementations • 10 Jun 2021 • Julia Rosenzweig, Eduardo Brito, Hans-Ulrich Kobialka, Maram Akila, Nico M. Schmidt, Peter Schlicht, Jan David Schneider, Fabian Hüger, Matthias Rottmann, Sebastian Houben, Tim Wirtz
We propose a novel framework consisting of a generative label-to-image synthesis model together with different transferability measures to inspect to what extent we can transfer testing results of semantic segmentation models from synthetic data to equivalent real-life data.
no code implementations • 29 Apr 2021 • Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, Patrick Feifel, Tim Fingscheidt, Sujan Sai Gannamaneni, Seyed Eghbal Ghobadi, Ahmed Hammam, Anselm Haselhoff, Felix Hauser, Christian Heinzemann, Marco Hoffmann, Nikhil Kapoor, Falk Kappel, Marvin Klingner, Jan Kronenberger, Fabian Küppers, Jonas Löhdefink, Michael Mlynarski, Michael Mock, Firas Mualla, Svetlana Pavlitskaya, Maximilian Poretschkin, Alexander Pohl, Varun Ravi-Kumar, Julia Rosenzweig, Matthias Rottmann, Stefan Rüping, Timo Sämann, Jan David Schneider, Elena Schulz, Gesina Schwalbe, Joachim Sicking, Toshika Srivastava, Serin Varghese, Michael Weber, Sebastian Wirkert, Tim Wirtz, Matthias Woehrle
Our paper addresses both machine learning experts and safety engineers: the former may benefit from the broad range of machine learning topics covered and the discussions of the limitations of recent methods.
no code implementations • 22 Apr 2021 • Julia Rosenzweig, Joachim Sicking, Sebastian Houben, Michael Mock, Maram Akila
To address this constraint, we present an approach to detect learned shortcuts using an interpretable-by-design network as a proxy for the black-box model of interest.
no code implementations • 19 Apr 2021 • Linara Adilova, Elena Schulz, Maram Akila, Sebastian Houben, Jan David Schneider, Fabian Hueger, Tim Wirtz
Data-driven sensor interpretation in autonomous driving can lead to highly implausible predictions, which can usually be identified as such with common-sense knowledge.
no code implementations • AABI Symposium 2021 • Joachim Sicking, Maram Akila, Maximilian Pintz, Tim Wirtz, Asja Fischer, Stefan Wrobel
One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice.
1 code implementation • 23 Dec 2020 • Joachim Sicking, Maram Akila, Maximilian Pintz, Tim Wirtz, Asja Fischer, Stefan Wrobel
Despite its importance for safe machine learning, uncertainty quantification for neural networks is far from being solved.
1 code implementation • 17 Dec 2020 • Joachim Sicking, Maximilian Pintz, Maram Akila, Tim Wirtz
We propose two optimization schemes that make use of this: a modification of the Baum-Welch algorithm and a direct co-occurrence optimization.
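For context, the Baum-Welch algorithm that the paper modifies alternates an E-step (forward-backward state posteriors) with an M-step. Below is a minimal numpy sketch of the scaled forward-backward E-step for a discrete HMM; it is a standard textbook routine, not the paper's modified variant, and all variable names are illustrative.

```python
import numpy as np

def forward_backward(obs, A, B, pi):
    """One E-step of standard Baum-Welch (with scaling for stability).

    obs: observation indices, A: transition matrix, B: emission matrix,
    pi: initial state distribution.
    Returns per-step state posteriors gamma and the sequence log-likelihood.
    """
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    c = np.zeros(T)  # per-step scaling factors

    # forward pass (scaled)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    # backward pass (reusing the forward scaling factors)
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]

    # state posteriors and total log-likelihood
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma, np.log(c).sum()
```

The posteriors `gamma` would feed the M-step re-estimation of `A`, `B`, and `pi`; the paper's co-occurrence approach instead fits the model directly to observed pair statistics.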
no code implementations • 10 Jul 2020 • Joachim Sicking, Maram Akila, Tim Wirtz, Sebastian Houben, Asja Fischer
Monte Carlo (MC) dropout is one of the state-of-the-art approaches for uncertainty estimation in neural networks (NNs).
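The core of MC dropout is simply keeping dropout active at inference and aggregating several stochastic forward passes. A minimal numpy sketch, with a hypothetical untrained two-layer network standing in for a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, W2, drop_p, rng):
    # hidden layer with ReLU activation
    h = np.maximum(x @ W1, 0.0)
    # dropout stays ACTIVE at test time -- this is what makes it "MC" dropout
    mask = rng.random(h.shape) > drop_p
    h = h * mask / (1.0 - drop_p)  # inverted-dropout rescaling
    return h @ W2

# toy, untrained weights (illustrative only)
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 1))
x = np.array([[0.5, -1.0, 2.0]])

# T stochastic forward passes approximate a predictive distribution
samples = np.stack([mlp_forward(x, W1, W2, 0.5, rng) for _ in range(100)])
mean = samples.mean(axis=0)  # predictive mean
std = samples.std(axis=0)    # spread across passes, read as uncertainty
```

The spread of the samples serves as the uncertainty estimate; the computational cost is just T forward passes through an already-trained network, which is why the approach is considered cheap.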