Search Results for author: Michelle Karg

Found 6 papers, 1 paper with code

Overcoming the Limitations of Localization Uncertainty: Efficient & Exact Non-Linear Post-Processing and Calibration

no code implementations15 Jun 2023 Moussa Kassem Sbeyti, Michelle Karg, Christian Wirth, Azarm Nowzad, Sahin Albayrak

We overcome these limitations by: (1) implementing loss attenuation in EfficientDet, and proposing two deterministic methods for the exact and fast propagation of the output distribution, (2) demonstrating on the KITTI and BDD100K datasets that the predicted uncertainty is miscalibrated, and adapting two calibration methods to the localization task, and (3) investigating the correlation between aleatoric uncertainty and task-relevant error sources.
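Point (1) refers to loss attenuation, a standard technique for learning aleatoric localization uncertainty in which the detector predicts a variance alongside each regression output, so that high predicted variance down-weights the residual at the cost of a log penalty. The paper applies this inside EfficientDet; the sketch below shows only the generic attenuated regression loss (the function name and L1 residual choice are illustrative assumptions, not the authors' exact implementation):

```python
import numpy as np

def attenuated_l1_loss(pred_mean, pred_log_var, target):
    """Generic loss attenuation for regression: the network outputs a
    log-variance per coordinate in addition to the mean. Large predicted
    variance shrinks the residual term but is penalized by +log_var,
    so the model cannot simply predict infinite uncertainty."""
    residual = np.abs(target - pred_mean)
    return np.mean(residual * np.exp(-pred_log_var) + pred_log_var)
```

With a large residual, raising the predicted log-variance lowers the loss, which is exactly the attenuation effect: hard or noisy targets are down-weighted rather than dominating training.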

Plausible Uncertainties for Human Pose Regression

no code implementations ICCV 2023 Lennart Bramlage, Michelle Karg, Cristóbal Curio

Here, we propose a straightforward human pose regression framework to examine the behavior of two established methods for simultaneous aleatoric and epistemic uncertainty estimation: maximum a-posteriori (MAP) estimation with Monte-Carlo variational inference and deep evidential regression (DER).
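For the MC variational inference branch, simultaneous aleatoric and epistemic estimation is commonly obtained by decomposing the predictive variance over T stochastic forward passes: the mean of the per-pass variances gives the aleatoric part, and the variance of the per-pass means gives the epistemic part. A minimal sketch of that standard decomposition (function and array names are assumptions; this is not the paper's framework itself):

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """means, variances: arrays of shape (T, D) from T stochastic forward
    passes, each emitting a predictive mean and variance per output dim.
    Returns the standard split:
      aleatoric = E_t[var_t]   (average data noise),
      epistemic = Var_t[mean_t] (disagreement between sampled models)."""
    aleatoric = variances.mean(axis=0)
    epistemic = means.var(axis=0)
    return aleatoric, epistemic
```

Deep evidential regression, by contrast, produces both components from a single forward pass by predicting the parameters of a higher-order evidential distribution, which is what makes the side-by-side comparison in the paper interesting.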

Autonomous Driving · Pose Estimation · +3

Residual Error: a New Performance Measure for Adversarial Robustness

no code implementations18 Jun 2021 Hossein Aboutalebi, Mohammad Javad Shafiee, Michelle Karg, Christian Scharfenberger, Alexander Wong

Motivated by this, the study presents the concept of residual error, a new performance measure that not only assesses the adversarial robustness of a deep neural network at the individual-sample level, but can also be used to differentiate between adversarial and non-adversarial examples, facilitating adversarial example detection.

Adversarial Robustness · Image Classification

Vulnerability Under Adversarial Machine Learning: Bias or Variance?

no code implementations1 Aug 2020 Hossein Aboutalebi, Mohammad Javad Shafiee, Michelle Karg, Christian Scharfenberger, Alexander Wong

In this study, we investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network and analyze how adversarial perturbations can affect the generalization of a network.

BIG-bench Machine Learning

StressedNets: Efficient Feature Representations via Stress-induced Evolutionary Synthesis of Deep Neural Networks

no code implementations16 Jan 2018 Mohammad Javad Shafiee, Brendan Chwyl, Francis Li, Rongyan Chen, Michelle Karg, Christian Scharfenberger, Alexander Wong

The computational complexity of leveraging deep neural networks for extracting deep feature representations is a significant barrier to their widespread adoption, particularly for use in embedded devices.

Object Detection
