Search Results for author: Thomas Decker

Found 4 papers, 0 papers with code

Does Your Model Think Like an Engineer? Explainable AI for Bearing Fault Detection with Deep Learning

no code implementations · 19 Oct 2023 · Thomas Decker, Michael Lebacher, Volker Tresp

Deep Learning has already been successfully applied to analyze industrial sensor data in a variety of relevant use cases.

Fault Detection

Explaining Deep Neural Networks for Bearing Fault Detection with Vibration Concepts

no code implementations · 17 Oct 2023 · Thomas Decker, Michael Lebacher, Volker Tresp

Concept-based explanation methods, such as Concept Activation Vectors, are potent means to quantify how abstract or high-level characteristics of input data influence the predictions of complex deep neural networks.

Fault Detection
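To make the concept-based idea above concrete, here is a minimal, self-contained sketch of the Concept Activation Vector approach in the spirit of TCAV. Everything in it is synthetic: the activations and gradients are random stand-ins for a real network's layer outputs, and the difference-of-means CAV is a common simplification of the linear-classifier formulation, not the method used in the paper.

```python
import numpy as np

# Synthetic illustration only: no real model is involved.
rng = np.random.default_rng(0)

# Layer activations for examples exhibiting a concept vs. random examples.
concept_acts = rng.normal(loc=1.0, size=(50, 8))  # concept set
random_acts = rng.normal(loc=0.0, size=(50, 8))   # random set

# A simple CAV: the normalized difference of class means, i.e. the
# direction in activation space separating concept from non-concept.
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# Stand-in gradients of a class logit w.r.t. the layer activations.
grads = rng.normal(loc=0.3, size=(100, 8))

# TCAV-style score: fraction of inputs whose prediction is positively
# sensitive to moving in the concept direction.
tcav_score = float(np.mean(grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

A score near 1 would mean the class prediction is consistently sensitive to the concept direction; near 0.5, the concept is largely irrelevant to it.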

Towards Scenario-based Safety Validation for Autonomous Trains with Deep Generative Models

no code implementations · 16 Oct 2023 · Thomas Decker, Ananta R. Bhattarai, Michael Lebacher

A common approach is to conduct safety validation based on a predefined Operational Design Domain (ODD) describing specific conditions under which a system under test is required to operate properly.

Autonomous Vehicles · Scene Segmentation

The Thousand Faces of Explainable AI Along the Machine Learning Life Cycle: Industrial Reality and Current State of Research

no code implementations · 11 Oct 2023 · Thomas Decker, Ralf Gross, Alexander Koebler, Michael Lebacher, Ronald Schnitzer, Stefan H. Weber

In this paper, we investigate the practical relevance of explainable artificial intelligence (XAI) with a special focus on the producing industries and relate it to the current state of academic XAI research.

Explainable Artificial Intelligence (XAI)
