no code implementations • 19 Oct 2023 • Thomas Decker, Michael Lebacher, Volker Tresp
Deep Learning has already been successfully applied to analyze industrial sensor data in a variety of relevant use cases.
no code implementations • 17 Oct 2023 • Thomas Decker, Michael Lebacher, Volker Tresp
Concept-based explanation methods, such as Concept Activation Vectors, are potent means to quantify how abstract or high-level characteristics of input data influence the predictions of complex deep neural networks.
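The CAV idea the abstract refers to can be sketched in a few lines: collect a layer's activations for examples that contain a concept and for random counterexamples, fit a linear separator between the two sets, and take the hyperplane normal as the concept direction. The sensitivity of a prediction to the concept is then the projection of the class-logit gradient onto that direction. The sketch below uses synthetic activations and a least-squares classifier as a minimal stand-in for the logistic regression used in the original TCAV formulation; all data here is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer activations (n_samples x n_units) for inputs that
# contain the concept vs. random counterexamples (synthetic stand-ins).
concept_acts = rng.normal(loc=1.0, size=(50, 16))
random_acts = rng.normal(loc=0.0, size=(50, 16))

# Fit a linear separator between the two activation sets; the Concept
# Activation Vector (CAV) is the unit normal of the separating hyperplane.
X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(50), -np.ones(50)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares linear classifier
cav = w / np.linalg.norm(w)

# Conceptual sensitivity: project the gradient of a class logit w.r.t. the
# layer activations onto the CAV (gradient is synthetic here as well).
grad = rng.normal(size=16)
sensitivity = float(grad @ cav)
print(sensitivity)
```

A positive sensitivity indicates that moving the activations toward the concept direction increases the class score, which is the quantity TCAV aggregates over many inputs.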
no code implementations • 16 Oct 2023 • Thomas Decker, Ananta R. Bhattarai, Michael Lebacher
A common approach is to conduct safety validation based on a predefined Operational Design Domain (ODD) describing specific conditions under which a system under test is required to operate properly.
no code implementations • 11 Oct 2023 • Thomas Decker, Ralf Gross, Alexander Koebler, Michael Lebacher, Ronald Schnitzer, Stefan H. Weber
In this paper, we investigate the practical relevance of explainable artificial intelligence (XAI) with a special focus on the producing industries and relate these practical needs to the current state of academic XAI research.