Search Results for author: Gesina Schwalbe

Found 12 papers, 1 paper with code

GCPV: Guided Concept Projection Vectors for the Explainable Inspection of CNN Feature Spaces

no code implementations • 24 Nov 2023 • Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade

The latter, though, is of particular interest for debugging, such as finding and understanding outliers, learned notions of sub-concepts, and concept confusion.

Explainable Artificial Intelligence (XAI)
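
As a rough intuition for concept vectors (the building block that GCPV refines), the toy sketch below fits a linear probe on stand-in activations and normalizes its weights into a direction in feature space. This is a generic CAV-style illustration with synthetic data, not the authors' GCPV procedure.

```python
# Hypothetical sketch: derive a concept vector from CNN activations by fitting
# a linear probe (CAV-style). All data here are random stand-ins; this is NOT
# the paper's exact GCPV method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in activations: 100 samples with the concept, 100 without, 512 channels.
acts_concept = rng.normal(0.5, 1.0, size=(100, 512))
acts_random = rng.normal(0.0, 1.0, size=(100, 512))

X = np.vstack([acts_concept, acts_random])
y = np.array([1] * 100 + [0] * 100)

probe = LogisticRegression(max_iter=1000).fit(X, y)
concept_vector = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
print(concept_vector.shape)  # (512,) -- a direction in feature space
```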

Have We Ever Encountered This Before? Retrieving Out-of-Distribution Road Obstacles from Driving Scenes

no code implementations • 8 Sep 2023 • Youssef Shoeb, Robin Chan, Gesina Schwalbe, Azarm Nowzard, Fatma Güney, Hanno Gottschalk

In this work, we extend beyond identifying OoD road obstacles in video streams and offer a comprehensive approach to extract sequences of OoD road obstacles using text queries, thereby proposing a way of curating a collection of OoD data for subsequent analysis.

Retrieval
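
A minimal sketch of the retrieval step, under the assumption that a vision-language model (e.g., CLIP) supplies joint image/text embeddings; random stand-ins replace the real embeddings, and the ranking is plain cosine similarity rather than the paper's full pipeline.

```python
# Hypothetical sketch of text-queried retrieval over embedded obstacle crops.
# In practice a vision-language model would supply the joint embeddings;
# random stand-ins illustrate the cosine-similarity ranking.
import numpy as np

rng = np.random.default_rng(1)
obstacle_embeds = rng.normal(size=(1000, 512))   # one embedding per OoD crop
obstacle_embeds /= np.linalg.norm(obstacle_embeds, axis=1, keepdims=True)

query_embed = rng.normal(size=512)               # embedding of a text query
query_embed /= np.linalg.norm(query_embed)

scores = obstacle_embeds @ query_embed           # cosine similarity per crop
top10 = np.argsort(scores)[::-1][:10]            # indices of the best matches
print(top10)
```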

Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces

no code implementations • 30 Apr 2023 • Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade

These allow insights into both the flow and likeness of semantic information within CNN layers, and into the degree of their similarity between different network architectures.

Explainable Artificial Intelligence (XAI) +1
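
One simple proxy for such a comparison, assuming both networks yield concept vectors of equal dimension: per-concept cosine similarity between the two feature spaces. Toy data throughout; the paper's actual similarity measure may differ, and cross-architecture comparisons with mismatched dimensions would need a second-order similarity instead.

```python
# Hypothetical sketch: compare concept representations of two layers (or two
# networks of equal feature dimension) via per-concept cosine similarity.
import numpy as np

rng = np.random.default_rng(2)
concepts = ["wheel", "window", "person"]
layer_a = {c: rng.normal(size=256) for c in concepts}  # concept vectors, net A
layer_b = {c: rng.normal(size=256) for c in concepts}  # concept vectors, net B

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for c in concepts:
    print(f"{c}: similarity {cos(layer_a[c], layer_b[c]):+.3f}")
```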

Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability

no code implementations • 28 Apr 2023 • Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade

The guiding use-case is a post-hoc explainability framework for object detection (OD) CNNs, towards which existing concept analysis (CA) methods are successfully adapted.

Dimensionality Reduction • Explainable Artificial Intelligence (XAI) +4
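
A toy sketch of one plausible stability measure: re-fit the concept probe on resampled data and report the mean pairwise cosine similarity between the resulting vectors. This mirrors the idea of a stability evaluation, not the paper's exact metrics.

```python
# Hypothetical sketch: quantify stability of a concept vector by re-fitting
# the probe on resampled data and measuring pairwise cosine similarity.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 128))                             # toy activations
y = (X[:, 0] + 0.1 * rng.normal(size=400) > 0).astype(int)  # toy concept label

vectors = []
for seed in range(5):
    idx = np.random.default_rng(seed).permutation(400)[:300]  # resample
    w = LogisticRegression(max_iter=1000).fit(X[idx], y[idx]).coef_[0]
    vectors.append(w / np.linalg.norm(w))

sims = [u @ v for u, v in combinations(vectors, 2)]
print(f"mean pairwise cosine similarity: {np.mean(sims):.3f}")
```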

Concept Embedding Analysis: A Review

no code implementations • 25 Mar 2022 • Gesina Schwalbe

The research field of concept (embedding) analysis (CA) tackles this problem: CA aims to find global, assessable associations of human-interpretable semantic concepts (e.g., eye, bearded) with internal representations of a DNN.

Explainable Artificial Intelligence (XAI) +2
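
The core CA idea can be illustrated with a linear probe: if a held-out classifier can predict concept presence from a layer's activations, the concept is (linearly) encoded there. The data and labels below are synthetic stand-ins.

```python
# Hypothetical illustration of the core CA idea: test whether a semantic
# concept (e.g. "eye") is linearly associated with a layer's activations by
# scoring a held-out set with a fitted concept probe. Toy data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
acts = rng.normal(size=(600, 256))                  # layer activations
labels = (acts[:, :8].sum(axis=1) > 0).astype(int)  # toy "eye present" labels

X_tr, X_te, y_tr, y_te = train_test_split(acts, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# High held-out accuracy = the concept is (linearly) encoded in this layer.
print(f"concept probe accuracy: {probe.score(X_te, y_te):.2f}")
```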

Enabling Verification of Deep Neural Networks in Perception Tasks Using Fuzzy Logic and Concept Embeddings

no code implementations • 3 Jan 2022 • Gesina Schwalbe, Christian Wirth, Ute Schmid

In this work, we present a simple yet effective approach to verify that a CNN complies with symbolic predicate logic rules that relate visual concepts.

Explainable Artificial Intelligence (XAI)
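
A hedged sketch of the general mechanism: evaluate a fuzzy rule such as pedestrian(x) -> head(x) over concept scores in [0, 1]. The Łukasiewicz implication used here is one common choice; the paper's concrete logic, rules, and concept outputs may differ.

```python
# Hypothetical sketch: monitor a fuzzy-logic rule "pedestrian(x) -> head(x)"
# over concept scores in [0, 1]. The Lukasiewicz implication is one common
# choice of fuzzy operator, not necessarily the paper's.
import numpy as np

rng = np.random.default_rng(5)
pedestrian = rng.uniform(size=1000)                       # fuzzy concept scores
head = np.clip(pedestrian + rng.normal(0, 0.1, 1000), 0, 1)

truth = np.minimum(1.0, 1.0 - pedestrian + head)          # Lukasiewicz a -> b
print(f"min rule truth: {truth.min():.3f}")               # 1.0 = always satisfied
print(f"violations (truth < 0.9): {(truth < 0.9).mean():.1%}")
```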

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts

no code implementations • 15 May 2021 • Gesina Schwalbe, Bettina Finzel

With the number of XAI methods growing rapidly, a taxonomy of methods is needed by researchers and practitioners alike: to grasp the breadth of the topic, to compare methods, and to select the right XAI method based on the traits required by a specific use-case context.

Explainable Artificial Intelligence (XAI)

Verification of Size Invariance in DNN Activations using Concept Embeddings

no code implementations • 14 May 2021 • Gesina Schwalbe

One approach to this is concept analysis, which aims to establish a mapping between the internal representation of a DNN and intuitive semantic concepts.

Pedestrian Detection
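
A toy illustration of what a size-invariance check could look like: the concept score for the same object should remain stable as its apparent size changes. All scores below are synthetic stand-ins for concept-model outputs on rescaled pedestrian crops.

```python
# Hypothetical sketch of a size-invariance check: the concept score for the
# same object should stay stable when the object's apparent size changes.
import numpy as np

rng = np.random.default_rng(6)
scales = [0.5, 0.75, 1.0, 1.5, 2.0]
# scores[i, j]: concept score of pedestrian i rendered at scales[j]
scores = np.clip(0.8 + rng.normal(0, 0.05, size=(100, len(scales))), 0, 1)

spread = scores.max(axis=1) - scores.min(axis=1)  # per-object score spread
print(f"mean spread across scales: {spread.mean():.3f}")
print(f"objects violating invariance (spread > 0.2): {(spread > 0.2).mean():.1%}")
```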
