no code implementations • 7 Jun 2023 • Stephan Wäldchen
A prover selects a certificate from the datapoint and sends it to a verifier who decides the class.
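The prover-verifier round described above can be sketched as follows. This is a hypothetical toy illustration, not the paper's method: the function names (`select_certificate`, `verify`) and the greedy feature-selection strategy are assumptions made for the example.

```python
# Hypothetical sketch of one prover-verifier classification round.
# The prover reveals only a small certificate (a subset of features);
# the verifier must decide the class from that partial input alone.

def select_certificate(datapoint, k):
    """Prover: pick the k features to reveal (here: largest magnitudes)."""
    ranked = sorted(range(len(datapoint)), key=lambda i: -abs(datapoint[i]))
    return {i: datapoint[i] for i in ranked[:k]}

def verify(certificate, threshold=0.0):
    """Verifier: decide the class from the revealed features only."""
    return 1 if sum(certificate.values()) > threshold else 0

x = [0.9, -0.2, 0.05, 1.4, -0.7]
cert = select_certificate(x, k=2)   # prover sends {3: 1.4, 0: 0.9}
label = verify(cert)                # verifier outputs class 1
```

The interpretability intuition is that the certificate itself is the explanation: whatever the prover must reveal to convince the verifier is, by construction, sufficient for the decision.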
1 code implementation • 1 Jun 2022 • Stephan Wäldchen, Kartikey Sharma, Berkant Turan, Max Zimmer, Sebastian Pokutta
We propose an interactive multi-agent classifier that provides provable interpretability guarantees even for complex agents such as neural networks.
no code implementations • 23 Feb 2022 • Stephan Wäldchen, Felix Huber, Sebastian Pokutta
Given only a standard classifier function, it is unclear how partial input should be realised.
no code implementations • 13 Dec 2021 • Jan Macdonald, Stephan Wäldchen
We prove that no invariant parametrised family of distributions can exist unless at least one of the following three restrictions holds: First, the network layers have a width of one, which is unreasonable for practical neural networks.
Explainable artificial intelligence Explainable Artificial Intelligence (XAI) +1

2 code implementations • 27 May 2019 • Jan Macdonald, Stephan Wäldchen, Sascha Hauch, Gitta Kutyniok
We formalise the widespread idea of interpreting neural network decisions as an explicit optimisation problem in a rate-distortion framework.
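A minimal sketch of this rate-distortion view, under simplifying assumptions not taken from the paper (a toy classifier, uniform resampling of the hidden components, and Monte-Carlo estimation): an explanation is a small set S of input components which, when kept fixed while the rest is randomised, keeps the expected change in the classifier output (the distortion) low.

```python
import random

def distortion(f, x, S, n_samples=200, seed=0):
    """Expected squared deviation of f(x) when components outside S
    are resampled uniformly (a crude stand-in for the obfuscation
    distribution); S is the candidate explanation set."""
    rng = random.Random(seed)
    base = f(x)
    total = 0.0
    for _ in range(n_samples):
        z = [xi if i in S else rng.uniform(-1, 1) for i, xi in enumerate(x)]
        total += (f(z) - base) ** 2
    return total / n_samples

f = lambda v: v[0] + 0.01 * v[1]   # toy classifier: depends mostly on v[0]
x = [0.8, -0.3, 0.5]
d_relevant = distortion(f, x, S={0})     # keep the component that matters
d_irrelevant = distortion(f, x, S={2})   # keep one that does not
```

Here `d_relevant` comes out much smaller than `d_irrelevant`, so the rate-distortion trade-off correctly identifies component 0 as the explanation: the "rate" is the size of S, and the optimisation seeks the smallest S achieving distortion below a tolerance.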
1 code implementation • 26 Feb 2019 • Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior.