1 code implementation • 22 Jun 2023 • Benedict Clark, Rick Wilming, Stefan Haufe
The field of 'explainable' artificial intelligence (XAI) has produced highly cited methods that seek to make the decisions of complex machine learning (ML) models 'understandable' to humans, for example by attributing 'importance' scores to input features.
1 code implementation • 21 Jun 2023 • Marta Oliveira, Rick Wilming, Benedict Clark, Céline Budding, Fabian Eitel, Kerstin Ritter, Stefan Haufe
Here, we propose a benchmark dataset that allows for quantifying explanation performance in a realistic magnetic resonance imaging (MRI) classification task.
no code implementations • 2 Jun 2023 • Rick Wilming, Leo Kieslich, Benedict Clark, Stefan Haufe
In recent years, the community of 'explainable artificial intelligence' (XAI) has created a vast body of methods to bridge a perceived gap between model 'complexity' and 'interpretability'.
1 code implementation • 9 Dec 2021 • Céline Budding, Fabian Eitel, Kerstin Ritter, Stefan Haufe
In recent years, many 'explainable artificial intelligence' (XAI) approaches have been developed, but they have not always been objectively evaluated.
1 code implementation • 14 Nov 2021 • Rick Wilming, Céline Budding, Klaus-Robert Müller, Stefan Haufe
It has been demonstrated that some saliency methods can highlight features that have no statistical association with the prediction target (suppressor variables).
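The suppressor effect referred to above can be reproduced in a few lines of synthetic data: a feature with zero correlation with the target still receives a large linear-model weight because it helps cancel noise in another feature. This is an illustrative sketch, not the paper's experiment; all variable names are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y = rng.standard_normal(n)   # prediction target
d = rng.standard_normal(n)   # shared distractor signal
x1 = y + d                   # informative feature, contaminated by d
x2 = d                       # suppressor: no statistical association with y

X = np.column_stack([x1, x2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
# The optimal linear model is y = x1 - x2, so w is close to [1, -1]:
# the suppressor x2 gets a large (negative) weight even though
# corr(x2, y) is near zero -- exactly the pattern that misleads
# weight- or saliency-based importance attributions.
```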
1 code implementation • NeurIPS 2021 • Ali Hashemi, Yijing Gao, Chang Cai, Sanjay Ghosh, Klaus-Robert Müller, Srikantan S. Nagarajan, Stefan Haufe
Several problems in neuroimaging and beyond require inference on the parameters of multi-task sparse hierarchical regression models.
1 code implementation • 1 Jan 2021 • Ali Hashemi, Chang Cai, Klaus-Robert Müller, Srikantan Nagarajan, Stefan Haufe
We consider hierarchical Bayesian (type-II maximum likelihood) regression models for observations with latent source and noise variables, where the parameters of the priors on the source and noise terms must be estimated jointly from the data.
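As a toy illustration of type-II maximum likelihood, one can learn per-coefficient prior variances by EM-style sparse Bayesian learning updates. This simplified sketch uses a single task and treats the noise variance as known, unlike the joint source/noise estimation described above; the data and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[[1, 6]] = [2.0, -1.5]         # only two active coefficients
sigma2 = 0.5                          # noise variance, assumed known here
y = X @ w_true + np.sqrt(sigma2) * rng.standard_normal(n)

# Type-II ML: maximise the marginal likelihood p(y | gamma) over the
# per-coefficient prior variances gamma via the classic EM update.
gamma = np.ones(d)
for _ in range(100):
    Sigma = np.linalg.inv(X.T @ X / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ X.T @ y / sigma2     # posterior mean of the weights
    gamma = mu ** 2 + np.diag(Sigma)  # EM update of the prior variances
# gamma ends up large for the two active coefficients and shrinks
# toward zero for the irrelevant ones, pruning them automatically.
```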
1 code implementation • NeurIPS 2019 • Tao Tu, John Paisley, Stefan Haufe, Paul Sajda
In this study, we develop a linear state-space model to infer the effective connectivity in a distributed brain network based on simultaneously recorded EEG and fMRI data.
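The paper's model fuses simultaneously recorded EEG and fMRI; as a minimal single-modality sketch, a linear-Gaussian state-space model with a Kalman filter looks as follows. The transition matrix A (encoding directed influence between two hypothetical sources), the observation matrix C, and the noise levels are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0.8, 0.0],
              [0.5, 0.7]])            # source 1 drives source 2
C = np.array([[1.0, 0.3],
              [0.2, 1.0]])            # observation (lead-field-like) matrix
Q, R = 0.1 * np.eye(2), 0.2 * np.eye(2)

# simulate latent sources and noisy observations
T = 500
x = np.zeros((T, 2))
yobs = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal([0, 0], Q)
    yobs[t] = C @ x[t] + rng.multivariate_normal([0, 0], R)

# Kalman filter: recursively infer the latent state from observations
xf, P = np.zeros(2), np.eye(2)
states = []
for t in range(T):
    xp = A @ xf                        # predict
    Pp = A @ P @ A.T + Q
    S = C @ Pp @ C.T + R               # innovation covariance
    K = Pp @ C.T @ np.linalg.inv(S)    # Kalman gain
    xf = xp + K @ (yobs[t] - C @ xp)   # update
    P = (np.eye(2) - K @ C) @ Pp
    states.append(xf.copy())
states = np.asarray(states)
```

In the full model, A itself would be estimated (e.g. by EM), yielding the effective connectivity; here it is fixed to keep the sketch short.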
1 code implementation • 26 Jan 2018 • Lucas C. Parra, Stefan Haufe, Jacek P. Dmochowski
How does one find dimensions in multivariate data that are reliably expressed across repetitions?
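One standard answer is a generalized eigenvalue problem that contrasts cross-repetition covariance against pooled covariance (correlated components analysis). A hedged sketch on synthetic repetitions follows; the mixing vector, noise level, and dimensions are invented for illustration.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
T, D, K = 300, 4, 5                        # samples, features, repetitions
s = np.sin(np.linspace(0, 8 * np.pi, T))   # signal reliably repeated
mix = np.array([1.0, -0.5, 0.8, 0.3])      # hypothetical mixing into features
reps = [np.outer(s, mix) + 0.5 * rng.standard_normal((T, D))
        for _ in range(K)]

# maximise w' R_cross w / w' R_pool w: cross-repetition covariance
# relative to pooled covariance (a CorrCA-style criterion)
R_pool = sum(X.T @ X for X in reps)
R_cross = sum(Xi.T @ Xj
              for i, Xi in enumerate(reps)
              for j, Xj in enumerate(reps) if i != j)
evals, evecs = eigh(R_cross, R_pool)       # generalized eigenproblem
w = evecs[:, -1]                           # most reliable projection
```

Projecting any single repetition onto `w` recovers the shared signal far better than any raw feature does.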
no code implementations • 25 Sep 2015 • Irene Winkler, Danny Panknin, Daniel Bartz, Klaus-Robert Müller, Stefan Haufe
Inferring causal interactions from observed data is a challenging problem, especially in the presence of measurement noise.
no code implementations • NeurIPS 2008 • Stefan Haufe, Vadim V. Nikulin, Andreas Ziehe, Klaus-Robert Müller, Guido Nolte
We introduce a novel framework for estimating vector fields using sparse basis field expansions (S-FLEX).
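The core idea, representing a field as a sparse L1-penalised combination of basis fields, can be sketched in one spatial dimension with Gaussian basis fields and a plain ISTA solver. The grid, basis widths, and penalty below are illustrative choices, not the paper's settings (S-FLEX itself addresses richer vector-field settings).

```python
import numpy as np

rng = np.random.default_rng(3)
pts = np.linspace(0, 1, 50)                # sample locations
centers = np.linspace(0, 1, 10)            # candidate basis-field centers

def basis(c, direction):
    """Gaussian bump at c, pointing along a fixed 2-D direction."""
    g = np.exp(-((pts - c) ** 2) / (2 * 0.05 ** 2))
    return np.outer(g, direction).ravel()

dirs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
D = np.column_stack([basis(c, d) for c in centers for d in dirs])

# ground truth: field built from only two basis fields, plus noise
coef_true = np.zeros(D.shape[1])
coef_true[[4, 15]] = [1.5, -1.0]
field = D @ coef_true + 0.01 * rng.standard_normal(D.shape[0])

# ISTA: L1-penalised least squares -> sparse expansion coefficients
lam = 0.05
L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
coef = np.zeros(D.shape[1])
for _ in range(500):
    grad = D.T @ (D @ coef - field)
    z = coef - grad / L                    # gradient step
    coef = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
# coef recovers the two active basis fields; all others shrink to zero.
```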