1 code implementation • 24 Apr 2024 • Evgenii Kortukov, Alexander Rubinstein, Elisa Nguyen, Seong Joon Oh
In cases where the models still fail to update their answers, we find a parametric bias: when the model's incorrect parametric answer appears in the context, the knowledge update is more likely to fail.
1 code implementation • 2 Nov 2023 • Elisa Nguyen, Meike Nauta, Gwenn Englebienne, Christin Seifert
We present Temporal Spike Attribution (TSA), a local explanation method for SNNs.
no code implementations • 31 Oct 2023 • Elisa Nguyen, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh
Explainable AI (XAI) aims to give humans insight into opaque model reasoning and is therefore inherently interdisciplinary.
no code implementations • 12 Oct 2023 • Bálint Mucsányi, Michael Kirchhof, Elisa Nguyen, Alexander Rubinstein, Seong Joon Oh
Collectively, we face a trustworthiness issue with current machine learning technology.
Tasks: Out-of-Distribution Generalization, Uncertainty Quantification
1 code implementation • NeurIPS 2023 • Elisa Nguyen, Minjoon Seo, Seong Joon Oh
We recommend that future researchers and practitioners trust TDA estimates only in such cases.
no code implementations • 20 Jan 2022 • Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, Christin Seifert
Our so-called Co-12 properties serve as a categorization scheme for systematically reviewing the evaluation practices of more than 300 papers introducing an XAI method, published in the last 7 years at major AI and ML conferences.
Tasks: Explainable Artificial Intelligence (XAI)