Search Results for author: Elisa Nguyen

Found 6 papers, 3 papers with code

Studying Large Language Model Behaviors Under Realistic Knowledge Conflicts

1 code implementation • 24 Apr 2024 • Evgenii Kortukov, Alexander Rubinstein, Elisa Nguyen, Seong Joon Oh

In cases where the models still fail to update their answers, we find a parametric bias: the presence of the incorrect parametric answer in the context makes the knowledge update more likely to fail.

Feature Attribution Explanations for Spiking Neural Networks

1 code implementation • 2 Nov 2023 • Elisa Nguyen, Meike Nauta, Gwenn Englebienne, Christin Seifert

We present Temporal Spike Attribution (TSA), a local explanation method for SNNs.

Exploring Practitioner Perspectives On Training Data Attribution Explanations

no code implementations • 31 Oct 2023 • Elisa Nguyen, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh

Explainable AI (XAI) aims to give humans insight into opaque model reasoning and is, as such, an interdisciplinary field by nature.

A Bayesian Approach To Analysing Training Data Attribution In Deep Learning

1 code implementation • NeurIPS 2023 • Elisa Nguyen, Minjoon Seo, Seong Joon Oh

We recommend that future researchers and practitioners trust TDA estimates only in such cases.

From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI

no code implementations • 20 Jan 2022 • Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, Christin Seifert

Our so-called Co-12 properties serve as a categorization scheme for systematically reviewing the evaluation practices of more than 300 papers introducing an XAI method, published over the last seven years at major AI and ML conferences.

Explainable Artificial Intelligence (XAI)
