no code implementations • ACL 2022 • Ana Lucic, Maurits Bleeker, Samarth Bhargav, Jessica Forde, Koustuv Sinha, Jesse Dodge, Sasha Luccioni, Robert Stojnic
While recent progress in the field of ML has been significant, the reproducibility of cutting-edge results is often poor, with many submissions omitting the information needed to reproduce them.
1 code implementation • 22 Feb 2024 • Maksim Zhdanov, David Ruhe, Maurice Weiler, Ana Lucic, Johannes Brandstetter, Patrick Forré
We present Clifford-Steerable Convolutional Neural Networks (CS-CNNs), a novel class of $\mathrm{E}(p, q)$-equivariant CNNs.
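As background (standard definitions, not specific to this paper): $\mathrm{E}(p,q)$ denotes the isometry group of the pseudo-Euclidean space $\mathbb{R}^{p,q}$, and equivariance is the usual commutation condition, as sketched below.

```latex
% E(p,q): translations of R^{p+q} combined with the pseudo-orthogonal
% group O(p,q), i.e. the isometries of the metric of signature (p,q).
\mathrm{E}(p,q) = \mathbb{R}^{p+q} \rtimes \mathrm{O}(p,q)

% A map \Phi between feature fields is E(p,q)-equivariant if transforming
% the input and then applying \Phi agrees with applying \Phi first, where
% \rho_in and \rho_out are the group representations acting on input and
% output features:
\Phi\bigl(\rho_{\mathrm{in}}(g)\,f\bigr) = \rho_{\mathrm{out}}(g)\,\Phi(f)
\quad \text{for all } g \in \mathrm{E}(p,q).
```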
no code implementations • 28 Jul 2023 • Garvita Allabadi, Ana Lucic, Peter Pao-Huang, Yu-Xiong Wang, Vikram Adve
Existing approaches for semi-supervised object detection assume a fixed set of classes present in the training and unlabeled datasets, i.e., in-distribution (ID) data.
no code implementations • 12 Sep 2022 • Ana Lucic
Model explainability has become an important problem in machine learning (ML) due to the growing impact that algorithmic predictions have on humans.
no code implementations • 6 Jul 2022 • Ana Lucic, Sheeraz Ahmad, Amanda Furtado Brinhosa, Vera Liao, Himani Agrawal, Umang Bhatt, Krishnaram Kenthapadi, Alice Xiang, Maarten de Rijke, Nicholas Drabowski
In this paper, we report on ongoing work regarding (i) the development of an AI system for flagging and explaining low-quality medical images in real-time, (ii) an interview study to understand the explanation needs of stakeholders using the AI system at OurCompany, and (iii) a longitudinal user study design to examine the effect of including explanations on the workflow of the technicians in our clinics.
1 code implementation • 9 May 2022 • Michael Neely, Stefan F. Schouten, Maurits Bleeker, Ana Lucic
The validity of "attention as explanation" has so far been evaluated by computing the rank correlation between attention-based explanations and existing feature attribution explanations using LSTM-based models.
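To make the evaluation protocol concrete, the sketch below computes a Spearman rank correlation between two hypothetical explanation vectors for the same input. The values and variable names are illustrative; in the paper's setting the vectors would come from a trained LSTM's attention weights and a feature-attribution method applied to the same tokens.

```python
# Hedged sketch: comparing an attention-based explanation against a
# feature-attribution explanation via Spearman rank correlation.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-token importance scores for a 5-token input.
attention_weights = np.array([0.05, 0.40, 0.10, 0.30, 0.15])
feature_attributions = np.array([0.02, 0.35, 0.20, 0.28, 0.15])

# High rank correlation suggests the two explanations agree on which
# tokens matter most; low correlation suggests they disagree.
rho, p_value = spearmanr(attention_weights, feature_attributions)
print(f"Spearman rank correlation: {rho:.3f} (p={p_value:.3f})")
```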
no code implementations • 1 Nov 2021 • Ana Lucic, Maurits Bleeker, Sami Jullien, Samarth Bhargav, Maarten de Rijke
In this work, we explain the setup for a technical, graduate-level course on Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence (FACT-AI) at the University of Amsterdam, which teaches FACT-AI concepts through the lens of reproducibility.
1 code implementation • 7 May 2021 • Michael Neely, Stefan F. Schouten, Maurits J. R. Bleeker, Ana Lucic
By computing the rank correlation between attention weights and feature-additive explanation methods, previous analyses either invalidate or support the role of attention-based explanations as a faithful and plausible measure of salience.
1 code implementation • 14 Apr 2021 • Kim de Bie, Ana Lucic, Hinda Haned
In hybrid human-AI systems, users need to decide whether or not to trust an algorithmic prediction while the true error in the prediction is unknown.
1 code implementation • 5 Feb 2021 • Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, Fabrizio Silvestri
In this work, we propose a method for generating counterfactual (CF) explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes.
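The sketch below illustrates the counterfactual idea in its simplest form: delete edges from the adjacency matrix until the prediction flips, keeping the perturbation small. Here `predict` is a hypothetical stand-in for a trained GNN; the paper's method is more principled, optimizing a perturbation of the adjacency matrix rather than greedily deleting edges.

```python
# Hedged toy sketch of a counterfactual explanation for a graph
# classifier, not the paper's algorithm.
import numpy as np

def greedy_counterfactual(adj, predict, max_removals=10):
    """Search for a small edge-deletion perturbation that changes the
    model's prediction. Returns (perturbed_adj, removed_edges) on
    success, or None if no counterfactual is found."""
    original_label = predict(adj)
    current = adj.copy()
    removed = []
    # Enumerate the upper-triangular edges of an undirected graph.
    edges = list(zip(*np.triu_indices_from(adj, k=1)))
    for i, j in edges:
        if len(removed) >= max_removals:
            break
        if current[i, j] == 0:  # no edge to remove here
            continue
        current[i, j] = current[j, i] = 0
        removed.append((i, j))
        if predict(current) != original_label:
            return current, removed  # prediction changed: CF found
    return None
```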
1 code implementation • 27 Nov 2019 • Ana Lucic, Harrie Oosterhuis, Hinda Haned, Maarten de Rijke
Model interpretability has become an important problem in machine learning (ML) due to the growing impact that algorithmic decisions have on humans.
1 code implementation • 17 Jul 2019 • Ana Lucic, Hinda Haned, Maarten de Rijke
Given a large error, MC-BRP determines (1) feature values that would result in a reasonable prediction, and (2) general trends between each feature and the target, both based on Monte Carlo simulations.
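A minimal sketch of the Monte Carlo idea behind such bounds and trends: sample random perturbations of the input, keep those whose prediction lands within a tolerance of a reasonable value, then report per-feature bounds over the kept samples and the sign of the trend between each feature and the prediction. Function and parameter names below are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of Monte Carlo bounds-and-trends for a regression model.
import numpy as np

def mc_bounds_and_trends(model, x, y_true, tol, n_samples=1000,
                         scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Gaussian perturbations centred on the original instance x.
    noise = rng.normal(0.0, scale * (np.abs(x) + 1e-8),
                       size=(n_samples, x.size))
    samples = x + noise
    preds = np.array([model(s) for s in samples])
    # Keep samples whose prediction is "reasonable", i.e. close to y_true.
    reasonable = np.abs(preds - y_true) <= tol
    if not reasonable.any():
        raise ValueError("no perturbation yielded a reasonable prediction")
    kept = samples[reasonable]
    # (1) Per-feature value ranges among the reasonable samples.
    bounds = np.column_stack([kept.min(axis=0), kept.max(axis=0)])
    # (2) Trend: sign of correlation between each feature and the prediction.
    trends = np.sign([np.corrcoef(samples[:, f], preds)[0, 1]
                      for f in range(x.size)])
    return bounds, trends
```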
no code implementations • 4 Jul 2019 • Ana Lucic, Hinda Haned, Maarten de Rijke
Understanding how "black-box" models arrive at their predictions has sparked significant interest from both within and outside the AI community.