Search Results for author: Ana Lucic

Found 13 papers, 7 papers with code

Towards Reproducible Machine Learning Research in Natural Language Processing

no code implementations · ACL 2022 · Ana Lucic, Maurits Bleeker, Samarth Bhargav, Jessica Forde, Koustuv Sinha, Jesse Dodge, Sasha Luccioni, Robert Stojnic

While recent progress in the field of ML has been significant, the reproducibility of these cutting-edge results is often lacking: many submissions omit the information necessary to reproduce them.

BIG-bench Machine Learning

Clifford-Steerable Convolutional Neural Networks

1 code implementation · 22 Feb 2024 · Maksim Zhdanov, David Ruhe, Maurice Weiler, Ana Lucic, Johannes Brandstetter, Patrick Forré

We present Clifford-Steerable Convolutional Neural Networks (CS-CNNs), a novel class of $\mathrm{E}(p, q)$-equivariant CNNs.
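
For readers skimming this entry, "equivariant" refers to the standard constraint below (a textbook definition, not a formula quoted from the abstract): transforming the input by a group element and then applying the network gives the same result as applying the network first and then transforming its output.

```latex
% Equivariance of a layer \Phi under the pseudo-Euclidean group E(p, q),
% where \rho_in and \rho_out are the representations acting on the input
% and output feature fields (standard definition, stated here for context):
\Phi\bigl(\rho_{\mathrm{in}}(g)\, f\bigr)
  = \rho_{\mathrm{out}}(g)\, \Phi(f)
  \qquad \text{for all } g \in \mathrm{E}(p, q).
```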

Semi-Supervised Object Detection in the Open World

no code implementations · 28 Jul 2023 · Garvita Allabadi, Ana Lucic, Peter Pao-Huang, Yu-Xiong Wang, Vikram Adve

Existing approaches for semi-supervised object detection assume a fixed set of classes present in training and unlabeled datasets, i.e., in-distribution (ID) data.

Object · object-detection +2

Explaining Predictions from Machine Learning Models: Algorithms, Users, and Pedagogy

no code implementations · 12 Sep 2022 · Ana Lucic

Model explainability has become an important problem in machine learning (ML) due to the increased effect that algorithmic predictions have on humans.

Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users

no code implementations · 6 Jul 2022 · Ana Lucic, Sheeraz Ahmad, Amanda Furtado Brinhosa, Vera Liao, Himani Agrawal, Umang Bhatt, Krishnaram Kenthapadi, Alice Xiang, Maarten de Rijke, Nicholas Drabowski

In this paper, we report on ongoing work regarding (i) the development of an AI system for flagging and explaining low-quality medical images in real time, (ii) an interview study to understand the explanation needs of stakeholders using the AI system at OurCompany, and (iii) a longitudinal user study design to examine the effect of including explanations on the workflow of the technicians in our clinics.

Explainable Artificial Intelligence (XAI)

A Song of (Dis)agreement: Evaluating the Evaluation of Explainable Artificial Intelligence in Natural Language Processing

1 code implementation · 9 May 2022 · Michael Neely, Stefan F. Schouten, Maurits Bleeker, Ana Lucic

The validity of "attention as explanation" has so far been evaluated by computing the rank correlation between attention-based explanations and existing feature attribution explanations using LSTM-based models.
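
As a concrete illustration of the evaluation protocol being critiqued, the sketch below computes a rank correlation between two sets of per-token importance scores. Both score vectors are invented for illustration; in the paper's setting they would come from attention weights and feature attribution methods applied to LSTM-based models.

```python
# Toy sketch of the rank-correlation comparison described above.
# Both score vectors are hypothetical, not data from the paper.
from scipy.stats import kendalltau

# Per-token importance scores for one input sentence (invented).
attention_scores = [0.05, 0.40, 0.10, 0.30, 0.15]
attribution_scores = [0.02, 0.35, 0.20, 0.28, 0.15]

tau, p_value = kendalltau(attention_scores, attribution_scores)
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.3f})")
```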

Explainable Artificial Intelligence (XAI)

Reproducibility as a Mechanism for Teaching Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence

no code implementations · 1 Nov 2021 · Ana Lucic, Maurits Bleeker, Sami Jullien, Samarth Bhargav, Maarten de Rijke

In this work, we explain the setup for a technical, graduate-level course on Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence (FACT-AI) at the University of Amsterdam, which teaches FACT-AI concepts through the lens of reproducibility.

Fairness

Order in the Court: Explainable AI Methods Prone to Disagreement

1 code implementation · 7 May 2021 · Michael Neely, Stefan F. Schouten, Maurits J. R. Bleeker, Ana Lucic

By computing the rank correlation between attention weights and feature-additive explanation methods, previous analyses either invalidate or support the role of attention-based explanations as a faithful and plausible measure of salience.

To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions

1 code implementation · 14 Apr 2021 · Kim de Bie, Ana Lucic, Hinda Haned

In hybrid human-AI systems, users need to decide whether or not to trust an algorithmic prediction while the true error in the prediction is unknown.

regression

CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks

1 code implementation · 5 Feb 2021 · Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, Fabrizio Silvestri

In this work, we propose a method for generating CF explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes.
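
To make the idea concrete, here is a minimal toy sketch of counterfactual search by adjacency perturbation: an edge mask is optimized so that a frozen one-layer GNN changes its prediction for a target node, while a penalty keeps the perturbation small. The model, graph, and hyperparameters are all invented; this illustrates the general idea, not the authors' CF-GNNExplainer implementation.

```python
# Toy counterfactual search on a graph: perturb edges until the
# (frozen) GNN's prediction for a target node changes.
import torch

torch.manual_seed(0)
n, d, c = 6, 4, 2                          # nodes, feature dim, classes
A = (torch.rand(n, n) > 0.6).float()       # hypothetical adjacency matrix
A = ((A + A.T) > 0).float()                # make it symmetric
X = torch.randn(n, d)                      # hypothetical node features
W = torch.randn(d, c)                      # frozen "trained" weights

def gnn(adj, feats):
    # One-layer GCN-style model: mean aggregation + linear map.
    deg = adj.sum(1, keepdim=True).clamp(min=1)
    return (adj / deg) @ feats @ W

target = 0
orig_class = gnn(A, X)[target].argmax()

# Learn an edge mask; sigmoid(mask) acts as a soft keep-probability.
mask = torch.zeros(n, n, requires_grad=True)
opt = torch.optim.Adam([mask], lr=0.1)
for _ in range(200):
    A_cf = A * torch.sigmoid(mask)          # softly perturbed adjacency
    logits = gnn(A_cf, X)[target]
    # Push down the original class; penalize large perturbations.
    loss = logits[orig_class] + 0.05 * (A - A_cf).abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

A_final = A * (torch.sigmoid(mask) > 0.5).float()
print("original:", orig_class.item(),
      "after perturbation:", gnn(A_final, X)[target].argmax().item())
```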

counterfactual

FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles

1 code implementation · 27 Nov 2019 · Ana Lucic, Harrie Oosterhuis, Hinda Haned, Maarten de Rijke

Model interpretability has become an important problem in machine learning (ML) due to the increased effect that algorithmic decisions have on humans.

counterfactual

Why Does My Model Fail? Contrastive Local Explanations for Retail Forecasting

1 code implementation · 17 Jul 2019 · Ana Lucic, Hinda Haned, Maarten de Rijke

Given a large error, MC-BRP determines (1) feature values that would result in a reasonable prediction, and (2) general trends between each feature and the target, both based on Monte Carlo simulations.
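
A toy sketch of the Monte Carlo step described above: sample perturbed feature values around an instance with a large error and keep the samples whose predictions fall back within a tolerance of the observed target. The model, instance, and tolerance are all invented; this illustrates the general idea rather than reproducing MC-BRP.

```python
# Toy Monte Carlo search for feature values that would have produced
# a "reasonable" prediction. Everything below is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Stand-in for a trained forecasting model (invented coefficients).
    return 3.0 * x[..., 0] - 2.0 * x[..., 1] + 0.5 * x[..., 2]

x_err = np.array([1.0, 4.0, 2.0])   # instance with a large error
y_true = 5.0                         # observed target value
tolerance = 1.0                      # band defining a "reasonable" prediction

# Sample perturbations of the instance and keep the acceptable ones.
samples = x_err + rng.normal(scale=2.0, size=(10_000, 3))
ok = np.abs(model(samples) - y_true) <= tolerance

# Per-feature value ranges that led to a reasonable prediction.
for j, name in enumerate(["f0", "f1", "f2"]):
    vals = samples[ok, j]
    if vals.size:
        print(f"{name}: [{vals.min():.2f}, {vals.max():.2f}]")
```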

Explaining Predictions from Tree-based Boosting Ensembles

no code implementations · 4 Jul 2019 · Ana Lucic, Hinda Haned, Maarten de Rijke

Understanding how "black-box" models arrive at their predictions has sparked significant interest from both within and outside the AI community.

counterfactual · Counterfactual Explanation
