no code implementations • 28 Apr 2024 • Paulina Tomaszewska, Przemysław Biecek
Does the stethoscope in the picture make the adjacent person a doctor or a patient?
no code implementations • 18 Apr 2024 • Bartlomiej Sobieski, Przemysław Biecek
Specifically, we discover that the latent space of Diffusion Autoencoders encodes the inference process of a given classifier in the form of global directions.
no code implementations • 16 Apr 2024 • Weronika Hryniewska-Guzik, Luca Longo, Przemysław Biecek
Explainable Artificial Intelligence has gained significant attention due to the widespread use of complex deep learning models in high-stakes domains such as medicine, finance, and autonomous cars.
1 code implementation • 9 Apr 2024 • Weronika Hryniewska-Guzik, Jakub Bilski, Bartosz Chrostowski, Jakub Drak Sbahi, Przemysław Biecek
Robust and highly accurate lung segmentation in X-rays is crucial in medical imaging.
1 code implementation • 15 Mar 2024 • Sophie Hanna Langbein, Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek, Marvin N. Wright
With the spread and rapid advancement of black box machine learning models, the field of interpretable machine learning (IML) or explainable artificial intelligence (XAI) has become increasingly important over the last decade.
Explainable Artificial Intelligence (XAI) +4
no code implementations • 18 Feb 2024 • Przemysław Bombiński, Patryk Szatkowski, Bartłomiej Sobieski, Tymoteusz Kwieciński, Szymon Płotka, Mariusz Adamek, Marcin Banasiuk, Mariusz I. Furmanek, Przemysław Biecek
We show that lung X-ray masks created by following the contours of the heart, mediastinum, and diaphragm significantly underestimate lung regions and exclude substantial portions of the lungs from further assessment, which may result in numerous clinical errors.
1 code implementation • 30 Jan 2024 • Weronika Hryniewska-Guzik, Bartosz Sawicki, Przemysław Biecek
This paper presents a comprehensive comparative analysis of explainable artificial intelligence (XAI) ensembling methods.
Explainable Artificial Intelligence (XAI)
1 code implementation • 18 Jan 2024 • Paulina Tomaszewska, Elżbieta Sienkiewicz, Mai P. Hoang, Przemysław Biecek
The DSCon allows for a quantitative measure of the spatial context's role using three Spatial Context Measures: $SCM_{features}$, $SCM_{targets}$, $SCM_{residuals}$ to distinguish whether the spatial context is observable within the features of neighboring regions, their target values (attention scores) or residuals, respectively.
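In the paper the Spatial Context Measures are defined via spatial regression models; as a hedged, toy-scale illustration of the underlying idea only (the synthetic grid, variable names, and the R²-gain formulation below are ours, not the paper's), one can measure how much predictive power neighbouring regions add on top of a region's own features:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                                    # grid of n x n image patches
features = rng.normal(size=(n, n))        # one scalar feature per patch

# Build targets (e.g. attention scores) that depend on *neighbouring*
# features, so spatial context is genuinely present in this toy data.
pad = np.pad(features, 1, mode="edge")
neigh_mean = (pad[:-2, 1:-1] + pad[2:, 1:-1]
              + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
targets = neigh_mean + 0.1 * rng.normal(size=(n, n))

def r2_of_fit(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

y = targets.ravel()
own_only = features.ravel()[:, None]
with_context = np.column_stack([features.ravel(), neigh_mean.ravel()])

r2_base = r2_of_fit(own_only, y)       # each patch explained by itself
r2_ctx = r2_of_fit(with_context, y)    # plus its 4-neighbourhood mean
scm_targets = r2_ctx - r2_base         # large gap -> spatial context matters
```

A near-zero gap would indicate that neighbouring regions carry no extra information about the targets; the same comparison can be run on features or residuals, mirroring the three measures named above.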
no code implementations • 20 Dec 2023 • Stanisław Giziński, Paulina Kaczyńska, Hubert Ruczyński, Emilia Wiśnios, Bartosz Pieliński, Przemysław Biecek, Julian Sienkiewicz
This suggests that the notion of Big Tech domination over AI research is oversimplified in the discourse.
1 code implementation • 30 Aug 2023 • Mikołaj Spytek, Mateusz Krzyziński, Sophie Hanna Langbein, Hubert Baniecki, Marvin N. Wright, Przemysław Biecek
Due to their flexibility and superior performance, machine learning models frequently complement and outperform traditional statistical survival models.
1 code implementation • 22 Aug 2023 • Katarzyna Kobylińska, Mateusz Krzyziński, Rafał Machowicz, Mariusz Adamek, Przemysław Biecek
If differently behaving models are detected in the Rashomon set, their combined analysis leads to more trustworthy conclusions, which is of vital importance for high-stakes applications such as medicine.
Explainable Artificial Intelligence (XAI)
no code implementations • 2 Aug 2023 • Weronika Hryniewska-Guzik, Maria Kędzierska, Przemysław Biecek
Lung cancer and COVID-19 are among the diseases with the highest morbidity and mortality rates in the world.
1 code implementation • 30 Jun 2023 • Adrian Stando, Mustafa Cavus, Przemysław Biecek
To capture these changes, Explainable Artificial Intelligence tools are used to compare models trained on datasets before and after balancing.
Explainable Artificial Intelligence (XAI) • imbalanced classification
1 code implementation • 20 Jun 2023 • Katarzyna Woźnica, Piotr Wilczyński, Przemysław Biecek
In this paper, we present an example of SeFNet prepared for a collection of predictive tasks in healthcare, with the features' relations derived from the SNOMED-CT ontology.
no code implementations • 18 May 2023 • Weronika Hryniewska, Piotr Czarnecki, Jakub Wiśniewski, Przemysław Bombiński, Przemysław Biecek
Based on this use case, we show how to monitor data and model balance (fairness) throughout the life cycle of a predictive model, from data acquisition to parity analysis of model scores.
1 code implementation • 12 Apr 2023 • Piotr Komorowski, Hubert Baniecki, Przemysław Biecek
Our findings provide insights into the applicability of ViT explanations in medical imaging and highlight the importance of using appropriate evaluation criteria for comparing them.
1 code implementation • 25 Feb 2023 • Piotr Wilczyński, Artur Żółkowski, Mateusz Krzyziński, Emilia Wiśnios, Bartosz Pieliński, Stanisław Giziński, Julian Sienkiewicz, Przemysław Biecek
This paper introduces HADES, a novel tool for automatic comparative analysis of documents with similar structures.
no code implementations • 10 Nov 2022 • Artur Żółkowski, Mateusz Krzyziński, Piotr Wilczyński, Stanisław Giziński, Emilia Wiśnios, Bartosz Pieliński, Julian Sienkiewicz, Przemysław Biecek
The number of standardized policy documents regarding climate policy and their publication frequency is significantly increasing.
1 code implementation • 23 Aug 2022 • Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek
Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect, and its aggregation is a better determinant of the importance of variables for a prediction than SurvLIME.
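SurvSHAP(t) produces a whole attribution curve over time for each variable rather than a single number. The aggregation step mentioned above can be pictured with a minimal sketch (the attribution curves, variable names, and the area-under-curve aggregation below are illustrative assumptions of ours, not output of the actual method):

```python
import numpy as np

rng = np.random.default_rng(1)
times = np.linspace(0.0, 10.0, 101)   # follow-up time grid
dt = times[1] - times[0]

# Hypothetical time-dependent attribution curves phi_j(t) for three variables:
# "age" matters throughout follow-up, "treatment" only early on, "noise" never.
phi = {
    "age":       0.30 * np.ones_like(times),
    "treatment": 0.50 * np.exp(-times),
    "noise":     0.01 * rng.normal(size=times.size),
}

def aggregate(curve):
    """Area under |phi_j(t)| over follow-up: a time-aggregated importance."""
    return float(np.sum(np.abs(curve)) * dt)

importance = {name: aggregate(curve) for name, curve in phi.items()}
ranking = sorted(importance, key=importance.get, reverse=True)
```

The point of keeping the full curves is visible here: "treatment" and "noise" could look similar at late time points, yet the time-resolved view and its aggregate separate a variable with an early, transient effect from one with no effect at all.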
1 code implementation • 14 Jun 2022 • Mustafa Cavus, Przemysław Biecek
To measure the probability of a shot resulting in a goal, several features derived from football event and tracking data are used to train an expected goal (xG) model.
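An expected goal model is, at its core, a probabilistic classifier over shot features. A minimal sketch on synthetic data (the feature set, coefficients, and simulated outcomes below are our own illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Synthetic shot data: distance to goal (metres) and shooting angle (radians).
distance = rng.uniform(5.0, 35.0, n)
angle = rng.uniform(0.1, 1.2, n)

# Ground truth used only to simulate outcomes: close, wide-angle shots score more.
true_logit = 2.0 - 0.15 * distance + 1.5 * angle
goal = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Standardise features and fit a logistic-regression xG model by gradient descent.
mu_d, sd_d = distance.mean(), distance.std()
mu_a, sd_a = angle.mean(), angle.std()
X = np.column_stack([np.ones(n), (distance - mu_d) / sd_d, (angle - mu_a) / sd_a])
w = np.zeros(3)
for _ in range(1000):
    p = 1 / (1 + np.exp(-X @ w))          # current goal-probability estimates
    w -= 0.5 * X.T @ (p - goal) / n       # gradient step on the log-loss

def xg(dist, ang):
    """Predicted probability that a shot from (dist, ang) is a goal."""
    z = w[0] + w[1] * (dist - mu_d) / sd_d + w[2] * (ang - mu_a) / sd_a
    return 1 / (1 + np.exp(-z))
```

Real xG models use many more event- and tracking-derived features (defender positions, body part, game state), but the fitted signs already recover the expected structure: probability falls with distance and rises with angle.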
1 code implementation • 27 Jan 2022 • Katarzyna Woźnica, Mateusz Grzyb, Zuzanna Trafas, Przemysław Biecek
For many machine learning models, a choice of hyperparameters is a crucial step towards achieving high performance.
1 code implementation • 15 Nov 2021 • Weronika Hryniewska, Adrianna Grudzień, Przemysław Biecek
LIMEcraft enhances the explanation process by allowing a user to interactively select semantically consistent areas and thoroughly examine the prediction for an image instance, even when the image contains many features.
no code implementations • 29 Jul 2021 • Stanisław Gizinski, Michał Kuzba, Bartosz Pielinski, Julian Sienkiewicz, Stanisław Łaniewski, Przemysław Biecek
The growing number of AI applications, including for high-stakes decisions, increases the interest in Explainable and Interpretable Machine Learning (XI-ML).
Explainable Artificial Intelligence (XAI) +1
no code implementations • 28 May 2021 • Katarzyna Woźnica, Katarzyna Pękala, Hubert Baniecki, Wojciech Kretowicz, Elżbieta Sienkiewicz, Przemysław Biecek
The increasing number of regulations and expectations of predictive machine learning models, such as the so-called right to explanation, has led to a large number of methods promising greater interpretability.
BIG-bench Machine Learning • Explainable Artificial Intelligence (XAI)
no code implementations • 12 May 2021 • Tomasz Stanisławek, Filip Graliński, Anna Wróblewska, Dawid Lipiński, Agnieszka Kaliska, Paulina Rosalska, Bartosz Topolski, Przemysław Biecek
The Key Information Extraction (KIE) task is increasingly important in natural language processing.
no code implementations • 14 Apr 2021 • Przemysław Biecek, Marcin Chlebus, Janusz Gajda, Alicja Gosiewska, Anna Kozak, Dominik Ogonowski, Jakub Sztachelski, Piotr Wojewnik
More importantly, we also show how to boost advanced models using techniques that make them interpretable and more accessible to credit risk practitioners, resolving a crucial obstacle to the widespread deployment of more complex 'black box' models such as random forests and gradient-boosted or extreme gradient-boosted trees.
Explainable Artificial Intelligence (XAI) +1
1 code implementation • 1 Apr 2021 • Jakub Wiśniewski, Przemysław Biecek
The package includes a series of methods for bias mitigation that aim to diminish the discrimination in the model.
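The snippet does not name the package's specific mitigation methods, so as a hedged illustration here is a minimal sketch of one classic pre-processing approach that fairness toolkits commonly implement, reweighing (Kamiran & Calders, 2012); the data and names below are synthetic and ours:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
group = rng.integers(0, 2, n)                       # protected attribute (0/1)
# Biased labels: the favourable outcome is more frequent in group 1.
label = (rng.random(n) < np.where(group == 1, 0.7, 0.3)).astype(int)

def reweigh(group, label):
    """Reweighing (Kamiran & Calders, 2012): weight each instance by
    P(group) * P(label) / P(group, label), so that in the weighted data
    the label is statistically independent of the protected attribute."""
    w = np.empty(len(label), dtype=float)
    for g in (0, 1):
        for y in (0, 1):
            cell = (group == g) & (label == y)
            w[cell] = ((group == g).mean() * (label == y).mean()) / cell.mean()
    return w

weights = reweigh(group, label)

def favourable_rate(g):
    """Weighted favourable-outcome rate within group g."""
    return np.average(label[group == g], weights=weights[group == g])

gap_before = abs(label[group == 0].mean() - label[group == 1].mean())
gap_after = abs(favourable_rate(0) - favourable_rate(1))
```

Training any downstream model with these instance weights removes the group/label dependence at the data level, which is why such pre-processing methods pair naturally with the fairness-metric checks the package provides.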
1 code implementation • 11 Dec 2020 • Weronika Hryniewska, Przemysław Bombiński, Patryk Szatkowski, Paulina Tomaszewska, Artur Przelaskowski, Przemysław Biecek
The sudden outbreak and uncontrolled spread of COVID-19 is one of the most important global problems today.
1 code implementation • 30 Aug 2020 • Wojciech Kretowicz, Przemysław Biecek
Data collected in this way is used to study the factors influencing the algorithm's performance.
no code implementations • 6 Jul 2020 • Katarzyna Woźnica, Przemysław Biecek
Incomplete data are common in practical applications.
3 code implementations • 2 Jun 2020 • Alicja Gosiewska, Katarzyna Woźnica, Przemysław Biecek
For example, the difference in performance for two models has no probabilistic interpretation, there is no reference point to indicate whether they represent a significant improvement, and it makes no sense to compare such differences between data sets.
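These criticisms motivate a performance measure whose differences do have a probabilistic reading. One natural construction in this spirit is an Elo-style rating built from pairwise model comparisons; the sketch below is our own toy version of that idea (model names, latent skills, and constants are invented), not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(5)
models = ["glm", "rf", "gbm"]
skill = {"glm": 0.0, "rf": 1.0, "gbm": 2.0}   # latent "true" predictive power

rating = {m: 0.0 for m in models}
k = 0.1                                        # update step size
for _ in range(5000):
    i, j = rng.choice(len(models), size=2, replace=False)
    a, b = models[i], models[j]
    # Outcome of one head-to-head comparison on a resampled task:
    # the stronger model wins with logistic probability.
    a_wins = rng.random() < 1 / (1 + np.exp(-(skill[a] - skill[b])))
    # Elo-style update: shift both ratings toward the observed outcome.
    p_hat = 1 / (1 + np.exp(-(rating[a] - rating[b])))
    rating[a] += k * (a_wins - p_hat)
    rating[b] -= k * (a_wins - p_hat)
```

Unlike a raw performance difference, a rating gap now has a direct interpretation: P(a beats b on a fresh task) ≈ 1 / (1 + exp(-(rating[a] - rating[b]))), and the same scale is comparable across data sets because it is anchored to win probabilities rather than to any one metric's units.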
no code implementations • 4 Mar 2020 • Filip Graliński, Tomasz Stanisławek, Anna Wróblewska, Dawid Lipiński, Agnieszka Kaliska, Paulina Rosalska, Bartosz Topolski, Przemysław Biecek
State-of-the-art solutions for Natural Language Processing (NLP) are able to capture a broad range of contexts, like the sentence-level context or document-level context for short documents.
no code implementations • 11 Feb 2020 • Katarzyna Woźnica, Przemysław Biecek
are used to predict the expected performance.
Explainable Artificial Intelligence (XAI) +2
1 code implementation • 7 Feb 2020 • Michał Kuźba, Przemysław Biecek
To our surprise, their development is driven by model developers rather than a study of needs for human end users.
1 code implementation • 3 Jul 2019 • Adam Gabriel Dobrakowski, Agnieszka Mykowiecka, Małgorzata Marciniak, Wojciech Jaworski, Przemysław Biecek
Is it true that patients with similar conditions get similar diagnoses?