no code implementations • 14 Apr 2021 • Przemysław Biecek, Marcin Chlebus, Janusz Gajda, Alicja Gosiewska, Anna Kozak, Dominik Ogonowski, Jakub Sztachelski, Piotr Wojewnik
What is even more important and valuable, we also show how to boost advanced models using techniques that allow practitioners to interpret them and make them more accessible, removing a crucial obstacle to the widespread deployment of more complex 'black box' models such as random forests, gradient boosted trees, or extreme gradient boosted trees.
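One model-agnostic interpretation technique of the kind the abstract alludes to is permutation feature importance. A minimal sketch, assuming scikit-learn and a synthetic stand-in for a credit-scoring dataset (all data and names here are illustrative, not from the paper):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-scoring dataset (illustrative only).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A 'black box' model of the kind discussed in the abstract.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: the drop in test performance when a feature is
# shuffled, a model-agnostic view of which inputs the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```

Techniques like this let a practitioner audit a black-box model without changing how it is trained.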
Explainable Artificial Intelligence (XAI) +1
1 code implementation • 28 Sep 2020 • Michael Bücker, Gero Szepannek, Alicja Gosiewska, Przemyslaw Biecek
This paper works out different dimensions that have to be considered for making credit scoring models understandable and presents a framework for making "black box" machine learning models transparent, auditable and explainable.
1 code implementation • 24 Sep 2020 • Szymon Maksymiuk, Alicja Gosiewska, Przemyslaw Biecek
The growing availability of data and computing power fuels the development of predictive models.
Explainable Artificial Intelligence (XAI)
3 code implementations • 2 Jun 2020 • Alicja Gosiewska, Katarzyna Woźnica, Przemysław Biecek
For example, the difference in performance between two models has no probabilistic interpretation, there is no reference point to indicate whether it represents a significant improvement, and it makes no sense to compare such differences across data sets.
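To illustrate the kind of probabilistic reading raw score differences lack, one can instead count how often one model beats another across resampled splits, which admits a frequency interpretation. A sketch under assumed scikit-learn models, not the measure the paper proposes:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=600, n_features=8, random_state=1)

wins = 0
folds = list(StratifiedKFold(n_splits=5, shuffle=True,
                             random_state=1).split(X, y))
for train_idx, test_idx in folds:
    # Two illustrative competitors; any pair of models would do.
    a = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    b = RandomForestClassifier(random_state=1).fit(X[train_idx], y[train_idx])
    auc_a = roc_auc_score(y[test_idx], a.predict_proba(X[test_idx])[:, 1])
    auc_b = roc_auc_score(y[test_idx], b.predict_proba(X[test_idx])[:, 1])
    wins += auc_b > auc_a

# Empirical probability that the forest beats the linear model on a fold:
# a reference point that a raw AUC difference does not provide.
print(f"P(forest beats logistic) ~ {wins / len(folds):.2f}")
```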
1 code implementation • 11 Feb 2020 • Alicja Gosiewska, Przemyslaw Biecek
Can we train interpretable and accurate models without time-consuming feature engineering?
2 code implementations • 24 Aug 2019 • Alicja Gosiewska, Mateusz Bakala, Katarzyna Woznica, Maciej Zwolinski, Przemyslaw Biecek
Second, for k-fold cross-validation the model performance is in most cases calculated as the average over the folds, which neglects information about how stable the performance is across folds.
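The point about fold stability can be seen by reporting the per-fold scores and their spread rather than only the mean. A minimal sketch with scikit-learn on synthetic data (illustrative, not the paper's experiment):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Per-fold scores, not just their average: the spread shows how stable
# the performance estimate is, which a single averaged number hides.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("per-fold:", scores.round(3))
print(f"mean={scores.mean():.3f}  std={scores.std():.3f}")
```

Two models with the same mean score but very different standard deviations are not equally trustworthy, which is exactly the information the averaged number discards.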
2 code implementations • 27 Mar 2019 • Alicja Gosiewska, Przemyslaw Biecek
Explainable Artificial Intelligence (XAI) has received a great deal of attention recently.
4 code implementations • 28 Feb 2019 • Alicja Gosiewska, Aleksandra Gacek, Piotr Lubon, Przemyslaw Biecek
Complex black-box predictive models may have high accuracy, but their opacity causes problems such as lack of trust, lack of stability, and sensitivity to concept drift.
4 code implementations • 19 Sep 2018 • Alicja Gosiewska, Przemyslaw Biecek
With modern software it is easy to train even a complex model that fits the training data and results in high accuracy on the test set.