no code implementations • 15 Nov 2023 • Andrea Pugnana, Carlos Mougan, Dan Saattrup Nielsen
Such a framework is known as selective prediction.
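The core idea of selective prediction can be sketched as a classifier that abstains whenever its confidence falls below a threshold, trading coverage for accuracy. This is a minimal illustrative sketch (not the paper's method); the threshold value and dataset are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data and a base classifier (illustrative, not the paper's setup).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Confidence = top-class predicted probability.
confidence = clf.predict_proba(X_te).max(axis=1)

# Abstain below an assumed threshold; in practice this is tuned
# to hit a target coverage or a target selective accuracy.
threshold = 0.8
accept = confidence >= threshold

coverage = accept.mean()  # fraction of inputs the model answers
selective_acc = (clf.predict(X_te)[accept] == y_te[accept]).mean()
```

Raising the threshold lowers coverage but typically raises the accuracy on the accepted subset, which is the trade-off selective prediction studies.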
no code implementations • 9 Nov 2023 • Carlos Mougan, Joshua Brand
Deontological ethics, specifically understood through Immanuel Kant, provides a moral framework that emphasizes the importance of duties and principles, rather than the consequences of action.
no code implementations • 7 Jul 2023 • Ayse Gizem Yasar, Andrew Chong, Evan Dong, Thomas Krendl Gilbert, Sarah Hladikova, Roland Maio, Carlos Mougan, Xudong Shen, Shubham Singh, Ana-Andreea Stoica, Savannah Thais, Miri Zilka
As AI technology advances rapidly, concerns over the risks of bigness in digital markets are also growing.
no code implementations • 14 Mar 2023 • Carlos Mougan, Klaus Broelemann, David Masip, Gjergji Kasneci, Thanassis Tiropanis, Steffen Staab
State-of-the-art techniques then model input data distributions or model prediction distributions, aiming to understand how learned models interact with shifting distributions.
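Monitoring prediction distributions can be sketched by comparing the model's score distribution on reference data against incoming data with a two-sample test. This is an illustrative sketch only; the simulated shift and the significance level are assumptions, not the paper's experiment.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Reference data and a trained model (toy setup).
X_ref, y_ref = make_classification(n_samples=2000, n_features=8, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_ref, y_ref)

# Simulate shifted production data by perturbing the inputs.
rng = np.random.default_rng(1)
X_new = X_ref + rng.normal(loc=1.5, scale=0.5, size=X_ref.shape)

# Compare the model's prediction distributions with a KS two-sample test.
scores_ref = clf.predict_proba(X_ref)[:, 1]
scores_new = clf.predict_proba(X_new)[:, 1]
stat, p_value = ks_2samp(scores_ref, scores_new)

shift_detected = p_value < 0.01  # assumed alert threshold
```

A monitor of this shape needs no incoming labels, which is what makes prediction-distribution tracking attractive for deployed models.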
no code implementations • 14 Mar 2023 • Carlos Mougan, Laura State, Antonio Ferrara, Salvatore Ruggieri, Steffen Staab
Liberalism-oriented political philosophy reasons that all individuals should be treated equally independently of their protected characteristics.
no code implementations • 22 Oct 2022 • Carlos Mougan, Klaus Broelemann, Gjergji Kasneci, Thanassis Tiropanis, Steffen Staab
We provide a mathematical analysis of different types of distribution shifts as well as synthetic experimental examples.
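A synthetic example can separate two of the shift types the analysis distinguishes: covariate shift (P(x) changes, the concept P(y|x) is fixed) and concept shift (P(y|x) itself changes). The setup below is an illustrative sketch, not the paper's experiments; the concept and shift magnitudes are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fixed linear concept y = 1[x0 + x1 > 0]; "diff" simulates a changed concept.
def sample(n, shift=0.0, concept="sum"):
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    if concept == "sum":
        y = (X[:, 0] + X[:, 1] > 0).astype(int)   # original P(y|x)
    else:
        y = (X[:, 0] - X[:, 1] > 0).astype(int)   # changed P(y|x)
    return X, y

X_tr, y_tr = sample(5000)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

acc_id = clf.score(*sample(5000))                    # in-distribution
acc_cov = clf.score(*sample(5000, shift=2.0))        # covariate shift
acc_con = clf.score(*sample(5000, concept="diff"))   # concept shift
```

Because the learned boundary matches the fixed concept, the model survives the covariate shift but degrades sharply under the concept shift, which is why the two shift types need different treatment.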
no code implementations • 7 Feb 2022 • Carlos Mougan, George Kanellos, Johannes Micheler, Jose Martinez, Thomas Gottron
For this approach we make use of explainable supervised machine learning to (a) identify the types of exceptions and (b) prioritize which exceptions are more likely to require an intervention or correction by the NCBs.
2 code implementations • 27 Jan 2022 • Carlos Mougan, Dan Saattrup Nielsen
In this work, we use non-parametric bootstrapped uncertainty estimates and SHAP values to provide explainable uncertainty estimation as a technique that aims to monitor the deterioration of machine learning models in deployment environments, as well as determine the source of model deterioration when target labels are not available.
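The bootstrap part of this idea can be sketched as follows: fit an ensemble on bootstrap resamples, take the spread of their predictions as a per-input uncertainty estimate, then fit an auxiliary model on that uncertainty target (which the paper explains with SHAP values). This is a minimal sketch under assumed model choices, not the paper's implementation; the SHAP attribution step is omitted here.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)

# Fit an ensemble on bootstrap resamples of the training data.
B = 30
models = []
for _ in range(B):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(DecisionTreeRegressor(max_depth=5).fit(X[idx], y[idx]))

# Non-parametric uncertainty: spread of the bootstrap predictions per input.
preds = np.stack([m.predict(X) for m in models])  # shape (B, n_samples)
uncertainty = preds.std(axis=0)

# Auxiliary model with uncertainty as target; applying SHAP to this model
# would attribute the uncertainty to input features, as the paper proposes.
aux = DecisionTreeRegressor(max_depth=3).fit(X, uncertainty)
```

Because the uncertainty target requires no labels at inference time, the same recipe can run on incoming production data to localize which features drive model deterioration.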
2 code implementations • 27 Jan 2022 • Carlos Mougan, Jose M. Alvarez, Salvatore Ruggieri, Steffen Staab
We investigate the interaction between categorical encodings and target encoding regularization methods that reduce unfairness.
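One common regularization for target encoding is smoothing: shrinking each category's target mean toward the global mean with a pseudo-count. The sketch below is an illustrative implementation of that general technique, not necessarily the exact regularizer studied in the paper; the function name and parameter `m` are assumptions.

```python
import numpy as np

def target_encode(categories, target, m=10.0):
    """Smoothing-regularized target encoding: each category mean is
    shrunk toward the global mean with pseudo-count m. Larger m means
    stronger regularization, pulling rare categories to the global mean."""
    categories = np.asarray(categories)
    target = np.asarray(target, dtype=float)
    global_mean = target.mean()
    encoded = np.empty_like(target)
    for cat in np.unique(categories):
        mask = categories == cat
        n = mask.sum()
        encoded[mask] = (target[mask].sum() + m * global_mean) / (n + m)
    return encoded

cats = ["a", "a", "b", "b", "b", "c"]
y = [1, 1, 0, 0, 1, 1]
enc = target_encode(cats, y, m=2.0)
enc_strong = target_encode(cats, y, m=1e6)  # → essentially the global mean
```

With strong regularization all categories collapse toward the global mean, which reduces the influence of rare categories; this is the knob whose fairness side effects the paper investigates.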
no code implementations • 18 Jul 2021 • Carlos Mougan, Georgios Kanellos, Thomas Gottron
Explainable AI constitutes a fundamental step towards establishing fairness and addressing bias in algorithmic decision-making.
2 code implementations • 27 May 2021 • Carlos Mougan, David Masip, Jordi Nin, Oriol Pujol
Regression problems have been widely studied in the machine learning literature, resulting in a plethora of regression models and performance measures.