1 code implementation • 19 Oct 2022 • José Ribeiro, Níkolas Carneiro, Ronnie Alves
To shed light on the explanations generated by XAI measures and their interpretability, this research addresses a real-world homicide-prediction classification problem, duly endorsed by the scientific community: it replicates the problem's proposed black box model, uses 6 different XAI measures to generate explanations, and asks 6 different human experts to produce what this research refers to as Interpretability Expectations (IE).
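One way the comparison between an XAI measure's explanation and a human expert's Interpretability Expectation could be quantified is by the overlap of their top-ranked features. A minimal sketch, assuming hypothetical feature names and rankings (not the paper's actual data or protocol):

```python
# Hedged sketch: compare one XAI measure's feature ranking against a human
# expert's "Interpretability Expectation" (IE) via top-k overlap.
# Feature names and rankings below are illustrative assumptions only.

def top_k_agreement(xai_ranking, expert_ranking, k=3):
    """Fraction of the expert's top-k features also in the XAI top-k."""
    xai_top = set(xai_ranking[:k])
    expert_top = set(expert_ranking[:k])
    return len(xai_top & expert_top) / k

# Hypothetical rankings, most important feature first.
xai_ranking = ["robbery_rate", "population", "drug_seizures", "unemployment"]
expert_ie = ["population", "robbery_rate", "income", "drug_seizures"]

print(top_k_agreement(xai_ranking, expert_ie, k=3))  # 2 of the top 3 agree
```

A score near 1 would indicate the measure's explanation matches the expert's expectation; repeating this across 6 measures and 6 experts yields an agreement matrix.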
2 code implementations • 18 Oct 2022 • José Ribeiro, Lucas Cardoso, Raíssa Silva, Vitor Cirilo, Níkolas Carneiro, Ronnie Alves
In recent years, XAI researchers have been formalizing proposals and developing new methods to explain black box models. There is no general consensus in the community on which method to use to explain these models, and the choice is often driven almost directly by a specific method's popularity.
Tags: Binary Classification, Explainable Artificial Intelligence, +1
no code implementations • 6 Jul 2021 • José Ribeiro, Raíssa Silva, Lucas Cardoso, Ronnie Alves
This work seeks to answer questions such as "Are the explanations generated by the different measures the same, similar, or different?"
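The "same, similar or different" question could be made concrete with a rank-correlation statistic between two measures' feature orderings. A small sketch using Kendall's tau, with hypothetical measure names and rankings (not the paper's results):

```python
# Hedged sketch: quantify agreement between two XAI measures' feature
# rankings with Kendall's tau. The rankings here are illustrative only.
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two orderings of the same items."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        # A pair is concordant if both rankings order x and y the same way.
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0:
            concordant += 1
        else:
            discordant += 1
    n_pairs = len(rank_a) * (len(rank_a) - 1) / 2
    return (concordant - discordant) / n_pairs

# Hypothetical rankings from two different XAI measures.
measure_a = ["f1", "f2", "f3", "f4"]
measure_b = ["f1", "f3", "f2", "f4"]
print(kendall_tau(measure_a, measure_b))
```

Tau of 1 means identical orderings ("same"), values near 0 mean unrelated orderings ("different"), and intermediate positive values suggest "similar" explanations.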
no code implementations • 2 Feb 2021 • Kleber Padovani, Roberto Xavier, Rafael Cabral Borges, Andre Carvalho, Anna Reali, Annie Chateau, Ronnie Alves
Reinforcement learning has proven promising for solving complex activities without supervision, such as games, and there is a pressing need to understand the limits of this approach on 'real' problems, such as the DFA problem.
1 code implementation • 16 Aug 2020 • José Ribeiro, Lair Meneses, Denis Costa, Wando Miranda, Ronnie Alves
This research presents a machine learning model to predict homicide crimes, using a dataset of generic data (without study-location dependencies) based on incident report records for 34 different types of crimes, together with time and space data from the crime reports.
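The core idea of predicting homicides from generic, location-independent incident counts can be sketched with a minimal binary classifier. This is a pure-Python logistic regression on synthetic toy counts, with made-up feature names; it is an illustration of the setup, not the paper's actual model or dataset of 34 crime types:

```python
# Hedged sketch: a minimal logistic-regression classifier over generic
# incident-report counts. Feature names and data are synthetic assumptions.
import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """Fit weights and bias by stochastic gradient descent on log loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = p - yi                      # gradient of log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return int(1.0 / (1.0 + math.exp(-z)) >= 0.5)

# Toy monthly counts per area: [robberies, threats] -> 1 if a homicide occurred.
X = [[9, 4], [8, 5], [7, 4], [1, 0], [2, 1], [0, 1]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])
```

Because the features are only incident counts, the same trained model could in principle be applied to other locations, which is the point of avoiding study-location dependencies.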