no code implementations • 15 Mar 2024 • Andrea Apicella, Salvatore Giugliano, Francesco Isgrò, Roberto Prevete
Modern Artificial Intelligence (AI) systems, especially Deep Learning (DL) models, pose challenges for AI researchers in understanding their inner workings.
no code implementations • 24 Jan 2024 • Andrea Apicella, Francesco Isgrò, Roberto Prevete
While this approach provides convenience, it raises concerns about the reliability of outcomes, leading to challenges such as incorrect performance evaluation.
no code implementations • 9 Jun 2023 • Andrea Apicella, Luca Di Lorenzo, Francesco Isgrò, Andrea Pollastro, Roberto Prevete
Explainable Artificial Intelligence (XAI) aims to provide insights into the decision-making process of AI models, allowing users to understand the models' results beyond the decisions themselves.
no code implementations • 9 Jun 2023 • Andrea Apicella, Francesco Isgrò, Roberto Prevete
To this end, we propose a neural network architecture which induces an error function involving the outputs of all the network layers.
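The paper's exact architecture is not given here, so the following is only a minimal numpy sketch of the general idea: each hidden layer feeds a small auxiliary readout head, and the total error is a weighted sum of per-layer prediction errors rather than a loss on the final output alone. All names, layer sizes, and loss weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-layer network; sizes and weights are arbitrary choices.
sizes = [4, 8, 6, 3]                      # input, two hidden, output
Ws = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
# Each hidden layer gets an auxiliary head projecting to target space.
heads = [rng.normal(0, 0.1, (h, sizes[-1])) for h in sizes[1:-1]]

def layerwise_loss(x, target, weights=(0.3, 0.3, 1.0)):
    """Total error = weighted sum of prediction errors from every layer."""
    losses, h = [], x
    for i, W in enumerate(Ws):
        h = np.tanh(h @ W)
        # Hidden layers predict through their auxiliary head; the final
        # layer's output is compared to the target directly.
        pred = h @ heads[i] if i < len(heads) else h
        losses.append(np.mean((pred - target) ** 2))
    return float(sum(w * l for w, l in zip(weights, losses)))

x, y = rng.normal(size=4), rng.normal(size=3)
total = layerwise_loss(x, y)
```

Minimising such a loss pressures every layer, not just the last one, to produce target-relevant representations.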
no code implementations • 16 Dec 2022 • Andrea Apicella, Pasquale Arpaia, Giovanni D'Errico, Davide Marocco, Giovanna Mastrati, Nicola Moccaldi, Roberto Prevete
Several architectures and methods have been proposed to address this issue, mainly based on transfer learning methods.
no code implementations • 12 Oct 2022 • Andrea Apicella, Francesco Isgrò, Andrea Pollastro, Roberto Prevete
An interesting case of the well-known Dataset Shift Problem is the classification of Electroencephalogram (EEG) signals in the context of Brain-Computer Interface (BCI).
no code implementations • 3 Oct 2022 • Andrea Apicella, Francesco Isgrò, Andrea Pollastro, Roberto Prevete
This paper focuses on the impact of data normalisation or standardisation strategies applied together with Domain Adaptation (DA) methods.
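As a minimal sketch of why this choice matters (the paper's specific strategies are not reproduced here, and the data below is synthetic): standardising a shifted target session with the source session's statistics leaves a residual shift, while standardising each session with its own statistics removes it. These are the kinds of strategies that interact with a DA method downstream.

```python
import numpy as np

def zscore(X, mean=None, std=None):
    """Standardise features; statistics may come from another session."""
    if mean is None:
        mean, std = X.mean(axis=0), X.std(axis=0) + 1e-8
    return (X - mean) / std, mean, std

rng = np.random.default_rng(1)
source = rng.normal(loc=0.0, scale=1.0, size=(100, 4))
target = rng.normal(loc=2.0, scale=3.0, size=(100, 4))   # shifted session

# Strategy A: statistics estimated on the source session only.
src_norm, mu, sd = zscore(source)
tgt_cross, _, _ = zscore(target, mu, sd)     # shift survives

# Strategy B: each session standardised with its own statistics.
tgt_own, _, _ = zscore(target)               # shift removed
```

Under strategy A the target data still sits far from zero mean, so the adaptation method inherits the shift; under strategy B both sessions start aligned in first and second moments.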
no code implementations • 9 Jun 2021 • Andrea Apicella, Salvatore Giugliano, Francesco Isgrò, Roberto Prevete
We start from the hypothesis that some autoencoders, relying on standard data representation approaches, can extract input properties, which we here call Middle-Level input Features (MLFs), that are more salient and understandable to a user than raw low-level features.
Explainable Artificial Intelligence (XAI) +1
no code implementations • 21 May 2021 • Andrea Apicella, Francesco Isgrò, Andrea Pollastro, Roberto Prevete
Over the last few years, we have witnessed the increasing availability of data generated from non-Euclidean domains, usually represented as graphs with complex relationships, and Graph Neural Networks (GNNs) have attracted considerable interest for their potential in processing graph-structured data.
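The specific GNN variant studied is not stated in the snippet, so here is only a generic numpy sketch of the core message-passing operation most GNNs share: each node averages the features of its neighbours (plus itself) and passes the result through a linear transform and a nonlinearity. The toy graph and identity weights are illustrative assumptions.

```python
import numpy as np

# Toy undirected graph on 4 nodes with 2-dimensional node features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1., 0.], [0., 1.], [1., 1.], [2., 0.]])

def gnn_layer(A, X, W):
    """One message-passing step: average neighbour features
    (including the node itself), then a linear transform + ReLU."""
    A_hat = A + np.eye(len(A))               # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1)) # row-normalise
    return np.maximum(D_inv @ A_hat @ X @ W, 0.0)

W = np.eye(2)          # identity weights, chosen for readability
H = gnn_layer(A, X, W)
```

After one step, node 3 (connected only to node 2) holds the average of its own and node 2's features, illustrating how structural relationships shape the learned representation.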
no code implementations • 16 Oct 2020 • Andrea Apicella, Salvatore Giugliano, Francesco Isgrò, Roberto Prevete
This work proposes a novel general framework, in the context of eXplainable Artificial Intelligence (XAI), to construct explanations for the behaviour of Machine Learning (ML) models in terms of middle-level features.
Explainable Artificial Intelligence (XAI) +1
no code implementations • 2 May 2020 • Andrea Apicella, Francesco Donnarumma, Francesco Isgrò, Roberto Prevete
In the neural network literature, there is a strong interest in identifying and defining activation functions that can improve neural network performance.
no code implementations • 8 Feb 2019 • Andrea Apicella, Francesco Isgrò, Roberto Prevete
Automatically learning the best activation function for the task is an active topic in neural network research.
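A minimal sketch of the idea, not the paper's method: make the activation itself parametric and fit its parameter by gradient descent alongside (here, instead of) the network weights. The example uses a PReLU-style activation whose negative-side slope is learned to match a target slope of 0.25; the data, learning rate, and target are all illustrative assumptions.

```python
import numpy as np

def prelu(x, alpha):
    """Parametric ReLU: the negative-side slope alpha is learnable."""
    return np.where(x >= 0, x, alpha * x)

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = np.where(x >= 0, x, 0.25 * x)   # "ideal" activation to recover

alpha = 1.0                          # start as a plain ReLU-like slope
for _ in range(200):
    # Analytic gradient of mean squared error w.r.t. alpha:
    # d/d_alpha prelu(x, alpha) = x for x < 0, else 0.
    grad = np.mean(2 * (prelu(x, alpha) - y) * np.where(x < 0, x, 0.0))
    alpha -= 0.5 * grad
```

After training, `alpha` converges to the target slope, showing that the shape of the activation can be treated as just another learnable parameter.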