XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even the system's designers cannot explain why an AI arrived at a specific decision. XAI may be an implementation of the social right to explanation. XAI is relevant even where there is no legal right or regulatory requirement; for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, XAI aims to explain what has been done, what is being done now, and what will be done next, and to unveil the information on which these actions are based. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
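As a concrete illustration of making a black-box model's decisions understandable, one common post-hoc technique is permutation feature importance. The sketch below is a minimal example using scikit-learn; the dataset and model are illustrative assumptions, not drawn from the text above.

```python
# Minimal sketch: explaining a "black box" via permutation feature
# importance (scikit-learn). Dataset and model are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)  # the black box

# Shuffle each feature and measure how much accuracy degrades:
# large drops indicate features the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Such feature-level summaries give end users a human-readable account of what the model's decision was based on, even when the model itself is opaque.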
Explainable machine learning (ML) enables human learning from ML models, human appeal against automated model decisions, regulatory compliance, and security audits of ML models.
Surprisingly, the majority of methods developed for explainable machine learning focus on a single aspect of model behavior.
Recurrent and convolutional neural networks comprise two distinct families of models that have proven to be useful for encoding natural language utterances.
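To make the contrast between the two families concrete, the sketch below shows minimal PyTorch versions of both encoders mapping a sequence of token IDs to a fixed-size utterance vector. All hyperparameters and layer choices are illustrative assumptions.

```python
# Minimal sketch (PyTorch): two families of utterance encoders.
# Vocabulary size and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 10_000, 128, 256

class RNNEncoder(nn.Module):
    """Encodes a token sequence with an LSTM; uses the final hidden state."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        _, (h, _) = self.rnn(self.emb(tokens))
        return h[-1]                            # (batch, HID)

class CNNEncoder(nn.Module):
    """Encodes a token sequence with a 1-D convolution and max-pooling."""
    def __init__(self, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.conv = nn.Conv1d(EMB, HID, kernel, padding=kernel // 2)

    def forward(self, tokens):
        x = self.emb(tokens).transpose(1, 2)    # (batch, EMB, seq_len)
        return torch.relu(self.conv(x)).max(dim=2).values  # (batch, HID)

tokens = torch.randint(0, VOCAB, (4, 20))       # a dummy batch of utterances
print(RNNEncoder()(tokens).shape, CNNEncoder()(tokens).shape)
```

The RNN summarizes the utterance sequentially through its final hidden state, while the CNN detects local n-gram patterns and pools the strongest activations; both yield fixed-size encodings suitable for downstream tasks.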
Explainable artificial intelligence has been gaining attention over the past few years.
The growing availability of data and computing power fuels the development of predictive models.
We visualize the adapted knowledge on several datasets with different unsupervised domain adaptation (UDA) methods and find that the generated images successfully capture the style difference between the two domains.
In this work, we propose new methods to support model analysis by exploiting information about the correlations between variables.
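The proposed methods are not detailed here; as a generic illustration of exploiting inter-variable correlation in model analysis, the sketch below groups strongly correlated features and permutes each group jointly, so that a feature's importance is not masked by a correlated substitute. The threshold, dataset, and model are assumptions made for this sketch, not the authors' method.

```python
# Generic illustration (not the authors' method): use the correlation
# matrix to permute groups of correlated features together, so that
# importance is not hidden among correlated substitutes.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)
corr = np.corrcoef(X, rowvar=False)

# Greedily group features whose absolute correlation exceeds 0.7
# (an assumed threshold).
groups, assigned = [], set()
for i in range(X.shape[1]):
    if i in assigned:
        continue
    group = [i] + [j for j in range(i + 1, X.shape[1])
                   if j not in assigned and abs(corr[i, j]) > 0.7]
    assigned.update(group)
    groups.append(group)

rng = np.random.default_rng(0)
base = model.score(X, y)
for group in groups:
    Xp = X.copy()
    Xp[:, group] = Xp[rng.permutation(len(X))][:, group]  # joint permutation
    print(f"features {group}: score drop {base - model.score(Xp, y):.3f}")
```

Permuting correlated features as a block attributes the score drop to the group as a whole, which is often more faithful than per-feature permutation when variables carry overlapping information.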