Robust Framework for Explanation Evaluation in Time Series Classification

8 Jun 2023 · Thu Trang Nguyen, Thach Le Nguyen, Georgiana Ifrim

Time series classification deals with a prevalent data type, temporal sequences, common in domains such as human activity recognition, sports analytics, and general healthcare. This paper provides a framework to quantitatively evaluate and rank explanation methods for time series classification. The recent interest in explanation methods for time series has produced a great variety of explanation techniques; nevertheless, when these explanations disagree on a specific problem, it remains unclear which of them to use, and comparing multiple explanations to find the right answer is non-trivial. Two key challenges remain: how to quantitatively and robustly evaluate the informativeness of a given explanation method (i.e., its relevance to the classification task), and how to compare explanation methods side-by-side. We propose AMEE, a robust Model-Agnostic Explanation Evaluation framework for evaluating and comparing multiple saliency-based explanations for time series classification. In this approach, data perturbation is applied to the input time series, guided by each explanation. The impact of the perturbation on classification accuracy is then measured and used for explanation evaluation: perturbing discriminative parts of the time series leads to significant changes in classification accuracy, which can be used to score each explanation. To be robust to different types of perturbations and different types of classifiers, we aggregate the accuracy loss across perturbations and classifiers. This novel approach allows us to quantify and rank different explanation methods. We provide a quantitative and qualitative analysis on synthetic datasets, a variety of time series datasets, and a real-world dataset with known expert ground truth.
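
To make the evaluation idea concrete, here is a minimal Python sketch of perturbation-based explanation scoring. It is not the authors' reference implementation: the function names, the Gaussian-noise perturbation scheme, and the perturbed fractions are illustrative assumptions. It assumes X is an array of shape (n_samples, series_length), saliency holds per-time-step importance weights aligned with X, and clf is any trained classifier exposing a scikit-learn-style predict method. The full AMEE framework aggregates such scores over several perturbation strategies and several classifiers; this sketch uses a single perturbation and a single classifier for brevity.

import numpy as np

def perturb_by_saliency(X, saliency, fraction=0.1, seed=None):
    # Replace the `fraction` most salient time steps of each series with
    # Gaussian noise drawn from that series' own mean and std
    # (one of several perturbation schemes one could plug in).
    rng = np.random.default_rng(seed)
    X_pert = X.copy()
    k = max(1, int(fraction * X.shape[1]))
    for i in range(X.shape[0]):
        top = np.argsort(saliency[i])[-k:]  # steps the explanation deems most important
        X_pert[i, top] = rng.normal(X[i].mean(), X[i].std(), size=k)
    return X_pert

def explanation_score(clf, X_test, y_test, saliency, fractions=(0.1, 0.2, 0.3)):
    # Mean accuracy loss as progressively larger salient fractions are perturbed.
    # A larger loss means the explanation highlights more discriminative regions.
    base_acc = (clf.predict(X_test) == y_test).mean()
    losses = []
    for f in fractions:
        X_pert = perturb_by_saliency(X_test, saliency, fraction=f)
        losses.append(base_acc - (clf.predict(X_pert) == y_test).mean())
    return float(np.mean(losses))

To compare explanation methods, one would compute this score for each method's saliency maps and rank the methods by it; in the full framework, the accuracy losses are additionally aggregated across multiple perturbation types and classifiers, which is what makes the resulting ranking robust.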
