1 code implementation • 14 Mar 2024 • Alexander Stevens, Chun Ouyang, Johannes De Smedt, Catarina Moreira
In recent years, various machine and deep learning architectures have been successfully introduced to the field of predictive process analytics.
1 code implementation • 26 Feb 2023 • Chihcheng Hsieh, Isabel Blanco Nobre, Sandra Costa Sousa, Chun Ouyang, Margot Brereton, Jacinto C. Nascimento, Joaquim Jorge, Catarina Moreira
In this work, we propose a novel architecture consisting of two fusion methods that enable the model to simultaneously process patients' clinical data (structured data) and chest X-rays (image data).
1 code implementation • 4 Mar 2022 • Catarina Moreira, Yu-Liang Chou, Chihcheng Hsieh, Chun Ouyang, Joaquim Jorge, João Madeiras Pereira
This study investigates the impact of machine learning models on the generation of counterfactual explanations by conducting a benchmark evaluation over three different types of models: a decision tree (a fully transparent, interpretable, white-box model), a random forest (a semi-interpretable, grey-box model), and a neural network (a fully opaque, black-box model).
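The counterfactual idea this benchmark evaluates can be illustrated with a minimal, stdlib-only sketch. The toy rule-based "model" and the greedy search below are illustrative assumptions, not the study's classifiers or its counterfactual generator: a counterfactual explanation answers "what is the smallest change to this input that flips the model's prediction?"

```python
# Minimal counterfactual-search sketch (illustrative; not the paper's method).

def classify(x):
    """Toy 'loan approval' rule standing in for a trained model."""
    return "approved" if x["income"] >= 50 and x["debt"] <= 20 else "rejected"

def counterfactual(x, target, steps, max_iters=100):
    """Greedily nudge features until the model's prediction flips to `target`."""
    cf = dict(x)
    for _ in range(max_iters):
        if classify(cf) == target:
            return cf
        # Try each single-feature nudge first, in case one already flips it.
        for feat, delta in steps.items():
            trial = dict(cf)
            trial[feat] += delta
            if classify(trial) == target:
                return trial
        # Otherwise apply all nudges and keep searching.
        for feat, delta in steps.items():
            cf[feat] += delta
    return None

applicant = {"income": 40, "debt": 30}
cf = counterfactual(applicant, "approved", {"income": +5, "debt": -5})
print(classify(applicant), "->", cf)  # rejected -> {'income': 50, 'debt': 20}
```

Real counterfactual generators (e.g. the DiCE library evaluated in related work) additionally optimize for proximity, sparsity, and diversity of the returned counterfactuals rather than taking the first flip found.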
no code implementations • 3 Sep 2021 • Bemali Wickramanayake, Zhipeng He, Chun Ouyang, Catarina Moreira, Yue Xu, Renuka Sindhgatta
In this paper, we address the "black-box" problem in predictive process analytics by building interpretable models that are capable of explaining both what is predicted and why.
no code implementations • 19 Jul 2021 • Chihcheng Hsieh, Catarina Moreira, Chun Ouyang
We design an extension of DiCE, namely DiCE4EL (DiCE for Event Logs), that can generate counterfactual explanations for process prediction, and propose an approach that supports deriving milestone-aware counterfactual explanations at key intermediate stages along process execution to promote interpretability.
no code implementations • 16 Jul 2021 • Chun Ouyang, Renuka Sindhgatta, Catarina Moreira
As an important branch of state-of-the-art data analytics, business process prediction also faces a challenge: the lack of explanation for the reasoning and decisions made by the underlying `black-box' prediction models.
1 code implementation • 16 Jun 2021 • Mythreyi Velmurugan, Chun Ouyang, Catarina Moreira, Renuka Sindhgatta
Although modern machine learning and deep learning methods allow for complex and in-depth data analytics, the predictive models generated by these methods are often highly complex and lack transparency.
no code implementations • 7 Mar 2021 • Yu-Liang Chou, Catarina Moreira, Peter Bruza, Chun Ouyang, Joaquim Jorge
This paper presents an in-depth systematic review of the diverse existing body of literature on counterfactuals and causability for explainable artificial intelligence.
1 code implementation • 8 Dec 2020 • Mythreyi Velmurugan, Chun Ouyang, Catarina Moreira, Renuka Sindhgatta
Current explainable machine learning methods, such as LIME and SHAP, can be used to interpret black box models.
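The perturbation mechanism behind explainers such as LIME can be sketched in stdlib Python. The hidden weighted-sum "black box" below is an assumption for illustration only; real LIME fits a weighted local linear surrogate over many random samples rather than taking one finite-difference slope per feature:

```python
# Sketch of the perturbation idea behind explainers such as LIME
# (illustrative only; not the LIME or SHAP algorithm itself).

def black_box(features):
    """Stand-in for an opaque model: a hidden weighted sum."""
    w = [0.7, 0.1, -0.4]
    return sum(wi * fi for wi, fi in zip(w, features))

def sensitivity(model, x, eps=1e-3):
    """Local importance of each feature: finite-difference slope at x."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        scores.append((model(xp) - base) / eps)
    return scores

x = [1.0, 2.0, 3.0]
scores = sensitivity(black_box, x)
print([round(s, 3) for s in scores])  # recovers roughly [0.7, 0.1, -0.4]
```

Because the toy model is linear, the recovered slopes match its hidden weights exactly; for a genuinely non-linear black box, the scores describe only the model's local behaviour around `x`, which is precisely the scope of LIME-style explanations.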
no code implementations • 24 Nov 2020 • Jing Yang, Chun Ouyang, Wil M. P. van der Aalst, Arthur H. M. ter Hofstede, Yang Yu
We demonstrate the feasibility of this framework by proposing an approach underpinned by the framework for organizational model discovery, and also conduct experiments on real-life event logs to discover and evaluate organizational models.
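The core step of organizational model discovery, grouping resources by the work they perform in an event log, can be sketched with stdlib Python. The log format and the exact-match grouping below are illustrative assumptions, not the framework's discovery algorithm:

```python
# Sketch: group event-log resources by their activity profiles
# (illustrative; assumes a simple (case_id, activity, resource) log format).
from collections import defaultdict

log = [
    ("c1", "register", "alice"), ("c1", "approve", "carol"),
    ("c2", "register", "bob"),   ("c2", "approve", "carol"),
    ("c3", "register", "alice"), ("c3", "approve", "dave"),
]

# Build each resource's set of performed activities.
profiles = defaultdict(set)
for _case, activity, resource in log:
    profiles[resource].add(activity)

# Resources with identical activity profiles form one candidate group.
groups = defaultdict(set)
for resource, acts in profiles.items():
    groups[frozenset(acts)].add(resource)

for acts, members in sorted(groups.items(), key=lambda kv: sorted(kv[0])):
    print(sorted(acts), "->", sorted(members))
```

Exact profile matching is the simplest possible grouping criterion; discovery approaches in the literature instead cluster similar (not identical) profiles and then evaluate the resulting organizational model against the log, as the experiments described above do.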
no code implementations • 21 Jul 2020 • Catarina Moreira, Yu-Liang Chou, Mythreyi Velmurugan, Chun Ouyang, Renuka Sindhgatta, Peter Bruza
This has led to an increased interest in interpretable machine learning, where post hoc interpretation presents a useful mechanism for generating interpretations of complex learning models.
no code implementations • 21 Feb 2020 • Catarina Moreira, Renuka Sindhgatta, Chun Ouyang, Peter Bruza, Andreas Wichert
We see certain distinct features used for predictions that provide useful insights about the type of cancer, along with features that do not generalize well.
no code implementations • 22 Dec 2019 • Renuka Sindhgatta, Chun Ouyang, Catarina Moreira
The explanations allow us to gain an understanding of the underlying reasons for a prediction and highlight scenarios where accuracy alone may not be sufficient in assessing the suitability of techniques used to encode event log data to features used by a predictive model.