Search Results for author: Chun Ouyang

Found 13 papers, 5 papers with code

Generating Feasible and Plausible Counterfactual Explanations for Outcome Prediction of Business Processes

1 code implementation · 14 Mar 2024 · Alexander Stevens, Chun Ouyang, Johannes De Smedt, Catarina Moreira

In recent years, various machine and deep learning architectures have been successfully introduced to the field of predictive process analytics.

counterfactual · Decision Making

MDF-Net for abnormality detection by fusing X-rays with clinical data

1 code implementation · 26 Feb 2023 · Chihcheng Hsieh, Isabel Blanco Nobre, Sandra Costa Sousa, Chun Ouyang, Margot Brereton, Jacinto C. Nascimento, Joaquim Jorge, Catarina Moreira

In this work, we propose a novel architecture consisting of two fusion methods that enable the model to simultaneously process patients' clinical data (structured data) and chest X-rays (image data).

Anomaly Detection
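
A minimal late-fusion sketch in PyTorch can illustrate the two-branch idea described in this entry. It is not the authors' MDF-Net: the ResNet-18 backbone, the layer sizes, and the `ClinicalXrayFusion` name are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ClinicalXrayFusion(nn.Module):
    """Toy two-branch model: a CNN encodes the chest X-ray, an MLP encodes
    tabular clinical data, and the two embeddings are fused by concatenation
    before a shared classifier head."""
    def __init__(self, num_clinical_features: int, num_labels: int):
        super().__init__()
        # Image branch: ResNet-18 backbone with its classifier removed.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()              # 512-d image embedding
        self.image_encoder = backbone
        # Tabular branch: small MLP over structured clinical data.
        self.clinical_encoder = nn.Sequential(
            nn.Linear(num_clinical_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Fusion head over the concatenated embeddings.
        self.classifier = nn.Sequential(
            nn.Linear(512 + 64, 128), nn.ReLU(),
            nn.Linear(128, num_labels),          # one logit per abnormality label
        )

    def forward(self, xray: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        img_emb = self.image_encoder(xray)             # (B, 512)
        clin_emb = self.clinical_encoder(clinical)     # (B, 64)
        fused = torch.cat([img_emb, clin_emb], dim=1)  # simple late fusion
        return self.classifier(fused)

# Example forward pass with random tensors standing in for real data.
model = ClinicalXrayFusion(num_clinical_features=10, num_labels=14)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 10))
print(logits.shape)  # torch.Size([2, 14])
```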

Benchmarking Counterfactual Algorithms for XAI: From White Box to Black Box

1 code implementation · 4 Mar 2022 · Catarina Moreira, Yu-Liang Chou, Chihcheng Hsieh, Chun Ouyang, Joaquim Jorge, João Madeiras Pereira

This study investigates the impact of machine learning models on the generation of counterfactual explanations by conducting a benchmark evaluation over three types of models: a decision tree (a fully transparent, interpretable, white-box model), a random forest (a semi-interpretable, grey-box model), and a neural network (a fully opaque, black-box model).

Benchmarking · counterfactual · +2
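
As a rough illustration of the white-box / grey-box / black-box setup this benchmark compares, the three model families can be fitted on a common tabular task as below; the dataset, hyperparameters, and the accuracy metric are placeholders, not the paper's experimental design.

```python
# Minimal sketch of fitting the three model families compared in the
# benchmark: a decision tree (white box), a random forest (grey box)
# and a neural network (black box). Dataset and settings are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "decision_tree (white box)": DecisionTreeClassifier(max_depth=5, random_state=0),
    "random_forest (grey box)": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural_network (black box)": MLPClassifier(hidden_layer_sizes=(64, 32),
                                                max_iter=1000, random_state=0),
}

# Counterfactual generators would then be run against each fitted model;
# here plain test accuracy is reported as a stand-in.
for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.3f}")
```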

Building Interpretable Models for Business Process Prediction using Shared and Specialised Attention Mechanisms

no code implementations · 3 Sep 2021 · Bemali Wickramanayake, Zhipeng He, Chun Ouyang, Catarina Moreira, Yue Xu, Renuka Sindhgatta

In this paper, we address the "black-box" problem in predictive process analytics by building interpretable models capable of explaining both what is predicted and why.

Attribute
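
The title refers to attention mechanisms as the source of interpretability. The sketch below shows only the generic idea of reading attention weights as per-event importance scores for a process-outcome prediction; it is not the shared/specialised architecture proposed in the paper, and the model name and sizes are made up.

```python
import torch
import torch.nn as nn

class AttentiveProcessPredictor(nn.Module):
    """Generic sketch: embed an activity prefix, encode it with an LSTM,
    and pool the steps with attention weights that can be inspected as
    per-event importance scores for the prediction."""
    def __init__(self, num_activities: int, emb_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_activities, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each event in the prefix
        self.out = nn.Linear(hidden, 1)    # binary outcome logit

    def forward(self, prefix: torch.Tensor):
        h, _ = self.lstm(self.embed(prefix))           # (B, T, hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # (B, T, 1), sums to 1 over time
        context = (weights * h).sum(dim=1)             # attention-weighted summary
        return self.out(context), weights.squeeze(-1)  # prediction + explanation

model = AttentiveProcessPredictor(num_activities=20)
logit, attn = model(torch.randint(0, 20, (4, 7)))  # batch of 4 prefixes, length 7
print(attn.shape)  # torch.Size([4, 7]) -- one weight per event in each prefix
```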

DiCE4EL: Interpreting Process Predictions using a Milestone-Aware Counterfactual Approach

no code implementations · 19 Jul 2021 · Chihcheng Hsieh, Catarina Moreira, Chun Ouyang

We design an extension of DiCE, namely DiCE4EL (DiCE for Event Logs), that can generate counterfactual explanations for process prediction, and propose an approach that supports deriving milestone-aware counterfactual explanations at key intermediate stages along process execution to promote interpretability.

counterfactual
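
DiCE itself is distributed as the open-source `dice-ml` package. The sketch below shows only baseline DiCE usage on a generic tabular classifier (the dataset and settings are illustrative), not the milestone-aware DiCE4EL extension described in this entry.

```python
# Baseline DiCE usage with the dice-ml package; the event-log specific,
# milestone-aware logic of DiCE4EL is not reproduced here.
import dice_ml
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer(as_frame=True)
df = data.frame.rename(columns={"target": "outcome"})
clf = RandomForestClassifier(random_state=0).fit(df.drop(columns="outcome"), df["outcome"])

d = dice_ml.Data(dataframe=df,
                 continuous_features=[c for c in df.columns if c != "outcome"],
                 outcome_name="outcome")
m = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(d, m, method="random")

# Ask for 3 counterfactuals that flip the predicted class of one instance.
query = df.drop(columns="outcome").iloc[[0]]
cf = explainer.generate_counterfactuals(query, total_CFs=3, desired_class="opposite")
cf.visualize_as_dataframe(show_only_changes=True)
```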

Explainable AI Enabled Inspection of Business Process Prediction Models

no code implementations · 16 Jul 2021 · Chun Ouyang, Renuka Sindhgatta, Catarina Moreira

As an important branch of state-of-the-art data analytics, business process prediction also faces the challenge that the reasoning and decisions of the underlying "black-box" prediction models lack explanation.

BIG-bench Machine Learning · Decision Making · +1

Developing a Fidelity Evaluation Approach for Interpretable Machine Learning

1 code implementation · 16 Jun 2021 · Mythreyi Velmurugan, Chun Ouyang, Catarina Moreira, Renuka Sindhgatta

Although modern machine learning and deep learning methods allow for complex and in-depth data analytics, the predictive models generated by these methods are often highly complex, and lack transparency.

BIG-bench Machine Learning · Explainable Artificial Intelligence (XAI) · +2

Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications

no code implementations · 7 Mar 2021 · Yu-Liang Chou, Catarina Moreira, Peter Bruza, Chun Ouyang, Joaquim Jorge

This paper presents an in-depth systematic review of the diverse existing body of literature on counterfactuals and causability for explainable artificial intelligence.

counterfactual · Explainable artificial intelligence

OrgMining 2.0: A Novel Framework for Organizational Model Mining from Event Logs

no code implementations · 24 Nov 2020 · Jing Yang, Chun Ouyang, Wil M. P. van der Aalst, Arthur H. M. ter Hofstede, Yang Yu

We demonstrate the feasibility of this framework by proposing an approach to organizational model discovery underpinned by the framework, and also conduct experiments on real-life event logs to discover and evaluate organizational models.

Model Discovery
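
A common baseline for this kind of organizational model mining is to build a resource-by-activity profile from the event log and cluster the resources. The sketch below shows only that baseline, with made-up column names (`case_id`, `activity`, `resource`); it is not the OrgMining 2.0 framework.

```python
# Baseline organizational mining sketch: cluster resources by the
# activities they perform in an event log. Column names are illustrative.
import pandas as pd
from sklearn.cluster import KMeans

log = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2, 3, 3, 3],
    "activity": ["register", "check", "approve", "register", "approve",
                 "register", "check", "approve"],
    "resource": ["ann", "bob", "carol", "ann", "carol", "dave", "bob", "carol"],
})

# Resource-by-activity frequency matrix (one row per resource).
profiles = pd.crosstab(log["resource"], log["activity"])

# Group resources with similar activity profiles into organizational groups.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(dict(zip(profiles.index, groups)))
```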

An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models

no code implementations · 21 Jul 2020 · Catarina Moreira, Yu-Liang Chou, Mythreyi Velmurugan, Chun Ouyang, Renuka Sindhgatta, Peter Bruza

This has led to an increased interest in interpretable machine learning, where post hoc interpretation presents a useful mechanism for generating interpretations of complex learning models.

BIG-bench Machine Learning · Decision Making · +1

An Investigation of Interpretability Techniques for Deep Learning in Predictive Process Analytics

no code implementations · 21 Feb 2020 · Catarina Moreira, Renuka Sindhgatta, Chun Ouyang, Peter Bruza, Andreas Wichert

We see certain distinct features used for predictions that provide useful insights about the type of cancer, along with features that do not generalize well.

Decision Making · Interpretability Techniques for Deep Learning

Exploring Interpretability for Predictive Process Analytics

no code implementations · 22 Dec 2019 · Renuka Sindhgatta, Chun Ouyang, Catarina Moreira

The explanations allow us to gain an understanding of the underlying reasons for a prediction and highlight scenarios where accuracy alone may not be sufficient to assess the suitability of the techniques used to encode event log data into the features consumed by a predictive model.

BIG-bench Machine Learning · Decision Making · +2
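
The encoding of event log data into features mentioned in this entry can be illustrated with a simple aggregate (frequency) encoding of case prefixes; the column names and the encoding choice below are assumptions for illustration, not the encodings evaluated in the paper.

```python
# Simple aggregate (frequency) encoding of event-log case prefixes into a
# fixed-length feature vector for an outcome classifier. Illustrative only.
import pandas as pd

log = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2, 2, 2],
    "activity": ["register", "check", "approve",
                 "register", "check", "check", "reject"],
})

def aggregate_encode(log: pd.DataFrame, prefix_len: int) -> pd.DataFrame:
    """Count how often each activity occurs in the first `prefix_len`
    events of every case; activities absent from a prefix become zeros."""
    prefixes = log.groupby("case_id").head(prefix_len)
    features = pd.crosstab(prefixes["case_id"], prefixes["activity"])
    return features.reindex(columns=sorted(log["activity"].unique()), fill_value=0)

print(aggregate_encode(log, prefix_len=3))
```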
