Search Results for author: Shruthi Chari

Found 9 papers, 0 papers with code

Leveraging Clinical Context for User-Centered Explainability: A Diabetes Use Case

no code implementations • 6 Jul 2021 • Shruthi Chari, Prithwish Chakraborty, Mohamed Ghalwash, Oshani Seneviratne, Elif K. Eyigoz, Daniel M. Gruen, Fernando Suarez Saiz, Ching-Hua Chen, Pablo Meyer Rojas, Deborah L. McGuinness

To enable the adoption of ever-improving AI risk prediction models in practice, we have begun to explore techniques to contextualize such models along three dimensions of interest: the patients' clinical state, AI predictions about their risk of complications, and algorithmic explanations supporting the predictions.

Semantic Modeling for Food Recommendation Explanations

no code implementations • 4 May 2021 • Ishita Padhiar, Oshani Seneviratne, Shruthi Chari, Daniel Gruen, Deborah L. McGuinness

Our motivation for using the Food Explanation Ontology (FEO) is to empower users to make decisions about their health, fully equipped with an understanding of the AI recommender systems as they relate to user questions, by providing the reasoning behind recommendations in the form of explanations.

Food recommendation • Knowledge Base Question Answering • +1

Explanation Ontology: A Model of Explanations for User-Centered AI

no code implementations • 4 Oct 2020 • Shruthi Chari, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman, Amar K. Das, Deborah L. McGuinness

With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration, mapping end user needs to specific explanation types and the system's AI capabilities.

Explanation Ontology in Action: A Clinical Use-Case

no code implementations • 4 Oct 2020 • Shruthi Chari, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman, Amar K. Das, Deborah L. McGuinness

We addressed the lack of a semantic representation for user-centric explanations and different explanation types in our Explanation Ontology (https://purl.org/heals/eo).
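
The persistent URL above should make the ontology directly loadable. Below is a minimal sketch, not taken from the paper, of inspecting it with Python and rdflib; it assumes the PURL redirects to an RDF/XML serialization of the ontology, so the format argument may need adjusting.

    # Minimal sketch (assumption: the PURL serves RDF/XML): load the
    # Explanation Ontology and print the labelled OWL classes it declares,
    # e.g. the modeled explanation types.
    from rdflib import Graph, RDF, RDFS, OWL

    g = Graph()
    g.parse("https://purl.org/heals/eo", format="xml")

    for cls in g.subjects(RDF.type, OWL.Class):
        label = g.value(cls, RDFS.label)
        if label:
            print(label)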

Foundations of Explainable Knowledge-Enabled Systems

no code implementations • 17 Mar 2020 • Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness

Additionally, borrowing from the strengths of past approaches and identifying the gaps that must be addressed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.

Explainable artificial intelligence

Directions for Explainable Knowledge-Enabled Systems

no code implementations • 17 Mar 2020 • Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness

Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently.

Explainable artificial intelligence

Making Study Populations Visible through Knowledge Graphs

no code implementations • 9 Jul 2019 • Shruthi Chari, Miao Qi, Nkechinyere N. Agu, Oshani Seneviratne, James P. McCusker, Kristin P. Bennett, Amar K. Das, Deborah L. McGuinness

To address these challenges, we develop an ontology-enabled prototype system, which exposes the population descriptions in research studies in a declarative manner, with the ultimate goal of allowing medical practitioners to better understand the applicability and generalizability of treatment recommendations.

Knowledge Graphs

Knowledge Integration for Disease Characterization: A Breast Cancer Example

no code implementations • 20 Jul 2018 • Oshani Seneviratne, Sabbir M. Rashid, Shruthi Chari, James P. McCusker, Kristin P. Bennett, James A. Hendler, Deborah L. McGuinness

With the rapid advancements in cancer research, the information that is useful for characterizing disease, staging tumors, and creating treatment and survivorship plans has been changing at a pace that creates challenges when physicians try to remain current.
