
A User-Centered Design of Explainable AI for Clinical Decision Support

Published on Jun 08, 2021

Abstract

Clinical decision support (CDS) systems are computer applications designed to facilitate clinicians' decision-making. In recent years, there has been growing interest in applying machine learning (ML) models within CDS systems to make predictions about clinical outcomes. The limited interpretability of many ML models, however, is a major barrier to clinical adoption. This challenge has sparked research interest in interpretable and explainable AI, commonly known as XAI. XAI methods construct and communicate explanations of the predictions made by ML models so that end users can interpret those predictions. However, these methods are typically not designed around end users' needs; rather, they reflect developers' intuitions about what makes a good explanation. Furthermore, XAI methods are not tailored to the specific tasks a user will undertake, nor to the interface through which those tasks are performed. To address these issues, we propose to develop a visual analytics tool that explains an ML model for clinical applications and whose design explicitly accounts for the context of users' tasks and the needs of end users.
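As an illustration of the kind of raw output such XAI methods produce, consider a post-hoc feature-attribution method such as SHAP. The sketch below is purely illustrative and assumes the shap and scikit-learn libraries; the clinical feature names and data are synthetic placeholders, not drawn from this work. It attributes a single prediction to individual input features, the material a visual analytics tool would then present to clinicians.

```python
# Illustrative only: a post-hoc explanation of one prediction via SHAP.
# The feature names and data are synthetic, not from this work.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["age", "systolic_bp", "heart_rate", "creatinine"]

# Synthetic cohort: the outcome depends mainly on the 1st and 4th features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer assigns each feature an additive contribution (in log-odds)
# to the model's prediction for a single patient.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]

for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
```

A tool like the one proposed would not show such numbers directly; its design would determine how attributions of this kind are visualized and contextualized for clinicians' actual tasks.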

Article ID: 2021G07

Month: May

Year: 2021

Address: Online

Venue: Graduate Student Symposium - Canadian Conference on Artificial Intelligence

Publisher: Canadian Artificial Intelligence Association

URL: https://caiac.pubpub.org/pub/es06632p/
