A User-Centered Design of Explainable AI for Clinical Decision Support

Clinical decision support (CDS) systems are computer applications whose goal is to facilitate the decision-making process of clinicians. In recent years, interest has grown in applying machine learning (ML) models within CDS systems to make predictions related to clinical outcomes. However, the limited interpretability of many ML models is a major barrier to clinical adoption. This challenge has sparked research interest in interpretable and explainable AI, commonly known as XAI. XAI methods construct and communicate explanations of the predictions made by ML models so that end-users can interpret those predictions. However, these methods are generally not designed based on end-users' needs; rather, they are based on the developers' intuitions of what a good explanation is. Furthermore, XAI methods are not tailored to the specific tasks that a user will undertake, nor to the interface used to perform those tasks. To tackle these issues, we propose to develop a visual analytics tool that explains an ML model for clinical applications and whose design explicitly takes into account the context of tasks and the needs of end-users.


Background
Clinical decision support (CDS) systems are computer applications whose goal is to facilitate the decision-making process of clinicians [1,2]. In recent years, interest has grown around applying Machine Learning (ML) to make predictions related to clinical outcomes for use in CDS systems [3] by constructing models from patient data. However, predictions made by ML models are often not easily interpretable [4], in the sense that the end-user is often not able to understand why each prediction is made. Particularly in clinical applications, where interpretability and trustworthiness of a model are as important as its accuracy [5], the limited interpretability of many ML models is a major barrier to adoption [6]. This barrier has sparked research interest in interpretable and explainable AI methods, commonly known as XAI. XAI methods are used to construct and communicate explanations of the predictions made by machine learning models [7] so that end-users can interpret those predictions. Such methods have been developed to facilitate interpretability and thereby address barriers to adoption stemming from accountability and trustworthiness [8], and several have been applied to CDS systems. However, these methods are not designed based on end-users' needs; rather, they are based on the developers' intuitions of what a good explanation is [9]. Furthermore, XAI methods are not tailored to the specific tasks that a user will undertake, nor are they tailored to the interface used to perform those tasks. We hypothesize that by explicitly designing XAI methods based on end-user needs and based on the relevant tasks and interfaces, we will be able to provide the interpretability needed to overcome barriers to adoption of ML in CDS systems.

Objective
To tackle these issues, we propose to develop a visual analytics tool to explain an ML model for clinical applications, whose design will explicitly take into account the context of tasks and the needs of end-users. The class of ML models we have chosen to explain is the hybrid feature- and kernel-based (HFK) predictive model class, which is derived from the Kriging method [10]. The visual analytics tool will explain how the model incorporates both feature-based and similarity-based information to make predictions, in order to make the operation of the model more interpretable. We consider three user categories with different explanation needs: clinicians, health researchers, and developers. Clinicians and health researchers require explanations of the reasoning behind predictions for a specific patient and for a patient population, respectively. Developers require detailed technical information about how predictions are made at both the individual and population levels in order to iterate on and improve the model. By targeting the users and their needs, the main anticipated outcomes of this research will be improved usability and utility of the tool in clinical settings, and improved model performance over time.
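To give a rough sense of the hybrid behavior the tool must explain, the following sketch shows how a Kriging-derived model can combine a feature-based trend with a similarity-based correction. This is a minimal illustration only; the actual HFK model is specified in [10], and the function and parameter names below are our own.

```python
import numpy as np

def hfk_style_predict(X_train, y_train, x_new, beta, length_scale=1.0, noise=1e-6):
    """Kriging-style hybrid prediction: a feature-based trend plus a
    similarity-based correction built from kernel-weighted residuals."""
    # Feature-based component: a linear trend over patient features.
    trend_train = X_train @ beta
    trend_new = x_new @ beta

    # Similarity-based component: an RBF kernel measures how close the
    # new patient is to each training patient in feature space.
    def rbf(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * length_scale ** 2))

    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    k_new = rbf(x_new[None, :], X_train)[0]

    # Correct the trend using the residuals of similar training patients.
    residuals = y_train - trend_train
    correction = k_new @ np.linalg.solve(K, residuals)
    return trend_new + correction
```

For a new patient nearly identical to a training patient, the correction term pulls the prediction toward that patient's observed outcome; this interplay of feature-based and similarity-based reasoning is exactly what the explanations must surface for users.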

Literature Review
Various XAI methods have been applied in a variety of clinical settings. Knowledge distillation methods compress a machine learning model to make it simpler and therefore easier for humans to understand. Such methods have been applied to explain predictions of mortality and ventilator-free days [11], and to learn interpretable features from deep learning models on a real-world clinical time-series dataset [12]. Several approaches for explaining models in terms of feature importance have been applied to clinical problems, including predicting stroke outcome [5], predicting ICU mortality [13], and detecting important clinical features within a large EHR dataset [14]. Layer-wise Relevance Propagation (LRP) is a technique for explaining deep computer vision models that presents a heatmap indicating the relevance of each pixel to the outcome. LRP has been used to explain predictions of clinical events and survival time [15], clinical gait analysis [16], and Alzheimer's disease classification [17].
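The feature-importance family of methods cited above shares a simple model-agnostic core idea, which permutation importance illustrates: shuffle one feature and measure how much predictive error grows. This is a minimal sketch for context only; the cited works use more elaborate attribution methods, and the function names here are our own.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Score each feature by how much shuffling it degrades accuracy."""
    rng = np.random.default_rng(seed)
    base_error = np.mean((predict(X) - y) ** 2)  # baseline MSE
    importances = []
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffle column j to break its link with the outcome.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            errors.append(np.mean((predict(X_perm) - y) ** 2))
        importances.append(np.mean(errors) - base_error)
    return np.array(importances)
```

A feature the model ignores scores near zero, while features the model relies on score high; scores of this kind are what bar-chart or heatmap-style clinical explanations typically visualize.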
Visual analytics tools have been used to open the black box and explain the logic of ML models [18][19][20][21]. However, the application of visual analytics tools for XAI in CDS systems has been much more limited. The RetainVis tool was developed to explain RNNs that make predictions from electronic medical record data [22]. This tool provides a combination of explanations, such as displaying how outputs change in response to input changes, the contribution score of each input to the decision, and what-if case analysis. Another visual analytics tool used case-based reasoning to show similar cases for a specific patient with breast cancer [23]. It used a rainbow box to show a comparison of drug properties and a table to describe details about similar cases. A third tool, RuleMatrix, has been proposed to explain neural network behavior; it provides users with a rule-based explanation and data filtering capabilities, and its use was exemplified with two clinical usage scenarios involving cancer and diabetes classifiers [24].

Methodology
Our design of a human-centered visual analytics tool will have two main phases, and we will involve users in both: choosing an XAI method and designing the interface, as follows.
1) Choosing an XAI method: As mentioned in the literature review, many XAI methods have been proposed, each of which explains ML models' predictions in a specific way. To select appropriate XAI methods and explanations based on users' needs, we adopt the framework of Wang et al. [25], which is grounded in human reasoning and suggests how to decide which XAI methods can satisfy end-users' reasoning goals. In this phase, end-users fill out a questionnaire, and based on the results, XAI methods are chosen.
2) Designing the interface for presenting the explanation: To design the interface, we define and describe all activities that the tool will support. These activities are composed of tasks such as organizing, ordering, summarizing, querying, locating, and clustering. These tasks are further divided into multiple lower-level tasks until they can be supported by visual or interactive tasks such as aggregating, aligning, identifying, and ranking [26]. Tasks and sub-tasks are combined to support users in reaching their overall goal: the visual sub-tasks and low-level interactions that a user performs give rise to tasks, and sequences of tasks lead to accomplishing an activity. To assign visual properties to these visual and interactive tasks, we will ask users about the visualization tools they are familiar with, as well as their preferred visual properties for each visual task.
To date, I have examined the properties of the ML model and explored different XAI methods. I have also investigated the information required to prepare the questionnaire for phase one and the interview questions for phase two. As the next step, we will recruit prospective users to conduct the study for these two phases.