ABSTRACT
Machine learning models have the potential to transform healthcare by enabling the construction of decision support systems. A major obstacle, however, is the lack of transparency and accountability: many models do not provide understandable explanations for their recommendations. Explainable Artificial Intelligence (XAI) methods aim to address this challenge by constructing and communicating explanations of how a model works and why it produces a particular output, helping users evaluate the system and, where appropriate, build trust in it. In this paper, we propose a method for explaining the relative rankings of predictions made by an XGBoost model, a task that requires understanding and comparing multiple predictions together. Our method uses counterfactual examples to show how changing an entity's feature values affects its position within the ranking defined by the model. Traditional counterfactual explanations seek feature value changes that would flip the predicted class label by crossing a fixed threshold; in contrast, the proposed approach identifies changes that bring a prediction in line with a dynamic threshold determined by the predictions for other data items. We demonstrate the effectiveness of our approach on a healthcare triage problem. Our counterfactual explanation framework provides a powerful tool for understanding the relationships between feature values and model rankings, and can help promote transparency and accountability in healthcare decision-making and decision support.
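The abstract does not give implementation details, so the snippet below is only a minimal illustrative sketch of the dynamic-threshold idea, not the authors' algorithm: instead of crossing a fixed decision threshold, the counterfactual search targets the score of whichever item currently occupies the desired rank. The function name rank_counterfactual, the greedy single-feature perturbation, and the synthetic triage-style data are all hypothetical assumptions made for illustration.

import numpy as np
from xgboost import XGBRegressor

def rank_counterfactual(model, X, idx, target_rank, feature, step, max_steps=200):
    # Illustrative greedy search: nudge one feature of entity `idx` until its
    # predicted score clears the dynamic threshold, i.e. the score of the item
    # currently holding `target_rank` among the other entities (rank 0 = top).
    X = X.copy()
    scores = model.predict(X)
    others = np.delete(scores, idx)                 # ranking defined by the other items
    threshold = np.sort(others)[::-1][target_rank]  # dynamic, data-dependent threshold
    if scores[idx] > threshold:
        return X[idx]                               # already at or above the target rank
    for _ in range(max_steps):
        X[idx, feature] += step                     # perturb the single candidate feature
        if model.predict(X[idx:idx + 1])[0] > threshold:
            return X[idx]                           # counterfactual feature vector found
    return None                                     # no counterfactual within the budget

# Toy usage on synthetic data standing in for triage scores (hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([0.5, -0.2, 0.8, 0.1, 0.3]) + rng.normal(scale=0.1, size=100)
model = XGBRegressor(n_estimators=50, max_depth=3).fit(X, y)
cf = rank_counterfactual(model, X, idx=42, target_rank=9, feature=2, step=0.1)

A realistic implementation would search over multiple features and minimize the size of the perturbation; the single-feature greedy loop above is only meant to convey how the threshold is derived from other items' predictions rather than fixed in advance.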
Article ID: 2023GL9
Month: June
Year: 2023
Address: Online
Venue: The 36th Canadian Conference on Artificial Intelligence
Publisher: Canadian Artificial Intelligence Association
URL: https://caiac.pubpub.org/pub/9aov4tmt