Early detection of Alzheimer’s disease (AD) is critical for identifying better treatment plans for patients, as AD is not curable. At the same time, the lack of interpretability of high-performing prediction models may prevent their adoption in clinical practice for AD detection. Accordingly, it is important to develop highly interpretable models that build trust by showing the factors contributing to the models’ decisions. In this paper, we combine the ProtoPNet architecture with popular pretrained deep learning models to add interpretability to AD classification on MRI scans from the ADNI and OASIS datasets. We find that the ProtoPNet model with a DenseNet121 backbone reaches 90 percent accuracy while providing explanatory illustrations of the model’s reasoning for its predictions. We also note that, in most cases, the performance of the ProtoPNet models is slightly inferior to that of their black-box counterparts; however, their ability to provide reasoning and transparency in the prediction process can contribute to wider adoption of prediction models in clinical practice.
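To illustrate the kind of reasoning ProtoPNet exposes, the following is a minimal NumPy sketch of its prototype-similarity computation (following Chen et al.'s "This Looks Like That" formulation): each learned prototype is compared against every spatial patch of the backbone's feature map, distances are converted to similarity scores, and a max over locations reports each prototype's best-matching patch. All shapes and values here are illustrative stand-ins, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a DenseNet121-style backbone would emit an
# (H, W, D) feature map for an MRI slice; we fake one with random numbers.
H, W, D = 7, 7, 128   # spatial grid and channel depth (illustrative)
P = 10                # number of learned prototypes (illustrative)
eps = 1e-4

features = rng.standard_normal((H, W, D))   # stand-in for backbone output
prototypes = rng.standard_normal((P, D))    # stand-in for learned prototypes

# Squared L2 distance between every spatial patch and every prototype.
diffs = features.reshape(H * W, 1, D) - prototypes[None, :, :]
dists = np.sum(diffs ** 2, axis=-1)         # shape (H*W, P)

# ProtoPNet maps distances to similarities via log((d + 1) / (d + eps)),
# then max-pools over spatial locations so each prototype contributes
# the score of its closest patch; these scores feed a final linear layer.
sims = np.log((dists + 1.0) / (dists + eps))
scores = sims.max(axis=0)                   # shape (P,), one score per prototype

print(scores.shape)  # (10,)
```

Because each score is tied to a specific image patch, the model can show which regions of the scan "look like" which training prototypes, which is the source of the explanatory illustrations mentioned above.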
Article ID: 2021L26
Venue: Canadian Conference on Artificial Intelligence
Publisher: Canadian Artificial Intelligence Association