We compare the ability of Convolutional Neural Networks (CNNs) and Deep Belief Networks (DBNs) to withstand common image classification attacks. CNNs make a strong inductive bias assumption about the relationship between pixels that are proximal to each other. We propose that this bias makes CNNs vulnerable to adversarial attacks. We implement two attacks on the MNIST and CIFAR-10 datasets, modifying pixels of the training and test images in different ways to challenge the CNN and DBN models. The results show that the DBN models generally perform better under attack than the CNN models. The CNN's convolutional inductive bias is at a disadvantage when the assumption of a relationship between proximal pixels no longer holds.
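As a rough illustration of the kind of pixel-modification attack described above, the sketch below applies a fixed random permutation to every pixel position in a batch of MNIST-sized images. This is only one plausible example, not necessarily the attacks used in the paper; the function names `make_pixel_permutation` and `permute_pixels` are hypothetical. A permutation preserves every pixel value but destroys the spatial proximity structure that a CNN's convolutional bias relies on, while a model operating on flattened pixel vectors (such as a DBN) sees an input of identical statistics.

```python
# Hypothetical sketch: a fixed pixel-permutation attack that breaks the
# assumption that proximal pixels are related, which a CNN exploits.
import numpy as np

def make_pixel_permutation(height, width, seed=0):
    """Return a fixed permutation of all pixel positions in an image."""
    rng = np.random.default_rng(seed)
    return rng.permutation(height * width)

def permute_pixels(images, permutation):
    """Apply the same pixel permutation to a batch of images.

    images: array of shape (n, height, width), e.g. MNIST digits (n, 28, 28).
    """
    n, h, w = images.shape
    flat = images.reshape(n, h * w)               # flatten each image to a vector
    return flat[:, permutation].reshape(n, h, w)  # shuffle pixels, restore shape

if __name__ == "__main__":
    # Toy batch standing in for MNIST-sized images.
    batch = np.random.rand(4, 28, 28).astype(np.float32)
    perm = make_pixel_permutation(28, 28, seed=42)
    attacked = permute_pixels(batch, perm)
    # Pixel values are preserved, only their spatial arrangement changes,
    # so per-image statistics such as the mean are identical.
    print(np.allclose(batch.mean(axis=(1, 2)), attacked.mean(axis=(1, 2))))  # True
```

Applying the same permutation to both training and test images leaves the classification problem information-theoretically unchanged, which is why a spatially agnostic model can cope while a convolutional model loses the advantage of its locality prior.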
Article ID: 2021S18
Month: May
Year: 2021
Address: Online
Venue: Canadian Conference on Artificial Intelligence
Publisher: Canadian Artificial Intelligence Association
URL: https://caiac.pubpub.org/pub/f4zju4cb/