
Humans Don’t Get Fooled: Does Predictive Coding Defend Against Adversarial Attack?

Published on May 27, 2024

Abstract

The success of backpropagation, a foundational method in machine learning, has somewhat overshadowed the potential of biologically plausible learning. However, a prevalent threat to contemporary artificial neural networks trained with backpropagation is their fragility to adversarial attack, in stark contrast to human visual perception. In our experiments, we demonstrate that predictive coding networks, a biologically plausible learning approach, exhibit robustness against adversarial attacks of various forms. This finding may provide a novel perspective on enhancing the robustness of machine learning models and demonstrates the potential of applying biologically plausible learning methods more broadly.
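For readers unfamiliar with the mechanics behind the claim, the sketch below illustrates the general predictive coding inference procedure in NumPy: latent activities are relaxed by gradient descent on a prediction-error energy rather than computed in a single feedforward pass. This is a minimal illustration of the technique named in the abstract, not the authors' implementation; the layer widths, learning rate, step count, and tanh nonlinearity are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 10]  # hypothetical layer widths (input, hidden, readout)
# W[l] maps layer l+1 down to a prediction of layer l
W = [rng.normal(0, 0.05, (sizes[l], sizes[l + 1]))
     for l in range(len(sizes) - 1)]

f = np.tanh
df = lambda z: 1.0 - np.tanh(z) ** 2  # derivative of tanh

def infer(x_input, n_steps=50, lr=0.1):
    """Clamp the input layer and relax the higher layers by gradient
    descent on the energy E = 0.5 * sum_l ||x_l - W[l] f(x_{l+1})||^2."""
    x = [x_input] + [np.zeros(s) for s in sizes[1:]]
    for _ in range(n_steps):
        # prediction error at every layer that is predicted from above
        eps = [x[l] - W[l] @ f(x[l + 1]) for l in range(len(W))]
        for l in range(1, len(x)):
            own_err = eps[l] if l < len(W) else 0.0  # top layer is not predicted
            # dE/dx_l = eps_l - f'(x_l) * (W[l-1]^T eps_{l-1})
            x[l] -= lr * (own_err - df(x[l]) * (W[l - 1].T @ eps[l - 1]))
    return x

latents = infer(rng.normal(0, 1, sizes[0]))
print(latents[-1])  # top-layer activity, e.g. read out as class scores
```

Robustness in this setting is typically probed by checking whether small gradient-based input perturbations (FGSM-style attacks, for example) flip the readout; the iterative error-correcting inference above is the mechanism the abstract credits with resisting such perturbations.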

Article ID: 2024S3

Month: May

Year: 2024

Address: Online

Venue: The 37th Canadian Conference on Artificial Intelligence

Publisher: Canadian Artificial Intelligence Association

URL: https://caiac.pubpub.org/pub/t915m4c7

