
Sentiment Analysis with Cognitive Attention Supervision

Published on Jun 08, 2021


Neural network-based language models such as BERT (Bidirectional Encoder Representations from Transformers) use attention mechanisms to create contextualized representations of inputs, conceptually analogous to humans reading words in context. For the task of classifying the sentiment of texts, we ask whether BERT’s attention can be informed by human cognitive data. During training, we supervise attention with eye-tracking and/or brain imaging data, combining the binary sentiment classification loss with these attention losses. We find that attention supervision can be used to manipulate BERT’s attention to be more similar to the ground-truth human data, but that there are no significant differences in sentiment classification accuracy. However, models with cognitive attention supervision misclassify different samples than the baseline models do (they more often make different errors), and the errors from models with supervised attention have a higher ratio of false negatives.
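The training objective described above can be sketched as a weighted sum of a classification loss and an attention-supervision loss. The snippet below is a minimal NumPy illustration, assuming a binary cross-entropy sentiment loss, a mean-squared-error penalty between the model's attention distribution and human attention weights, and a weighting coefficient `lam`; these specific forms and names are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of attention scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def combined_loss(logit, label, model_attn, human_attn, lam=0.5):
    """Hypothetical combined objective: binary cross-entropy on the
    sentiment prediction plus an MSE penalty pulling the model's
    attention distribution toward human (e.g. eye-tracking) attention.
    The MSE form and the weight `lam` are assumptions for illustration."""
    p = 1.0 / (1.0 + np.exp(-logit))                      # sigmoid
    bce = -(label * np.log(p) + (1 - label) * np.log(1 - p))
    attn_loss = np.mean((model_attn - human_attn) ** 2)   # attention supervision term
    return bce + lam * attn_loss

# Toy example: a 4-token input, model attention vs. human fixation weights.
model_attn = softmax(np.array([2.0, 0.5, 0.1, 0.1]))
human_attn = np.array([0.4, 0.4, 0.1, 0.1])  # normalized human attention
loss = combined_loss(logit=1.2, label=1,
                     model_attn=model_attn, human_attn=human_attn)
```

When the model's attention matches the human distribution exactly, the supervision term vanishes and only the classification loss remains, which is how such a joint objective trades off task accuracy against attention similarity.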

Article ID: 2021L21

Month: May

Year: 2021

Address: Online

Venue: Canadian Conference on Artificial Intelligence

Publisher: Canadian Artificial Intelligence Association

