
Structured Self-Attention Weights Encode Semantics in Sentiment Analysis

Neural attention, especially the self-attention popularized by the Transformer, has become the workhorse of state-of-the-art natural language processing (NLP) models. Recent work suggests that self-attention in the Transformer encodes syntactic information; here, we show that self-attention scores encode semantics by considering sentiment analysis tasks...
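
To make the claim concrete, the sketch below shows one common way to extract self-attention scores from a Transformer sentiment classifier and inspect which tokens they concentrate on. This is not the paper's code; the checkpoint name and the choice to average heads and read the [CLS] row are illustrative assumptions.

# Minimal sketch (assumed setup, not the paper's implementation):
# inspect self-attention scores of a sentiment classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, output_attentions=True)

sentence = "The movie was surprisingly good."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, heads, seq_len, seq_len).
last_layer = outputs.attentions[-1][0]   # (heads, seq_len, seq_len)
avg_heads = last_layer.mean(dim=0)       # average attention over heads
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Attention paid by the [CLS] position (row 0) to each token,
# a rough proxy for which words the classifier attends to.
for tok, score in zip(tokens, avg_heads[0]):
    print(f"{tok:>12s}  {score.item():.3f}")

On a sentence like the one above, sentiment-bearing tokens such as "good" typically receive relatively high scores, which is the kind of structure in the attention weights that the paper analyzes.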
