arXiv:2111.05498
Attention Approximates Sparse Distributed Memory
Published on Nov 10, 2021
Abstract
While Attention has come to be an important mechanism in deep learning, there remains limited intuition for why it works so well. Here, we show that Transformer Attention can be closely related under certain data conditions to Kanerva's Sparse Distributed Memory (SDM), a biologically plausible associative memory model. We confirm that these conditions are satisfied in pre-trained GPT2 Transformer models. We discuss the implications of the Attention-SDM map and provide new computational and biological interpretations of Attention.
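The claimed correspondence can be sketched numerically. The snippet below is a minimal illustration, not code from the paper: it compares softmax Attention weights, proportional to exp(β q·k), against SDM read weights, which (in expectation over the hard-address neurons) are proportional to the number of addresses falling within Hamming radius d of both the query and each stored key. The dimension n, radius d, number of keys, and the log-linear fit of β are illustrative assumptions.

```python
# Minimal sketch (assumed parameters, not the authors' code): compare SDM's
# normalized Hamming-circle-intersection read weights with softmax Attention weights.
import math
import numpy as np

def ball_intersection(n: int, d: int, a: int) -> int:
    """Exact size of the intersection of two Hamming balls of radius d
    centred on binary vectors that are Hamming distance a apart (dimension n)."""
    total = 0
    for i in range(a + 1):          # differing positions where z sides with the first vector
        for j in range(n - a + 1):  # shared positions where z flips away from both
            if (a - i) + j <= d and i + j <= d:
                total += math.comb(a, i) * math.comb(n - a, j)
    return total

rng = np.random.default_rng(0)
n, d, num_keys = 64, 26, 8                      # illustrative SDM dimension, radius, memory size
keys = rng.choice([-1, 1], size=(num_keys, n))  # bipolar key/address vectors
query = rng.choice([-1, 1], size=n)

dots = keys @ query                             # q.k; for bipolar vectors d_H = (n - q.k) / 2
hamming = (n - dots) // 2
inter = np.array([ball_intersection(n, d, int(a)) for a in hamming], dtype=float)

sdm_weights = inter / inter.sum()               # SDM read weights (normalized circle intersections)

# Fit beta so that exp(beta * q.k) tracks the intersection sizes (log-linear fit),
# then compute the corresponding softmax Attention weights at that beta.
beta = np.polyfit(dots, np.log(inter + 1e-12), 1)[0]
logits = beta * dots
attn_weights = np.exp(logits - logits.max())
attn_weights /= attn_weights.sum()

print("SDM  read weights :", np.round(sdm_weights, 3))
print("Attention weights :", np.round(attn_weights, 3))
```

Printed side by side, the two weight vectors track each other closely, which is the sense in which softmax Attention approximates the SDM read operation; how good the approximation is depends on n, d, and the distribution of queries and keys, the "certain data conditions" referred to in the abstract.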