Short-term Hebbian learning can implement transformer-like attention.

Transformers have revolutionized machine learning models of language and vision, but their connection with neuroscience remains tenuous. Built from attention layers, they require a mass comparison of queries and keys that is difficult to perform using traditional neural circuits. Here, we show that...
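The abstract's "mass comparison of queries and keys" refers to the standard attention computation at the heart of transformers. The paper's Hebbian mechanism is not reproduced in this truncated abstract, so as a point of reference only, below is a minimal numpy sketch of conventional scaled dot-product attention; all array names and dimensions are illustrative, not taken from the article.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: every query is compared against every key,
    and the resulting weights mix the value vectors.

    Q: (n_queries, d) query vectors
    K: (n_keys, d) key vectors
    V: (n_keys, d_v) value vectors
    """
    d = Q.shape[-1]
    # The "mass comparison" of queries and keys: an n_queries x n_keys score matrix.
    scores = Q @ K.T / np.sqrt(d)
    # Softmax over keys converts scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

# Tiny usage example with random vectors.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 queries of dimension 8
K = rng.standard_normal((6, 8))   # 6 keys
V = rng.standard_normal((6, 8))   # 6 values
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```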

Bibliographic Details
Main Author: Ian T Ellwood
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2024-01-01
Series: PLoS Computational Biology
Online Access: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1011843&type=printable