On separating long- and short-term memories in hyperdimensional computing

Operations on high-dimensional, fixed-width vectors can be used to distribute information from several vectors over a single vector of the same width. For example, a set of key-value pairs can be encoded into a single vector with multiplication and addition of the corresponding key and value vectors: the keys are bound to their values with component-wise multiplication, and the key-value pairs are combined into a single superposition vector with component-wise addition. The superposition vector is, thus, a memory which can then be queried for the value of any of the keys, but the result of the query is approximate. The exact vector is retrieved from a codebook (a.k.a. item memory), which contains vectors defined in the system. To perform these operations, the item memory vectors and the superposition vector must be the same width. Increasing the capacity of the memory requires increasing the width of the superposition and item memory vectors. In this article, we demonstrate that in a regime where many (e.g., 1,000 or more) key-value pairs are stored, an associative memory which maps key vectors to value vectors requires less memory and less computing to obtain the same reliability of storage as a superposition vector. These advantages are obtained because the number of storage locations in an associative memory can be increased without increasing the width of the vectors in the item memory. An associative memory would not replace a superposition vector as a medium of storage, but could augment it, because data recalled from an associative memory could be used in algorithms that use a superposition vector. This would be analogous to how human working memory (which stores about seven items) uses information recalled from long-term memory (which is much larger than the working memory). We demonstrate the advantages of an associative memory experimentally using the storage of large finite-state automata, which could model the storage and recall of state-dependent behavior by brains.
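The encoding the abstract describes (bind each key to its value by component-wise multiplication, superpose the bound pairs by component-wise addition, then clean up a query against the item memory) can be illustrated with a minimal sketch. The snippet below assumes random bipolar (+1/-1) hypervectors and illustrative names (random_hv, superposition, etc.); it is a toy reconstruction of the standard scheme, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(seed=0)
D = 10_000  # hypervector width (dimensionality)

def random_hv():
    """Draw a random bipolar (+1/-1) hypervector (an item-memory entry)."""
    return rng.choice([-1, 1], size=D)

# Item memory (codebook): atomic vectors for keys and for values.
keys = {name: random_hv() for name in ("k1", "k2", "k3")}
values = {name: random_hv() for name in ("v1", "v2", "v3")}
pairs = [("k1", "v1"), ("k2", "v2"), ("k3", "v3")]

# Bind each key to its value (component-wise multiplication) and
# superpose the bound pairs into one vector (component-wise addition).
superposition = np.sum([keys[k] * values[v] for k, v in pairs], axis=0)

# Query for the value stored under k2: multiplying by the key unbinds
# an approximate (noisy) copy of the value vector...
noisy_value = superposition * keys["k2"]

# ...which is cleaned up by finding the most similar codebook entry.
best_match = max(values, key=lambda name: np.dot(noisy_value, values[name]))
print(best_match)  # expected: v2

With bipolar vectors, multiplication is its own inverse (k * k is the all-ones vector), so multiplying the superposition by a key leaves its value plus cross-talk noise from the other pairs; the dot product against the codebook then identifies the stored value with high probability, provided the width is large relative to the number of stored pairs. This is the capacity constraint the article contrasts with an associative memory, whose number of storage locations can grow without widening the item-memory vectors.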

Bibliographic Details
Main Authors: Jeffrey L. Teeters, Denis Kleyko, Pentti Kanerva, Bruno A. Olshausen
Author Affiliations: Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States (Teeters, Kleyko, Kanerva, Olshausen); Intelligent Systems Lab, Research Institutes of Sweden, Kista, Sweden (Kleyko)
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-01-01
Series: Frontiers in Neuroscience
ISSN: 1662-453X
Subjects: hyperdimensional computing, vector symbolic architectures, sparse distributed memory, long-term memory, holographic reduced representation, associative memory
Online Access: https://www.frontiersin.org/articles/10.3389/fnins.2022.867568/full