Random sketching, clustering, and short-term memory in spiking neural networks

Bibliographic Details
Main Authors: Hitron, Y.; Lynch, N.; Musco, C.; Parter, M.
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: 2021
Online Access: https://hdl.handle.net/1721.1/137566
Description
Summary: © Yael Hitron, Nancy Lynch, Cameron Musco, and Merav Parter.

We study input compression in a biologically inspired model of neural computation. We demonstrate that a network consisting of a random projection step (implemented via random synaptic connectivity) followed by a sparsification step (implemented via winner-take-all competition) can reduce well-separated high-dimensional input vectors to well-separated low-dimensional vectors. By augmenting our network with a third module, we can efficiently map each input (along with any small perturbations of it) to a unique representative neuron, solving a neural clustering problem.

Both the size of our network and its processing time, i.e., the time it takes the network to compute the compressed output for a presented input, are independent of the (potentially large) dimension of the input patterns; they depend only on the number of distinct inputs the network must encode and the pairwise relative Hamming distance between these inputs. The first two steps of our construction mirror known biological networks, for example in the fruit fly olfactory system [9, 29, 17]. Our analysis helps provide a theoretical understanding of these networks and lay a foundation for how random compression and input memorization may be implemented in biological neural networks.

Technically, a key contribution of our network design is the implementation of a short-term memory. The network can be given a desired memory time t_m as an input parameter and satisfies the following with high probability: any pattern presented several times within a time window of t_m rounds will be mapped to a single representative output neuron. However, a pattern not presented for c · t_m rounds, for some constant c > 1, will be “forgotten”, and its representative output neuron will be released to accommodate newly introduced patterns.
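
For intuition about the first two steps, random projection followed by winner-take-all sparsification can be prototyped outside the spiking model. The following is a minimal non-spiking sketch in NumPy under assumed parameters: the input dimension, projection dimension, connection probability, number of winners k, and the helper name `sketch` are all illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# All sizes here are illustrative, not parameters from the paper.
n = 1000   # input dimension
m = 200    # projection (sketch) dimension
k = 10     # number of winners kept by the winner-take-all step

# Step 1: random projection, modeling random synaptic connectivity:
# each of the m projection neurons connects to a random ~5% of inputs.
W = (rng.random((m, n)) < 0.05).astype(float)

def sketch(x):
    """Project x and keep only the k most active neurons (winner-take-all)."""
    activations = W @ x
    winners = np.argsort(activations)[-k:]  # indices of the top-k activations
    y = np.zeros(m)
    y[winners] = 1.0                        # sparse binary output code
    return y

# Two well-separated sparse binary inputs, plus a small perturbation of x1.
x1 = (rng.random(n) < 0.1).astype(float)
x2 = (rng.random(n) < 0.1).astype(float)
x1_noisy = x1.copy()
flip = rng.choice(n, size=5, replace=False)
x1_noisy[flip] = 1.0 - x1_noisy[flip]       # flip a few input bits

y1, y2, y1n = sketch(x1), sketch(x2), sketch(x1_noisy)
print("winners shared by x1 and its perturbation:", int(y1 @ y1n), "of", k)
print("winners shared by x1 and x2:              ", int(y1 @ y2), "of", k)
```

Running this typically shows the perturbed copy of x1 sharing most of its k winners with x1 while the unrelated pattern x2 shares few, which is the kind of separation preservation the compression step is meant to provide.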
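
The short-term memory guarantee can likewise be caricatured, at a high level of abstraction, as a timestamped slot table: each stored pattern holds a representative slot, and a slot whose pattern has been silent for roughly c · t_m rounds is released. The dictionary-based bookkeeping and the names `present`, `slots`, and `free_ids` below are assumptions for illustration; the paper realizes this behavior within the spiking network itself.

```python
# A minimal, non-spiking cache model of the memory-time guarantee:
# patterns seen within t_m rounds keep their representative slot;
# patterns silent for >= c * t_m rounds are forgotten and their slot freed.
t_m = 50          # desired memory time (rounds); illustrative value
c = 2             # forgetting threshold multiplier, c > 1
slots = {}        # pattern key -> (representative slot id, last-seen round)
free_ids = list(range(10))  # pool of representative output "neurons"

def present(pattern_key, round_no):
    """Map a pattern to a representative slot, refreshing its timestamp."""
    # Release slots whose patterns have been silent for >= c * t_m rounds.
    for key in [p for p, (_, last) in slots.items()
                if round_no - last >= c * t_m]:
        free_ids.append(slots.pop(key)[0])
    if pattern_key in slots:     # seen within the memory window: same slot
        rep, _ = slots[pattern_key]
    else:                        # new (or forgotten) pattern: fresh slot
        rep = free_ids.pop(0)
    slots[pattern_key] = (rep, round_no)
    return rep

print(present("A", 0))    # A gets a fresh slot (0)
print(present("A", 40))   # within the window: same slot (0)
print(present("B", 60))   # B gets its own slot (1)
print(present("A", 200))  # A silent >= c*t_m rounds: old slot released,
                          # A is re-assigned as if newly introduced (2)
```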