Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks

Reinforcement learning is a paradigm that can account for how organisms learn to adapt their behavior in complex environments with sparse rewards. To partition an environment into discrete states, implementations in spiking neuronal networks typically rely on input architectures involving place cells or receptive fields specified ad hoc by the researcher. This is problematic as a model for how an organism can learn appropriate behavioral sequences in unknown environments, as it fails to account for the unsupervised and self-organized nature of the required representations. Additionally, this approach presupposes knowledge on the part of the researcher on how the environment should be partitioned and represented and scales poorly with the size or complexity of the environment. To address these issues and gain insights into how the brain generates its own task-relevant mappings, we propose a learning architecture that combines unsupervised learning on the input projections with biologically motivated clustered connectivity within the representation layer. This combination allows input features to be mapped to clusters; thus the network self-organizes to produce clearly distinguishable activity patterns that can serve as the basis for reinforcement learning on the output projections. On the basis of the MNIST and Mountain Car tasks, we show that our proposed model performs better than either a comparable unclustered network or a clustered network with static input projections. We conclude that the combination of unsupervised learning and clustered connectivity provides a generic representational substrate suitable for further computation.
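
The abstract describes a three-stage architecture: plastic input projections shaped by unsupervised learning, a representation layer with clustered connectivity, and output projections trained by reinforcement learning. The sketch below is a minimal, rate-based illustration of that idea only; it assumes an Oja-style Hebbian rule for the input weights, a reward-modulated Hebbian rule for the output weights, and a toy reward contingency. The network sizes, learning rates, and reward function are assumptions for illustration and do not reproduce the spiking implementation in the paper.

# Illustrative rate-based sketch of the architecture summarized in the abstract.
# Assumptions: Oja-like unsupervised plasticity on W_in, fixed clustered
# recurrence W_rec, reward-modulated Hebbian plasticity on W_out, and a toy
# reward signal. Not the authors' spiking-network implementation.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_clusters, cluster_size, n_out = 64, 10, 20, 3
n_rep = n_clusters * cluster_size

W_in = rng.normal(0.0, 0.1, (n_rep, n_in))    # plastic input projections
W_out = rng.normal(0.0, 0.1, (n_out, n_rep))  # plastic output projections

# Clustered recurrent connectivity: stronger excitation within each cluster.
W_rec = rng.normal(0.0, 0.02, (n_rep, n_rep))
for c in range(n_clusters):
    blk = slice(c * cluster_size, (c + 1) * cluster_size)
    W_rec[blk, blk] += 0.2

lr_in, lr_out = 1e-3, 1e-2

for trial in range(1000):
    x = rng.random(n_in)                       # stand-in for a task input
    h = np.maximum(W_in @ x, 0.0)              # feed-forward drive
    r = np.maximum(h + W_rec @ h, 0.0)         # one recurrent relaxation step
    a = int(np.argmax(W_out @ r + rng.normal(0.0, 0.1, n_out)))  # noisy action
    reward = 1.0 if a == trial % n_out else -0.1  # placeholder reward signal

    # Unsupervised, Oja-like update: aligns input weights with recurring
    # input features while keeping them bounded.
    W_in += lr_in * (np.outer(r, x) - (r ** 2)[:, None] * W_in)
    # Reward-modulated Hebbian update on the projections of the chosen action.
    W_out[a] += lr_out * reward * r

In the paper the representation layer consists of spiking neurons with strengthened within-cluster connectivity; the single rate-based relaxation step above stands in for that dynamics only to show how clustered recurrence yields the distinguishable, cluster-level activity patterns that the output projections learn to read out.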

Bibliographic Details
Main Authors: Philipp Weidel, Renato Duarte, Abigail Morrison
Format: Article
Language: English
Published: Frontiers Media S.A. 2021-03-01
Series: Frontiers in Computational Neuroscience
Subjects: unsupervised learning; reinforcement learning; spiking neural network; neural plasticity; clustered connectivity
Online Access: https://www.frontiersin.org/articles/10.3389/fncom.2021.543872/full
author Philipp Weidel
Renato Duarte
Abigail Morrison
author_facet Philipp Weidel
Renato Duarte
Abigail Morrison
author_sort Philipp Weidel
collection DOAJ
description Reinforcement learning is a paradigm that can account for how organisms learn to adapt their behavior in complex environments with sparse rewards. To partition an environment into discrete states, implementations in spiking neuronal networks typically rely on input architectures involving place cells or receptive fields specified ad hoc by the researcher. This is problematic as a model for how an organism can learn appropriate behavioral sequences in unknown environments, as it fails to account for the unsupervised and self-organized nature of the required representations. Additionally, this approach presupposes knowledge on the part of the researcher on how the environment should be partitioned and represented and scales poorly with the size or complexity of the environment. To address these issues and gain insights into how the brain generates its own task-relevant mappings, we propose a learning architecture that combines unsupervised learning on the input projections with biologically motivated clustered connectivity within the representation layer. This combination allows input features to be mapped to clusters; thus the network self-organizes to produce clearly distinguishable activity patterns that can serve as the basis for reinforcement learning on the output projections. On the basis of the MNIST and Mountain Car tasks, we show that our proposed model performs better than either a comparable unclustered network or a clustered network with static input projections. We conclude that the combination of unsupervised learning and clustered connectivity provides a generic representational substrate suitable for further computation.
first_indexed 2024-12-19T23:56:17Z
format Article
id doaj.art-4c125f9208834a12abb6159a871ab5ff
institution Directory Open Access Journal
issn 1662-5188
language English
last_indexed 2024-12-19T23:56:17Z
publishDate 2021-03-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Computational Neuroscience
spelling doaj.art-4c125f9208834a12abb6159a871ab5ff
Frontiers Media S.A., Frontiers in Computational Neuroscience, ISSN 1662-5188, vol. 15, article 543872, 2021-03-01, doi: 10.3389/fncom.2021.543872
Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks
Philipp Weidel - Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany; Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
Renato Duarte - Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany
Abigail Morrison - Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany; Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
https://www.frontiersin.org/articles/10.3389/fncom.2021.543872/full
Keywords: unsupervised learning; reinforcement learning; spiking neural network; neural plasticity; clustered connectivity
spellingShingle Philipp Weidel
Renato Duarte
Abigail Morrison
Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks
Frontiers in Computational Neuroscience
unsupervised learning
reinforcement learning
spiking neural network
neural plasticity
clustered connectivity
title Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks
title_full Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks
title_fullStr Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks
title_full_unstemmed Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks
title_short Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks
title_sort unsupervised learning and clustered connectivity enhance reinforcement learning in spiking neural networks
topic unsupervised learning
reinforcement learning
spiking neural network
neural plasticity
clustered connectivity
url https://www.frontiersin.org/articles/10.3389/fncom.2021.543872/full
work_keys_str_mv AT philippweidel unsupervisedlearningandclusteredconnectivityenhancereinforcementlearninginspikingneuralnetworks
AT renatoduarte unsupervisedlearningandclusteredconnectivityenhancereinforcementlearninginspikingneuralnetworks
AT abigailmorrison unsupervisedlearningandclusteredconnectivityenhancereinforcementlearninginspikingneuralnetworks