Dynamic Neural Fields with Intrinsic Plasticity
Dynamic neural fields (DNFs) are dynamical systems models that approximate the activity of large, homogeneous, and recurrently connected neural networks based on a mean-field approach. Within dynamic field theory, DNFs have been used as building blocks in architectures that model the sensorimotor embedding of cognitive processes. Typically, the parameters of a DNF in an architecture are tuned manually to achieve a specific dynamic behavior (e.g., decision making, selection, or working memory) for a given input pattern. This manual parameter search requires expert knowledge and time to find and verify a suitable set of parameters. The DNF parametrization may be particularly challenging if the input distribution is not known in advance, e.g., when processing sensory information. In this paper, we propose the autonomous adaptation of the DNF resting level and gain by a learning mechanism of intrinsic plasticity (IP). To enable this adaptation, input and output measures for the DNF are introduced, together with a hyperparameter that defines the desired output distribution. Online adaptation by IP makes it possible to predefine the DNF output statistics without knowledge of the input distribution and thus to compensate for changes in it. The capabilities and limitations of this approach are evaluated in a number of experiments.
Main Authors: | Claudius Strub, Gregor Schöner, Florentin Wörgötter, Yulia Sandamirskaya |
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2017-08-01 |
Series: | Frontiers in Computational Neuroscience |
Subjects: | dynamic neural fields; intrinsic plasticity; adaptation; dynamics |
Online Access: | http://journal.frontiersin.org/article/10.3389/fncom.2017.00074/full |
_version_ | 1811317271766761472 |
author | Claudius Strub Gregor Schöner Florentin Wörgötter Yulia Sandamirskaya |
author_facet | Claudius Strub Gregor Schöner Florentin Wörgötter Yulia Sandamirskaya |
author_sort | Claudius Strub |
collection | DOAJ |
description | Dynamic neural fields (DNFs) are dynamical systems models that approximate the activity of large, homogeneous, and recurrently connected neural networks based on a mean-field approach. Within dynamic field theory, DNFs have been used as building blocks in architectures to model sensorimotor embedding of cognitive processes. Typically, the parameters of a DNF in an architecture are tuned manually to achieve a specific dynamic behavior (e.g., decision making, selection, or working memory) for a given input pattern. This manual parameter search requires expert knowledge and time to find and verify a suitable set of parameters. The DNF parametrization may be particularly challenging if the input distribution is not known in advance, e.g., when processing sensory information. In this paper, we propose the autonomous adaptation of the DNF resting level and gain by a learning mechanism of intrinsic plasticity (IP). To enable this adaptation, input and output measures for the DNF are introduced, together with a hyperparameter that defines the desired output distribution. Online adaptation by IP makes it possible to predefine the DNF output statistics without knowledge of the input distribution and thus to compensate for changes in it. The capabilities and limitations of this approach are evaluated in a number of experiments. |
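The IP mechanism named in the description adapts gain and resting level online. As an illustration of the general principle (not the paper's exact update rule), a classical Triesch-style IP rule adapts the gain a and bias b of a sigmoid transfer function y = σ(a·x + b) by stochastic gradient descent toward an exponential output distribution with target mean μ; the function and parameter names below are illustrative:

```python
import numpy as np

def ip_step(x, a, b, eta=0.01, mu=0.2):
    """One online intrinsic-plasticity update (Triesch-style rule).
    Drives the output y = sigmoid(a*x + b) toward an exponential
    distribution with mean mu by adjusting gain a and bias b."""
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))
    # bias update: positive for low outputs, negative for high outputs
    delta_b = eta * (1.0 - (2.0 + 1.0 / mu) * y + (y * y) / mu)
    # gain update reuses the bias gradient, plus an anti-decay 1/a term
    delta_a = eta / a + x * delta_b
    return a + delta_a, b + delta_b
```

Applied to a stream of inputs, the rule lowers the bias after inputs that produce high outputs and raises it after low ones, keeping the mean output near μ regardless of the input distribution — the property that lets output statistics be fixed in advance, as the description states.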
first_indexed | 2024-04-13T12:05:01Z |
format | Article |
id | doaj.art-82916c573765428997bdc17776d71e84 |
institution | Directory Open Access Journal |
issn | 1662-5188 |
language | English |
last_indexed | 2024-04-13T12:05:01Z |
publishDate | 2017-08-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Computational Neuroscience |
spelling | doaj.art-82916c573765428997bdc17776d71e84 2022-12-22T02:47:40Z eng Frontiers Media S.A. Frontiers in Computational Neuroscience 1662-5188 2017-08-01 11 10.3389/fncom.2017.00074 268203 Dynamic Neural Fields with Intrinsic Plasticity. Claudius Strub (Autonomous Robotics Lab, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany; Department of Computational Neuroscience, III. Physics Institute, Georg-August-Universität Göttingen, Germany); Gregor Schöner (Autonomous Robotics Lab, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany); Florentin Wörgötter (Department of Computational Neuroscience, III. Physics Institute, Georg-August-Universität Göttingen, Germany); Yulia Sandamirskaya (Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland). http://journal.frontiersin.org/article/10.3389/fncom.2017.00074/full dynamic neural fields; intrinsic plasticity; adaptation; dynamics |
spellingShingle | Claudius Strub Gregor Schöner Florentin Wörgötter Yulia Sandamirskaya Dynamic Neural Fields with Intrinsic Plasticity Frontiers in Computational Neuroscience dynamic neural fields intrinsic plasticity adaptation dynamics |
title | Dynamic Neural Fields with Intrinsic Plasticity |
title_full | Dynamic Neural Fields with Intrinsic Plasticity |
title_fullStr | Dynamic Neural Fields with Intrinsic Plasticity |
title_full_unstemmed | Dynamic Neural Fields with Intrinsic Plasticity |
title_short | Dynamic Neural Fields with Intrinsic Plasticity |
title_sort | dynamic neural fields with intrinsic plasticity |
topic | dynamic neural fields intrinsic plasticity adaptation dynamics |
url | http://journal.frontiersin.org/article/10.3389/fncom.2017.00074/full |
work_keys_str_mv | AT claudiusstrub dynamicneuralfieldswithintrinsicplasticity AT gregorschoner dynamicneuralfieldswithintrinsicplasticity AT florentinworgotter dynamicneuralfieldswithintrinsicplasticity AT yuliasandamirskaya dynamicneuralfieldswithintrinsicplasticity |