Logarithmic distributions prove that intrinsic learning is Hebbian [version 2; referees: 2 approved]
Main Author: | Gabriele Scheler (Carl Correns Foundation for Mathematical Biology, Mountain View, CA, 94040, USA)
---|---
Format: | Article
Language: | English
Published: | F1000 Research Ltd, 2017-10-01
Series: | F1000Research
Subjects: | Theoretical & Computational Neuroscience
ISSN: | 2046-1402
DOI: | 10.12688/f1000research.12130.2
Online Access: | https://f1000research.com/articles/6-1222/v2
Abstract: In this paper, we present data on the lognormal distributions of spike rates, synaptic weights, and intrinsic excitability (gain) for neurons in various brain areas, such as auditory and visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights, and gains in all brain areas examined. Differences in connectivity (strongly recurrent in cortex vs. feed-forward in striatum and cerebellum), in neurotransmitter (GABA in striatum vs. glutamate in cortex), and in level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant for this feature. Logarithmic-scale distributions of weights and gains appear to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only synaptic weights but also intrinsic gains must undergo strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
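The abstract's key inference is that lognormally distributed weights and gains point to multiplicative, Hebbian-style plasticity rather than additive updates. The sketch below is only an illustration of that general statistical argument, not the paper's model: the names and parameters (n_synapses, steps, the noise scale) are assumptions chosen for clarity. It contrasts value-proportional (multiplicative) accumulation with additive accumulation and compares the skewness of the resulting weight distributions; the same contrast applies to intrinsic gains.

```python
# Minimal illustration (not the paper's model): multiplicative, Hebbian-style
# accumulation of weight changes yields a lognormal distribution, whereas
# additive accumulation stays roughly normal. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def skew(v):
    # Sample skewness: ~0 for a normal distribution, clearly positive for a
    # heavy right tail such as a lognormal.
    z = (v - v.mean()) / v.std()
    return float((z ** 3).mean())

n_synapses = 50_000   # number of independent weights (assumption)
steps = 500           # number of plasticity events (assumption)

w_mult = np.ones(n_synapses)   # weights under multiplicative (value-proportional) updates
w_add = np.ones(n_synapses)    # weights under additive updates, for comparison

for _ in range(steps):
    # Per-synapse "coincidence" signal standing in for correlated pre/post
    # activity; its scale is an arbitrary assumption.
    hebb = rng.normal(loc=0.0, scale=0.05, size=n_synapses)
    w_mult *= np.exp(hebb)           # change proportional to the current weight
    w_add += hebb                    # change independent of the current weight
    # Simple homeostatic renormalization keeps both means at 1.
    w_mult /= w_mult.mean()
    w_add += 1.0 - w_add.mean()

print(f"multiplicative: skew(raw) = {skew(w_mult):6.2f}, skew(log) = {skew(np.log(w_mult)):6.2f}")
print(f"additive:       skew(raw) = {skew(w_add):6.2f}")
```

With these illustrative settings, the multiplicative run should show strong positive skew in the raw values but near-zero skew after a log transform, i.e. an approximately lognormal distribution, while the additive run stays approximately symmetric.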