Learning cortical hierarchies with temporal Hebbian updates

A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple abstraction levels. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and alternative biologically plausible training methods have therefore been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of these models propose that local errors are calculated for each neuron by comparing apical and somatic activities. Yet, from a neuroscience perspective, it is unclear how a neuron could compare signals across compartments. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, the inference latency and the amount of top-down feedback necessary, and that these losses are equivalent to the error-based losses used in machine learning. Moreover, we show that differential Hebbian updates work similarly well in other feedback-based deep learning frameworks such as Predictive Coding and Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that explains how temporal Hebbian learning rules can implement supervised hierarchical learning.
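The mechanism the abstract describes lends itself to a compact illustration: in a differential Hebbian rule, each weight changes in proportion to the presynaptic rate times the temporal change of the postsynaptic rate, so when apical feedback nudges the postsynaptic rate toward a target, that rate change itself carries the error signal and no explicit apical-vs-somatic comparison is needed. The sketch below is not the authors' implementation; the linear rate model, the nudging constant `beta`, and the helper `differential_hebbian_update` are illustrative assumptions.

```python
import numpy as np

def differential_hebbian_update(r_pre, r_post_before, r_post_after, lr=0.1):
    """Rate-based differential Hebbian rule: dW_ij = lr * (Δr_post_i) * r_pre_j.

    Δr_post is the change in postsynaptic rate over one time step. When that
    change is caused by apical (top-down) feedback, it acts as a local,
    per-neuron error signal, so no apical-vs-somatic comparison is needed.
    """
    return lr * np.outer(r_post_after - r_post_before, r_pre)

# Toy usage: a single linear layer whose postsynaptic rates are nudged
# toward a top-down target (beta and the linear rate model are assumptions).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 5))             # 3 postsynaptic x 5 presynaptic weights
r_pre = rng.random(5)                              # presynaptic firing rates
target = rng.random(3)                             # top-down target activity
beta = 0.2                                         # apical nudging strength

for _ in range(200):
    r_post = W @ r_pre                             # feedforward (somatic) firing rate
    r_nudged = r_post + beta * (target - r_post)   # apical feedback shifts the rate
    W += differential_hebbian_update(r_pre, r_post, r_nudged)

print(np.allclose(W @ r_pre, target, atol=1e-2))   # rates converge to the target
```

In this toy loop the nudged rate plays the role of the apically modulated firing rate: repeated updates drive the feedforward rates toward the top-down target without any neuron ever computing an explicit error difference.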

Bibliographic Details
Main Authors: Pau Vilimelis Aceituno, Matilde Tristany Farinha, Reinhard Loidl, Benjamin F. Grewe
Affiliations: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland (all authors); ETH AI Center, ETH Zurich, Zurich, Switzerland (P. V. Aceituno, B. F. Grewe)
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-05-01
Series: Frontiers in Computational Neuroscience, Vol. 17
ISSN: 1662-5188
DOI: 10.3389/fncom.2023.1136010
Collection: DOAJ (Directory of Open Access Journals)
Subjects: cortical hierarchies; deep learning; credit assignment; synaptic plasticity; backpropagation; spike-timing-dependent plasticity
Online Access: https://www.frontiersin.org/articles/10.3389/fncom.2023.1136010/full