Biologically-Plausible Learning Algorithms Can Scale to Large Datasets

The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two more biologically plaus...


Bibliographic Details
Main Authors: Xiao, Will, Chen, Honglin, Liao, Qianli, Poggio, Tomaso
Format: Technical Report
Language: en_US
Published: Center for Brains, Minds and Machines (CBMM) 2018
Subjects:
Online Access: http://hdl.handle.net/1721.1/118195
_version_ 1826207522004926464
author Xiao, Will
Chen, Honglin
Liao, Qianli
Poggio, Tomaso
author_facet Xiao, Will
Chen, Honglin
Liao, Qianli
Poggio, Tomaso
author_sort Xiao, Will
collection MIT
description The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two more biologically plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirement and demonstrate learning capabilities comparable to those of BP on small datasets. However, a recent study by Bartunov et al. (2018) evaluated variants of target propagation (TP) and feedback alignment (FA) on the MNIST, CIFAR, and ImageNet datasets, and found that although many of the proposed algorithms perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights share signs but not magnitudes. We examine the performance of sign-symmetry and feedback alignment on the ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet, RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018) and establish a new benchmark for future biologically plausible learning algorithms on more difficult datasets and more complex architectures.
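The abstract distinguishes the three algorithms by how errors are routed backward through a layer: BP uses the transpose of the forward weights, feedback alignment uses a fixed random matrix, and sign-symmetry uses feedback weights that share the signs (but not the magnitudes) of the forward weights. A minimal NumPy sketch of this distinction for a single linear layer; the shapes, variable names, and unit-magnitude choice for sign-symmetry are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward weights of one linear layer: maps 3 inputs to 4 outputs.
W = rng.normal(size=(4, 3))

# Backpropagation routes errors backward through W's transpose.
B_bp = W.T

# Feedback alignment (Lillicrap et al., 2016): a fixed random
# feedback matrix, drawn independently of W.
B_fa = rng.normal(size=(3, 4))

# Sign-symmetry (Liao et al., 2016): feedback weights share W's
# signs but not its magnitudes (unit magnitudes here, illustratively).
B_ss = np.sign(W.T)

# Error signal arriving at the layer's output.
delta = rng.normal(size=4)

grad_input_bp = B_bp @ delta  # exact backprop error signal
grad_input_fa = B_fa @ delta  # feedback-alignment approximation
grad_input_ss = B_ss @ delta  # sign-symmetric approximation
```

The point of the sketch is only the construction of the three feedback matrices; in a full training loop these backward signals would be used to compute weight updates for the preceding layers.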
first_indexed 2024-09-23T13:50:52Z
format Technical Report
id mit-1721.1/118195
institution Massachusetts Institute of Technology
language en_US
last_indexed 2024-09-23T13:50:52Z
publishDate 2018
publisher Center for Brains, Minds and Machines (CBMM)
record_format dspace
spelling mit-1721.1/1181952019-04-12T22:43:46Z Biologically-Plausible Learning Algorithms Can Scale to Large Datasets Xiao, Will Chen, Honglin Liao, Qianli Poggio, Tomaso backpropagation feedback alignment sign-symmetry algorithm The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two more biologically plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirement and demonstrate learning capabilities comparable to those of BP on small datasets. However, a recent study by Bartunov et al. (2018) evaluated variants of target propagation (TP) and feedback alignment (FA) on the MNIST, CIFAR, and ImageNet datasets, and found that although many of the proposed algorithms perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights share signs but not magnitudes. We examine the performance of sign-symmetry and feedback alignment on the ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet, RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018) and establish a new benchmark for future biologically plausible learning algorithms on more difficult datasets and more complex architectures. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
2018-09-28T19:24:08Z 2018-09-28T19:24:08Z 2018-09-27 Technical Report Working Paper Other http://hdl.handle.net/1721.1/118195 en_US CBMM Memo Series;092 application/pdf Center for Brains, Minds and Machines (CBMM)
spellingShingle backpropagation
feedback alignment
sign-symmetry algorithm
Xiao, Will
Chen, Honglin
Liao, Qianli
Poggio, Tomaso
Biologically-Plausible Learning Algorithms Can Scale to Large Datasets
title Biologically-Plausible Learning Algorithms Can Scale to Large Datasets
title_full Biologically-Plausible Learning Algorithms Can Scale to Large Datasets
title_fullStr Biologically-Plausible Learning Algorithms Can Scale to Large Datasets
title_full_unstemmed Biologically-Plausible Learning Algorithms Can Scale to Large Datasets
title_short Biologically-Plausible Learning Algorithms Can Scale to Large Datasets
title_sort biologically plausible learning algorithms can scale to large datasets
topic backpropagation
feedback alignment
sign-symmetry algorithm
url http://hdl.handle.net/1721.1/118195
work_keys_str_mv AT xiaowill biologicallyplausiblelearningalgorithmscanscaletolargedatasets
AT chenhonglin biologicallyplausiblelearningalgorithmscanscaletolargedatasets
AT liaoqianli biologicallyplausiblelearningalgorithmscanscaletolargedatasets
AT poggiotomaso biologicallyplausiblelearningalgorithmscanscaletolargedatasets