The geometry of representational drift in natural and artificial neural networks.
Neurons in sensory areas encode/represent stimuli. Surprisingly, recent studies have suggested that, even during persistent performance, these representations are not stable and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds...
Main Authors: | Kyle Aitken, Marina Garrett, Shawn Olsen, Stefan Mihalas |
---|---|
Format: | Article |
Language: | English |
Published: | Public Library of Science (PLoS), 2022-11-01 |
Series: | PLoS Computational Biology |
Online Access: | https://doi.org/10.1371/journal.pcbi.1010716 |
_version_ | 1797974022330253312 |
---|---|
author | Kyle Aitken; Marina Garrett; Shawn Olsen; Stefan Mihalas |
author_facet | Kyle Aitken; Marina Garrett; Shawn Olsen; Stefan Mihalas |
author_sort | Kyle Aitken |
collection | DOAJ |
description | Neurons in sensory areas encode/represent stimuli. Surprisingly, recent studies have suggested that, even during persistent performance, these representations are not stable and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging, and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed "representational drift". In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli and for stimuli that are behaviorally relevant. Across experiments, the drift differs from in-session variance and most often occurs along directions that have the most in-class variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks whose representations are updated by continual learning in the presence of dropout, i.e. a random masking of nodes/weights, but not other types of noise. Therefore, we conclude that the representational drift in biological networks may be driven by an underlying dropout-like noise acting while the network continuously learns, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g. by preventing overfitting. |
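The abstract's central geometric claim, that drift most often lies along directions of largest in-class (trial-to-trial) variance, can be made concrete with a short computation. Below is a minimal numpy sketch on synthetic data; the setup, variable names, and parameters are illustrative assumptions, not the authors' analysis code.

```python
# Minimal sketch (synthetic data, illustrative names; not the authors' code).
# Question from the abstract: how much of the drift in a stimulus class's
# mean response lies along the directions of largest in-class variance?
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons, n_latent = 200, 100, 5

# A few shared axes generate most of the trial-to-trial (in-class) variance.
noise_axes = rng.normal(0, 1, (n_latent, n_neurons))

def session(mean):
    """Simulate trial-by-neuron responses to one stimulus in one session."""
    latent = rng.normal(0, 1, (n_trials, n_latent)) @ noise_axes
    return mean + latent + rng.normal(0, 0.1, (n_trials, n_neurons))

base = rng.normal(0, 1, n_neurons)
trials_day1 = session(base)
# Days later: the mean has drifted, here mostly along one high-variance axis.
trials_day2 = session(base + 0.5 * noise_axes[0])

# Drift vector: the change in the class mean across sessions (unit-normalized).
drift = trials_day2.mean(axis=0) - trials_day1.mean(axis=0)
drift /= np.linalg.norm(drift)

# In-class variance directions of day 1 via PCA (eigendecomposition).
centered = trials_day1 - trials_day1.mean(axis=0)
evals, evecs = np.linalg.eigh(centered.T @ centered / (n_trials - 1))
evecs = evecs[:, ::-1]  # reorder columns so variance is descending

# Fraction of the (squared) drift captured by the top-k variance directions.
coeffs = evecs.T @ drift
for k in (1, 5, 20):
    print(f"top-{k} variance axes capture {np.sum(coeffs[:k]**2):.2f} of drift")
```

In this synthetic setup the top few axes capture nearly all of the drift by construction; the paper's finding is that recordings from mouse visual cortex show a similar alignment, while in-session variability alone does not account for the drift.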
first_indexed | 2024-04-11T04:12:23Z |
format | Article |
id | doaj.art-e8cd955d41254d4c9deafaafbb320d75 |
institution | Directory Open Access Journal |
issn | 1553-734X; 1553-7358 |
language | English |
last_indexed | 2024-04-11T04:12:23Z |
publishDate | 2022-11-01 |
publisher | Public Library of Science (PLoS) |
record_format | Article |
series | PLoS Computational Biology |
spelling | doaj.art-e8cd955d41254d4c9deafaafbb320d75 (2023-01-01T05:31:13Z); eng; Kyle Aitken, Marina Garrett, Shawn Olsen, Stefan Mihalas. "The geometry of representational drift in natural and artificial neural networks." PLoS Computational Biology 18(11): e1010716, 2022-11-01. Public Library of Science (PLoS). ISSN 1553-734X, 1553-7358. doi:10.1371/journal.pcbi.1010716. https://doi.org/10.1371/journal.pcbi.1010716 |
title | The geometry of representational drift in natural and artificial neural networks. |
title_sort | geometry of representational drift in natural and artificial neural networks |
url | https://doi.org/10.1371/journal.pcbi.1010716 |