A Bayesian account of learning and generalising representations in the brain


Bibliographic Details
Main Author: Whittington, JCR
Other Authors: Bogacz, R
Format: Thesis
Language:English
Published: 2019
Subjects: Neuroscience; Machine Intelligence; Cognitive Science
collection OXFORD
description <p>Without learning we would be limited to a set of preprogrammed behaviours. While that may be acceptable for flies, it does not provide the basis for the adaptive or intelligent behaviours familiar to humans. Learning, then, is one of the crucial components of brain operation. Learning, however, takes time. Thus, the key to adaptive behaviour is learning to systematically generalise; that is, to acquire knowledge that can be flexibly recombined to understand any world in front of you. This thesis attempts to make inroads on two questions: how can brain networks learn, and what are the principles behind representations of knowledge that allow generalisation? Though bound by a common framework of Bayesian thinking, the thesis considers these questions in two separate parts.</p> <p>In the first part of the thesis, we investigate algorithms the brain may use to update connection strengths. While learning attempts to optimise a global function of the brain state, each connection has access only to local information. This is in contrast to artificial networks, where global information is easily conveyed to each synapse via the back-propagation algorithm. We show that, contrary to decades-old beliefs, an algorithm analogous to back-propagation could be implemented in the local dynamics and learning rules of brain networks. We show an exact equivalence between the two algorithms and demonstrate that they perform identically on a standard machine learning benchmark. These results are the first to show that an algorithm as efficient as those used in machine learning could be implemented in the brain.</p> <p>In the second part of the thesis, we investigate frameworks for learning and generalising neural representations. It has been proposed that a cognitive map encoding the relationships between entities in the world supports flexible behaviour. This map is traditionally associated with the hippocampal formation, owing to its beautiful representations of space. This cognitive map, though, seems at odds with the other well-characterised aspect of the hippocampus: relational memory. Here we unify spatial cognition and relational memory within the framework of generalising relational knowledge. Using this framework, we build a machine that learns and generalises knowledge in both spatial and non-spatial tasks, while also displaying representations that mirror those in the brain. Finally, we confirm model predictions in neural data. Together, these results provide a computational framework for a systematic organisation of knowledge spanning all domains of behaviour.</p>
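The abstract's first-part claim, that a global error signal can be recovered from purely local dynamics, can be illustrated with a toy sketch. This is not the thesis's own formulation: it assumes a two-layer linear network and a generic predictive-coding-style relaxation, in which a hidden value node `v1` settles under local prediction errors and each weight update uses only its own pre- and post-synaptic quantities. At the settled state the local, Hebbian-style updates align with the back-propagation gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear network: x0 -> W0 -> x1 -> W1 -> x2
W0 = rng.normal(scale=0.1, size=(4, 3))
W1 = rng.normal(scale=0.1, size=(2, 4))
x0 = rng.normal(size=3)
x1 = W0 @ x0
x2 = W1 @ x1
target = x2 + 0.01 * rng.normal(size=2)    # target close to the prediction

# Back-propagation: the output error is carried backwards as a global signal
delta2 = x2 - target
delta1 = W1.T @ delta2
grad_W1 = np.outer(delta2, x1)
grad_W0 = np.outer(delta1, x0)

# Predictive coding sketch: clamp the output to the target and let the
# hidden value node relax using only locally available prediction errors
v1 = x1.copy()
for _ in range(200):
    e1 = v1 - W0 @ x0                      # error at the hidden layer
    e2 = target - W1 @ v1                  # error at the clamped output
    v1 += 0.1 * (-e1 + W1.T @ e2)          # local relaxation dynamics

e1 = v1 - W0 @ x0                          # errors at the settled state
e2 = target - W1 @ v1
pc_update_W1 = np.outer(e2, v1)            # Hebbian: error x presynaptic value
pc_update_W0 = np.outer(e1, x0)

def cosine(a, b):
    return float(a.ravel() @ b.ravel() / (np.linalg.norm(a) * np.linalg.norm(b)))

# The local updates point along the (negative) backprop gradients:
# both alignments should be very close to 1 when the output error is small
align_W1 = cosine(pc_update_W1, -grad_W1)
align_W0 = cosine(pc_update_W0, -grad_W0)
```

The alignment is approximate here and becomes exact in the small-error limit; the thesis's contribution, per the abstract, is an exact equivalence between the two algorithms rather than this toy correspondence.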
id oxford-uuid:2b513340-9558-41dd-8533-0f250df98c66
institution University of Oxford
spelling oxford-uuid:2b513340-9558-41dd-8533-0f250df98c66 (2024-12-07T10:03:03Z). A Bayesian account of learning and generalising representations in the brain. Thesis, http://purl.org/coar/resource_type/c_db06, uuid:2b513340-9558-41dd-8533-0f250df98c66. Subjects: Neuroscience; Machine Intelligence; Cognitive Science. English. Hyrax Deposit, 2019. Whittington, JCR; Bogacz, R; Behrens, T; Friston, K; Saxe, A.
topic Neuroscience
Machine Intelligence
Cognitive Science