Linking reinforcement learning and cognitive maps to understand how the brain represents abstract tasks


Bibliographic Details
Main Author: Baram, A
Other Authors: Behrens, T
Format: Thesis
Language:English
Published: 2020
Subjects: Neurosciences; Cognitive neuroscience
description <p>The terms “reinforcement learning” (RL) and “cognitive maps” both have their roots in early behaviourist psychology. However, in neuroscience, these two subfields have largely progressed in parallel. In this thesis, I use ideas and techniques from one subfield to address open questions in the other. </p> <p>While the theory of RL has dramatically advanced our understanding of the brain’s learning algorithms, we still do not know how RL tasks are represented. Here, I take inspiration from the well-studied representations of the cognitive map in spatial tasks to investigate how abstract non-spatial RL tasks are represented and how task knowledge might be generalised to novel situations. I present converging results suggesting that the same areas that encode the structure of spatial tasks also encode the structure of abstract RL tasks. In addition, I use ideas from spatial cognitive maps to suggest novel interpretations of heavily studied RL neural signals. Further, taking inspiration from the discovery of inference mechanisms over the structure of spatial tasks, I suggest a study, for which I already have a task and a model, that could shed light on similar structural inference mechanisms in RL tasks.</p> <p>While navigation and planning in physical space have been thoroughly studied, it is not clear how animals can navigate through cognitive maps with arbitrary topology. Here, I investigate spatial cognitive maps through the lens of RL formalism, and use ideas from RL to suggest a flexible and efficient planning algorithm that can be used both in spatial environments and in environments with arbitrary topology. </p> <p>This thesis demonstrates the progress that can be made by bringing these two subfields together, and hopefully brings us one step closer to understanding the mechanisms underlying the human capacity for flexible behaviour. </p>
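The abstract's point about planning over cognitive maps "with arbitrary topology" can be illustrated with a minimal, generic sketch. This is not the thesis's own algorithm; it is standard value iteration over a state graph, chosen because it works identically whether the states form a spatial grid or an arbitrary graph. All states, edges, and rewards below are illustrative assumptions.

```python
# Generic value-iteration sketch: planning over a state graph whose
# topology is arbitrary (a spatial grid is just a special case).
# States, edges, and rewards are illustrative, not from the thesis.

GAMMA = 0.9  # discount factor

# Adjacency list: state -> successor states (arbitrary topology).
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],  # terminal state
}

# Reward received on entering each state.
reward = {"A": 0.0, "B": 0.0, "C": -1.0, "D": 10.0}

def value_iteration(graph, reward, gamma=GAMMA, tol=1e-6):
    """Iterate V(s) = max over successors s' of [r(s') + gamma * V(s')]."""
    V = {s: 0.0 for s in graph}
    while True:
        delta = 0.0
        for s, succs in graph.items():
            if not succs:  # terminal states keep value 0
                continue
            new_v = max(reward[t] + gamma * V[t] for t in succs)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            return V

def greedy_plan(V, graph, reward, start):
    """Follow the greedy policy implied by V from `start` to a terminal."""
    path, s = [start], start
    while graph[s]:
        s = max(graph[s], key=lambda t: reward[t] + GAMMA * V[t])
        path.append(s)
    return path

V = value_iteration(graph, reward)
print(greedy_plan(V, graph, reward, "A"))  # -> ['A', 'B', 'D']
```

Nothing in the sketch references spatial coordinates: the same computation plans a route through a maze or through an abstract task graph, which is the flexibility the abstract argues for.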
id oxford-uuid:77c884d3-1fcd-4e8e-8e38-5c058b4e9afc
institution University of Oxford
topic Neurosciences
Cognitive neuroscience