Neural Networks With Motivation

Animals rely on internal motivational states to make decisions. The role of motivational salience in decision making is in the early stages of mathematical understanding. Here, we propose a reinforcement learning framework that relies on neural networks to learn optimal ongoing behavior for dynamically changing motivation values. First, we show that neural networks implementing Q-learning with motivational salience can navigate an environment with dynamic rewards without adjustments in synaptic strengths when the needs of an agent shift. In this setting, our networks may display elements of addictive behaviors. Second, we use a similar framework in a hierarchical manager-agent system to implement a reinforcement learning algorithm with motivation that both infers motivational states and behaves. Finally, we show that, when trained in the Pavlovian conditioning setting, the responses of the neurons in our model resemble previously published neuronal recordings in the ventral pallidum, a basal ganglia structure involved in motivated behaviors. We conclude that motivation allows Q-learning networks to quickly adapt their behavior to conditions when expected reward is modulated by the agent's dynamic needs. Our approach addresses the algorithmic rationale of motivation and takes a step toward better interpretability of behavioral data via inference of motivational dynamics in the brain.

Bibliographic Details
Main Authors: Sergey A. Shuvaev, Ngoc B. Tran, Marcus Stephenson-Jones, Bo Li, Alexei A. Koulakov
Author Affiliations: Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, United States (Shuvaev, Tran, Stephenson-Jones, Li, Koulakov); Sainsbury Wellcome Centre, University College London, London, United Kingdom (Stephenson-Jones)
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-01-01
Series: Frontiers in Systems Neuroscience, Vol. 14, Article 609316
ISSN: 1662-5137
DOI: 10.3389/fnsys.2020.609316
Subjects: machine learning; motivational salience; reinforcement learning; artificial intelligence; addiction; hierarchical reinforcement learning
Online Access: https://www.frontiersin.org/articles/10.3389/fnsys.2020.609316/full
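
The abstract describes Q-learning in which the agent's motivation enters the computation of expected reward, so behavior tracks shifting needs without re-learning synaptic strengths. Below is a minimal illustrative sketch of that idea, not the authors' implementation: a tabular Q-function conditioned on the currently dominant need, with subjective reward taken as a motivation-weighted sum of need-specific outcomes. All names (`subjective_reward`, `step`, the toy environment dynamics) are assumptions introduced for illustration.

```python
# Minimal sketch (not the paper's code): tabular Q-learning where the
# motivation state is part of the Q-function's input, so the policy
# changes with the agent's needs without any further learning.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, n_needs = 5, 2, 2        # illustrative sizes
Q = np.zeros((n_states, n_needs, n_actions))  # Q indexed by (state, dominant need, action)

alpha, gamma, epsilon = 0.1, 0.9, 0.1         # learning rate, discount, exploration

def subjective_reward(outcomes, motivation):
    """Reward experienced by the agent: a motivation-weighted sum of
    need-specific outcomes (e.g., water and food)."""
    return float(np.dot(motivation, outcomes))

def step(state, action):
    """Toy environment: each transition yields need-specific outcomes.
    Placeholder dynamics for illustration only."""
    outcomes = rng.random(n_needs) * (action + 1)
    next_state = (state + action + 1) % n_states
    return next_state, outcomes

state, need = 0, 0
motivations = np.eye(n_needs)                 # one-hot motivation per dominant need
for t in range(10_000):
    # epsilon-greedy action selection, conditioned on the current need
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state, need]))
    next_state, outcomes = step(state, action)
    r = subjective_reward(outcomes, motivations[need])
    # standard Q-learning update; motivation enters only via the state and reward
    Q[state, need, action] += alpha * (
        r + gamma * Q[next_state, need].max() - Q[state, need, action]
    )
    state = next_state
    if t % 500 == 0:                          # needs shift over time (e.g., thirst -> hunger)
        need = (need + 1) % n_needs
```

Because the need index is part of the Q-table's input, switching `need` switches the policy immediately, with no further updates to the learned values — the property the abstract attributes to motivation-aware Q-learning.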