Where does value come from?

The computational framework of reinforcement learning (RL) has allowed us to both understand biological brains and build successful artificial agents. However, in this opinion, we highlight open challenges for RL as a model of animal behaviour in natural environments. We ask how the external reward function is designed for biological systems, and how we can account for the context sensitivity of valuation. We summarise both old and new theories proposing that animals track current and desired internal states and seek to minimise the distance to a goal across multiple value dimensions. We suggest that this framework readily accounts for canonical phenomena observed in the fields of psychology, behavioural ecology, and economics, and recent findings from brain-imaging studies of value-guided decision-making.
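
A minimal sketch of the kind of idea summarised above, not the authors' model: one way to read "minimise the distance to a goal across multiple value dimensions" is to score a transition by how much it reduces the distance between a multidimensional internal state and a desired setpoint. All names, functions, and numbers below are illustrative assumptions.

import numpy as np

def drive(state, setpoint):
    # Distance between the current internal state and the desired internal state.
    return np.linalg.norm(np.asarray(state) - np.asarray(setpoint))

def reward(state_before, state_after, setpoint):
    # Reward as the reduction in drive, i.e. progress toward the goal state.
    return drive(state_before, setpoint) - drive(state_after, setpoint)

# Example with two hypothetical value dimensions (e.g. "energy" and "hydration").
setpoint = [1.0, 1.0]
print(reward([0.2, 0.9], [0.6, 0.9], setpoint))  # positive: the agent moved toward the goal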

Bibliographic Details
Main Authors: Juechems, K; Summerfield, C
Format: Journal article
Language: English
Published: Cell Press, 2019