Model-based prioritization for acquiring protection.

Full description

Protection often involves the capacity to prospectively plan the actions needed to mitigate harm. The computational architecture of protection decisions remains unclear, as does whether these decisions differ from other beneficial prospective actions such as reward acquisition. Here we compare protection acquisition with reward acquisition and punishment avoidance to examine overlapping and distinct features across the three action types. Protection acquisition is positively valenced, similar to reward: for both protection and reward, the more the actor gains, the greater the benefit. However, reward and protection occur in different contexts, with protection arising in aversive contexts. Punishment avoidance also occurs in aversive contexts but differs from protection because punishment is negatively valenced and motivates avoidance. Across three independent studies (total N = 600), we applied computational modeling to examine model-based reinforcement learning for protection, reward, and punishment in humans. Decisions motivated by acquiring protection evoked a higher degree of model-based control than acquiring reward or avoiding punishment, with no significant differences in learning rate. The context-valence asymmetry characteristic of protection increased deployment of flexible decision strategies, suggesting that model-based control depends on the context in which outcomes are encountered as well as on the valence of the outcome.
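This record does not include the paper's model specification, but the quantities the abstract reports ("degree of model-based control" and "learning rate") are typically estimated with a hybrid model-based/model-free reinforcement learner. The sketch below is a minimal illustration of that standard hybrid for a generic two-stage task: a weighting parameter w mixes model-based values (planned through a known transition model) with model-free cached values, a shared learning rate alpha drives delta-rule updates, and a softmax rule generates choices. The task structure, transition matrix, parameter values, and payoff probabilities here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical two-stage task structure (NOT the authors' task):
# a first-stage choice between two actions leads probabilistically
# to one of two second-stage states, each paying out an outcome.
N_ACTIONS, N_STATES = 2, 2
TRANSITIONS = np.array([[0.7, 0.3],    # P(second-stage state | action 0)
                        [0.3, 0.7]])   # P(second-stage state | action 1)
PAYOFF_PROB = np.array([0.6, 0.4])     # illustrative outcome probabilities

alpha = 0.5   # learning rate (the parameter reported as not differing)
w = 0.6       # model-based weight ("degree of model-based control")
beta = 3.0    # softmax inverse temperature

q_mf = np.zeros(N_ACTIONS)     # model-free first-stage action values
q_stage2 = np.zeros(N_STATES)  # learned values of second-stage states

def choose_action():
    """Mix model-based and model-free values, then sample a softmax choice."""
    q_mb = TRANSITIONS @ q_stage2       # plan through the known transitions
    q = w * q_mb + (1.0 - w) * q_mf     # hybrid valuation
    p = np.exp(beta * q)
    p /= p.sum()
    return rng.choice(N_ACTIONS, p=p)

def update(action, state, outcome):
    """Delta-rule updates; the outcome could be reward gained, protection
    gained, or punishment avoided, depending on the condition."""
    q_stage2[state] += alpha * (outcome - q_stage2[state])
    q_mf[action] += alpha * (outcome - q_mf[action])

for _ in range(200):
    a = choose_action()
    s = rng.choice(N_STATES, p=TRANSITIONS[a])   # sampled transition
    outcome = float(rng.random() < PAYOFF_PROB[s])
    update(a, s, outcome)
```

In model fits of this general kind, a higher estimated w in the protection condition than in the reward or punishment conditions would correspond to the paper's headline result, while alpha corresponds to the learning-rate parameter the abstract reports as not differing across conditions.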

Bibliographic Details
Main Authors: Sarah M Tashjian, Toby Wise, Dean Mobbs
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2022-12-01
Series: PLoS Computational Biology, Vol 18, Iss 12, p e1010805 (2022)
ISSN: 1553-734X, 1553-7358
Online Access: https://doi.org/10.1371/journal.pcbi.1010805