Allocating risk mitigation across time

This article is about priority-setting for work aiming to reduce existential risk. Its chief claim is that, all else being equal, we should prefer work earlier and prefer to work on risks that might come early. This is because we are uncertain about when we will have to face different risks, because we expect diminishing returns of extra work, and because we expect that more people will work on these risks in the future.

I explore this claim both qualitatively and with explicit models. I consider its implications for two questions: first, "When is it best to do different kinds of work?"; second, "Which risks should we focus on?".

As a major application, I look at the case of risk from artificial intelligence. The best strategies for reducing this risk depend on when the risk is coming. I argue that we may be underinvesting in scenarios where AI comes soon even though these scenarios are relatively unlikely, because we will not have time later to address them.
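The paper's explicit models are not reproduced in this record, but its qualitative argument can be illustrated with a toy calculation (all functional forms and numbers below are hypothetical assumptions, not taken from the paper): with diminishing returns to mitigation work, an extra unit of work today is worth more against a risk that might arrive before much work has accumulated than against one arriving after many future workers have contributed.

```python
# Toy sketch (hypothetical assumptions, not the paper's model):
# value of one extra unit of risk-mitigation work done now,
# for risks that may arrive early vs. late.
#
# Assumed, following the abstract's qualitative argument:
#  - diminishing returns: risk reduction from w units of work ~ log(1 + w)
#  - more people work on these risks later, so a late-arriving risk
#    accumulates more total work before it must be faced.
import math

def mitigation(total_work: float) -> float:
    """Risk reduction achieved by total_work units (diminishing returns)."""
    return math.log1p(total_work)

def marginal_value_now(work_before_arrival: float) -> float:
    """Gain from one extra unit of work now, given how much work will
    have accumulated by the time the risk arrives."""
    return mitigation(work_before_arrival + 1) - mitigation(work_before_arrival)

early_risk = marginal_value_now(work_before_arrival=2)   # little time to prepare
late_risk = marginal_value_now(work_before_arrival=20)   # much more accumulated work

# Under these assumptions, the extra unit of work helps more
# against the risk that might come early.
print(early_risk > late_risk)  # True
```

The inequality holds for any strictly concave mitigation function, which is the sense in which diminishing returns favor front-loading work on possibly-early risks.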

Bibliographic Details
Main Author: Cotton-Barratt, O
Format: Report
Language: English
Published: Future of Humanity Institute, 2015
Institution: University of Oxford