Information Structures for Causally Explainable Decisions


Bibliographic Details
Main Author: Louis Anthony Cox
Format: Article
Language: English
Published: MDPI AG 2021-05-01
Series: Entropy
Subjects:
Online Access: https://www.mdpi.com/1099-4300/23/5/601
collection DOAJ
description For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain <i>why</i> its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs—the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user’s plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive “System 1” decision-making in human psychology) and also slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative “System 2” decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments.
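The value-of-information (VoI) concept named in the abstract can be made concrete with a small worked example. The following sketch is illustrative only and is not taken from the paper: it computes the expected value of perfect information (EVPI) for a two-state, two-action decision problem, using made-up prior probabilities and utilities.

```python
# Illustrative EVPI calculation for a tiny decision problem.
# All numbers here are hypothetical, chosen only for demonstration.

prior = {"good": 0.7, "bad": 0.3}            # P(state)
utility = {                                   # u(action, state)
    ("act", "good"): 100, ("act", "bad"): -50,
    ("wait", "good"): 0,  ("wait", "bad"): 0,
}
actions = ["act", "wait"]
states = ["good", "bad"]

def expected_utility(action):
    """Expected utility of an action under the prior over states."""
    return sum(prior[s] * utility[(action, s)] for s in states)

# Decide now, without further information: best action under the prior.
eu_no_info = max(expected_utility(a) for a in actions)

# With perfect information: observe the state first, then act optimally.
eu_perfect = sum(prior[s] * max(utility[(a, s)] for a in actions)
                 for s in states)

# EVPI: the most a rational decision-maker would pay to learn the state.
evpi = eu_perfect - eu_no_info
print(eu_no_info, eu_perfect, evpi)
```

With these numbers, acting blind yields expected utility 55, perfect foresight yields 70, so resolving the uncertainty is worth at most 15 utility units; this is the quantity the paper's VoI machinery generalizes to richer causal models.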
issn 1099-4300
doi 10.3390/e23050601
author_affiliation Department of Business Analytics, University of Colorado School of Business, and MoirAI, 503 N. Franklin Street, Denver, CO 80218, USA
topic explainable AI
XAI
causality
decision analysis
information
explanation