Learning and testing causal models with interventions

© 2018 Curran Associates Inc. All rights reserved. We consider testing and learning problems on causal Bayesian networks as defined by Pearl [Pea09]. Given a causal Bayesian network M on a graph with n discrete variables and bounded in-degree and bounded "confounded components", we show that O(log n) interventions on an unknown causal Bayesian network X on the same graph, and Õ(n/ε²) samples per intervention, suffice to efficiently distinguish whether X = M or whether there exists some intervention under which X and M are farther than ε in total variation distance. We also obtain sample/time/intervention-efficient algorithms for: (i) testing the identity of two unknown causal Bayesian networks on the same graph; and (ii) learning a causal Bayesian network on a given graph. Although our algorithms are non-adaptive, we show that adaptivity does not help in general: Ω(log n) interventions are necessary for testing the identity of two unknown causal Bayesian networks on the same graph, even adaptively. Our algorithms are enabled by a new subadditivity inequality for the squared Hellinger distance between two causal Bayesian networks.
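As background for the two distance measures named in the abstract, the following is a minimal illustrative sketch (not taken from the paper) of the squared Hellinger distance and the total variation distance between two discrete distributions given as probability vectors; the example distributions `p` and `q` are arbitrary.

```python
import math

def squared_hellinger(p, q):
    """Squared Hellinger distance: H^2(p, q) = 1 - sum_i sqrt(p_i * q_i)."""
    return 1.0 - sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def total_variation(p, q):
    """Total variation distance: TV(p, q) = (1/2) * sum_i |p_i - q_i|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Example: two distributions over a 2-element support.
p = [0.5, 0.5]
q = [0.9, 0.1]
h2 = squared_hellinger(p, q)
tv = total_variation(p, q)

# Standard relation between the two: H^2 <= TV <= sqrt(2 * H^2),
# which is why a bound in squared Hellinger yields a bound in TV.
assert h2 <= tv <= math.sqrt(2 * h2)
```

Squared Hellinger is convenient in this setting because, unlike total variation, it subadditively decomposes across the local structure of a Bayesian network, which is the role the paper's new inequality plays.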


Bibliographic Details
Main Authors: Acharya, J, Bhattacharyya, A, Daskalakis, C, Kandasamy, S
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/1721.1/143123
Citation: Acharya, J, Bhattacharyya, A, Daskalakis, C and Kandasamy, S. 2018. "Learning and testing causal models with interventions." Advances in Neural Information Processing Systems, 2018-December.
Journal: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems (NIPS)
Publisher URL: https://papers.nips.cc/paper/2018/hash/78631a4bb5303be54fa1cfdcb958c00a-Abstract.html
Type: Conference Paper (http://purl.org/eprint/type/ConferencePaper)
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.