Private and Provably Efficient Federated Decision-Making

In this thesis, we study sequential multi-armed bandit and reinforcement learning in the federated setting, where a group of agents collaborates to improve their collective reward by communicating over a network. We first study the multi-armed bandit problem in a decentralized environment. We study federated bandit learning under several real-world environmental constraints, such as differentially private communication, heavy-tailed perturbations, and the presence of adversarial corruptions. For each of these constraints, we present algorithms with near-optimal regret guarantees and maintain competitive experimental performance on real-world networks. We characterize the asymptotic and minimax rates for these problems via network-dependent lower bounds as well. These algorithms provide substantial improvements over existing work in a variety of real-world and synthetic network topologies. Next, we study the contextual bandit problem in a federated learning setting with differential privacy. In this setting, we propose algorithms that match the optimal rate (up to poly-logarithmic terms) with only a logarithmic communication budget. We extend our approach to heterogeneous federated learning via a kernel-based approach, and also provide a no-regret algorithm for private Gaussian process bandit optimization. Finally, we study reinforcement learning in both the multi-agent and federated setting with linear function approximation. We propose variants of least-squares value iteration algorithms that are provably no-regret with only a constant communication budget. We believe that the future of machine learning entails large-scale cooperation between various data-driven entities, and this work will be beneficial to the development of reliable, scalable, and secure decision-making systems.
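The federated bandit setting with differentially private communication can be illustrated with a minimal sketch. This is not the thesis's algorithm; it is a toy protocol, under assumed design choices, in which each agent runs a local UCB1 index and the group periodically pools Laplace-noised reward increments (rewards in [0, 1], so each broadcast message has sensitivity 1). The function name, sync schedule, and noise calibration are all illustrative assumptions.

```python
import math
import random

def private_federated_ucb(arm_means, n_agents=3, horizon=2000,
                          epsilon=1.0, sync_every=100, seed=0):
    """Toy federated UCB1 with Laplace-noised statistic sharing (illustrative)."""
    rng = random.Random(seed)
    k = len(arm_means)
    # shared statistics, identical across agents after each synchronization
    g_counts = [0] * k
    g_sums = [0.0] * k
    # per-agent increments accumulated since the last synchronization
    d_counts = [[0] * k for _ in range(n_agents)]
    d_sums = [[0.0] * k for _ in range(n_agents)]
    regret = 0.0
    best = max(arm_means)

    def laplace(scale):
        # Laplace(scale) noise as the difference of two exponentials
        return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

    for t in range(1, horizon + 1):
        for a in range(n_agents):
            # each agent combines shared and its own unshared statistics
            n = [g_counts[i] + d_counts[a][i] for i in range(k)]
            s = [g_sums[i] + d_sums[a][i] for i in range(k)]
            if min(n) == 0:
                arm = n.index(0)  # initialization: pull an unpulled arm
            else:
                # standard UCB1 index on the combined statistics
                arm = max(range(k), key=lambda i: s[i] / n[i]
                          + math.sqrt(2 * math.log(t * n_agents) / n[i]))
            reward = 1.0 if rng.random() < arm_means[arm] else 0.0
            d_counts[a][arm] += 1
            d_sums[a][arm] += reward
            regret += best - arm_means[arm]
        if t % sync_every == 0:
            # agents broadcast noised reward increments; sensitivity-1 sums
            # with Laplace(1/epsilon) noise give epsilon-DP per message
            for a in range(n_agents):
                for i in range(k):
                    g_counts[i] += d_counts[a][i]
                    g_sums[i] += d_sums[a][i] + laplace(1.0 / epsilon)
                d_counts[a] = [0] * k
                d_sums[a] = [0.0] * k
    return regret
```

Keeping local increments separate from the synchronized pool avoids double-counting across rounds, and the noise is added once per broadcast rather than per pull, which is the basic tension (privacy budget versus communication budget) the thesis studies.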


Bibliographic Details
Main Author: Dubey, Abhimanyu
Other Authors: Pentland, Alex P.
Format: Thesis
Published: Massachusetts Institute of Technology 2022
Online Access: https://hdl.handle.net/1721.1/143222
Department: Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Degree: Ph.D.
Thesis Date: February 2022
Rights: In Copyright - Educational Use Permitted; Copyright MIT (http://rightsstatements.org/page/InC-EDU/1.0/)