Defending non-Bayesian learning against adversarial attacks

Abstract: This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world and try to collaboratively learn the true state out of m alternatives. We focus on the impact of adversarial agents on the performance of consensus-based non-Bayesian learning, in which non-faulty agents combine local learning updates with consensus primitives. In particular, we consider the scenario where an unknown subset of agents suffer Byzantine faults, i.e., behave arbitrarily. We propose two learning rules. In both, each non-faulty agent keeps a local variable that is a stochastic vector over the m possible states; the entries of this vector can be viewed as the scores the agent assigns to the corresponding states. We say a non-faulty agent learns the underlying truth if, asymptotically, it assigns score one to the true state and zero to every other state. In our first update rule, each agent updates its local score vector as (up to normalization) the product of (1) the likelihood of its cumulative private signals and (2) the weighted geometric average of the score vectors of its incoming neighbors and itself. Under reasonable assumptions on the underlying network structure and the global identifiability of the network, we show that all non-faulty agents asymptotically learn the true state almost surely. We also propose a modified variant of the first learning rule whose complexity per iteration per agent is $$O(m^2 n \log n)$$, where n is the number of agents in the network, and we show that this modified rule works under a less restrictive network identifiability condition.
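As a reading aid, the first update rule described in the abstract can be written compactly as below. The notation (score vector $$\mu$$, weights $$w$$, likelihood $$\ell$$) is illustrative shorthand, not necessarily the paper's own:

$$\mu_i^{t+1}(\theta) = \frac{\ell_i\left(s_i^{1:t+1} \mid \theta\right) \prod_{j \in \mathcal{N}_i \cup \{i\}} \mu_j^{t}(\theta)^{w_{ij}}}{\sum_{\theta'=1}^{m} \ell_i\left(s_i^{1:t+1} \mid \theta'\right) \prod_{j \in \mathcal{N}_i \cup \{i\}} \mu_j^{t}(\theta')^{w_{ij}}},$$

where $$\mathcal{N}_i$$ is the set of incoming neighbors of agent i, the weights $$w_{ij} \ge 0$$ sum to one, and $$\ell_i(s_i^{1:t+1} \mid \theta)$$ is the likelihood of agent i's cumulative private signals under candidate state $$\theta$$.

A minimal NumPy sketch of one such update step follows, assuming dense score vectors and omitting the Byzantine-resilient filtering of neighbor values that the paper's rules additionally require; all names here are hypothetical:

```python
import numpy as np

def update_scores(own_scores, neighbor_scores, weights, cumulative_likelihood):
    """One geometric-averaging update step (illustrative sketch only).

    own_scores:            (m,) this agent's current score vector
    neighbor_scores:       list of (m,) score vectors from incoming neighbors
    weights:               (k+1,) consensus weights for [self] + k neighbors, summing to 1
    cumulative_likelihood: (m,) likelihood of this agent's cumulative private
                           signals under each of the m candidate states
    """
    stacked = np.vstack([own_scores] + list(neighbor_scores))  # shape (k+1, m)
    # Weighted geometric average across self + neighbors, computed in log space;
    # a tiny floor guards against log(0) as scores approach 0 or 1.
    log_geo = weights @ np.log(np.maximum(stacked, 1e-300))    # shape (m,)
    unnormalized = cumulative_likelihood * np.exp(log_geo)
    return unnormalized / unnormalized.sum()                   # renormalize to a stochastic vector
```

Under the paper's identifiability assumptions, repeated application of such an update drives the score of the true state toward one at every non-faulty agent.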

Bibliographic Details
Main Authors: Su, Lili; Vaidya, Nitin H
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Springer Berlin Heidelberg, 2021 (article first published 2018)
DOI: https://doi.org/10.1007/s00446-018-0336-4
Online Access: https://hdl.handle.net/1721.1/131300
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.