Defending non-Bayesian learning against adversarial attacks
Abstract: This paper addresses the problem of non-Bayesian learning over multi-agent networks, where agents repeatedly collect partially informative observations about an unknown state of the world and try to collaboratively learn the true state out of m alternatives. We focus on the...
Main Authors: Su, Lili; Vaidya, Nitin H.
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: Springer Berlin Heidelberg, 2021
Online Access: https://hdl.handle.net/1721.1/131300
Similar Items
- On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification
  by: Sanglee Park, et al. Published: (2020-11-01)
- Conditional Generative Adversarial Network-Based Image Denoising for Defending Against Adversarial Attack
  by: Haibo Zhang, et al. Published: (2021-01-01)
- Defending Against Local Adversarial Attacks through Empirical Gradient Optimization
  by: Boyang Sun, et al. Published: (2023-01-01)
- Defending Against Adversarial Fingerprint Attacks Based on Deep Image Prior
  by: Hwajung Yoo, et al. Published: (2023-01-01)
- RLXSS: Optimizing XSS Detection Model to Defend Against Adversarial Attacks Based on Reinforcement Learning
  by: Yong Fang, et al. Published: (2019-08-01)