WAFFLe: Weight Anonymized Factorization for Federated Learning

In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices. In light of this need, federated learning has emerged as a popular training paradigm. However, many federated learning approaches trade transmitting data for communicating updated weight parameters for each local device...

Full description

Bibliographic Details
Main Authors: Weituo Hao, Nikhil Mehta, Kevin J. Liang, Pengyu Cheng, Mostafa El-Khamy, Lawrence Carin
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Access
Subjects: Federated learning; Indian buffet process; personalization and fairness
Online Access: https://ieeexplore.ieee.org/document/9770028/
author Weituo Hao
Nikhil Mehta
Kevin J. Liang
Pengyu Cheng
Mostafa El-Khamy
Lawrence Carin
collection DOAJ
description In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices. In light of this need, federated learning has emerged as a popular training paradigm. However, many federated learning approaches trade transmitting data for communicating updated weight parameters for each local device. Therefore, a successful breach that would have otherwise directly compromised the data instead grants whitebox access to the local model, which opens the door to a number of attacks, including exposing the very data federated learning seeks to protect. Additionally, in distributed scenarios, individual client devices commonly exhibit high statistical heterogeneity. Many common federated approaches learn a single global model; while this may do well on average, performance degrades when the i.i.d. assumption is violated, underfitting individuals further from the mean and raising questions of fairness. To address these issues, we propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks. Experiments on MNIST, FashionMNIST, and CIFAR-10 demonstrate WAFFLe’s significant improvement to local test performance and fairness while simultaneously providing an extra layer of security.
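The abstract's central mechanism — each client composing its network weights from a subset of a shared dictionary of weight factors, with subsets assigned via the Indian Buffet Process (IBP) — can be illustrated with a minimal sketch. This is not the authors' implementation: the stick-breaking IBP approximation, the dictionary shapes, and all function names below are illustrative assumptions for a single layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared global dictionary of K weight factors for one layer (d_in x d_out each).
K, d_in, d_out = 8, 16, 4
dictionary = rng.normal(0.0, 0.1, size=(K, d_in, d_out))

def sample_ibp_assignments(num_clients, alpha=2.0, num_factors=K):
    """Stick-breaking approximation of the Indian Buffet Process:
    factor k is used by a client with probability pi_k, where the
    pi_k form a decreasing product of Beta(alpha, 1) draws."""
    sticks = rng.beta(alpha, 1.0, size=num_factors)
    pi = np.cumprod(sticks)                         # pi_1 >= pi_2 >= ...
    return rng.random((num_clients, num_factors)) < pi  # binary usage matrix

def client_weights(assignments, client_id):
    """Compose one client's layer weights as the sum of its selected
    factors from the shared dictionary."""
    mask = assignments[client_id].astype(float)     # (K,)
    return np.tensordot(mask, dictionary, axes=1)   # (d_in, d_out)

Z = sample_ibp_assignments(num_clients=5)   # per-client binary factor selections
W0 = client_weights(Z, 0)                   # client 0's composed layer weights
```

Only the binary assignment vector and factor updates would need to be exchanged in such a scheme, rather than a client's full weight matrix, which hints at both the personalization (clients select different factor subsets) and the extra layer of security the abstract claims.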
format Article
id doaj.art-9664d034e2ae481793250833ae1c17f4
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2022-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-9664d034e2ae481793250833ae1c17f4
DOI: 10.1109/ACCESS.2022.3172945
IEEE Access, vol. 10, pp. 49207–49218, published 2022-01-01 (IEEE document 9770028)
Authors and affiliations:
Weituo Hao (https://orcid.org/0000-0002-0031-9236), Duke University, Durham, NC, USA
Nikhil Mehta, Duke University, Durham, NC, USA
Kevin J. Liang (https://orcid.org/0000-0002-0221-9108), Duke University, Durham, NC, USA
Pengyu Cheng, Duke University, Durham, NC, USA
Mostafa El-Khamy, SOC Research and Development, Samsung Semiconductor Incorporation (SSI), San Diego, CA, USA
Lawrence Carin, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
title WAFFLe: Weight Anonymized Factorization for Federated Learning
topic Federated learning
Indian buffet process
personalization and fairness
url https://ieeexplore.ieee.org/document/9770028/