FedMed: A Federated Learning Framework for Language Modeling

Federated learning (FL) is a privacy-preserving technique for training models on large amounts of decentralized data and making inferences on mobile devices. As a typical language modeling problem, mobile keyboard prediction aims to suggest a probable next word or phrase, facilitating human-machine interaction in the virtual keyboard of a smartphone or laptop. Mobile keyboard prediction with FL aims to satisfy the growing demand that high-level data privacy be preserved in artificial intelligence applications even with distributed model training. However, federated optimization for this prediction task faces two major problems: (1) aggregating model parameters on the server side and (2) reducing the communication costs caused by collecting model weights. Traditional FL methods simply use averaging aggregation or ignore communication costs. To address these issues, we propose a novel Federated Mediation (FedMed) framework with adaptive aggregation, a mediation incentive scheme, and a topK strategy to handle model aggregation and communication costs. Performance is evaluated in terms of perplexity and communication rounds. Experiments conducted on three datasets (Penn Treebank, WikiText-2, and Yelp) demonstrate that FedMed achieves robust performance and outperforms baseline approaches.
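The two standard FL primitives the abstract contrasts FedMed against, weighted parameter averaging (as in FedAvg) and topK selection of model weights, can be sketched as follows. This is an illustrative sketch of the generic baseline techniques, not the paper's actual FedMed implementation; the function names and flat weight-vector representation are assumptions made for clarity.

```python
# Illustrative sketch of two common federated-learning primitives;
# NOT the paper's FedMed algorithm.

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted parameter averaging (FedAvg-style): each client's model
    weights contribute in proportion to its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

def topk_sparsify(weights, k):
    """topK selection: keep only the k largest-magnitude entries and zero
    the rest, so each client uploads fewer values per communication round."""
    keep = set(sorted(range(len(weights)),
                      key=lambda i: abs(weights[i]), reverse=True)[:k])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

# Two clients with dataset sizes 1 and 3: the larger client dominates.
avg = fedavg_aggregate([[1.0, 2.0], [3.0, 4.0]], [1, 3])
print(avg)     # [2.5, 3.5]

# Keep the two largest-magnitude weights, zero the rest.
sparse = topk_sparsify([0.1, -0.9, 0.3, 0.05], k=2)
print(sparse)  # [0.0, -0.9, 0.3, 0.0]
```

FedMed's adaptive aggregation replaces the fixed size-based weighting above with a performance-driven scheme, and its topK strategy targets exactly the communication overhead that naive weight collection incurs.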


Bibliographic Details
Main Authors: Xing Wu, Zhaowang Liang, Jianjia Wang
Format: Article
Language: English
Published: MDPI AG, 2020-07-01
Series: Sensors
Subjects: federated learning; language modeling; communication efficiency; topK ranking
Online Access: https://www.mdpi.com/1424-8220/20/14/4048
ISSN: 1424-8220
DOI: 10.3390/s20144048
Author Affiliation: School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China