Privacy-preserving federated learning framework with irregular-majority users


Bibliographic Details
Main Authors: CHEN Qianxin, BI Renwan, XIONG Jinbo, LIN Jie, JIN Biao
Format: Article
Language: English
Published: POSTS&TELECOM PRESS Co., LTD 2022-02-01
Series: 网络与信息安全学报 (Chinese Journal of Network and Information Security)
Online Access: http://www.infocomm-journal.com/cjnis/CN/10.11959/j.issn.2096-109x.2022011
Description
Summary: Federated learning suffers reduced aggregation efficiency when the majority of participants are irregular users, and risks leaking parameter privacy when communication is in plaintext. To address these problems, a privacy-preserving robust federated learning framework (PPRFL) was proposed, which ensures robustness against irregular users by means of a designed secure division protocol. PPRFL aggregates the model and its related information in ciphertext on edge servers and lets each user compute model reliability locally, avoiding the extra communication overhead incurred by the secure multiplication protocols adopted in conventional methods; the high computational overhead of homomorphic encryption is reduced by outsourcing computation to two edge servers. On this basis, after updating the local model parameters, each user computes the model's loss value jointly on the validation set issued by the edge server and the one held locally. The model reliability is then dynamically updated as the model weight, together with the historical loss values. The model weight is further scaled under the guidance of prior knowledge, and the ciphertext model and ciphertext weight information are sent to the edge server to aggregate and update the global model parameters. This ensures that changes to the global model are contributed by users with high-quality data, improving the convergence speed. A security analysis based on the Hybrid Argument demonstrates that PPRFL effectively protects the privacy of the model parameters and of the intermediate interaction parameters, including user reliability. Experimental results show that PPRFL still achieves 92% accuracy when all participants in the federated aggregation task are irregular users, with convergence 1.4 times faster than PPFDL. Moreover, PPRFL still reaches 89% accuracy when the training data held by 80% of the users are noisy, with convergence 2.3 times faster than PPFDL.
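The core idea of reliability-guided aggregation described above can be illustrated with a minimal plaintext sketch. The abstract does not give the paper's actual formulas, so the mapping from validation loss to weight (a softmax over negative losses) and the function names (`reliability_weights`, `aggregate`) are assumptions for illustration only; in PPRFL this aggregation step is performed over ciphertexts on two edge servers.

```python
import numpy as np

def reliability_weights(losses, temperature=1.0):
    """Map per-user validation losses to aggregation weights.

    Hypothetical rule: lower loss -> higher reliability, via a
    softmax over negative losses. The paper's exact update (which
    also uses historical loss values) is not specified in the abstract.
    """
    losses = np.asarray(losses, dtype=float)
    scores = -losses / temperature
    scores -= scores.max()          # shift for numerical stability
    w = np.exp(scores)
    return w / w.sum()

def aggregate(models, weights):
    """Weighted average of per-user model parameter vectors."""
    models = np.asarray(models, dtype=float)
    return weights @ models

# Toy example: user 2 has noisy data, hence the highest validation loss.
losses = [0.2, 0.3, 1.5]
w = reliability_weights(losses)
global_model = aggregate([[1.0, 1.0], [2.0, 2.0], [9.0, 9.0]], w)
```

With this rule, the irregular user's outlier parameters are down-weighted, so the global model stays close to the models trained on high-quality data, which is the convergence benefit the abstract attributes to PPRFL.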
ISSN:2096-109X