Abstract:
Federated learning has emerged as an attractive paradigm for addressing the data privacy problem: clients train a deep neural network on their local datasets and share gradients instead, so there is no need to upload their local data to a central server. However, recent studies show that adversaries can reconstruct training images at high resolution from the gradients; such a breach of data privacy is possible even in trained deep networks. To protect data privacy, a secure aggregation scheme resistant to inversion attacks is proposed for federated learning. Gradients are encrypted before sharing, so an adversary is unable to launch gradient-based attacks. To improve the efficiency of data aggregation, a new way of building shared keys is proposed: each client builds shared keys with 2a other clients rather than with all clients in the system. In addition, gradient inversion attacks are evaluated, and a gradient inversion attack is proposed that enables an adversary to reconstruct training data from gradients. Simulation results show that the proposed scheme prevents an honest-but-curious parameter server from reconstructing the training data.
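The abstract's key idea, sharing pairwise keys with only 2a neighbors so that the server sees masked gradients whose masks cancel in the sum, can be illustrated with a minimal pairwise-masking sketch. This is not the paper's actual construction: the key material, the SHA-256-seeded mask derivation, and the ring topology below are illustrative assumptions (a real scheme would use a key agreement protocol and a proper PRG/KDF).

```python
import hashlib
import numpy as np

def mask_from_key(key: bytes, shape, round_no: int):
    # Derive a deterministic pseudorandom mask from a shared key.
    # Illustrative only: a deployed scheme would use e.g. HKDF + AES-CTR.
    digest = hashlib.sha256(key + round_no.to_bytes(4, "big")).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.standard_normal(shape)

def masked_gradient(client_id, gradient, shared_keys, round_no):
    # shared_keys: dict mapping neighbor_id -> shared key bytes.
    # The lower-id party of each pair adds the mask, the higher-id party
    # subtracts it, so all pairwise masks cancel in the server-side sum.
    masked = gradient.copy()
    for neighbor_id, key in shared_keys.items():
        m = mask_from_key(key, gradient.shape, round_no)
        masked += m if client_id < neighbor_id else -m
    return masked

# Toy setup: 4 clients on a ring, each sharing keys with 2a = 2 neighbors.
n, a = 4, 1
grads = [np.full(3, float(i + 1)) for i in range(n)]
keys = {}
for i in range(n):
    for d in range(1, a + 1):
        j = (i + d) % n
        pair = (min(i, j), max(i, j))
        keys[pair] = f"key-{pair[0]}-{pair[1]}".encode()

def client_keys(i):
    # The 2a neighbors of client i on the ring (a on each side).
    neighbors = [(i + d) % n for d in range(1, a + 1)] + \
                [(i - d) % n for d in range(1, a + 1)]
    return {j: keys[(min(i, j), max(i, j))] for j in neighbors}

uploads = [masked_gradient(i, grads[i], client_keys(i), round_no=7)
           for i in range(n)]
aggregate = sum(uploads)  # masks cancel; equals the sum of plain gradients
```

Each individual upload looks random to the server, yet `aggregate` equals the true gradient sum, which is all the server needs for federated averaging; building keys with only 2a neighbors keeps the key-setup cost linear in a rather than in the number of clients.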
Source: INTERNATIONAL JOURNAL OF INFORMATION SECURITY
ISSN: 1615-5262
Year: 2023
Issue: 4
Volume: 22
Page: 919-930
ESI Discipline: COMPUTER SCIENCE
ESI HC Threshold: 19
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0