High Precision Method of Federated Learning Based on Cosine Similarity and Differential Privacy
- Resource Type
- Conference
- Authors
- Wang, Jia; Li, Yazheng; Ye, Ronghang; Li, Jianqiang
- Source
- 2022 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing & Communications (GreenCom) and IEEE Cyber, Physical & Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics), pp. 533-540, Aug. 2022
- Subject
- Communication, Networking and Broadcast Technologies; Computing and Processing; Signal Processing and Analysis; Training; Privacy; Adaptation models; Differential privacy; Social computing; Federated learning; Data integrity; Node Contribution; Cosine Similarity
- Language
- English
- Abstract
Federated learning has emerged in recent years as an efficient way to exploit distributed data. It allows multiple client nodes to collaboratively train an optimized machine learning model without revealing the participants’ data. However, existing federated learning algorithms have two shortcomings. First, nodes with poor-quality data pull the model’s overall gradient descent in undesirable directions. Second, the typical iterative train-and-noise procedure causes the model’s privacy loss to reach the privacy budget quickly, so training may stop before convergence. In this paper, we propose a federated learning gradient adaptive aggregation method based on cosine similarity and a central-node privacy protection method based on random differential privacy to address these problems respectively. The proposed scheme was implemented on two public general-purpose datasets (MNIST and SVHN) and a medical dataset (UCI diabetes). Experimental results show that the cosine-similarity-based adaptive aggregation method effectively eliminates the negative influence of nodes with poor data quality on the overall model while maintaining stable performance, and that the random-differential-privacy-based central-node protection method improves the performance of the trained model under the same privacy budget and privacy-loss threshold.
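To make the idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of cosine-similarity-weighted gradient aggregation with a server-side noising step. All function names, the choice of the plain average as the reference direction, and the Gaussian noise parameter are assumptions for illustration only; the paper's "random differential privacy" mechanism is stood in for by a simple fixed-scale Gaussian perturbation.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two flattened gradient vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def aggregate_with_dp(client_grads, rng, noise_std=0.01):
    # Reference direction: the plain average of all client gradients
    # (an assumption; the paper may use the previous global update).
    ref = np.mean(client_grads, axis=0)
    # Clip negative similarities so clients pointing away from the
    # consensus direction contribute nothing to the update.
    sims = np.array([max(cosine_sim(g, ref), 0.0) for g in client_grads])
    weights = sims / (sims.sum() + 1e-12)
    agg = sum(w * g for w, g in zip(weights, client_grads))
    # Server-side Gaussian perturbation standing in for the central-node
    # privacy protection step (noise_std is a placeholder, not calibrated
    # to any privacy budget).
    return agg + rng.normal(0.0, noise_std, size=agg.shape)

# Illustrative run: two well-aligned clients and one opposing client.
rng = np.random.default_rng(0)
grads = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
update = aggregate_with_dp(grads, rng, noise_std=0.0)  # noise off for determinism
```

In this run the third client's gradient opposes the consensus, so its cosine similarity is clipped to zero and it is excluded from the update, which is the intended effect of down-weighting poor-quality nodes.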