Privacy-Preserving Federated Learning Efficiency Optimization Algorithm Based on Differential Privacy
Abstract
With the advancement of information technology, data security and user privacy protection have become paramount. To achieve efficient privacy protection in a federated learning environment, a differential privacy algorithm is designed on the basis of the eXtreme Gradient Boosting (XGBoost) algorithm. The algorithm optimizes the privacy protection process by applying differential privacy to the selection of the optimal split point in each weak learner. Additionally, to address the multi-party collaboration challenge in federated learning, a differential privacy construction scheme based on multi-party collaboration is proposed. The results indicate that the running times of the multi-party collaborative differential privacy algorithm, the XGBoost-based differential privacy algorithm, and the traditional differential privacy algorithm were 16.2 s, 22.1 s, and 29.5 s, respectively; the optimized approach improved efficiency by 45.08% compared with the traditional algorithm. Overall, the differential privacy-based federated learning efficiency optimization algorithm ensures privacy protection while improving accuracy and efficiency, providing practical technical support for privacy-preserving data mining.
Introduction: This paper proposes a privacy-preserving federated learning efficiency optimization algorithm based on differential privacy and designs a differential privacy protection algorithm based on XGBoost (DP-XGB). DP-XGB strengthens privacy protection by introducing differential privacy at the selection of the optimal split point in each weak learner, thereby improving both data security and model accuracy. Building on this foundation, the research further proposes a differential privacy construction scheme based on multi-party collaboration (FDP-XGB), integrating federated learning techniques to address potential privacy leakage during multi-party collaboration.
Objectives: By applying differential privacy to the selection of optimal split points in the weak learners, DP-XGB optimizes the privacy protection process, thereby enhancing both data security and model accuracy. FDP-XGB is introduced to safeguard privacy in a federated learning environment, effectively addressing the privacy leakage that can occur during multi-party collaboration.
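To make the split-point perturbation concrete, the following minimal sketch chooses among candidate splits with the exponential mechanism, which is one standard way to apply differential privacy at this step. It is an illustration under assumptions rather than the paper's exact procedure: the gain formula, the unit sensitivity, and the names split_gain, dp_select_split, epsilon, and lam are introduced here for the example.

import numpy as np

def split_gain(g_left, h_left, g_right, h_right, lam=1.0):
    # Standard XGBoost split gain computed from gradient/Hessian sums.
    def score(g, h):
        return g ** 2 / (h + lam)
    return 0.5 * (score(g_left, h_left) + score(g_right, h_right)
                  - score(g_left + g_right, h_left + h_right))

def dp_select_split(candidate_gains, epsilon, sensitivity):
    # Exponential mechanism: higher-gain splits are exponentially more likely
    # to be picked, but no single record can sway the choice by much.
    gains = np.asarray(candidate_gains, dtype=float)
    logits = epsilon * gains / (2.0 * sensitivity)
    logits -= logits.max()  # subtract the max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return np.random.choice(len(gains), p=probs)

In this reading, each weak learner would call dp_select_split at every tree node instead of deterministically taking the highest-gain split, spending a portion of the privacy budget per tree.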
Methods: The original data are first enhanced, and weak learners are trained with the XGBoost algorithm. These weak learners are then combined into a strong learner, on which the differential privacy protection algorithm (DP-XGB) is constructed. Building on this foundation, a multi-party collaborative privacy protection algorithm (FDP-XGB) is developed for the federated learning environment.
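One plausible realization of the multi-party step is for each party to perturb its local gradient/Hessian histograms before they are shared, so raw records never leave a party and the coordinator only works on noisy aggregates. The sketch below is an assumption-laden illustration, not the paper's protocol: the Laplace noise, the unit sensitivity, and the names local_histogram and aggregate are not taken from the source.

import numpy as np

def local_histogram(gradients, hessians, bin_ids, n_bins, epsilon):
    # Per-bin gradient and Hessian sums, perturbed locally with Laplace noise
    # before being sent to the coordinator.
    g_hist = np.bincount(bin_ids, weights=gradients, minlength=n_bins).astype(float)
    h_hist = np.bincount(bin_ids, weights=hessians, minlength=n_bins).astype(float)
    # Illustrative sensitivity of 1.0; a real deployment would derive the
    # sensitivity from gradient clipping bounds.
    scale = 1.0 / epsilon
    g_hist += np.random.laplace(0.0, scale, size=n_bins)
    h_hist += np.random.laplace(0.0, scale, size=n_bins)
    return g_hist, h_hist

def aggregate(party_histograms):
    # The coordinator sums the noisy histograms; split search then runs on the
    # aggregate only, never on any party's raw data.
    g_total = sum(g for g, _ in party_histograms)
    h_total = sum(h for _, h in party_histograms)
    return g_total, h_total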
Results: The running times of the multi-party collaborative differential privacy algorithm, the XGBoost-based differential privacy algorithm, and the traditional differential privacy algorithm were 16.2 s, 22.1 s, and 29.5 s, respectively; the optimized approach improved efficiency by 45.08% compared with the traditional algorithm. Overall, the differential privacy-based federated learning efficiency optimization algorithm ensures privacy protection while improving accuracy and efficiency.
Conclusions: This study proposes a privacy protection technology that combines the XGBoost-based differential privacy protection algorithm with federated learning to address privacy security and data silos in data mining. FDP-XGB achieved the highest prediction accuracy in the comparison between true and predicted values, outperforming DP-XGB. For a data volume of 18×10⁴, the computation times of XGBoost, DP-XGB, and FDP-XGB were 29.5 s, 22.1 s, and 16.5 s, respectively, with resource consumption rates of 48.5%, 24.9%, and 21.1%.