With the rapid growth of data and rising awareness of privacy protection, data privacy has become a central concern in machine learning. Federated learning, a distributed learning paradigm, enables collaborative model training while preserving data privacy: the data stays local and only the model moves. However, aggregating the intermediate parameters of models trained by different data providers still carries a risk of privacy leakage. Researchers have found that adding differential-privacy noise to the model's intermediate parameters can effectively prevent privacy inference attacks against data contributors. Nevertheless, federated learning under differential privacy faces an inherent trade-off between accuracy and privacy: stronger privacy protection typically degrades model performance, and this trade-off becomes more pronounced in complex deep learning models that require many iterations to converge. To address data privacy, data silos, and the trade-off between privacy leakage and model utility in federated deep learning, this paper proposes a relaxed differentially private federated learning approach. It reduces the impact of noise on the final result by selectively perturbing gradients when data providers return intermediate model parameters. Experiments demonstrate that the approach achieves high accuracy while preserving data privacy, and that it is also computationally efficient, striking a well-balanced compromise between accuracy and privacy.
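The abstract does not spell out the selection rule used when perturbing gradients, so the following is only a minimal sketch of the general idea: a client clips its gradient (bounding sensitivity, as in standard DP-SGD) and then adds Gaussian noise to a selected subset of coordinates rather than all of them. The top-k-by-magnitude selection, the `k_frac` parameter, and the function name are all hypothetical illustrations, not the paper's method.

```python
import numpy as np

def selectively_perturb(grad, k_frac=0.3, clip_norm=1.0, sigma=1.0, rng=None):
    """Clip a client gradient, then add Gaussian noise only to the
    k_frac largest-magnitude coordinates (hypothetical selection rule)."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.asarray(grad, dtype=float)
    # Clip to an L2 norm bound so per-client sensitivity is bounded.
    norm = np.linalg.norm(g)
    if norm > clip_norm:
        g = g * (clip_norm / norm)
    # Pick the k coordinates with the largest magnitude for perturbation;
    # the remaining coordinates are returned unchanged.
    k = max(1, int(k_frac * g.size))
    idx = np.argpartition(np.abs(g), -k)[-k:]
    noisy = g.copy()
    noisy[idx] += rng.normal(0.0, sigma * clip_norm, size=k)
    return noisy
```

A server would then aggregate these selectively perturbed gradients as in ordinary federated averaging; because noise is confined to a subset of coordinates, its cumulative impact on the converged model is smaller than with full perturbation, at the cost of a weaker (relaxed) formal privacy guarantee.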