Federated learning (FL) is an emerging distributed learning framework that reduces the risk of privacy leakage by not explicitly sharing private data; nevertheless, FL still suffers from privacy leakage. Recently, many FL security schemes have been proposed to improve the protection capability of FL, but they fail to achieve a desirable trade-off between privacy and model performance. In this work, we propose a dynamic privacy-enhanced federated learning algorithm based on gradient perturbation to address this issue. First, instead of directly using the original features of the training data, we generate new features by maximizing the distance between the training data and the data reconstructed from the new features via inverse mapping, while minimizing the distance between the original and new features. Next, dynamic mixed gradients are obtained by combining the gradients generated from the original and new features with variable weights. Model training based on these dynamic mixed gradients effectively prevents leakage of the original training data and adaptively adjusts the privacy-preservation strength. Furthermore, to make it difficult for the improved Deep Leakage from Gradients (iDLG) attack to infer the labels of the training data, we perturb the mixed gradients by transforming part of the positive gradients into negative ones. Finally, experiments demonstrate that the proposed algorithm effectively resists the iDLG attack and offers significant advantages in protection effect over other security schemes.
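The two gradient operations described above (weighted mixing with a variable weight, then flipping part of the positive entries to negative) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function name `mixed_perturbed_gradient` and the parameters `alpha` (mixing weight) and `flip_frac` (fraction of positive entries flipped) are assumptions introduced here for illustration.

```python
import numpy as np

def mixed_perturbed_gradient(grad_orig, grad_new, alpha, flip_frac, rng=None):
    """Sketch of the described perturbation: mix the gradients from the
    original and new features with a variable weight, then transform a
    fraction of the positive entries into negative ones."""
    rng = np.random.default_rng() if rng is None else rng
    # Dynamic mixture: alpha can vary per round to adjust privacy strength.
    mixed = alpha * grad_orig + (1.0 - alpha) * grad_new
    # Perturbation against label inference: flip some positive gradients.
    pos_idx = np.flatnonzero(mixed > 0)
    n_flip = int(flip_frac * pos_idx.size)
    if n_flip > 0:
        flip = rng.choice(pos_idx, size=n_flip, replace=False)
        mixed[flip] = -mixed[flip]
    return mixed
```

In this sketch, a larger `flip_frac` or a weight `alpha` closer to 0 (favoring the new-feature gradients) would correspond to stronger privacy protection at a potential cost in model utility.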