To address the degraded learning performance caused by data heterogeneity among participants in existing federated learning methods, this paper proposes a federated data augmentation algorithm based on heterogeneity assessment (FDA-HA), designed both to mitigate the effect of heterogeneity and to protect user data privacy. The algorithm reduces the degree of data heterogeneity among participants by using generative adversarial networks for data augmentation, while preserving data privacy and guaranteeing fairness in the augmentation process. Experimental results on the MNIST, FashionMNIST, and CIFAR-10 datasets show that, compared with mainstream federated learning algorithms, the proposed algorithm improves accuracy by 7.96% and 13.44% under two data scenarios with different degrees of heterogeneity, while also exhibiting a certain degree of fairness.
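The abstract does not specify how the heterogeneity assessment is computed. Purely as an illustrative sketch (the function names below are hypothetical, not from the paper), one common way to quantify label heterogeneity in federated settings is the Jensen-Shannon divergence between each client's label distribution and the pooled global label distribution:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # skip zero-probability entries (0 * log 0 = 0)
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def heterogeneity_scores(client_label_counts):
    """Score each client's label distribution against the pooled global one.

    Higher score = more heterogeneous (non-IID) client data.
    """
    counts = np.asarray(client_label_counts, dtype=float)
    global_dist = counts.sum(axis=0) / counts.sum()
    return [js_divergence(c / c.sum(), global_dist) for c in counts]

# Example: two clients with heavily skewed labels, one nearly balanced.
counts = [[90, 5, 5], [5, 90, 5], [33, 33, 34]]
scores = heterogeneity_scores(counts)
```

Under such a scheme, clients with higher scores would be the natural targets for GAN-generated augmentation samples.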