In this paper, we address a key challenge in federated learning: non-identically distributed client data, which causes the optimization directions of individual client models to diverge. We propose FedMPM, a method that utilizes multi-stage private models. It achieves personalization by combining independently trained models, which carry strong local information, with fine-tuned historical local models, which carry strong global information, thereby making full use of the models obtained in each communication round. In addition, we introduce prototype learning and incorporate regularization terms to improve the performance of the historical local models. We conduct extensive experiments on three datasets, FMNIST, CIFAR10, and CIFAR100, simulating scenarios with different sample distributions, and the results demonstrate the effectiveness of the proposed method.
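For concreteness, the following is a minimal sketch of the two ideas named above: a prototype-based regularization term and a combination of the two private-model stages. The function names, the convex mixing weight `alpha`, and the squared-distance form of the regularizer are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def prototype_regularizer(features: torch.Tensor,
                          labels: torch.Tensor,
                          global_prototypes: torch.Tensor) -> torch.Tensor:
    """Penalize the distance between each sample's feature embedding and the
    global prototype of its class (a common prototype-learning term; the
    paper's exact regularizer may differ).

    features:          (B, D) batch of feature embeddings
    labels:            (B,)   integer class labels
    global_prototypes: (C, D) one prototype vector per class
    """
    target_protos = global_prototypes[labels]       # (B, D), prototype per sample
    return F.mse_loss(features, target_protos)

def combined_prediction(local_model: torch.nn.Module,
                        historical_model: torch.nn.Module,
                        x: torch.Tensor,
                        alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical combination of the two stages: a convex mix of the
    independently trained local model (strong local information) and the
    fine-tuned historical local model (strong global information)."""
    return alpha * local_model(x) + (1.0 - alpha) * historical_model(x)
```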