Federated learning is a distributed machine learning framework that allows multiple clients to collaboratively train a global model, making full use of the information resources scattered across clients. In practical scenarios, however, the data held by different clients are not drawn from the same distribution. This heterogeneity slows down the training of federated learning and degrades model performance. In this paper, we focus on label heterogeneity and propose a clustered federated learning framework based on the flow of solution procedure (FSP) matrix. A small public dataset is placed at the central server, and each client participating in training generates FSP matrices on this dataset. The central server then clusters the clients into several groups according to these matrices, so that clients assigned to the same group are considered to have similar data distributions. Extensive experiments on the CIFAR10 and MNIST datasets verify that our method outperforms traditional federated learning and several other algorithms in accuracy and exhibits good stability, functioning properly in various scenarios with non-independent and identically distributed (non-IID) data.
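The pipeline described above (shared public data, per-client FSP matrices, server-side clustering) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an FSP-style matrix is the channel-wise inner product between two layers' feature maps averaged over the public samples, and it uses a plain k-means with deterministic farthest-point initialization as the server's grouping step; all names, shapes, and the toy data are hypothetical.

```python
import numpy as np

def fsp_matrix(f1, f2):
    # Assumed FSP-style matrix: inner products between the channels of two
    # feature maps, averaged over the n shared public samples.
    # f1: (n, c1), f2: (n, c2) -> (c1, c2)
    return f1.T @ f2 / f1.shape[0]

def cluster_clients(fsp_list, k, iters=20):
    # Server step (sketch): plain k-means over flattened FSP matrices,
    # with deterministic farthest-point initialization of the centers.
    X = np.stack([m.ravel() for m in fsp_list])
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.stack(centers)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy usage: two groups of clients whose hypothetical feature maps on the
# shared public set come from two different underlying distributions.
rng = np.random.default_rng(42)
base_a = rng.normal(size=(8, 4))          # group-A layer-1 features
base_b = rng.normal(size=(8, 4)) + 5.0    # group-B layer-1 features, shifted
f2 = rng.normal(size=(8, 3)) + 1.0        # layer-2 features
fsps = [fsp_matrix(base_a + 0.01 * rng.normal(size=(8, 4)), f2) for _ in range(2)]
fsps += [fsp_matrix(base_b + 0.01 * rng.normal(size=(8, 4)), f2) for _ in range(2)]
labels = cluster_clients(fsps, k=2)
print(labels)  # clients 0 and 1 land in one group, 2 and 3 in the other
```

In this toy setting the two groups' FSP matrices are well separated, so same-distribution clients are assigned the same cluster label, mirroring the grouping criterion described in the abstract.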