One of the primary challenges in cloud-edge environments is efficiently utilizing the large amounts of data on edge devices for machine learning tasks, enabling adaptation to increasingly complex computing and service scenarios. Federated Learning (FL) is a machine learning paradigm that enables multiple data silos to collaboratively train models in a privacy-preserving manner. However, classical federated learning converges poorly on highly heterogeneous data, which limits the performance of the global model on each edge device. Personalized Federated Learning (PFL) effectively alleviates the impact of data heterogeneity, but learning a personalized model for each client can incur additional overhead. In this paper, we propose an efficient FL framework named ASPFL, which applies dynamic sparse training to personalized federated learning to maintain model performance while reducing computational and communication overhead in cloud-edge environments. By adaptively allocating sparsity from a global perspective to explore sparse network structures during training, ASPFL improves the independent parameter-exploration process of local sparse training, adapting to diverse heterogeneous settings and addressing the Non-IID challenge of FL. Extensive experimental results show that ASPFL outperforms state-of-the-art PFL methods in accuracy, overhead, and convergence speed.
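
Dynamic sparse training of the kind described above is commonly realized as a per-round prune-and-regrow step on each layer's weights. The following is a minimal sketch of one such step under stated assumptions: the function name, the NumPy implementation, magnitude-based dropping, and zero-initialized random regrowth are illustrative choices, not ASPFL's actual procedure.

```python
import numpy as np

def prune_and_regrow(weights, sparsity=0.5, regrow_frac=0.1, rng=None):
    """One hypothetical dynamic-sparse-training step: keep the
    largest-magnitude weights, drop a fraction of the weakest active
    ones, then regrow the same number of inactive connections at random
    (zero-initialized), so overall sparsity stays constant."""
    rng = np.random.default_rng(0) if rng is None else rng
    flat = weights.flatten()
    n = flat.size
    n_active = int(n * (1 - sparsity))
    # Keep only the n_active largest-magnitude weights (magnitude pruning).
    mask = np.zeros(n, dtype=bool)
    mask[np.argsort(np.abs(flat))[-n_active:]] = True
    flat[~mask] = 0.0
    # Drop a fraction of the weakest currently-active weights ...
    active_idx = np.flatnonzero(mask)
    n_drop = int(len(active_idx) * regrow_frac)
    weakest = active_idx[np.argsort(np.abs(flat[active_idx]))[:n_drop]]
    mask[weakest] = False
    flat[weakest] = 0.0
    # ... and regrow an equal number of inactive connections at random,
    # initialized to zero, so the sparsity level is preserved.
    regrown = rng.choice(np.flatnonzero(~mask), size=n_drop, replace=False)
    mask[regrown] = True
    return flat.reshape(weights.shape), mask.reshape(weights.shape)
```

In a federated setting, each client would run such a step locally between communication rounds; ASPFL's contribution is coordinating how much sparsity each part of the network receives from a global perspective rather than letting clients explore independently.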