Diffusion models (DMs) are a class of deep generative models in computer vision that have been applied to image generation, image enhancement, image restoration, text-to-image synthesis, and other tasks. Compared with traditional autoregressive models, variational autoencoders (VAEs), energy-based models (EBMs), flow-based models, and generative adversarial networks (GANs), diffusion models generate images with high diversity and fine detail, and they are generally easier and more stable to train than GANs. This paper surveys existing diffusion-based image generation models, compares them, and proposes possible directions for improvement.
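As background for the models surveyed, the core idea of a diffusion model can be sketched by its forward (noising) process: clean data is gradually corrupted with Gaussian noise, and a network is later trained to reverse this corruption. The snippet below is a minimal illustrative sketch of the DDPM-style closed-form forward step, assuming a standard linear beta schedule; the schedule values and function names are illustrative assumptions, not taken from any specific paper discussed here.

```python
import numpy as np

def linear_beta_schedule(T=1000, beta_start=1e-4, beta_end=2e-2):
    """Illustrative linear variance schedule beta_1..beta_T."""
    return np.linspace(beta_start, beta_end, T)

def q_sample(x0, t, alpha_bar, rng):
    """Closed-form forward step:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

betas = linear_beta_schedule()
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)  # cumulative product, decays toward 0

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))          # toy stand-in for an image
xt, eps = q_sample(x0, 500, alpha_bar, rng)
# At large t, alpha_bar[t] is near 0, so x_t is dominated by noise;
# the reverse (denoising) network learns to predict eps from x_t.
```

Training then amounts to regressing the added noise `eps` from the noised sample `xt`, which is part of why diffusion models avoid the adversarial instability of GAN training.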