Generative models are an important class of models in artificial intelligence. Unlike discriminative models, which learn a mapping from inputs to outputs by extracting features, generative models aim to learn the distribution of the training data and to generate new samples with similar characteristics from a latent space. Efforts to improve the diversity of generated samples have driven a progression of architectures, beginning with the variational auto-encoder (VAE). The VAE enables random sampling and the generation of diverse images by changing the encoder's bottleneck layer from predicting a low-dimensional feature map to predicting the parameters of a normal distribution from which latent codes are sampled. It was followed by the vector-quantized variational auto-encoder (VQ-VAE) and VQ-VAE-2, then the first-generation DALL-E model, and finally diffusion models. We discuss the basic principles of each model, its strengths and weaknesses, and the innovations it introduced over its predecessors. We then provide an overview of evaluation metrics specific to generative models. Finally, we summarize the applications that generative models enable. This review aims to offer ideas for the future development of generative models.
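The VAE sampling mechanism mentioned above can be sketched as follows. This is a minimal illustration, not an implementation from the review: the linear "encoder", the weight names, and the dimensions are all hypothetical, and NumPy stands in for a deep-learning framework. The encoder outputs the mean and log-variance of a diagonal Gaussian, and the reparameterization trick draws a latent code from that distribution in a way that remains differentiable.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_mu, w_logvar):
    # Toy linear "encoder" (hypothetical): maps input x to the mean and
    # log-variance of a diagonal Gaussian over the latent space.
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so the sampling step stays differentiable w.r.t. mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical dimensions: 4-dimensional inputs, 2-dimensional latent space.
x = rng.standard_normal((3, 4))           # batch of 3 inputs
w_mu = rng.standard_normal((4, 2))
w_logvar = rng.standard_normal((4, 2))

mu, log_var = encode(x, w_mu, w_logvar)
z = reparameterize(mu, log_var, rng)      # one latent code per input
print(z.shape)                            # (3, 2)
```

A decoder (omitted here) would then map each latent code `z` back to the data space; because `z` is sampled rather than fixed, decoding different draws yields diverse outputs.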