Although deep convolutional neural networks have achieved breakthroughs in the accuracy and speed of single-image super-resolution, several problems remain unsolved: first, how to recover fine texture when super-resolving at larger magnification factors; second, existing convolutional neural network super-resolution algorithms are prone to overfitting and to insufficient convergence of the loss function. To address these two problems, an image super-resolution method based on a generative adversarial network is proposed. Feature maps are spatially transformed within the network to address the fine-texture problem; combining CycleGAN and SRGAN, the network structure is improved and the loss function is optimized, yielding the proposed SRCICGAN algorithm, which restores images down-sampled by a factor of four and mitigates the loss-function problem. Experiments compare the method against six recent methods on three data sets. On the Flickr2K data set, the PSNR and SSIM scores are 1.92% and 5.49% higher, respectively, and the method yields better visual quality in detailed textures.
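As context for the PSNR and SSIM figures reported above, the following sketch shows how these two quality metrics are commonly computed. This is not the paper's evaluation code; it is a minimal NumPy illustration, and the SSIM variant uses global statistics over the whole image rather than the sliding Gaussian window of the standard formulation.

```python
import numpy as np

def psnr(ref, est, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, est, max_val=255.0):
    """Simplified SSIM using global image statistics (no Gaussian windowing).

    Uses the standard stabilizing constants C1 = (0.01 L)^2, C2 = (0.03 L)^2.
    """
    x = ref.astype(np.float64)
    y = est.astype(np.float64)
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Both metrics compare a super-resolved output against the ground-truth high-resolution image: PSNR measures pixel-wise fidelity, while SSIM models perceived structural similarity, which is why super-resolution papers typically report both.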