Applying Generative Adversarial Networks (GANs) to image texture transfer is an active research area in computer vision. In this paper, we propose an unsupervised image texture transfer method that builds a texture loss network on top of a GAN, using the VGG19 model. To quantify image texture information, we use the means and variances of multiple low-level feature layers of this network. The texture loss measures the differences in texture features between the input image and the target image, and it acts on the generator to update its parameters. Furthermore, using a U-Net fully convolutional network as the generator effectively preserves the multi-scale features of the input image. In the loss function, the reconstruction loss is computed between the generated image and the target image in grayscale space. We conduct experiments with unsupervised training on a variety of unpaired datasets. The results show that, compared with other texture transfer networks, our method better constrains the transfer behavior of the generator and achieves arbitrary texture transfer.
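The texture statistic described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the texture loss is the sum, over the selected low-level layers, of squared differences between per-channel feature means and variances; the layer choice and any per-layer weighting are assumptions, and random arrays stand in for VGG19 activations.

```python
import numpy as np

def channel_stats(feat):
    """Per-channel mean and variance of a (C, H, W) feature map."""
    flat = feat.reshape(feat.shape[0], -1)
    return flat.mean(axis=1), flat.var(axis=1)

def texture_loss(feats_a, feats_b):
    """Assumed form: sum of squared mean/variance differences
    across the selected low-level feature layers."""
    loss = 0.0
    for fa, fb in zip(feats_a, feats_b):
        mu_a, var_a = channel_stats(fa)
        mu_b, var_b = channel_stats(fb)
        loss += np.sum((mu_a - mu_b) ** 2) + np.sum((var_a - var_b) ** 2)
    return loss

rng = np.random.default_rng(0)
# Stand-ins for two low-level VGG19 activations (e.g. relu1_1, relu2_1)
# of the input image (a) and the target image (b).
a = [rng.standard_normal((64, 32, 32)), rng.standard_normal((128, 16, 16))]
b = [rng.standard_normal((64, 32, 32)), rng.standard_normal((128, 16, 16))]
print(texture_loss(a, a))  # identical features -> 0.0
print(texture_loss(a, b))  # positive for differing textures
```

Because the statistic is differentiable, the same computation on framework tensors (e.g. PyTorch) yields gradients that flow back into the generator during training.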