Recently, deep learning technologies have been successfully applied in many scientific domains. Convolutional neural networks (CNNs) are widely used for image understanding problems. However, training a convolutional neural network on a large image dataset is a time-consuming task. Most deep learning frameworks, such as Caffe, TensorFlow, Torch, Keras, and MXNet, support GPU acceleration to speed up training, especially when models are executed on multiple GPUs. In this work, we compare the training performance of AlexNet across different GPU servers and hyperparameter settings. The results show that GPU servers with a high-bandwidth interconnect, NVLink, achieve better performance than the others.