Deep learning is computationally demanding, so the choice of GPU has a significant impact on its effectiveness. Deep learning also requires large amounts of memory, and higher throughput means better performance, faster iterations, and more efficient completion of experiments. The choice of GPU therefore largely determines both the outcome and the experience of deep learning work. This paper analyzes the memory requirements of four commonly used deep learning model architectures: VGG, ResNet50, MobileNet, and InceptionV3. It also analyzes the relationship between computing power and memory for several mainstream Nvidia graphics cards often used for deep learning, including the GeForce and Tesla series. In resource-constrained medical settings, where expensive graphics cards cannot be purchased or the available cards lack sufficient memory, a suitable existing model can then be selected to help medical personnel perform gland image segmentation, yielding a more efficient, fast, and economical solution.
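As a rough illustration of how a model's memory requirement can be compared against a card's capacity, the following sketch estimates the weight memory of the four architectures from their commonly cited parameter counts, assuming fp32 storage (4 bytes per parameter); the counts and the 4-byte assumption are illustrative simplifications, since activations, gradients, and optimizer state add substantially more in practice.

```python
# Rough memory footprint of the model weights alone, assuming fp32
# (4 bytes per parameter) and commonly cited approximate parameter
# counts for each architecture. Activations, gradients, and optimizer
# state would add further memory during training.
PARAM_COUNTS = {
    "VGG16": 138_000_000,
    "ResNet50": 25_600_000,
    "MobileNetV2": 3_500_000,
    "InceptionV3": 23_900_000,
}

BYTES_PER_PARAM = 4  # fp32


def weight_memory_mb(params: int) -> float:
    """Memory needed to hold the weights, in megabytes."""
    return params * BYTES_PER_PARAM / (1024 ** 2)


for name, params in PARAM_COUNTS.items():
    print(f"{name}: ~{weight_memory_mb(params):.0f} MB of weights")
```

A comparison like this makes the trade-off in the paper concrete: a lightweight model such as MobileNet needs only a small fraction of the memory of VGG, which matters when the available graphics card has limited capacity.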