Current plant phenotyping research has focused extensively on time-series image studies using deep learning. Such images are easy to obtain but costly to annotate, and contrastive learning is one approach to label-efficient training. However, plants grow slowly, their image sequences change little over time, and their semantic content is simple, so previous contrastive pre-training models struggle to distinguish positive samples (different augmented views of the same image) from similar negative samples drawn from different images. For this reason, this paper proposes a contrastive learning method with a priori distance embedding (PDE) for plant time-series images. Different phenological stages of a plant correspond to different semantic information in its images; the method converts this essential domain knowledge into a priori distances between image pairs and uses them during contrastive pre-training, after which the learned weights can be transferred to a semantic segmentation task. Experiments on time-series images of cherries demonstrate that the PDE contrastive learning method can be effectively applied to pre-training for semantic segmentation of plant time-series images, and that it has a wide range of potential applications in computer-vision-based plant phenotyping research.
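The abstract does not give the loss formulation, but the idea of embedding a priori distances between image pairs into contrastive pre-training can be sketched as follows. This is a minimal illustrative assumption, not the paper's actual implementation: an InfoNCE-style loss in which each negative logit receives a margin proportional to the phenological-stage gap between the two images, so stage-distant pairs must be separated more strongly while same-stage negatives (which share semantics) are penalized less. The function name, signature, and the linear form of the prior are all hypothetical.

```python
import numpy as np

def pde_contrastive_loss(z1, z2, stages, tau=0.1, alpha=0.5):
    """Hypothetical sketch of a prior-distance-embedded contrastive loss.

    z1, z2 : (N, D) embeddings of two augmented views of N time-series images
    stages : (N,) phenological-stage index of each image (the domain prior)
    tau    : softmax temperature; alpha scales the prior's influence
    """
    n = z1.shape[0]
    # Work in cosine-similarity space.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                                # (N, N) pairwise logits
    # A priori distance: larger when phenological stages differ more.
    prior = alpha * np.abs(stages[:, None] - stages[None, :]).astype(float)
    # Add the prior as a margin on negatives only (diagonal = positives).
    logits = sim + prior * (1.0 - np.eye(n))
    # Stable log-softmax; positives sit on the diagonal.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), np.arange(n)].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
stages = np.repeat(np.arange(4), 2)        # two images per phenological stage
loss_rand = pde_contrastive_loss(z1, rng.normal(size=(8, 16)), stages)
loss_easy = pde_contrastive_loss(z1, z1, stages)  # identical views align positives
```

Under this formulation, well-aligned positive pairs drive the loss down (`loss_easy < loss_rand`), and the stage-gap margin keeps the model from being forced to separate semantically near-identical same-stage negatives.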