Recent years have witnessed rapid advances in mesh applications for immersive media. However, mesh geometry compression remains a significant challenge due to the complex structure of meshes, which comprises vertex coordinates and connectivity. Although existing methods compress connectivity efficiently, the cost of encoding vertices remains excessive because spatial correlations among vertices are insufficiently exploited. To improve coding performance, we propose a lossless mesh vertex compression framework that predicts and encodes ordered mesh coordinates based on previously encoded connectivity. Specifically, the proposed approach employs attention-based graph convolution and a multi-layer perceptron to predict vertex coordinates from the encoded connectivity. The residual between the ground truth and the prediction is then coded losslessly with a learning-based factorized entropy model. We conduct experiments on the base mesh dataset of the Moving Picture Experts Group (MPEG) video-based dynamic mesh coding (V-DMC) standard at different compression levels. The results demonstrate that our framework outperforms existing methods in vertex compression.
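The predict-then-code pipeline described above can be illustrated with a minimal sketch. This is not the paper's actual model: the attention weighting, neighbor selection, and quantization below are illustrative assumptions standing in for the learned attention-based graph convolution, MLP, and factorized entropy model; it only demonstrates why coding the residual is lossless when encoder and decoder share the same prediction.

```python
import numpy as np

# Illustrative sketch (assumed, not the paper's model): predict a vertex
# from its already-decoded graph neighbors, then transmit the integer
# residual so the decoder reconstructs the vertex exactly.

def attention_predict(coords, neighbors):
    """Predict one vertex as an attention-weighted mean of its graph
    neighbors -- a hand-crafted stand-in for attention-based graph
    convolution followed by an MLP."""
    nbr = coords[neighbors].astype(np.float64)   # (k, 3) neighbor coords
    center = nbr.mean(axis=0)
    # toy attention: neighbors closer to the centroid get larger weights
    scores = -np.linalg.norm(nbr - center, axis=1)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return (w[:, None] * nbr).sum(axis=0)

rng = np.random.default_rng(0)
coords = rng.integers(0, 1024, size=(5, 3)).astype(np.int64)  # quantized vertices

# Encoder: predict vertex 4 from neighbors given by encoded connectivity,
# then code the residual (here it would go to the entropy coder).
pred = attention_predict(coords, neighbors=[0, 1, 2, 3])
residual = coords[4] - np.rint(pred).astype(np.int64)

# Decoder: same prediction + transmitted residual => exact reconstruction.
decoded = np.rint(pred).astype(np.int64) + residual
assert np.array_equal(decoded, coords[4])
```

Because both sides round the prediction identically, the integer residual alone suffices for lossless reconstruction; a learned factorized entropy model would then assign short codes to the (typically small) residuals.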