Neural combinatorial optimization aims to use neural networks to speed up the solving of combinatorial optimization problems, i.e., finding, from a finite set of feasible solutions, the solution of a problem instance that minimizes a given objective function. Recently, researchers have applied convolutional neural networks to predict the cost of the optimal solution (as defined by the objective function), which is then given as extra input to an exact solver to speed up the solving process. In this paper, we investigate whether graph representations that explicitly model the constraints inherent in combinatorial optimization problems improve the prediction of the optimal solution's cost. Specifically, we use graph neural networks with neighborhood aggregation and graph Transformer models to capture and embed the knowledge contained in the graph representations of combinatorial optimization problems. We also propose a benchmark dataset comprising the Traveling Salesman Problem (TSP) and the Job-Shop Scheduling Problem (JSSP), and through empirical evaluation we show that graph Transformer models achieve an average loss decrease of 61.05% on TSP and 66.53% on JSSP compared to the baseline convolutional neural networks.
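
To make the neighborhood-aggregation idea concrete, the following is a minimal NumPy sketch, not the paper's actual architecture: two rounds of mean neighborhood aggregation over a TSP instance graph, followed by a permutation-invariant mean readout that produces a single scalar cost estimate. All weight shapes and the hidden dimension are illustrative assumptions.

```python
import numpy as np

def mean_aggregate(h, adj, w_self, w_neigh):
    # Average each node's neighbor features, then combine them with the
    # node's own features through learned weights and a nonlinearity.
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    neigh = (adj @ h) / deg
    return np.tanh(h @ w_self + neigh @ w_neigh)

def predict_cost(coords, adj, layers, w_out):
    # coords: (n, 2) city coordinates used as initial node features;
    # adj: (n, n) 0/1 adjacency of the TSP graph representation.
    # Several aggregation rounds followed by a mean readout yield one
    # scalar: the predicted cost of the optimal tour.
    h = coords
    for w_self, w_neigh in layers:
        h = mean_aggregate(h, adj, w_self, w_neigh)
    return float(h.mean(axis=0) @ w_out)

rng = np.random.default_rng(0)
n, d = 5, 8                                  # 5 cities, hidden size 8
coords = rng.random((n, 2))
adj = np.ones((n, n)) - np.eye(n)            # complete graph for TSP
layers = [(rng.normal(size=(2, d)), rng.normal(size=(2, d))),
          (rng.normal(size=(d, d)), rng.normal(size=(d, d)))]
w_out = rng.normal(size=d)
cost = predict_cost(coords, adj, layers, w_out)
```

Because aggregation commutes with node relabeling and the readout is a mean, the prediction is invariant to the order in which cities are listed, a property a convolutional network operating on a fixed input layout does not have by construction.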