Although Proof of Work (PoW) has achieved great success over the past decade, its core problem, energy waste, remains unsolved. The goal of Proof of Learning (PoL) is to redirect this otherwise wasted computing power to another field that also consumes massive amounts of computation: training large-scale deep learning models. The success of PoW is largely due to the fact that its proofs can be verified in constant time. In this paper, based on the observation that gradient changes during model training behave similarly to hash functions, we propose a new method that replaces cryptopuzzles with large-scale model training. Our new consensus mechanism likewise verifies a proof in constant time. Experiments show that the proposed consensus mechanism performs as expected.