In recent years, pre-training models with a supervised contrastive loss has outperformed the cross-entropy loss widely adopted for classification problems in deep learning. However, this approach is limited by its inability to train a classifier directly: the contrastive stage learns representations only, and a separate classification head must be fit afterwards. To overcome this difficulty, we propose a novel loss function, based on the supervised contrastive loss, that can train deep models directly. Extensive evaluations on the CIFAR-10 and MNIST datasets show that our method achieves good performance on both.
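For reference, the baseline this work builds on is the standard supervised contrastive (SupCon) loss of Khosla et al. (2020), not the loss proposed here, whose exact form the abstract does not specify. Below is a minimal sketch of that baseline, assuming PyTorch and a batch of L2-normalizable embeddings; the function name and the default temperature are illustrative choices, not part of the original text.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Standard SupCon loss (Khosla et al., 2020) -- a sketch, not this paper's loss.

    features: (N, D) embeddings, one per sample.
    labels:   (N,) integer class labels.
    """
    # Normalize so dot products are cosine similarities.
    z = F.normalize(features, dim=1)
    sim = torch.matmul(z, z.T) / temperature  # (N, N) similarity logits

    # Exclude each anchor's similarity with itself.
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))

    # Positives: other samples in the batch sharing the anchor's label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Log-probability of each sample given the anchor, over all non-self samples.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability over each anchor's positives (anchors with no
    # positives contribute zero; clamp avoids division by zero).
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
    return loss.mean()
```

Because this loss operates on embeddings rather than class logits, it is typically used to pre-train an encoder, after which a linear classifier is trained on the frozen representations, which is the two-stage limitation the proposed method aims to remove.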