Sparse representation methods have been widely researched in recent years. Sparse representation classification methods, such as the sparse representation classifier (SRC) and label-consistent K-SVD (singular value decomposition), learn the classification parameters, the dictionary, and the sparse representation simultaneously, so that they find an optimal sparse representation for discriminating categories. However, these classifiers are designed with a least square error (LSE) strategy. The LSE of the empirical risk is not optimal for classifiers because even a correctly classified training sample may increase the empirical cost. We therefore introduce the hinge loss to design the classifier. The hinge loss is employed in support vector machines, and it shows better performance than LSE-based methods. We provide an optimization algorithm to minimize the proposed criterion, which is a linear combination of the hinge loss and the sparse representation error. Experimental results show that the proposed method outperformed conventional sparse representation classification methods.
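The motivation for replacing LSE can be illustrated with a minimal sketch (the function names below are illustrative, not from the paper): for a sample with label y = +1 and classifier score f(x), the squared error penalizes any deviation of the score from the label, whereas the hinge loss is zero once the sample is classified with margin at least 1.

```python
def squared_error(score, label):
    # LSE penalizes any deviation from the target label,
    # even when the sample is already on the correct side.
    return (label - score) ** 2

def hinge(score, label):
    # Hinge loss is zero once the sample is classified
    # with a margin of at least 1.
    return max(0.0, 1.0 - label * score)

label = 1.0
score = 2.0  # correctly classified, well beyond the margin

print(squared_error(score, label))  # 1.0: LSE still incurs cost
print(hinge(score, label))          # 0.0: hinge incurs no cost
```

This is exactly the effect noted above: a correctly classified sample (score 2.0, label +1) still contributes a squared-error cost of 1.0, but contributes nothing to the hinge-loss empirical risk.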