Although deep neural networks (DNNs) have achieved excellent generalization performance on well-labeled datasets, real-world training sets often suffer from label noise. Owing to their powerful representational capacity, DNNs are vulnerable to noisy labels, which can cause a sharp deterioration in test performance. To tackle this problem, we propose a novel cross-training framework that leverages two synergistic deep networks, where each member acts both as a teacher that generates curricula for its peer and as a student that updates its own parameters via a novel loss function. The proposed model can be trained directly, without any prior information such as the noise rate. Experimental results on several benchmark datasets with different types of label noise demonstrate the effectiveness and robustness of our method, which achieves state-of-the-art performance.
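For concreteness, the sketch below shows one plausible instantiation of such a cross-training loop in PyTorch; it is an illustration, not the paper's actual method. It assumes a small-loss selection rule as the curriculum mechanism and plain cross-entropy as a stand-in for the unspecified novel loss, and the `small_loss_curriculum` helper and `keep_ratio` parameter are hypothetical. In particular, the fixed `keep_ratio` is a simplification: the abstract states the method needs no noise-rate prior, which such a fixed ratio would implicitly encode.

```python
import torch
import torch.nn.functional as F

def small_loss_curriculum(model, x, y, keep_ratio):
    """Teacher role: rank mini-batch samples by the model's loss and
    keep the small-loss subset as a curriculum for the peer network
    (an assumed selection rule; the paper's criterion may differ)."""
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    num_keep = max(1, int(keep_ratio * len(y)))
    return torch.argsort(losses)[:num_keep]  # indices of smallest-loss samples

def cross_training_step(net_a, net_b, opt_a, opt_b, x, y, keep_ratio=0.8):
    """One cross-training step: each network teaches the other by
    selecting a curriculum, then learns as a student on the subset
    chosen by its peer."""
    idx_for_b = small_loss_curriculum(net_a, x, y, keep_ratio)  # A teaches B
    idx_for_a = small_loss_curriculum(net_b, x, y, keep_ratio)  # B teaches A

    for net, opt, idx in ((net_a, opt_a, idx_for_a), (net_b, opt_b, idx_for_b)):
        opt.zero_grad()
        # Student role: plain cross-entropy stands in for the paper's
        # (unspecified) novel loss function.
        loss = F.cross_entropy(net(x[idx]), y[idx])
        loss.backward()
        opt.step()
```

Having each network learn from samples selected by its peer, rather than by itself, is the usual rationale for two-network schemes: the peers start from different initializations and therefore filter errors differently, which helps prevent either network from confirming its own mistakes.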