The natural distribution of monitoring data is imbalanced, which degrades the training of intelligent diagnosis models. Although researchers have proposed data-level and algorithm-level methods to address this problem, these methods are effective only under mild imbalance. To correct the training anomalies that arise under severe imbalance, this paper proposes a gradient harmonized loss that coordinates the gradients of each class so that the majority class in the imbalanced data cannot dominate training. The coordination is based on the similarity of the sample gradients, and similar gradients are compressed by defining a separate penalty rule for each class. To account for computational efficiency and training difficulty, the proposed method is further optimized through gradient dimensionality reduction and parameter simplification, respectively. The proposed method was verified on two sample sets with different imbalance ratios and compared with traditional methods. The results show that the proposed method substantially improves the performance of the DCNN model under severe imbalance.
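The core idea, coordinating per-class gradients by compressing over-represented (similar) gradient contributions, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's exact formulation: it bins per-sample gradient magnitudes, down-weights samples in dense bins, and then renormalizes so every class contributes an equal total weight (a stand-in for the per-class penalty rules).

```python
import numpy as np

def gradient_harmonized_weights(probs, labels, n_bins=10):
    """Illustrative sketch: down-weight samples whose gradients are
    over-represented, then equalize the total weight of each class.

    probs : (N,) predicted probability of the true class per sample.
    labels: (N,) integer class labels.
    Returns per-sample loss weights (hypothetical, not the paper's method).
    """
    # For softmax cross-entropy, the gradient magnitude w.r.t. the true-class
    # logit is |p - 1|, so 1 - p serves as a gradient-norm proxy in [0, 1].
    g = np.abs(probs - 1.0)

    # Bin the gradient norms and measure the density of each bin; samples
    # with many similar gradients fall into dense bins.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.clip(np.digitize(g, edges) - 1, 0, n_bins - 1)
    density = np.bincount(bin_idx, minlength=n_bins).astype(float)

    # Harmonize: dense bins get smaller weights, so the many easy
    # majority-class samples cannot dominate the accumulated gradient.
    weights = len(g) / (density[bin_idx] * n_bins)

    # Per-class renormalization so each class contributes the same total
    # weight, regardless of how many samples it has.
    classes = np.unique(labels)
    for c in classes:
        mask = labels == c
        weights[mask] *= (len(g) / len(classes)) / weights[mask].sum()
    return weights
```

In this sketch a 4:1 majority/minority split still yields equal total gradient weight per class, which is the coordination effect the abstract describes at a high level.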