When fusing label outputs from multiple classification models, simple averaging no longer meets accuracy requirements, so probability-based methods such as Dempster-Shafer (D-S) Evidence Theory and Bayesian models are used to fuse and correct the models' output vectors. Traditional D-S Evidence Theory offers good interpretability but can produce invalid or distorted results when evidence is highly conflicting. Existing work focuses on weights computed from the evidence itself, while the analysis of evidence sources, namely the classification models in our case, is ignored. This paper therefore proposes a novel label fusion method based on D-S Evidence Theory that adds subjective weights on top of earlier approaches. Prior knowledge is reflected in the fusion result by comprehensively analyzing both subjective and objective factors, e.g., evidence distributions, evidence sources, and even the application environment. Experiments show that our method handles highly conflicting evidence effectively: the label probability distribution is more concentrated, so the uncertainty of the result is lower and the accuracy is higher than with other methods.
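For context, a minimal sketch of the classical (unweighted) Dempster combination rule that the proposed method builds on, restricted for illustration to mass functions over singleton labels (the dictionaries and function name below are hypothetical, not from the paper):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions defined on singleton labels
    using Dempster's rule: agreeing mass is kept and renormalized
    by (1 - K), where K is the total conflicting mass."""
    labels = set(m1) | set(m2)
    # Conflict K: mass assigned to incompatible (different-label) pairs.
    k = sum(m1.get(a, 0.0) * m2.get(b, 0.0)
            for a in labels for b in labels if a != b)
    if k >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    # Agreeing mass, renormalized by the non-conflicting fraction.
    return {a: m1.get(a, 0.0) * m2.get(a, 0.0) / (1.0 - k)
            for a in labels}

# Two classifiers that mostly agree on the label "cat":
m1 = {"cat": 0.8, "dog": 0.2}
m2 = {"cat": 0.7, "dog": 0.3}
fused = dempster_combine(m1, m2)  # mass on "cat" rises above 0.9
```

When the two sources largely agree, the fused distribution concentrates on the shared label; when K approaches 1 (near-total conflict, as in Zadeh's classic counterexample), the renormalization by 1 - K is what causes the invalid or distorted results described above.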