Fine-Grained Medical Image Classification (FGMIC) aims to identify disease subclasses within a given metaclass. Because labeled images are scarce and many samples are easily confused, the accuracy of FGMIC is limited and the trustworthiness of the model suffers. In this paper, we use evidence theory to quantify prediction uncertainty and improve the trustworthiness of FGMIC through multiple evidence fusion. Specifically, we cast FGMIC as a hierarchical classification process. At each layer, we construct an evidential classifier to extract classification evidence; the evidence extracted from all layers forms multi-grained evidence. This multi-grained evidence is then fused through a Dirichlet hyper-PDF, so that evidence for coarse-grained classes can enhance the corresponding evidence for fine-grained classes. Moreover, a patient's scanned 3D medical image can generally be decomposed into three 2D views, each capturing different features and uncertainties of the pathological region. Motivated by this, the evidential classifier at each layer is split into three sub-evidential classifiers, one per view, and the classification evidence from the different views is fused via uncertainty-weighted fusion. Experiments on two cancer subtype classification tasks show that multiple evidence fusion not only improves prediction accuracy but also reduces uncertainty and improves trustworthiness.
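To make the evidential quantities concrete, the following is a minimal sketch of how Dirichlet-based evidence yields belief and uncertainty masses, and how evidence from several views might be combined with uncertainty-dependent weights. The weighting rule shown (confidence proportional to one minus the uncertainty mass) is an illustrative assumption, not necessarily the exact fusion formula used in the paper; the function names are hypothetical.

```python
import numpy as np

def dirichlet_stats(evidence):
    """Subjective-logic quantities from a non-negative evidence vector (K classes)."""
    alpha = evidence + 1.0            # Dirichlet concentration parameters
    strength = alpha.sum()            # Dirichlet strength S
    belief = evidence / strength      # per-class belief masses
    uncertainty = len(evidence) / strength  # overall uncertainty mass u = K / S
    return alpha, belief, uncertainty

def uncertainty_weighted_fusion(view_evidences):
    """Fuse evidence vectors from multiple views.

    Each view is weighted by its confidence (1 - uncertainty), normalized
    across views, so that low-uncertainty views dominate the fused evidence.
    This is one plausible instance of uncertainty-weighted fusion.
    """
    confidences = []
    for evidence in view_evidences:
        _, _, u = dirichlet_stats(evidence)
        confidences.append(1.0 - u)
    weights = np.array(confidences)
    weights /= weights.sum()
    return sum(w * e for w, e in zip(weights, view_evidences))
```

Under this rule, a view with abundant evidence (hence low uncertainty) pulls the fused prediction toward its class distribution, while a near-uninformative view contributes little.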