Automatic diagnosis from medical images has become an important problem in computer-aided diagnosis, with the potential to reduce the workload of doctors. However, existing deep learning approaches to diagnosis are typically black-box models whose implicit decision-making processes make them difficult to interpret. To alleviate this issue, we propose a Knowledge-driven Interpretable Network (KdINet) for interpretable disease classification of medical images. KdINet first exploits a pretrained CNN module and a hierarchical representation module to learn two complementary disease representations (i.e., visual disease features and hierarchical disease features). KdINet's disease classifier then jointly trains on these two representations to learn a hierarchical classification criterion, which infers the diagnosis and generates corresponding interpretable justifications (i.e., the ancestral disease paths of the diagnosis). Extensive experiments on three datasets demonstrate that KdINet significantly outperforms state-of-the-art approaches on disease classification metrics.