This paper proposes a novel person re-identification (ReID) model trained with a joint loss function in a multi-branch structure, enabling the model not only to extract discriminative features but also to focus on foreground information, thereby improving its performance under occlusion. Numerous studies have demonstrated that multi-granularity features contain rich information and detail for characterizing the whole person image, allowing a model to learn additional discriminative features and improve its performance. Knowledge of the human body helps the model extract human-specific features, which reduces interference from background noise and overcomes body-pose variations to achieve aligned feature matching. In light of this, we present the ReID model MGHK (Multi-Granularity and Human Knowledge), which comprises global branches, fine-grained branches, and human key point branches. Comparison experiments with a variety of representative ReID models show that MGHK achieves better performance on several publicly available datasets. On Market1501, DukeMTMC, CUHK03, and MSMT17, Rank-1 reaches 95.72%, 90.31%, 75.79%, and 82.42%, and mAP reaches 87.09%, 77.53%, 72.95%, and 59.31%, respectively. On the Partial-ReID and Occluded-ReID datasets, Rank-1 reaches 58.0% and 71.40%, and mAP reaches 57.68% and 66.21%, respectively. In addition, Grad-CAM visualization analysis shows that augmenting MGHK with human key point knowledge allows it to capture a broader diversity of human characteristics.
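The three-branch design described above can be sketched schematically. The following is a minimal NumPy illustration, not the authors' implementation: all function names, shapes, and the stripe/heatmap pooling choices are assumptions made for exposition. It shows how a global descriptor, multi-granularity part features, and a key-point-weighted (foreground-focused) descriptor could be pooled from one feature map and concatenated into a single embedding.

```python
import numpy as np

# Hypothetical sketch of a multi-branch ReID descriptor in the spirit of
# MGHK (global + fine-grained + key point branches). Names and shapes are
# illustrative assumptions, not the paper's exact architecture.

rng = np.random.default_rng(0)

def global_branch(x):
    # Global average pooling over the spatial dims -> one coarse descriptor.
    return x.mean(axis=(1, 2))

def fine_grained_branch(x, parts=3):
    # Split the feature map into horizontal stripes and pool each stripe,
    # yielding multi-granularity part features.
    stripes = np.array_split(x, parts, axis=1)
    return np.concatenate([s.mean(axis=(1, 2)) for s in stripes], axis=-1)

def keypoint_branch(x, heatmaps):
    # Weight the feature map by key point heatmaps so the descriptor
    # focuses on foreground (human) regions rather than background.
    weighted = x * heatmaps[..., None]
    return weighted.sum(axis=(1, 2)) / (heatmaps.sum(axis=(1, 2))[..., None] + 1e-6)

# Toy feature map: batch of 2, 12x4 spatial grid, 8 channels,
# plus a matching batch of key point heatmaps.
feat = rng.standard_normal((2, 12, 4, 8))
heat = rng.random((2, 12, 4))

descriptor = np.concatenate(
    [global_branch(feat), fine_grained_branch(feat), keypoint_branch(feat, heat)],
    axis=-1,
)
print(descriptor.shape)  # (2, 40): 8 global + 24 part + 8 key point dims
```

In a trained model, each branch would end in its own loss term (e.g. classification and metric losses), and the joint loss mentioned in the abstract would be their weighted sum; at inference time the concatenated descriptor is what gets matched across cameras.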