To address information loss during feature extraction in the High-Resolution Network (HRNet) for human pose estimation, a high-resolution pose estimation network called CDLNet (CDL-Attention Network) is proposed. Building on HRNet, CDLNet incorporates multiple attention mechanisms. A coordinate attention module is introduced to construct Coorneck and Coorblock modules, which replace the Bottleneck and Basicblock modules commonly used in HRNet; these modules capture both channel and positional information from the feature maps, enabling the model to localize and identify target regions more accurately. In addition, a multi-scale attention mechanism captures channel and spatial information on feature maps at 1/16 resolution, and a non-local attention module added at 1/32 resolution expands the receptive field and extracts further useful information. Experiments on the public MS COCO val2017 dataset show that the proposed method improves mean Average Precision (mAP) by 3.4% over HRNet, effectively enhancing feature extraction capability, reducing information loss, and improving the accuracy of human pose estimation.
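To make the core mechanism concrete, the following is a minimal NumPy sketch of coordinate attention as it might operate inside a Coorblock/Coorneck module. It is an illustration under simplifying assumptions, not the authors' implementation: the 1x1 convolutions are modeled as channel-mixing matrix multiplies, and batch normalization is omitted; all weight names (`w_reduce`, `w_h`, `w_w`) are hypothetical.

```python
import numpy as np

def coordinate_attention(x, w_reduce, w_h, w_w):
    """Simplified coordinate attention over a feature map x of shape (C, H, W).

    Pools the map separately along width and height to obtain two
    direction-aware descriptors, mixes channels with a shared reducing
    transform, then produces per-direction sigmoid gates that re-weight x.
    Weight matrices stand in for 1x1 convolutions (illustrative only).
    """
    C, H, W = x.shape
    pool_h = x.mean(axis=2)                       # (C, H): encodes vertical position
    pool_w = x.mean(axis=1)                       # (C, W): encodes horizontal position
    y = np.concatenate([pool_h, pool_w], axis=1)  # (C, H+W) joint descriptor
    y = np.maximum(w_reduce @ y, 0.0)             # channel reduction + ReLU, (C_r, H+W)
    y_h, y_w = y[:, :H], y[:, H:]                 # split back per direction
    a_h = 1.0 / (1.0 + np.exp(-(w_h @ y_h)))      # (C, H) attention along height
    a_w = 1.0 / (1.0 + np.exp(-(w_w @ y_w)))      # (C, W) attention along width
    # Broadcast the two positional gates over the feature map
    return x * a_h[:, :, None] * a_w[:, None, :]

# Toy usage with random weights; output keeps the input shape
rng = np.random.default_rng(0)
C, C_r, H, W = 8, 4, 6, 5
x = rng.standard_normal((C, H, W))
out = coordinate_attention(
    x,
    w_reduce=rng.standard_normal((C_r, C)) * 0.1,
    w_h=rng.standard_normal((C, C_r)) * 0.1,
    w_w=rng.standard_normal((C, C_r)) * 0.1,
)
print(out.shape)
```

Because the gates depend on position along each axis separately, the module can emphasize where a joint lies horizontally and vertically at almost no extra cost compared with plain channel attention.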