The complex glyph structures and diverse writing styles of ancient Chinese character images cause existing image retrieval methods to perform poorly when applied directly to datasets of such images. To address the impact of complex glyph structures and rich detail information on retrieval, a multi-layer feature adaptive fusion model for ancient Chinese character image retrieval is designed. First, a local feature extraction module is constructed to obtain low-level feature maps with different spatial receptive fields. Second, to better fuse features of varying scales, an improved ASFF (Adaptively Spatial Feature Fusion) method is employed to build the PSASFF (Pixel Shuffle Adaptively Spatial Feature Fusion) module, which adaptively fuses the multi-layer features extracted by the designed network. Finally, to reduce the influence of writing style on retrieval, the cosine similarity scores obtained before and after fine processing are combined as a weighted sum and used as the final similarity score. The proposed method achieves mAP@50 of 0.8537 and mAP@30 of 0.9576 on an ancient Chinese character image dataset. Experimental results demonstrate the effectiveness of this method for ancient Chinese character image retrieval.
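The weighted combination of cosine similarity scores described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the separation into "before" and "after" feature vectors, and the weight `alpha` are all assumptions for exposition.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def fused_score(query_before, gallery_before, query_after, gallery_after, alpha=0.5):
    """Weighted sum of the cosine similarities computed before and after
    the fine-processing step; alpha is an assumed weight, not a value
    reported in the paper."""
    s_before = cosine_similarity(query_before, gallery_before)
    s_after = cosine_similarity(query_after, gallery_after)
    return alpha * s_before + (1.0 - alpha) * s_after
```

Ranking gallery images by `fused_score` instead of a single cosine score is one way such a scheme can down-weight style-dependent differences between writers of the same character.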