Traditional low-resolution rock core images fail to capture fine rock details and features, limiting accurate analysis of rock properties. Researchers have therefore proposed supervised and unsupervised digital rock super-resolution reconstruction methods. Supervised methods require paired data and manual annotation, whereas unsupervised methods learn to generate high-resolution images without paired supervision, making them suitable for real-world scenarios where paired data are scarce. However, current unsupervised methods still face challenges in consistency and feature capture. To address these problems, we propose a novel unsupervised single-image super-resolution model, NL-CycleGAN. The model employs a non-local attention-guided CycleGAN framework to effectively capture low-level pixel variations between unpaired source and target images while preserving the overall tone. We evaluate NL-CycleGAN both quantitatively and qualitatively on the DeepRock-SR dataset. Quantitatively, our method achieves the lowest learned perceptual image patch similarity (LPIPS) and the highest peak signal-to-noise ratio (PSNR) and multi-scale structural similarity index measure (MS-SSIM) among the compared methods. Qualitatively, the comparison images clearly show that our model accurately captures the intricate pore details in carbonate and sandstone, as well as the complex crack patterns in coal. In conclusion, our model improves rock image resolution and provides a more accurate image processing technique for rock property analysis and geological research.
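For readers unfamiliar with the quantitative metrics mentioned above, the following is a minimal, illustrative sketch of how PSNR is computed between a reference image and a reconstruction (the function name and the tiny 2×2 example patches are hypothetical; this is not code from the paper):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-size grayscale images.

    ref, test: nested lists of pixel intensities (rows of an image).
    Returns +inf for identical images; higher values mean closer images.
    """
    flat_ref = [p for row in ref for p in row]
    flat_test = [p for row in test for p in row]
    # Mean squared error over all pixels
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_test)) / len(flat_ref)
    if mse == 0:
        return math.inf
    # PSNR in decibels relative to the maximum possible intensity
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: two 2x2 patches differing by 1 intensity level at every pixel
a = [[100, 101], [102, 103]]
b = [[101, 102], [103, 104]]
print(round(psnr(a, b), 2))  # MSE = 1, so PSNR = 10*log10(255^2) ≈ 48.13 dB
```

MS-SSIM and LPIPS are more involved (windowed structural comparison at multiple scales, and distances in a pretrained network's feature space, respectively), but all three are computed against the same high-resolution reference.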