In recent years, image super-resolution (SR) has made remarkable progress on natural and text images. In the field of digital wallchart image super-resolution, however, existing methods fail to preserve fine text detail while restoring graphics. To address this challenge, we present Real Text-SwinIR (RT-SwinIR), which employs a novel plug-and-play Attention-based Learned Text Loss (LTL) to enhance the architecture's ability to render clear text structure while preserving the clarity of graphics. To evaluate the effectiveness of our method, we collected a dataset of digital wallcharts and applied a second-order degradation process that simulates real-world damage, including creases and stains on the wallcharts as well as the noise and blur introduced by compression during network transmission. On the proposed dataset, RT-SwinIR achieves the best scores of 0.58 Learned Text Loss and 0.11 LPIPS, average reductions of 41.4% and 35.3%, respectively. Experiments show that our method outperforms prior work in digital wallchart image super-resolution, indicating its superior visual perceptual performance.