Scene text image super-resolution (STISR) aims to enhance the resolution and visual quality of low-resolution scene text images, thereby improving the performance of downstream text-related vision tasks. However, many existing STISR methods treat scene text images as general images, ignoring text-specific properties such as the particular structure of text. Although some methods introduce edge detection operators to extract hard edges for improving the quality of super-resolved images, these hard edges are binary and prone to producing aliasing artifacts. Motivated by these observations, we propose a novel soft-edge-guided significant coordinate attention network for STISR. Specifically, we employ soft edges, i.e., probabilistic edges that provide a more complete edge description of text images, to assist super-resolution. In addition, some existing approaches exploit both channel and spatial attention for effective image enhancement, but they ignore the location information hidden in text images. To exploit the key position-dependent features embedded in scene text images, we incorporate coordinate attention into the STISR process, which captures long-range dependencies along one spatial direction while retaining precise positional information along the other. Furthermore, we propose a new attention mechanism, called significant coordinate attention, that enables the network to focus on the most significant text regions. Extensive experimental results demonstrate that our method performs favorably against state-of-the-art methods in both quantitative and qualitative assessments. The code will be available at https://github.com/kbzhang0505/SegSCoAN.
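
To make the coordinate attention idea in the abstract concrete, the following is a minimal NumPy sketch of direction-wise attention: features are pooled along the width to obtain a height-aware descriptor and along the height to obtain a width-aware descriptor, and the two resulting gates reweight the feature map. This is an illustrative simplification (the actual network, its shared 1x1 convolutions, and the significance weighting are not specified in the abstract); the function name and shapes are assumptions.

```python
import numpy as np

def sigmoid(z):
    """Elementwise logistic function used to form attention gates."""
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention_sketch(x):
    """Simplified coordinate attention over a feature map x of shape (C, H, W).

    Pooling along one spatial axis preserves exact positions along the
    other axis, which is the key property the abstract highlights.
    """
    c, h, w = x.shape
    pool_h = x.mean(axis=2)            # (C, H): aggregate along width, keep height positions
    pool_w = x.mean(axis=1)            # (C, W): aggregate along height, keep width positions
    gate_h = sigmoid(pool_h)[:, :, None]  # (C, H, 1) attention along the height direction
    gate_w = sigmoid(pool_w)[:, None, :]  # (C, 1, W) attention along the width direction
    return x * gate_h * gate_w         # reweighted features, same shape as x
```

In the full method, learned transformations would replace the raw pooled descriptors before gating, but the sketch shows how long-range context in one direction and positional precision in the other are obtained simultaneously.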