The novel coronavirus (COVID-19) pandemic poses a serious risk to public health and continues to weigh on the economy. Computed tomography (CT) is currently the most effective medical imaging tool for examining COVID-19 lung infections. High-resolution medical images aid the diagnosis of COVID-19 because they reveal finer anatomical detail of the human body. However, obtaining clear high-resolution medical images is challenging due to factors such as the imaging environment, system limitations, and human error. We therefore propose a deep multi-scale fusion network based on prior knowledge to achieve super-resolution reconstruction of COVID-19 CT images. The network comprises cascaded multi-scale residual blocks that fully extract features across multiple receptive-field sizes and exploit the network's ability to capture deep semantic features at different scales. In addition, texture changes in CT images reflect changes in internal organ tissue, and fine texture features help the network recover detailed image information. We therefore inject prior knowledge into the learning process: the nonsubsampled contourlet transform (NSCT) is used to combine NSCT high-frequency information with the input images, so that during training the network learns finer texture features and retains richer detail than spatial-domain information alone provides. Experimental results on two publicly available COVID-19 CT image datasets show that the proposed method achieves higher PSNR and SSIM values and better preserves fine texture than existing state-of-the-art super-resolution methods.
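To make the prior-injection idea concrete, the sketch below shows, in plain NumPy, how a high-frequency map and a multi-scale band stack can be derived from a CT slice. This is only an illustrative stand-in under stated assumptions: the paper uses NSCT subbands, whereas here a simple Gaussian high-pass residual substitutes for them, and the function names (`gaussian_blur`, `high_frequency`, `multiscale_features`) and sigma values are hypothetical, not from the paper.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian low-pass filter (truncated at 3*sigma),
    # applied with edge padding so the output keeps the input shape.
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def high_frequency(img, sigma=1.5):
    # Crude high-frequency map: image minus its low-pass version.
    # NOTE: the paper extracts NSCT high-frequency subbands; this
    # Laplacian-style residual is only an illustrative substitute.
    return img - gaussian_blur(img, sigma)

def multiscale_features(img, sigmas=(0.8, 1.6, 3.2)):
    # Band-pass stack at several scales, a rough analogue of the
    # multi-receptive-field branches described in the abstract.
    return np.stack([img - gaussian_blur(img, s) for s in sigmas])
```

In a training pipeline such maps would be concatenated with the low-resolution input along the channel axis, giving the network explicit texture cues rather than forcing it to rediscover them from the spatial domain alone.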