In lightweight Super Resolution (SR) tasks, balancing reconstruction quality and efficiency remains a challenge for CNN-based models. The Transformer architecture has been introduced into super-resolution for its global self-attention mechanism. However, existing lightweight Transformer-based SR networks typically restrict self-attention to local windows to reduce computation, which limits the receptive field and the uniformity of the pixel weight distribution. To address these issues, this paper introduces the Shift Ladder Transformer for Super Resolution (slTSR), a lightweight model that combines lightweight convolutional layers with residual Shift Ladder Transformer blocks. Through the Shift Ladder Transformer Layer (SLTL), slTSR aggregates information from different spatial windows, enhancing cross-window information exchange. Experiments on multiple benchmark datasets consistently demonstrate the strong performance and efficiency of slTSR.
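The abstract does not specify how the SLTL exchanges information across windows. As a rough, hedged illustration of the general idea that a spatial shift operation lets a window-local block see pixels from neighbouring windows, the following PyTorch sketch shifts channel groups in four directions before a residual mixing step. The module names (`SpatialShift`, `ShiftMixerBlock`) and the block layout are assumptions made for illustration only and are not the paper's actual SLTL design.

```python
import torch
import torch.nn as nn


class SpatialShift(nn.Module):
    """Shift four channel groups in four directions so that a subsequent
    window-local operation also receives pixels from neighbouring windows.
    Illustrative sketch only; the real SLTL may differ."""

    def forward(self, x):                                   # x: (B, C, H, W)
        out = torch.zeros_like(x)
        g = x.shape[1] // 4
        out[:, 0 * g:1 * g, :, 1:] = x[:, 0 * g:1 * g, :, :-1]   # shift right
        out[:, 1 * g:2 * g, :, :-1] = x[:, 1 * g:2 * g, :, 1:]   # shift left
        out[:, 2 * g:3 * g, 1:, :] = x[:, 2 * g:3 * g, :-1, :]   # shift down
        out[:, 3 * g:, :-1, :] = x[:, 3 * g:, 1:, :]             # shift up
        return out


class ShiftMixerBlock(nn.Module):
    """Hypothetical residual block: lightweight depthwise convolution for
    local detail plus a shifted channel MLP for cross-window exchange."""

    def __init__(self, dim):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # local detail
        self.shift = SpatialShift()                                   # cross-window exchange
        self.norm = nn.GroupNorm(1, dim)
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, dim * 2, 1), nn.GELU(),
            nn.Conv2d(dim * 2, dim, 1),
        )

    def forward(self, x):
        x = x + self.dwconv(x)                       # residual local convolution
        x = x + self.mlp(self.norm(self.shift(x)))   # residual shifted mixing
        return x


if __name__ == "__main__":
    feat = torch.randn(1, 32, 48, 48)                # toy low-resolution feature map
    print(ShiftMixerBlock(32)(feat).shape)           # -> torch.Size([1, 32, 48, 48])
```

In this sketch, the shift is what propagates information beyond a single local window; the residual connections keep the block lightweight and easy to stack, in the spirit of the residual Shift Ladder Transformer blocks described above.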