Network sparsification is an effective technique for accelerating Deep Neural Network (DNN) inference. However, existing sparsification solutions often fail to realize the full benefits of structured sparsity: many sparse storage formats introduce substantial memory and computation overhead for address generation and gradient updates, or they are applicable only to inference, neglecting the training phase. In this paper, we propose TSTC, a novel compilation optimization design that enables efficient training via structured sparse tensor compilation. TSTC introduces a novel sparse format, the Tensorization-aware Index Entity (TIE), which efficiently represents structured sparse tensors by eliminating repeated indices and reducing storage overhead. The TIE format is applied in the Address-carry flow (AC flow) pass, which optimizes the data layout at the computational-graph layer. A shape inference pass then uses the address-carry flow to derive optimized tensor shapes, and an operator-level AC flow optimization pass generates efficient addresses for structured sparse tensors. TSTC is a versatile design that can be integrated efficiently into existing frameworks and compilers. As a result, TSTC achieves speedups of 3.64×, 5.43×, 4.89×, and 3.91× over state-of-the-art sparse formats on VGG16, ResNet-18, MobileNetV1, and MobileNetV2, respectively.
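
To give a rough sense of the storage saving that eliminating repeated indices targets, the sketch below (a toy illustration of the general idea, not the paper's TIE implementation; `BLOCK`, `coo_indices`, and `block_indices` are hypothetical names) contrasts per-element COO indexing with one index entity per non-zero block of a structured sparse matrix:

```python
# Toy comparison, assuming block-structured sparsity: unstructured COO stores
# one (row, col) index per non-zero element, whereas a structured format can
# store a single index per non-zero block, collapsing the repeated indices.
import numpy as np

BLOCK = 4  # hypothetical block size of the structured-sparsity pattern

def coo_indices(dense):
    """Per-element indices, as in an unstructured COO format."""
    rows, cols = np.nonzero(dense)
    return rows, cols, dense[rows, cols]

def block_indices(dense, b=BLOCK):
    """One index entity per non-zero b x b block."""
    h, w = dense.shape
    index, blocks = [], []
    for i in range(0, h, b):
        for j in range(0, w, b):
            tile = dense[i:i + b, j:j + b]
            if np.any(tile):
                index.append((i // b, j // b))  # single index for the whole block
                blocks.append(tile)
    return index, blocks

# Structured-sparse matrix: two non-zero 4x4 blocks out of sixteen.
w = np.zeros((16, 16), dtype=np.float32)
w[0:4, 4:8] = 1.0
w[8:12, 12:16] = 1.0

rows, cols, vals = coo_indices(w)
index, blocks = block_indices(w)
print(len(rows), "per-element indices vs", len(index), "block indices")
# -> 32 per-element indices vs 2 block indices
```

On this toy example the block-level representation needs 16× fewer index entries than per-element COO; the actual TIE format and its interaction with the AC flow passes are described in the body of the paper.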