With the advent of the Big Data era, information travels over the Internet more quickly and conveniently, and the amount of text each person publishes and receives every day is growing explosively. Text summarization technology automatically condenses long texts into concise summaries. However, building a summarization model that generates fluent summaries for long texts remains a challenging research problem, typically hampered by long-range dependencies, out-of-vocabulary (OOV) words, and poor readability. In this paper, we propose a Long-Short Transformer based text Summarization model (LSTS) that captures both local and global information of the input document, uses a pointer network to alleviate the OOV problem, and introduces an attention mechanism in the decoder to alleviate word repetition. We conduct extensive experiments on two large-scale datasets to verify the model's effectiveness.
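To make the pointer-network idea concrete, the sketch below shows one standard way a pointer-generator style output layer can mix generating from a fixed vocabulary with copying tokens from the source, which is how such models keep OOV source words producible. This is a minimal illustration under assumed conventions, not the paper's exact formulation: the function name, tensor shapes, and the `p_gen` gating variable are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def final_distribution(vocab_logits, attn_weights, src_ext_ids, p_gen, vocab_size):
    """Mix a generation distribution with a copy distribution so that
    source words outside the fixed vocabulary remain producible.

    vocab_logits: (batch, vocab_size)  decoder output logits
    attn_weights: (batch, src_len)     decoder attention over source positions
    src_ext_ids:  (batch, src_len)     source token ids, with OOV words mapped
                                       to temporary ids >= vocab_size
    p_gen:        (batch, 1)           probability of generating vs. copying
    """
    batch = vocab_logits.size(0)
    ext_size = max(vocab_size, int(src_ext_ids.max()) + 1)
    # distribution over the fixed vocabulary, zero-padded for the OOV slots
    p_vocab = F.pad(F.softmax(vocab_logits, dim=-1), (0, ext_size - vocab_size))
    # copy distribution: scatter attention mass onto the source token ids
    p_copy = torch.zeros(batch, ext_size).scatter_add_(1, src_ext_ids, attn_weights)
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```

At each decoding step, ids at or above `vocab_size` in the mixed distribution correspond to specific source positions, so an out-of-vocabulary source word can be copied verbatim into the summary instead of being replaced by an unknown-word token.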