Many transformer-based encoder-decoder models have made significant progress on abstractive summarization tasks, and the availability of pre-trained models, trained with self-supervised objectives on large text corpora, further improves their performance. However, most model architectures and training criteria attend more to lexical and syntactic structure than to semantic similarity. In this paper, we augment training data in semantic space and propose an Augmentation-based Semantic Mechanism (ASM) that provides encoder feedback with a corresponding criterion to capture global semantic meaning. Notably, we enhance the encoder's comprehension of summaries in semantic space and facilitate the integration of global semantics and local syntax during summary generation. By leveraging pre-trained language models, we push our results to a new level (45.11 ROUGE-1 on CNN/DailyMail and 45.35 on XSum). Human evaluation and further experiments also validate the effectiveness of the proposed method for generating abstractive summaries. Our augmented data and source code for summarization will be made public.