In few-shot text classification, the lack of salient features limits models from generalizing to data outside the training set. Data augmentation is a common remedy for classification tasks; however, standard augmentation methods in natural language processing are not feasible in few-shot settings. In this study, we explore data augmentation for few-shot text classification and propose saliency-equivalent concatenation (SEC; our code is available at https://github.com/IKMLab/SEC). The core idea of SEC is to append additional key information to an input sentence to help a model understand the sentence more easily. In the proposed method, we first leverage a pre-trained language model to generate several novel sentences for each sample in a dataset. We then keep only the most relevant generated sentence and concatenate it with the original sentence as additional information. Our experiments on two few-shot text classification tasks verify that the proposed method boosts the performance of meta-learning models and outperforms previous unsupervised data augmentation methods.
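The generate-rank-concatenate pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate_candidates` is a hypothetical stub standing in for a pre-trained language model, and lexical (Jaccard) overlap stands in for whatever relevance measure SEC actually uses.

```python
def generate_candidates(sentence):
    # Hypothetical stand-in for a pre-trained language model: in SEC,
    # an LM would generate several novel sentences per sample; this
    # stub returns fixed candidates for demonstration only.
    return [
        "The film was a delight from start to finish.",
        "Stock prices fell sharply on Monday.",
        "The movie was enjoyable and well acted.",
    ]

def relevance(a, b):
    # Simple token-overlap (Jaccard) score as an assumed stand-in for
    # the relevance ranking used to pick the best candidate.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def sec_augment(sentence):
    # Keep only the most relevant generated sentence and concatenate it
    # with the original as additional key information.
    candidates = generate_candidates(sentence)
    best = max(candidates, key=lambda c: relevance(sentence, c))
    return sentence + " " + best

print(sec_augment("The movie was enjoyable."))
# The candidate sharing the most tokens with the input is appended.
```

The augmented sentence, not the original alone, would then be fed to the downstream few-shot classifier.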