More than 80% of human civilization's information exists in the form of text. Knowledge extraction aims to obtain structured knowledge from text and is a sub-task of knowledge graph construction. Current methods mainly target closed scenarios, but as knowledge grows, this assumption faces challenges. Few-shot learning makes it possible to extract new relations in open scenarios. We propose a bootstrapping method, Topic Snowball (TS), which builds on Neural Snowball to further improve performance. Specifically, we design a new framework and embed an instance selector, tBERT, that selects high-quality sentences. The relation classifier is trained on the filtered sentences together with a small amount of labeled data to predict new relations. Experiments show that our approach screens out more diverse sentences, enabling better few-shot relation learning. Moreover, Topic Snowball achieves significant improvement over Neural Snowball when the number of seed instances is small. Code and datasets will be released soon.
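The snowball-style bootstrapping described above (grow a small seed set by selecting quality sentences, then retrain the classifier on the expanded set) can be sketched as follows. This is a minimal toy illustration, not the paper's method: the bag-of-words similarity stands in for the tBERT instance selector, and all function names and the threshold are hypothetical.

```python
from collections import Counter
import math

def vectorize(sentence):
    """Toy bag-of-words encoder (stand-in for the tBERT selector's encoder)."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def snowball(seeds, unlabeled, threshold=0.5, rounds=2):
    """Iteratively grow the labeled set: score each unlabeled sentence
    against the current selected set and keep those above the threshold.
    A real system would retrain a relation classifier on each round."""
    selected = list(seeds)
    pool = list(unlabeled)
    for _ in range(rounds):
        # Centroid of the currently selected sentences.
        centroid = Counter()
        for s in selected:
            centroid.update(vectorize(s))
        kept, rest = [], []
        for s in pool:
            (kept if cosine(vectorize(s), centroid) >= threshold else rest).append(s)
        if not kept:          # no new instances passed the filter: stop early
            break
        selected.extend(kept)
        pool = rest
    return selected
```

For example, starting from the single seed "alice founded acme corp", the loop pulls in the lexically similar candidate "bob founded widget corp" while rejecting an unrelated sentence, mirroring how the instance selector expands a small seed set with quality sentences.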