Few-shot classification aims to adapt knowledge learned from base classes with sufficient data to novel classes with limited data, and meta-learning methods are commonly leveraged for this challenging task. However, most existing algorithms suffer from insufficient representation and testing-bias issues: they fail to exploit useful semantic information and tend to widen the gap in classification accuracy between training classes and testing classes. To this end, we propose the Self-Supervised Continuous Meta-Learning (SS-CML) framework to handle both problems simultaneously, which consists of two key modules, i.e., a Self-Supervised Embedding network and a Self-Supervised GNN. Specifically, the Self-Supervised Embedding network extracts informative semantic features from training images so that the learned prototypes are more representative for the classification task. Moreover, the Self-Supervised GNN learns relations between nodes without true labels, which improves the reliability of the prior knowledge used to classify images of novel classes, thereby reducing excessive dependence on the training classes and alleviating the testing-bias issue. Furthermore, these two modules are jointly leveraged in our SS-CML framework to generalize prior knowledge to novel classes. Extensive experimental results on MiniImageNet and TieredImageNet demonstrate the effectiveness of both self-supervised branches, which boost classification performance.
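To make the two self-supervised branches concrete, the sketch below shows one plausible instantiation in PyTorch. It assumes a rotation-prediction pretext task for the embedding network and a label-free edge-consistency objective for the GNN; all module names, the 4-way rotation task, and the consistency loss are illustrative assumptions for exposition, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSEmbeddingNet(nn.Module):
    """Backbone plus an auxiliary rotation-prediction head (a common
    self-supervised pretext task; the paper's task may differ)."""
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rot_head = nn.Linear(dim, 4)  # predicts 0/90/180/270 degrees

    def forward(self, x):
        z = self.encoder(x)
        return z, self.rot_head(z)

def rotation_loss(net, images):
    """Self-supervised embedding loss: recover the rotation applied."""
    xs = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    ys = [torch.full((images.size(0),), k, dtype=torch.long) for k in range(4)]
    _, logits = net(torch.cat(xs))
    return F.cross_entropy(logits, torch.cat(ys))

class SSGNNLayer(nn.Module):
    """One message-passing layer over episode nodes (support + query);
    edge weights are inferred from pairwise feature differences."""
    def __init__(self, dim=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.node_fc = nn.Linear(dim, dim)

    def forward(self, z):  # z: (N, dim) node features for one episode
        diff = (z.unsqueeze(1) - z.unsqueeze(0)).abs()            # (N, N, dim)
        adj = torch.softmax(self.edge_mlp(diff).squeeze(-1), -1)  # (N, N)
        return F.relu(self.node_fc(adj @ z)), adj

def edge_consistency_loss(gnn, z1, z2):
    """Label-free GNN loss: adjacency matrices predicted from two
    augmented views of the same episode should agree."""
    _, a1 = gnn(z1)
    _, a2 = gnn(z2)
    return F.mse_loss(a1, a2)

# Minimal usage: the two auxiliary losses would be weighted and added to
# the standard episodic classification loss during meta-training. The
# perturbed copy of z stands in for a real second augmented view.
net, gnn = SSEmbeddingNet(), SSGNNLayer()
imgs = torch.randn(8, 3, 32, 32)
z, _ = net(imgs)
loss = rotation_loss(net, imgs) + edge_consistency_loss(gnn, z, z + 0.01 * torch.randn_like(z))
loss.backward()
```

In such a setup, both auxiliary objectives supervise the model without class labels, which is one way the framework could reduce dependence on the training classes as described above.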