The rapid development of remote sensing provides abundant data for land cover classification. Nevertheless, densely labeling newly acquired data is an expensive task. Given labeled data from different imaging locations, obtaining good performance on unlabeled test data faces two challenges: (1) due to geographic shift, deep models trained on the labeled data generalize poorly to other geographic locations; (2) the class distributions of remote sensing images covering diverse scenes are extremely imbalanced. To address these challenges, we propose a class-aware regularized self-distillation learning method. For the former, the model trained on labeled data undergoes multiple rounds of self-distillation on unlabeled data under the supervision of pseudo-labels, with a specific number of distillation rounds assigned to each class. For the latter, we assign a tailored weight to each class during self-distillation learning. Our method achieves a mIoU of 49.83%, ranking third in Track SLM of the 2022 IEEE GRSS Data Fusion Contest.
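To make the two ingredients concrete, the following is a minimal PyTorch sketch (not the authors' released code) of one self-distillation step: a frozen teacher from the previous round produces pseudo-labels, classes whose distillation rounds are finished are masked out, and the remaining pixels are weighted with per-class loss weights. The function name, the `active_classes` mask, the per-class weights, and the ignore index 255 are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def class_aware_distill_step(student, teacher, images, class_weights,
                             active_classes, optimizer):
    """One self-distillation step on a batch of unlabeled images (sketch).

    class_weights:  (C,) tensor of tailored per-class loss weights (assumed values).
    active_classes: boolean (C,) mask of classes still being distilled in this
                    round; pixels pseudo-labeled with an inactive class are ignored.
    """
    with torch.no_grad():
        # Teacher (the model from the previous round) produces hard pseudo-labels.
        pseudo_labels = teacher(images).argmax(dim=1)            # (N, H, W)
        inactive = ~active_classes[pseudo_labels]                # pixels of finished classes
        pseudo_labels = pseudo_labels.masked_fill(inactive, 255) # 255 = ignore index

    logits = student(images)                                     # (N, C, H, W)
    # Class-aware weighting enters through the per-class weight vector.
    loss = F.cross_entropy(logits, pseudo_labels,
                           weight=class_weights, ignore_index=255)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating this step over several rounds, each time replacing the teacher with the latest student and shrinking `active_classes` according to the per-class round schedule, corresponds to the multi-round, class-aware procedure described above.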