Chinese Spelling Check (CSC) is the task of detecting and correcting misspelled characters in Chinese text. Existing methods adopt pre-trained language models (PLMs) and achieve state-of-the-art performance. However, due to the Dropout strategy, PLM-based models suffer from an inconsistency between the training and testing stages: the semantic representations produced during training differ from those produced during testing. Moreover, PLM-based models tend to correct erroneous characters to semantically plausible or commonly used characters rather than the ground truths, and they usually neglect phonological and visual similarity between characters. In this paper, we propose a consistent and contrastive learning approach with character similarity for CSC (CCCSpell). CCCSpell consists of a BERT-based detector and a character similarity-based corrector. The detector uses a bidirectional Kullback-Leibler divergence to minimize the gap between the output distributions of two sub-models sampled by Dropout, which enhances the consistency of the semantic representations between training and inference and thus improves testing performance. In addition, the detector employs a contrastive optimization objective that increases the confidence of the target characters and decreases that of common characters, which helps avoid predicting common ones. The corrector then incorporates phonological and visual similarity knowledge into the detection results by computing the Levenshtein edit distance between the pronunciation/shape representations of characters, and utilizes a confusion set to help select the golden characters. Experiments on the SIGHAN and OCR datasets demonstrate that CCCSpell outperforms all baseline models and achieves new state-of-the-art performance.
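
As a rough illustration of the detector's consistency objective, the sketch below implements a symmetric (bidirectional) Kullback-Leibler divergence between two forward passes of the same model, each sampling a different Dropout mask. The helper name, the `alpha` weight, and the commented training step are hypothetical and only approximate the loss described above, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def bidirectional_kl(logits_p: torch.Tensor, logits_q: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between the output distributions of two
    Dropout-sampled sub-models (hypothetical helper, minimal sketch)."""
    log_p = F.log_softmax(logits_p, dim=-1)
    log_q = F.log_softmax(logits_q, dim=-1)
    kl_pq = F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)  # KL(q || p)
    kl_qp = F.kl_div(log_q, log_p, reduction="batchmean", log_target=True)  # KL(p || q)
    return 0.5 * (kl_pq + kl_qp)

# Hypothetical training step: run the detector twice on the same batch so that
# Dropout samples two different sub-models, then add the consistency term to
# the task loss.
# logits_1 = detector(input_ids)   # first Dropout mask
# logits_2 = detector(input_ids)   # second Dropout mask
# loss = (ce(logits_1, labels) + ce(logits_2, labels)) / 2 \
#        + alpha * bidirectional_kl(logits_1, logits_2)
```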
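Similarly, the corrector's use of Levenshtein edit distance over pronunciation/shape representations can be sketched as follows. The `pinyin` and `strokes` lookup tables, the `rerank` helper, and the equal-weight scoring rule are illustrative assumptions rather than the paper's actual resources or selection procedure.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(x: str, y: str) -> float:
    """Normalized similarity in [0, 1] derived from the edit distance."""
    if not x and not y:
        return 1.0
    return 1.0 - levenshtein(x, y) / max(len(x), len(y))

def rerank(candidates, source_char, pinyin, strokes, confusion_set, w_p=0.5, w_v=0.5):
    """Pick the detector candidate most similar to the source character in
    pronunciation and shape, restricted to the confusion set when available.
    `pinyin`/`strokes` map a character to its pronunciation/stroke string."""
    pool = [c for c in candidates if c in confusion_set.get(source_char, candidates)]
    def score(c):
        return (w_p * similarity(pinyin[source_char], pinyin[c])
                + w_v * similarity(strokes[source_char], strokes[c]))
    return max(pool or candidates, key=score)
```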