Contrastive learning has shown its potential in many unsupervised tasks, including hashing. However, the representations obtained by contrastive learning generally fail to produce notable margins between semantic classes, so samples from different semantic classes near the boundary are likely to collide into the same hash code. In this paper, we propose a novel Semantic Centralized Contrastive Hashing (SCCH) method that brings the learned features closer to their semantic centers and makes them more amenable to hashing. Specifically, we propose a semantic centralization strategy that pulls strongly augmented samples toward weakly augmented ones, since weakly augmented samples lie closer to the semantic centers than strongly augmented ones. Moreover, quantizing directly after contrastive learning would damage the learned similarity relationships; we therefore provide a solution that eliminates the mismatch between the similarity metrics used in contrastive learning and in hash mapping. Extensive experiments on three benchmark datasets demonstrate that SCCH outperforms existing state-of-the-art methods.
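To make the semantic centralization idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual loss) of how one might pull strongly augmented embeddings toward their weakly augmented counterparts with an InfoNCE-style objective; the function name, temperature value, and the choice of cosine similarity are all illustrative assumptions.

```python
import numpy as np

def centralization_loss(z_strong, z_weak, temperature=0.2):
    """Hypothetical sketch of a semantic centralization loss.

    Each weakly augmented embedding (assumed closer to the semantic
    center) serves as the fixed target for the strongly augmented
    embedding of the same image; in a real training loop the weak
    branch would typically be detached from the gradient.
    """
    # L2-normalize so the dot product is cosine similarity
    z_s = z_strong / np.linalg.norm(z_strong, axis=1, keepdims=True)
    z_w = z_weak / np.linalg.norm(z_weak, axis=1, keepdims=True)
    # (N, N) similarity matrix between strong and weak views
    logits = z_s @ z_w.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(probs))
    # positives lie on the diagonal: strong view i vs weak view i
    return -np.log(probs[idx, idx]).mean()
```

When the two views coincide, each strong embedding already sits at its target and the loss is small; mismatched views yield a larger penalty, which is what drives samples toward their semantic centers during training.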