Scene understanding in adverse weather conditions (e.g., rainy and foggy days) has drawn increasing attention, giving rise to dedicated benchmarks and algorithms. In particular, rain streaks in images and videos can significantly degrade visual quality and reduce the effectiveness of computer vision algorithms. Given the lack of paired rainy-clean training samples in real rainy scenes, we propose an unsupervised deraining method that does not require any explicit rain-clean image pairs for training. Instead, our approach leverages the statistical properties of rain streaks and clean regions to learn a rain removal model. Specifically, we adopt a contrastive learning scheme that encourages the generator to produce derained images indistinguishable from clean images, while the discriminator distinguishes real clean images from rain streak patterns. Furthermore, we incorporate a semi-supervised mechanism into the segmentation module that treats low-confidence pseudo-labels as negative samples for certain classes, facilitating contrastive learning. Our experiments demonstrate that our integrated framework achieves better performance in challenging scenarios such as heavy rain and dynamic scenes.
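The idea of treating low-confidence pseudo-labels as negatives in a contrastive loss can be illustrated with a minimal sketch. The code below is not the paper's implementation: the function names (`info_nce`, `select_negatives`), the confidence threshold, and the toy data are all illustrative assumptions. It shows an InfoNCE-style loss where pixel embeddings whose pseudo-label confidence for a class falls below a threshold serve as negative samples for that class.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull anchor toward the positive embedding,
    push it away from the negative embeddings (hypothetical sketch)."""
    def sim(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(sim(anchor, positive) / tau)
    neg = sum(np.exp(sim(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

def select_negatives(embeddings, probs, cls, thresh=0.5):
    """Pixels whose pseudo-label confidence for class `cls` is below
    `thresh` are treated as negatives for that class (assumed rule)."""
    return [e for e, p in zip(embeddings, probs) if p[cls] < thresh]

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 8))  # 6 pixel embeddings, dimension 8
# Synthetic softmax confidences over 3 classes (illustrative values):
# the first three pixels are confident for class 0, the last three are not.
probs = np.array([[0.9, 0.05, 0.05],
                  [0.8, 0.10, 0.10],
                  [0.7, 0.20, 0.10],
                  [0.2, 0.60, 0.20],
                  [0.1, 0.80, 0.10],
                  [0.3, 0.30, 0.40]])
negs = select_negatives(emb, probs, cls=0)  # the three low-confidence pixels
loss = info_nce(emb[0], emb[1], negs)
```

In a full pipeline, the anchor and positive would typically come from embeddings of the same class (e.g., a pixel and its class prototype), while the threshold controls how aggressively uncertain pseudo-labels are recruited as negatives.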