The transferability of adversarial examples is crucial to the success of black-box attacks. However, the gap between the surrogate model and the target model weakens attack performance when adversarial examples are transferred between them. An intuitive remedy is to attack a diverse set of surrogate models, but such models are difficult to obtain. We argue that diversifying the input images effectively simulates model diversity, and we therefore focus on finding a simple yet effective data augmentation method. To this end, we propose a novel and effective method called Color Jitter Transformation (CJT). By randomly adjusting the hue, saturation, and brightness of input images, CJT naturally increases the diversity of input patterns, making the generated adversarial examples more robust and transferable. Furthermore, our method can be readily combined with existing attack methods. Extensive experiments demonstrate the effectiveness of our method: compared with state-of-the-art transferable attacks, it improves attack success rates on normally trained models and defense models by nearly 12.0% and 10.4%, respectively.
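To illustrate the kind of color jitter described above, the sketch below applies a random hue shift and saturation/brightness scaling to an RGB image in pure Python. The function name `color_jitter`, the jitter ranges, and the nested-list image representation are illustrative assumptions, not the paper's exact settings; in practice the jittered copy would feed the surrogate model's gradient step at each attack iteration.

```python
import colorsys
import random

def color_jitter(image, hue_shift=0.05, sat_scale=(0.8, 1.2),
                 val_scale=(0.8, 1.2), rng=None):
    """Randomly jitter hue, saturation, and brightness of an RGB image.

    `image` is a nested list of (r, g, b) floats in [0, 1]. One random
    jitter is sampled per call and applied to every pixel. Parameter
    ranges here are illustrative defaults, not the paper's settings.
    """
    rng = rng or random.Random()
    dh = rng.uniform(-hue_shift, hue_shift)   # additive hue offset
    ds = rng.uniform(*sat_scale)              # saturation scale factor
    dv = rng.uniform(*val_scale)              # brightness scale factor
    out = []
    for row in image:
        new_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            h = (h + dh) % 1.0                        # hue wraps around
            s = min(max(s * ds, 0.0), 1.0)            # clamp to [0, 1]
            v = min(max(v * dv, 0.0), 1.0)
            new_row.append(colorsys.hsv_to_rgb(h, s, v))
        out.append(new_row)
    return out

# Hypothetical use inside an iterative transfer attack:
#   grad = surrogate_gradient(color_jitter(x_adv))  # surrogate_gradient is illustrative
```

Because a fresh jitter is drawn at each iteration, the attack's gradients are averaged over many color-perturbed views of the input, which is what discourages overfitting to the single surrogate model.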