In recent years, image enhancement has advanced considerably, improving image quality for downstream tasks such as object detection, license plate recognition, and anomaly detection. However, selecting appropriate methods for the problems posed by different scenes remains a central challenge. In dusty weather, the absorption and scattering of light by dust particles suspended in the air give captured images a yellowish-red color cast and reduced sharpness, which severely degrades human visual perception. The lack of datasets for dust image enhancement further increases the difficulty of this task. Therefore, based on the color and contour features of real captured dust images, this paper constructs a synthetic dust image dataset for training deep learning networks. In addition, building on the feature-transformation idea of the Cycle-Consistent Generative Adversarial Network (CycleGAN), we present a dust image enhancement algorithm that uses an end-to-end deep learning network and avoids dependence on physical imaging models. Compared with state-of-the-art approaches in the literature, our method achieves better subjective and objective evaluation results on the test set of the proposed dataset.
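The core CycleGAN idea referenced above is the cycle-consistency objective: a generator G maps dusty images toward clean ones, a second generator F maps clean images back toward dusty ones, and both are penalized when a round trip fails to reproduce the input. The sketch below illustrates only that loss term, not the paper's full method; the functions `G` and `F` are hypothetical stand-ins (simple invertible maps, not trained networks), and the weight `lam` mirrors the usual CycleGAN hyperparameter.

```python
import numpy as np

# Hypothetical toy "generators": in the dust-enhancement setting, G would map
# dusty -> clean and F clean -> dusty. Here they are simple linear stand-ins
# (F exactly inverts G) used only to illustrate the objective.
def G(x):
    return x * 1.1 - 0.05

def F(y):
    return (y + 0.05) / 1.1

def cycle_consistency_loss(x_dusty, y_clean, lam=10.0):
    """L_cyc = lam * ( ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1 ), averaged per pixel."""
    forward = np.abs(F(G(x_dusty)) - x_dusty).mean()   # dusty -> clean -> dusty
    backward = np.abs(G(F(y_clean)) - y_clean).mean()  # clean -> dusty -> clean
    return lam * (forward + backward)

x = np.random.rand(8, 8)  # toy "dusty" patch in [0, 1]
y = np.random.rand(8, 8)  # toy "clean" patch in [0, 1]
print(cycle_consistency_loss(x, y))  # near zero, since F inverts G exactly
```

In training, this term is minimized jointly with the adversarial losses of both generator-discriminator pairs; it is what lets CycleGAN learn the dusty-to-clean mapping from unpaired data, which is why a synthetic paired dataset is not strictly required by the architecture itself.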