This study introduces a deep learning-based virtual fitting system that allows users to try on garments virtually with visually convincing results. First, we use state-of-the-art style transfer algorithms to apply the style of the user's input image to the virtual scene. Then, we employ a generative adversarial network (GAN) on the modified image to generate content while preserving image details. Specifically, to achieve a realistic try-on with a detailed clothing representation, the system first predicts the semantic layout of the reference image as it should appear after the try-on, and then generates the image content conditioned on that predicted layout. A network model trained on a clothing dataset enables virtual try-on for users over the web. Finally, the user's try-on operation is completed in the virtual environment and the final result image is generated. The overall system is implemented as a Python web application. Experimental results show that the system produces accurate and satisfactory try-on results. In conclusion, the system effectively performs virtual try-on with high visual quality.
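The two-stage pipeline described above (predict a semantic layout, then generate image content conditioned on it) can be sketched as follows. This is a minimal illustrative skeleton, not the paper's actual model: the function names, the class index for the clothing region, and the stand-in random "network outputs" are all hypothetical placeholders for the trained networks.

```python
import numpy as np

NUM_CLASSES = 7       # hypothetical number of semantic classes (background, skin, hair, clothing, ...)
CLOTHING_CLASS = 3    # hypothetical index of the upper-clothing region

def predict_semantic_layout(person, garment, num_classes=NUM_CLASSES):
    """Stage 1 (sketch): predict a per-pixel semantic layout of the person
    after the try-on. A real system would run a trained segmentation/GAN
    network here; random logits stand in for its output."""
    h, w, _ = person.shape
    logits = np.random.rand(h, w, num_classes)   # placeholder for network logits
    return logits.argmax(axis=-1)                # (h, w) map of class indices

def generate_tryon(person, garment, layout):
    """Stage 2 (sketch): generate the try-on image conditioned on the
    predicted layout. A real generator would synthesize details; here we
    simply composite the warped garment into the predicted clothing region."""
    mask = (layout == CLOTHING_CLASS)[..., None]  # broadcast mask to 3 channels
    return np.where(mask, garment, person)

# Toy usage with random images at a typical try-on resolution (256x192)
person = np.random.rand(256, 192, 3)
garment = np.random.rand(256, 192, 3)
layout = predict_semantic_layout(person, garment)
result = generate_tryon(person, garment, layout)
```

The key design point this sketch reflects is the decoupling: stage 1 decides *where* each semantic region (e.g., the new garment) should appear, and stage 2 decides *what* pixel content fills each region, which helps preserve clothing detail.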