With the development of automatic speech recognition and text-to-speech technology, high-quality voice conversion can be achieved by extracting source content information and target speaker information and reconstructing the waveform. However, current methods still leave room for improvement in inference speed. In this study, we propose a lightweight VITS-based voice conversion model that uses the HuBERT-Soft model to extract content features. Unlike the original VITS model, we replace the most computationally expensive part with the inverse short-time Fourier transform. Subjective and objective experiments on the synthesized speech show that the proposed model generates natural speech and is highly efficient at inference time. Experimental results show that our model can generate samples at over 5000 kHz on an RTX 3090 GPU and over 250 kHz on an i9-10900K CPU, faster than baseline methods under the same hardware configuration.
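The idea of replacing the expensive upsampling stage with an inverse short-time Fourier transform can be sketched as follows. This is a minimal, hypothetical illustration (the module name `ISTFTHead`, the channel sizes, and the FFT parameters are assumptions, not the paper's actual architecture): a small projection predicts magnitude and phase spectra from the latent frames, and `torch.istft` reconstructs the waveform in a single cheap operation instead of a stack of transposed convolutions.

```python
import torch
import torch.nn as nn

class ISTFTHead(nn.Module):
    """Hypothetical decoder head: predict magnitude/phase spectra from latent
    frames, then reconstruct the waveform with the inverse STFT. The names and
    sizes here are illustrative assumptions, not the paper's configuration."""

    def __init__(self, in_channels=192, n_fft=16, hop_length=4):
        super().__init__()
        self.n_fft = n_fft
        self.hop_length = hop_length
        n_bins = n_fft // 2 + 1  # one-sided spectrum
        # One conv projects the latent to magnitude and phase channels.
        self.proj = nn.Conv1d(in_channels, 2 * n_bins, kernel_size=7, padding=3)

    def forward(self, x):
        # x: (batch, channels, frames) latent sequence from the VITS encoder/flow
        mag, phase = self.proj(x).chunk(2, dim=1)
        mag = torch.exp(mag.clamp(max=10.0))   # strictly positive magnitudes
        spec = mag * torch.exp(1j * phase)     # complex spectrogram
        window = torch.hann_window(self.n_fft, device=x.device)
        # Single iSTFT call replaces the transposed-convolution upsampling stack.
        return torch.istft(spec, n_fft=self.n_fft,
                           hop_length=self.hop_length, window=window)

head = ISTFTHead()
latent = torch.randn(1, 192, 50)       # 50 latent frames
wav = head(latent)                     # waveform of (frames - 1) * hop samples
```

Because the iSTFT is a fixed linear transform, the learnable computation is reduced to the light projection layer, which is where the inference-speed gain in this family of models comes from.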