We propose a neural cross-lingual TTS model that synthesizes speech across languages while transferring the speaker's timbre: given a few seconds of untranscribed reference audio from a target speaker, it synthesizes new speech in that speaker's voice. The model consists of four separate components: a speaker encoder, an STT Translator, a synthesizer, and a vocoder. The speaker encoder is built as a speaker recognition network that decouples speaker identity from speech content. The synthesizer is based on the Tacotron architecture and is divided into three parts: a text encoder, an attention mechanism, and a decoder. The vocoder, implemented with either WaveRNN or HiFi-GAN, predicts the synthesized waveform from the mel spectrogram. We conducted experiments to evaluate the effectiveness of our approach and also analyzed how different training datasets affect model performance.
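To make the pipeline concrete, the following is a minimal PyTorch sketch of the speaker-encoder and Tacotron-style synthesizer stages described above. All module names, layer sizes, and the greedy frame-by-frame decoding loop are illustrative assumptions for exposition, not the actual implementation; the vocoder (WaveRNN or HiFi-GAN) would consume the resulting mel spectrogram.

```python
# Illustrative sketch only: sizes, names, and the decoding loop are
# assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Maps untranscribed reference audio (as mel frames) to a fixed-size
    speaker embedding, decoupling speaker identity from speech content."""
    def __init__(self, n_mels=80, hidden=256, emb_dim=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, emb_dim)

    def forward(self, ref_mels):                     # (B, T_ref, n_mels)
        _, h = self.rnn(ref_mels)
        # L2-normalize so the embedding encodes timbre, not scale.
        return nn.functional.normalize(self.proj(h[-1]), dim=-1)

class Synthesizer(nn.Module):
    """Tacotron-style text-to-mel model: text encoder + attention +
    autoregressive decoder, conditioned on the speaker embedding."""
    def __init__(self, n_symbols=100, emb_dim=256, n_mels=80, hidden=256):
        super().__init__()
        self.text_emb = nn.Embedding(n_symbols, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4,
                                          batch_first=True)
        self.decoder_cell = nn.GRUCell(n_mels + hidden + emb_dim, hidden)
        self.mel_out = nn.Linear(hidden, n_mels)

    def forward(self, text_ids, spk_emb, n_frames=200):
        memory, _ = self.encoder(self.text_emb(text_ids))  # (B, T_txt, H)
        B = text_ids.size(0)
        frame = memory.new_zeros(B, self.mel_out.out_features)  # <GO> frame
        state = memory.new_zeros(B, self.decoder_cell.hidden_size)
        mels = []
        for _ in range(n_frames):
            # Attend over encoder outputs using the current decoder state.
            ctx, _ = self.attn(state.unsqueeze(1), memory, memory)
            # Condition each decoder step on the previous frame, the
            # attention context, and the speaker embedding.
            state = self.decoder_cell(
                torch.cat([frame, ctx.squeeze(1), spk_emb], dim=-1), state)
            frame = self.mel_out(state)
            mels.append(frame)
        return torch.stack(mels, dim=1)              # (B, n_frames, n_mels)

# Usage: clone a voice from a few seconds of reference audio.
spk_enc, syn = SpeakerEncoder(), Synthesizer()
ref = torch.randn(1, 120, 80)            # reference mel frames (untranscribed)
text = torch.randint(0, 100, (1, 30))    # phoneme/character ids
mel = syn(text, spk_enc(ref))            # mel spectrogram for the vocoder
```

Injecting the speaker embedding at every decoder step, rather than only at the encoder, is one common way to keep the target timbre stable over long utterances; the exact conditioning point in the actual model may differ.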