Text-to-Speech (TTS) is a popular application in the field of Natural Language Processing (NLP). One-dimensional convolution (Conv1D) is an essential operation in TTS models, accounting for a significant portion of their computation. Currently, the implementation of Dilated Conv1D in TensorFlow is inefficient due to redundant memory-move operations. This paper proposes a Zero-Split-Merge (ZSM) solution that improves the performance of Dilated Conv1D and reduces the overall compute time of TTS models. Experimental results show that the proposed ZSM solution achieves up to a 6.3x performance gain.
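As background, the following is a minimal pure-Python sketch of what a dilated Conv1D computes (single channel, "valid" padding, cross-correlation form); it illustrates the operation being optimized, not the paper's ZSM implementation or TensorFlow's internals:

```python
def dilated_conv1d(x, w, dilation=1):
    """Valid-mode dilated 1D convolution (cross-correlation form):
    y[t] = sum_k x[t + k*dilation] * w[k]."""
    k = len(w)
    # A kernel of size k with dilation d covers (k-1)*d + 1 input samples.
    span = (k - 1) * dilation + 1
    return [
        sum(x[t + j * dilation] * w[j] for j in range(k))
        for t in range(len(x) - span + 1)
    ]

# Example: kernel [1, 1] with dilation 2 sums inputs two steps apart.
print(dilated_conv1d([1, 2, 3, 4, 5], [1, 1], dilation=2))  # → [4, 6, 8]
```

With dilation 1 this reduces to an ordinary convolution; larger dilation widens the receptive field without adding weights, which is why Dilated Conv1D is common in TTS architectures.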