Identifying synergistic drug combinations is of paramount significance for addressing complex diseases while reducing the risks of toxicity and other adverse effects. Although a plethora of computational methods have been proposed in this domain, most of them rely only on physicochemical or biological features. Recently, Chemical Language Models (CLMs) have been shown to learn rich representations that are useful across diverse tasks, including molecular property prediction, de novo drug design, drug-target interaction prediction, and more. In this study, we propose CLMSyn, a continuous-prompt-aided CLM for synergistic drug combination prediction. Unlike existing works that employ CLMs for downstream tasks by updating all model parameters, we adopt prompt learning to fine-tune the CLM; that is, we train only a small-scale prompt while keeping the CLM fixed. Furthermore, we harness a multi-head attention mechanism to fuse the representations learned by the CLM with chemical descriptors and cell-line gene expression profiles. Comprehensive experiments on a benchmark dataset covering four distinct synergy types demonstrate the superior performance of CLMSyn over existing state-of-the-art methods. These empirical findings attest to the efficacy of CLMSyn as a potent tool for expediting the identification of novel combination therapies.