Sentiment classification, an important natural language processing task, assigns a given text to one of several predefined sentiment categories. The emergence of BERT has had a significant impact on classification performance, but BERT is designed mainly to learn the semantic information of text: its MLM pre-training task captures grammar only to a limited extent, leaving considerable room for improvement. This paper proposes a dual-channel DCNN & Embed-TreeLSTM model based on an improved BERT output to improve sentiment classification performance. First, the improved BERT output is obtained, together with a tree structure embedded in the vector space after dependency syntax analysis. Then, a convolutional network and a recurrent network are constructed on the pre-trained model's output and the serialized tree structure, respectively. Finally, the outputs of the two channels are fused by a weighted sum, so that the model can effectively learn not only the semantic information in the text but also the grammatical information in its hierarchical structure. Experiments on sentiment datasets with pronounced grammatical structure show an accuracy improvement of about 2% over mainstream fine-tuning methods, demonstrating that the model has good generalization ability.
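
The weighted fusion of the two channels can be sketched as follows; this is a minimal illustration of the idea only, and the function names, the fusion weight `alpha`, and the example logits are hypothetical placeholders, not the authors' implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_channels(cnn_logits, tree_logits, alpha=0.6):
    """Weighted sum of the DCNN channel and the Embed-TreeLSTM
    channel logits (alpha is an assumed, tunable fusion weight),
    followed by softmax to obtain class probabilities."""
    assert len(cnn_logits) == len(tree_logits)
    fused = [alpha * c + (1 - alpha) * t
             for c, t in zip(cnn_logits, tree_logits)]
    return softmax(fused)

# Illustrative 3-class example: each channel scores the same text.
probs = fuse_channels([2.0, 0.5, -1.0], [1.0, 1.5, -0.5])
print(probs)
```

In practice the fusion weight would be chosen on a validation set or learned jointly with the two channels; the sketch only shows the combination step itself.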