Most existing text classification methods focus on improving the feature representation of the text and use labels only in the final classification step, ignoring the semantic information carried by the labels themselves. Prior work applies an attention mechanism to compute the correlation between text and labels from text vectors and label feature vectors, but this attention is easily disturbed by noise from semantically similar labels and fails to focus on the feature words most relevant to each label. To address these issues, this paper proposes a graph attention network text classification model fused with label embeddings. The model extracts text feature words and labels to construct a new feature word-label graph, which is fed into a graph attention network for joint learning, reducing matching errors between labels and feature words. The graph attention network can better attend to the feature nodes most relevant to the classification task and thereby extract richer semantic information. Experimental results on five text classification datasets show that the proposed model achieves better classification performance than the baseline models.
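To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of one GAT-style attention layer over a toy feature word-label graph; the node names, edge set, and dimensions are illustrative assumptions, and the actual model's graph construction and scoring may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny graph: 3 feature-word nodes and 2 label nodes
# (names are illustrative, not from the paper).
nodes = ["goal", "election", "match", "SPORTS", "POLITICS"]
# Word-label edges plus self-loops, as adjacency lists.
edges = {0: [0, 3], 1: [1, 4], 2: [2, 3], 3: [0, 2, 3], 4: [1, 4]}

d_in, d_out = 8, 4
X = rng.normal(size=(len(nodes), d_in))  # node features (e.g. embeddings)
W = rng.normal(size=(d_in, d_out))       # shared linear transform
a = rng.normal(size=(2 * d_out,))        # attention scoring vector

def gat_layer(X, W, a, edges, slope=0.2):
    """One GAT-style aggregation: each node attends over its neighbors."""
    H = X @ W
    out = np.zeros_like(H)
    for i, nbrs in edges.items():
        # e_ij = LeakyReLU(a^T [h_i || h_j]) for each neighbor j
        e = np.array([np.concatenate([H[i], H[j]]) @ a for j in nbrs])
        e = np.where(e > 0, e, slope * e)                  # LeakyReLU
        w = np.exp(e - e.max()); w /= w.sum()              # softmax weights
        out[i] = sum(wj * H[j] for wj, j in zip(w, nbrs))  # weighted sum
    return out

H1 = gat_layer(X, W, a, edges)
print(H1.shape)  # updated node representations, one row per node
```

Because words and labels sit in the same graph, the softmax weights let a label node pull information preferentially from its most relevant feature words, which is the intuition behind joint learning on the word-label graph.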