Deep learning is extensively employed in attack traffic detection and exhibits outstanding performance. To enhance model effectiveness, security personnel often acquire additional traffic data from public sources for training. However, publicly sourced data are not always reliable, because their attack labels may be inaccurate. By deliberately modifying labels, attackers can insert backdoors into neural networks, enabling network attacks to go undetected and posing significant risks. In this paper, we introduce CTP (Cluster and Train with Pruning), an effective method to counter data poisoning attacks and bolster the defense capabilities of attack traffic detection models. CTP consists of two components: activation layer clustering and pruning training. First, activation layer clustering visually reveals poisoned data with manipulated labels, thwarting attackers' attempts to poison the model. Second, pruning training reduces the likelihood of model neurons being poisoned, further mitigating the risk posed by poisoned data. Together, these steps significantly strengthen deep learning models' resistance to data poisoning attacks. We conduct experiments on three popular datasets, and the results indicate that the proposed method effectively enhances the models' defense capabilities.
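The abstract does not spell out the mechanics of the two components, but both have compact, well-known realizations. The sketch below, in PyTorch with scikit-learn's KMeans, shows one plausible instantiation under stated assumptions: per-class 2-means clustering of penultimate-layer activations to flag samples whose labels look manipulated, and pruning of hidden neurons that stay dormant on clean inputs. The model architecture, function names (`TrafficNet`, `cluster_activations_per_class`, `prune_dormant_neurons`), and parameters such as `prune_ratio` are illustrative assumptions, not the authors' actual CTP implementation.

```python
# Minimal sketch of an activation-clustering + pruning defense.
# Assumptions: a toy fully connected classifier and random data stand in for
# the real traffic model and datasets used in the paper.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class TrafficNet(nn.Module):
    """Toy traffic classifier; the architecture is illustrative only."""

    def __init__(self, n_features=20, n_hidden=64, n_classes=2):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.out = nn.Linear(n_hidden, n_classes)

    def forward(self, x, return_activations=False):
        h = self.hidden(x)
        logits = self.out(h)
        return (logits, h) if return_activations else logits


def cluster_activations_per_class(model, x, y, n_classes=2):
    """Cluster penultimate-layer activations within each class label.

    If a class contains poisoned (relabeled) samples, its activations tend to
    split into two groups: clean samples and samples whose features resemble
    another class. The minority cluster of a 2-means split is flagged as suspect.
    """
    model.eval()
    with torch.no_grad():
        _, acts = model(x, return_activations=True)
    acts = acts.numpy()
    suspect = np.zeros(len(y), dtype=bool)
    for c in range(n_classes):
        idx = np.where(y.numpy() == c)[0]
        if len(idx) < 2:
            continue
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(acts[idx])
        # Treat the smaller cluster as potentially poisoned (a common heuristic).
        minority = 0 if (labels == 0).sum() < (labels == 1).sum() else 1
        suspect[idx[labels == minority]] = True
    return suspect


def prune_dormant_neurons(model, x_clean, prune_ratio=0.2):
    """Zero out hidden neurons with the lowest mean activation on clean data.

    Backdoor behaviour is often carried by neurons that stay dormant on clean
    inputs; pruning them removes backdoor capacity with little accuracy loss.
    """
    model.eval()
    with torch.no_grad():
        _, acts = model(x_clean, return_activations=True)
    mean_act = acts.mean(dim=0)
    n_prune = int(prune_ratio * mean_act.numel())
    prune_idx = torch.argsort(mean_act)[:n_prune]
    linear = model.hidden[0]
    with torch.no_grad():
        linear.weight[prune_idx] = 0.0
        linear.bias[prune_idx] = 0.0
    return prune_idx


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(200, 20)          # stand-in for traffic feature vectors
    y = torch.randint(0, 2, (200,))   # stand-in for (possibly manipulated) labels
    model = TrafficNet()
    suspect = cluster_activations_per_class(model, x, y)
    print(f"flagged {suspect.sum()} samples as potentially poisoned")
    pruned = prune_dormant_neurons(model, x[~torch.from_numpy(suspect)])
    print(f"pruned {len(pruned)} dormant neurons")
```

In a full pipeline of this kind, the clustering step would be applied to the candidate training set to filter suspect samples before (re)training, and the pruning step would follow training, using only data judged clean.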