Neural network models have transformed numerous fields, including the telecom industry's ability to predict customer churn. However, these models' vulnerability to adversarial attacks poses questions about their security and reliability. In this study, we test the robustness of a neural network model built to classify telecom churn data by measuring how well it performs under three different adversarial attacks: Projected Gradient Descent (PGD), the Boundary attack, and the Carlini-Wagner attack. We first create a convolutional neural network model designed specifically for the telecom churn dataset. The model is trained on a large dataset containing pertinent customer attributes, past usage trends, and churn labels, and is tuned with appropriate strategies to classify customer churn accurately. The trained neural network model then undergoes the three adversarial attacks. The PGD attack searches for perturbations that maximize the model's loss function within a given ε-bound. The Boundary attack walks along decision boundaries to produce adversarial examples that result in misclassification. The Carlini-Wagner attack uses an optimization-based methodology to produce the smallest possible model-fooling perturbations. In this research, we compare the accuracy of the neural network model before and after each adversarial attack to determine the attacks' impact. We then investigate a range of defense mechanisms designed specifically for telecom churn data classification in order to increase the model's resistance to these attacks. By implementing these strategies, we aim to mitigate the impact of adversarial perturbations and harden the model against adversarial manipulation.
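To make the PGD procedure concrete, the following is a minimal sketch, assuming a PyTorch setup. The `pgd_attack` function, the small MLP classifier, the 20-feature tabular shape, and the hyperparameters (`eps`, `alpha`, `steps`) are illustrative placeholders, not the paper's actual model or settings; the evaluation at the end mirrors the before/after accuracy comparison described above.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.1, alpha=0.01, steps=40):
    """PGD: take signed-gradient ascent steps on the loss, projecting the
    perturbation back into the L-infinity ball of radius eps each step."""
    loss_fn = nn.CrossEntropyLoss()
    # Random start inside the eps-ball (standard PGD initialization).
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()            # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project into the eps-ball
    return x_adv.detach()

# Toy before/after evaluation on a placeholder churn-style classifier
# (20 tabular features, binary churn label); data here is random noise.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(256, 20)              # placeholder feature vectors
y = torch.randint(0, 2, (256,))       # placeholder churn labels
x_adv = pgd_attack(model, x, y, eps=0.1)
clean_acc = (model(x).argmax(1) == y).float().mean()
adv_acc = (model(x_adv).argmax(1) == y).float().mean()
print(f"clean accuracy: {clean_acc:.3f}, adversarial accuracy: {adv_acc:.3f}")
```

The signed-gradient step followed by clamping is what enforces the ε-bound mentioned above: the perturbation can never exceed ε in any feature coordinate, so the adversarial example stays within the allowed distance of the original input.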