The process of identifying and categorizing lung cancer in its early stages is difficult, yet doing so improves patient survival rates. There is a wealth of research that segments and categorizes lung nodules using convolutional neural networks (CNNs). Numerous factors, including batch size, number of epochs, learning rate, optimizer, activation function, and weight initialization, affect a model's performance. To categorize CT scan images as benign, malignant, or normal, a deep CNN model is developed. Optimizers are crucial to the training process because they adjust the network's weights and learning rate in each epoch, which lowers the loss and raises classifier accuracy. Using ReLU and SELU as activation functions, this work offers a thorough analysis of the performance of a number of optimizers, including Nesterov-accelerated Adaptive Moment Estimation (Nadam), Adaptive Delta (Adadelta), Adamax (a variant of Adam based on the infinity norm), Stochastic Gradient Descent (SGD), Adaptive Gradient (Adagrad), Adaptive Moment Estimation (Adam), and Root Mean Square Propagation (RMSProp) [1]. With the Adam optimizer, the best training accuracy for the ReLU activation function is 92.67%, with a validation accuracy of 97.92%, and for the SELU activation function it is 95.42%, with a validation accuracy of 96.88%. The study aims to assess various optimization techniques in deep CNNs for lung cancer classification using CT scan images. It analyzes different optimizers to understand their impact on model convergence, accuracy, and generalization. This insight is crucial for selecting the most effective optimization methods in lung cancer diagnosis.
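To make the role of the optimizer concrete, the following minimal sketch contrasts the update rules of plain SGD and Adam on a toy one-dimensional quadratic loss. This is purely illustrative and is not the CNN or dataset used in this work; the function names, learning rates, and the quadratic objective are assumptions chosen for clarity.

```python
import numpy as np

def sgd_step(w, grad, lr=0.1):
    # Plain SGD: step opposite the gradient, scaled by the learning rate
    return w - lr * grad

def adam_step(w, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: keeps exponential moving averages of the gradient (m)
    # and its square (v), with bias correction for early steps
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, t)

# Minimize f(w) = w^2 (gradient 2w), starting from w = 5.0
w_sgd, w_adam, state = 5.0, 5.0, (0.0, 0.0, 0)
for _ in range(100):
    w_sgd = sgd_step(w_sgd, 2 * w_sgd)
    w_adam, state = adam_step(w_adam, 2 * w_adam, state)

print(w_sgd, w_adam)  # both should be close to the minimum at 0
```

The same structural difference carries over to training a deep CNN: SGD applies one global learning rate, while adaptive methods such as Adam, Adagrad, RMSProp, Nadam, and Adamax rescale each parameter's step from accumulated gradient statistics, which is why optimizer choice affects convergence speed and final accuracy.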