Ensuring the security of IoT systems is imperative, and robust botnet detection plays a pivotal role in achieving this goal. Deep learning-based approaches have been widely employed for botnet detection; however, their lack of interpretability and transparency can limit their effectiveness. In this research, we present a Deep Neural Network (DNN) model specifically designed for the detection of IoT botnet attack types. Our model performs strongly, achieving 99% accuracy, F1 score, recall, and precision. To gain deeper insights into the DNN model's behaviour, we employ seven different post hoc explanation techniques to provide local explanations. We evaluate the quality of the Explainable AI (XAI) methods using metrics such as faithfulness, monotonicity, complexity, and sensitivity. Our findings highlight the effectiveness of XAI techniques in enhancing the interpretability and transparency of the DNN model for IoT botnet detection. Specifically, our results indicate that DeepLIFT yields high faithfulness, high monotonicity, low complexity, and low sensitivity among all the explainers.
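The faithfulness metric mentioned above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the model, feature values, and baseline value are all hypothetical, and faithfulness is computed here as the Pearson correlation between a feature's attribution score and the drop in model output when that feature is replaced by the baseline.

```python
import numpy as np

def faithfulness(model_fn, x, attributions, baseline=0.0):
    """Faithfulness correlation sketch: replace each feature with the
    baseline, record the resulting drop in model output, and correlate
    those drops with the explainer's attribution scores. A faithful
    explanation assigns high attributions to features whose removal
    changes the prediction the most."""
    base_out = model_fn(x)
    drops = np.empty_like(x)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] = baseline          # occlude one feature at a time
        drops[i] = base_out - model_fn(x_pert)
    # Pearson correlation between attributions and observed output drops.
    return np.corrcoef(attributions, drops)[0, 1]

# Toy linear "detector" (hypothetical): for a linear model with a zero
# baseline, the exact per-feature attribution is w_i * x_i, so the
# faithfulness correlation should be 1.
w = np.array([0.5, -1.2, 2.0, 0.3])
model_fn = lambda x: float(w @ x)
x = np.array([1.0, 2.0, -1.0, 0.5])
attr = w * x
print(round(faithfulness(model_fn, x, attr), 6))  # → 1.0
```

A deliberately misranked attribution vector (e.g. shuffled `attr`) would score well below 1, which is how the metric separates reliable explainers from unreliable ones.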