Requirements classification is an important step in requirements analysis. Most recent studies adopt machine learning techniques to automate requirements classification; in particular, state-of-the-art models such as Bidirectional Encoder Representations from Transformers (BERT) have significantly improved classification accuracy. However, most of these models are black boxes, so the rationale behind their classifications is unclear, which hinders further improvement of the models. In this paper, we propose a technique for improving requirements classification models based on an explainable AI (XAI) framework. Specifically, our approach first trains a concern extraction model to identify requirements concerns. Then, the requirements classification model is analyzed with an explainability framework to generate explanations, which may shed light on noise in the dataset. Finally, we denoise the training dataset and fine-tune the model to improve its performance. We evaluate our technique using an existing requirements classification approach and two existing requirements datasets. The experimental results demonstrate that our approach significantly improves model performance: overall, accuracy improves by 7.68%, recall by 7.44%, and F1-score by 7.80%.
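The explanation-guided denoising step can be sketched as follows. This is a minimal, hypothetical heuristic, not the paper's actual criterion: it assumes each training sample carries token-level attribution scores (as produced by an XAI framework such as LIME or SHAP applied to the classifier) and a set of known concern keywords, both of which are illustrative assumptions.

```python
# Sketch of explanation-guided dataset denoising (hypothetical heuristic;
# the paper's actual denoising criteria and explainer are not shown here).
# Each sample carries token-level attribution scores, assumed to come from
# an explainability framework applied to the trained classifier.

def top_attributed_tokens(attributions, k=3):
    """Return the k tokens the explanation weights most heavily."""
    ranked = sorted(attributions.items(), key=lambda kv: kv[1], reverse=True)
    return [token for token, _ in ranked[:k]]

def is_noisy(sample, concern_keywords, k=3):
    """Flag a sample as noisy when the model disagrees with the annotated
    label AND the explanation rests on tokens unrelated to any known
    requirements concern."""
    if sample["predicted"] == sample["label"]:
        return False
    top = top_attributed_tokens(sample["attributions"], k)
    return not any(token in concern_keywords for token in top)

def denoise(dataset, concern_keywords):
    """Drop flagged samples; the remainder is used to fine-tune the model."""
    return [s for s in dataset if not is_noisy(s, concern_keywords)]

# Toy data: one clean sample and one mislabeled sample whose explanation
# attributes weight only to concern-irrelevant tokens.
concerns = {"encrypt", "latency", "login"}
data = [
    {"text": "the system shall encrypt data", "label": "security",
     "predicted": "security",
     "attributions": {"encrypt": 0.9, "data": 0.2, "shall": 0.05}},
    {"text": "see appendix for details", "label": "performance",
     "predicted": "none",
     "attributions": {"appendix": 0.7, "see": 0.3, "details": 0.2}},
]
clean = denoise(data, concerns)  # keeps only the first sample
```

After denoising, `clean` would serve as the fine-tuning set; in practice the attribution scores and the noise criterion would come from the chosen explainability framework rather than a keyword list.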