Deep Learning algorithms are considered “black-box” algorithms because it is not straightforward to analyse how they arrive at their final result. This greatly limits their adoption in several domains, especially in fields such as medicine, where errors can harm patients. To overcome this limitation, explainable AI techniques have been developed that reveal which features of the input were relevant to the system's decision. Most authors do not pay enough attention to explainable AI techniques, producing very basic and uninformative representations. For this reason, we analyse different heatmap-based eXplainable AI techniques for different medical problems related to chest X-ray classification, covering both binary and multilabel classification. In our methodology, we divide the techniques into two groups to address explainability in Artificial Intelligence applied to medicine, and we show five representative examples of different visualisation techniques.
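
To make the heatmap idea concrete, the sketch below shows one representative technique of this family, Grad-CAM, implemented with forward/backward hooks. It assumes a PyTorch/torchvision setup; the ResNet-18 model, the hooked layer, and the random input are illustrative placeholders, not the exact pipeline or techniques evaluated in the paper.

```python
# Minimal Grad-CAM sketch (illustrative; model, layer, and input are assumptions).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    # Cache the feature maps of the hooked layer on the forward pass.
    activations["feat"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    # Cache the gradients of the class score w.r.t. those feature maps.
    gradients["feat"] = grad_output[0].detach()

# Hooking the last convolutional block; the layer choice is an assumption.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed chest X-ray
scores = model(x)
class_idx = scores.argmax(dim=1).item()
scores[0, class_idx].backward()  # backprop the score of the predicted class

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum over channels, keep positive evidence, and upsample to input size.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
```

The resulting `cam` tensor can be overlaid on the input image as a heatmap, highlighting the regions that contributed most to the predicted class.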