Artificial intelligence has increasingly pervaded both everyday life and business environments, assuming a role in supporting human decision-making. These systems have grown progressively intricate and effective, carrying the potential to unearth valuable insights across various applications. Nevertheless, for Artificial Intelligence to gain widespread acceptance, human trust in its outcomes is essential. Trustworthy Artificial Intelligence is the term used to describe Artificial Intelligence that is lawful, ethically compliant, and technically sound. In this paper, we propose a workflow that introduces trustworthiness into Artificial Intelligence models. Starting from a set of models contained in a model repository, the proposed workflow computes a trustworthy index derived from prediction explainability in both the training and testing phases. This index is then used to select the most reliable model from the trustworthiness point of view.
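The selection step described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes each model in the repository is summarized by a pair of explanation vectors (e.g., feature attributions) from the training and testing phases, and it uses the cosine similarity between the two phases as a stand-in trustworthy index; the names `trustworthy_index` and `select_most_trustworthy` are hypothetical.

```python
import math

def trustworthy_index(train_expl, test_expl):
    # Hypothetical index: cosine similarity between the training- and
    # testing-phase explanation vectors. A value near 1 means the model
    # explains its predictions consistently across both phases, which
    # this sketch reads as a proxy for trustworthiness.
    dot = sum(a * b for a, b in zip(train_expl, test_expl))
    norm = (math.sqrt(sum(a * a for a in train_expl))
            * math.sqrt(sum(b * b for b in test_expl)))
    return dot / norm if norm else 0.0

def select_most_trustworthy(repository):
    # repository: {model_name: (train_explanations, test_explanations)}
    scores = {name: trustworthy_index(tr, te)
              for name, (tr, te) in repository.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Toy repository with made-up explanation vectors.
repo = {
    "model_a": ([0.5, 0.3, 0.2], [0.48, 0.31, 0.21]),  # stable explanations
    "model_b": ([0.5, 0.3, 0.2], [0.10, 0.70, 0.20]),  # explanations drift
}
best, scores = select_most_trustworthy(repo)
print(best)  # -> model_a
```

Here the model whose explanations remain stable between training and testing receives the higher index and is selected; any real instantiation would plug in an actual explainability method (e.g., feature-attribution scores) in place of the toy vectors.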