As the demand for interpretable machine learning approaches grows, so does the need for human involvement in providing diverse explanations for model decisions. Such explanations are crucial for enhancing trust and transparency in AI-based systems and have given rise to the field of Explainable Artificial Intelligence (XAI). In this paper, we design a novel counterfactual explanation model, CELS, which learns a saliency map for the instance of interest and generates a counterfactual explanation guided by that learned saliency map. CELS adopts a gradient-based approach composed of three interdependent modules that together produce sparse counterfactual explanations that are easily understood by end users. To the best of our knowledge, this is the first attempt to guide the perturbation underlying a counterfactual explanation via a learned saliency map. To validate our model, we conducted experiments on five popular real-world time-series datasets from the UCR repository. The experimental results demonstrate that our model achieves better sparsity, proximity, and interpretability of counterfactual explanations than other state-of-the-art baselines.
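To make the idea of saliency-guided perturbation concrete, the sketch below illustrates one plausible gradient-based formulation; it is not the authors' exact method. It assumes a trained PyTorch classifier `model`, a query series `x`, a reference series `nun` from the target class (e.g., a nearest unlike neighbour), and a hypothetical helper `learn_saliency_counterfactual` that jointly optimizes a per-timestep saliency map and uses it to blend `x` toward `nun` until the prediction flips, with an L1 term encouraging sparsity.

```python
import torch
import torch.nn.functional as F

def learn_saliency_counterfactual(model, x, nun, target_class,
                                  steps=200, lr=0.05, l1_weight=0.1):
    """Illustrative sketch (assumed formulation, not the paper's exact one):
    learn a saliency map so that replacing the query `x` with the reference
    `nun` only where saliency is high flips the classifier's prediction to
    `target_class`, while keeping the perturbation sparse."""
    theta = torch.zeros_like(x, requires_grad=True)       # pre-sigmoid saliency parameters
    optimizer = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        saliency = torch.sigmoid(theta)                    # per-timestep weights in (0, 1)
        cf = (1 - saliency) * x + saliency * nun           # saliency-guided blend
        logits = model(cf.unsqueeze(0))                    # (1, n_classes)
        cls_loss = F.cross_entropy(logits, torch.tensor([target_class]))
        sparsity = saliency.abs().mean()                   # penalize changing many timesteps
        loss = cls_loss + l1_weight * sparsity
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        saliency = torch.sigmoid(theta)
        return (1 - saliency) * x + saliency * nun, saliency
```

In such a scheme, the returned saliency map doubles as the explanation of where the series had to change, while the blended series serves as the counterfactual example evaluated for sparsity and proximity.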