Explainable Artificial Intelligence (XAI) methods have gained considerable momentum recently, owing to their ability to shed light on the decision functions of opaque machine learning models. Two paradigms dominate XAI: feature attribution and counterfactual explanation methods. While the first family explains why a model made a decision, counterfactual methods answer a what-if question: how would the decision change if the input were slightly different? For the time series modality, most research efforts have focused on answering the why question. In this paper, we address the what-if question by seeking a good balance among a set of desirable counterfactual explanation properties. We propose Shapelet-guided Counterfactual Explanation (SG-CF), a novel optimization-based model that generates interpretable, intuitive post-hoc counterfactual explanations for time series classification models while balancing validity, proximity, sparsity, and contiguity. Our experimental results on nine real-world time series datasets show that the proposed method generates counterfactual explanations that balance all the desirable counterfactual properties in comparison with competing baselines.