Recent years have witnessed the increasing deployment of recommender systems as decision-support tools, and their applications are now ubiquitous in daily life. However, as the algorithms underlying these systems become more inscrutable, many recommender systems remain black boxes to most users, making it difficult for people to fully trust them. Improving the interaction between users and recommender systems has therefore attracted increasing attention, and explainable recommendation has emerged as one of the central topics in this area. In this paper, we investigate the possibility of providing context-aware explanations for recommendations and discuss potential methods for evaluating such explanations.