Sequential anomaly detection has been studied for decades because of its wide spectrum of applications, and it has improved significantly in recent years through deep learning techniques. As an increasing number of anomaly detection models are applied to high-stakes tasks involving human beings, it is critical to understand why samples are labeled as anomalies. In this work, we propose a Globally and Locally Explainable Anomaly Detection (GLEAD) framework targeting sequential data. In particular, because anomalies are usually diverse, we use multi-head self-attention to derive representations for both sequences and prototypes, capturing a variety of abnormal patterns. For the local explanation, the attention mechanism highlights the abnormal entries in an abnormal sequence with high attention weights. For the global explanation, we derive prototypes of anomalies that encode the common patterns of abnormal sequences. Experimental results on two sequential anomaly detection datasets show that our approach detects abnormal sequences and provides both local and global explanations.
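The two explanation mechanisms described above can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a single attention head with random weights in place of the learned model, and the `prototypes`, `nearest`, and `entry_scores` names are hypothetical stand-ins for the learned anomaly prototypes (global explanation) and per-entry attention mass (local explanation).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """One attention head: returns contextualized entries and the (T, T)
    attention-weight matrix used for the local explanation."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))
    return A @ V, A

# Toy sequence of T entries with d-dimensional features; all projection
# weights are random here, whereas the real model would learn them.
T, d = 6, 8
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

H, A = self_attention(X, Wq, Wk, Wv)
z = H.mean(axis=0)  # pooled sequence representation

# Hypothetical learned anomaly prototypes: the nearest prototype gives a
# global explanation (which common abnormal pattern the sequence matches),
# while the attention mass on each entry gives a local explanation
# (which entries in the sequence look abnormal).
prototypes = rng.normal(size=(3, d))
dists = np.linalg.norm(prototypes - z, axis=1)
nearest = int(dists.argmin())    # global: matched abnormal pattern
entry_scores = A.mean(axis=0)    # local: per-entry attention mass
```

In this sketch, entries with the largest `entry_scores` would be highlighted as the abnormal positions, and `prototypes[nearest]` would be decoded back to a representative abnormal pattern.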