Despite the great success of deep neural networks (DNNs) in computer vision, they are vulnerable to adversarial attacks. Given a well-trained DNN and an image $x$, a malicious and imperceptible perturbation $\varepsilon$ can easily be crafted and added to $x$ to generate an adversarial example $x^{\prime}$, so that the DNN's output for $x^{\prime}$ differs from its output for $x$. To shed light on how to defend DNNs against such adversarial attacks, in this paper we apply statistical methods to model and analyze the adversarial perturbations $\varepsilon$ crafted by the FGSM, PGD, and CW attacks. It is shown statistically that (1) the adversarial perturbations $\varepsilon$ crafted by the FGSM, PGD, and CW attacks can all be modelled in the Discrete Cosine Transform (DCT) domain by the Transparent Composite Model (TCM) based on the generalized Gaussian distribution (GGTCM); (2) the CW attack puts more perturbation energy in the background of an image than in the object of the image, whereas there is no such distinction for the FGSM and PGD attacks; and (3) the energy of the adversarial perturbation is more concentrated on the DC components in the case of the CW attack than in the case of the FGSM and PGD attacks.
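To make the setting concrete, the following is a minimal sketch (not the paper's code) of how an FGSM perturbation $\varepsilon = \epsilon \cdot \mathrm{sign}(\nabla_x L)$ can be crafted for a PyTorch classifier and how its energy can then be inspected in the DCT domain; the model, the $\epsilon$ value, and the helper names are illustrative assumptions.

```python
# Illustrative sketch: craft an FGSM perturbation for an assumed PyTorch
# classifier and examine how its energy spreads over DCT coefficients.
import torch
import torch.nn.functional as F
from scipy.fft import dctn  # 2-D type-II DCT


def fgsm_perturbation(model, x, y, eps):
    """Return the FGSM perturbation eps * sign(grad_x L(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (eps * x.grad.sign()).detach()


def dct_energy(perturbation):
    """Per-coefficient energy of the perturbation in the 2-D DCT domain."""
    p = perturbation.squeeze().cpu().numpy()        # (C, H, W) or (H, W)
    coeffs = dctn(p, axes=(-2, -1), norm="ortho")   # full-frame 2-D DCT
    return coeffs ** 2


# Usage (illustrative): `model`, `x`, and `y` are assumed to exist.
# delta = fgsm_perturbation(model, x, y, eps=8 / 255)
# energy = dct_energy(delta)
# dc_fraction = energy[..., 0, 0].sum() / energy.sum()  # share of energy on DC
```

The same kind of DCT-domain energy breakdown can be computed for perturbations produced by PGD or CW, which is the quantity compared across attacks in findings (2) and (3) above.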