Neural networks have been shown to be vulnerable to carefully crafted adversarial examples. Recently, new adversarial attacks, including dispersion reduction (DR), have been proposed and shown to be transferable across different computer vision tasks, which means that a single attack can evade an ensemble of task-specific defense and detection mechanisms at once. Unlike previous attack methods, the DR attack minimizes the dispersion of an internal feature map, yielding state-of-the-art results. In this paper, we propose an algorithm to detect adversarial examples generated by different adversarial attacks, including dispersion reduction, projected gradient descent (PGD), the diverse inputs method (DIM), and the momentum iterative fast gradient sign method (MI-FGSM). Our approach employs 1D Gabor filter responses and detects, with high accuracy, adversarial examples generated from different surrogate neural network models and datasets.
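
To make the DR objective concrete, the sketch below shows one plausible implementation: iteratively perturbing the input so that the standard deviation (a common measure of dispersion) of an intermediate feature map shrinks, while projecting onto an L-infinity ball. The surrogate backbone (VGG-16), layer index, and step sizes are illustrative assumptions, not the attack's published configuration.

```python
import torch
from torchvision import models

# Illustrative sketch of a dispersion-reduction-style attack.
# Assumptions: "dispersion" is measured as the standard deviation of an
# intermediate feature map; the surrogate model, layer index, and
# hyperparameters below are placeholders, not the DR paper's values.
surrogate = models.vgg16(weights="IMAGENET1K_V1").eval()
features = surrogate.features[:16]  # assumed intermediate layer

def dr_attack(x, eps=16 / 255, alpha=2 / 255, steps=10):
    """Reduce feature-map dispersion of x under an L-inf budget eps.

    Assumes x is in [0, 1]; ImageNet normalization is omitted for brevity.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        dispersion = features(x_adv).std()       # the attack objective
        grad = torch.autograd.grad(dispersion, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # descend on dispersion
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to L-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)              # keep a valid image
    return x_adv.detach()
```

On the detection side, the following hypothetical sketch shows one way 1D Gabor filter responses could be extracted from an image: convolving each row of a grayscale image with a small bank of 1D Gabor kernels and summarizing the response statistics. The kernel parameters, the row-wise application, and the choice of statistics are assumptions for illustration; the paper's actual feature pipeline and downstream classifier are not reproduced here.

```python
import numpy as np

def gabor_1d(size=21, sigma=3.0, freq=0.25, phase=0.0):
    """A 1D Gabor kernel: a Gaussian-windowed sinusoid."""
    t = np.arange(size) - size // 2
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * t + phase)

def gabor_responses(gray, freqs=(0.1, 0.25, 0.4)):
    """Convolve each image row with a bank of 1D Gabor filters and return
    simple response statistics (mean and std per filter).

    A hypothetical feature extractor, not the paper's exact method.
    """
    stats = []
    for f in freqs:
        k = gabor_1d(freq=f)
        resp = np.apply_along_axis(
            lambda row: np.convolve(row, k, mode="same"), 1, gray
        )
        stats.extend([resp.mean(), resp.std()])
    return np.array(stats)  # features for a downstream binary detector
```

In a detector built this way, such statistics would be computed for both clean and adversarial images and used to train a binary classifier that flags adversarial inputs.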