Recent studies have shown that deep learning models for tasks such as image classification and object detection are vulnerable to adversarial examples. These malicious examples pose security threats and can cause damage in safety-critical applications. In this paper, in order to identify model robustness weaknesses, we compute an adversarial robustness metric against several classes of adversarial attacks, including gradient-based attacks and optimization-based attacks. The metric quantifies the adversarial robustness of deep learning models and can serve as a guide for additional model training. We also design a testing platform for evaluating model adversarial robustness. Built with a front-end/back-end separation strategy, the platform consists of two modules: an executor that runs multiple adversarial example attacks, and a human-computer interaction (HCI) module. In addition, the tool allows parallel comparison of different adversarial attack algorithms under given conditions and measures the quality of generated adversarial examples using the Fréchet Inception Distance (FID) metric. Finally, we develop ERFGSM (edge-RFGSM), a new gradient-based attack that exploits edge information extracted with the Canny operator.
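To make the ERFGSM idea concrete, the sketch below combines a standard RFGSM step (random perturbation followed by a gradient-sign step) with a Canny edge mask. This is only a minimal illustration: the abstract does not specify how the edge map is combined with the gradient, so the masking scheme, the `erfgsm` function name, and the Canny thresholds here are all assumptions rather than the paper's actual method.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def erfgsm(model, x, y, eps=8 / 255, alpha=4 / 255):
    """Hypothetical edge-aware RFGSM step.

    RFGSM (Tramer et al., 2018): add a small random step of size alpha,
    then a gradient-sign step of size (eps - alpha). Here the gradient
    step is additionally weighted by a Canny edge mask -- one plausible
    reading of "using edge information through the Canny operator".
    Assumes x is in [0, 1] with shape [B, C, H, W].
    """
    # Build a per-image binary edge mask with the Canny operator.
    masks = []
    for img in x:
        gray = (img.mean(0).cpu().numpy() * 255).astype(np.uint8)
        edges = cv2.Canny(gray, 100, 200)  # thresholds are illustrative
        masks.append(torch.from_numpy(edges / 255.0).float())
    mask = torch.stack(masks).unsqueeze(1).to(x.device)  # [B, 1, H, W]

    # RFGSM part 1: random sign step.
    x_rand = x + alpha * torch.randn_like(x).sign()
    x_rand = x_rand.clamp(0, 1).detach().requires_grad_(True)

    # RFGSM part 2: gradient-sign step on the randomly perturbed input.
    loss = F.cross_entropy(model(x_rand), y)
    grad = torch.autograd.grad(loss, x_rand)[0]

    # Assumed edge weighting: concentrate perturbation on edge pixels.
    x_adv = x_rand + (eps - alpha) * mask * grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Restricting the perturbation to edge pixels is one natural design choice, since high-frequency edge regions tend to hide perturbations better than smooth regions; the paper's actual formulation may differ.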