Deep neural network classifiers can be deceived into misclassification by adversarial examples crafted by attackers. Although adversarial attacks achieve high success rates in the white-box setting, their transferability to unknown models is typically low, and enhancing transferability often requires larger perturbations, which weakens the concealment of the adversarial examples. In this paper, we propose a new multi-level smoothing filter network that is embedded in the iterations of adversarial attack algorithms to enhance the continuity between adjacent pixels of the adversarial perturbation. Experimental results show that with our multi-level smoothing filter network, the adversarial examples generated by the attack algorithms are more likely to deceive the target neural networks, and both transferable attack performance and perturbation-magnitude compression are improved.
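The abstract does not specify the filter network's architecture, so as a rough illustration of the underlying idea only — smoothing the perturbation inside each iteration of an attack so adjacent pixels vary continuously — here is a minimal NumPy sketch. It substitutes a plain box filter for the proposed multi-level network, and all function names and parameters (`box_smooth`, `smoothed_ifgsm`, `alpha`, `k`) are hypothetical, not taken from the paper:

```python
import numpy as np

def box_smooth(delta, k=3):
    """Smooth a 2-D perturbation with a k x k box filter (edge padding).
    Stands in for the paper's multi-level smoothing filter network."""
    pad = k // 2
    padded = np.pad(delta, pad, mode="edge")
    out = np.zeros_like(delta)
    h, w = delta.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def smoothed_ifgsm(x, grad_fn, eps=0.1, alpha=0.02, steps=10, k=3):
    """Iterative FGSM-style attack with a smoothing step each iteration,
    so the accumulated perturbation stays locally continuous."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x + delta)              # gradient of the loss w.r.t. input
        delta = delta + alpha * np.sign(g)  # standard I-FGSM update
        delta = box_smooth(delta, k)        # smooth the perturbation
        delta = np.clip(delta, -eps, eps)   # respect the L_inf budget
    return x + delta
```

In this sketch the smoothing is applied to the accumulated perturbation after every gradient step, so the final perturbation both satisfies the L_inf bound and has reduced pixel-to-pixel variation compared with raw sign updates.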