Deep neural networks (DNNs) are applied in various fields because of their strong performance. However, they are known to be vulnerable to adversarial examples. Consequently, studies actively investigate both adversarial defenses, to improve the robustness of DNNs, and adversarial attacks, to generate stronger adversarial perturbations. Most existing attack methods modify images excessively, either by generating overly large distortions or by adding perturbations to regions that have little impact on the DNN. As a result, the local smoothness of the image is not maintained, and the changes can be easily detected by a steganalysis-based detector. In this study, we propose a contour attack method to evade steganalysis-based detection. The method extracts the contour region of an image and adds perturbations only to that region, thereby maintaining local smoothness. This approach addresses the problems mentioned above and effectively evades steganalysis-based detectors. Our experimental results demonstrate that local smoothness is better preserved than with the original attack methods and that the detection evasion rate against a steganalysis detector is up to 19.9% higher. Furthermore, structural similarity (SSIM) measurements and image comparisons demonstrate the improved imperceptibility of the perturbations. In particular, the SSIM values of images subjected to the contour attack are close to 1.0.
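The masking idea described above can be illustrated with a minimal sketch. It assumes OpenCV's Canny edge detector as the contour extractor and a simple dilation to widen the edge band into a contour region; the extractor, thresholds, and band width here are illustrative assumptions, not necessarily the paper's exact procedure:

```python
import numpy as np
import cv2


def contour_masked_perturbation(image, perturbation, low=100, high=200, band_px=3):
    """Restrict an adversarial perturbation to the contour region of an image.

    image: HxWx3 uint8 array (BGR); perturbation: HxWx3 float array.
    Returns the adversarial image and the binary contour mask.
    """
    # Assumption: Canny edges stand in for the paper's contour extraction.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)  # binary edge map, values 0 or 255

    # Dilate the thin edge map into a contour band (band width is an assumption).
    kernel = np.ones((band_px, band_px), np.uint8)
    mask = cv2.dilate(edges, kernel) > 0

    # Zero out the perturbation outside the contour region, so smooth
    # (low-variance) areas of the image are left unmodified.
    masked = perturbation * mask[..., None]
    adv = np.clip(image.astype(np.float32) + masked, 0, 255).astype(np.uint8)
    return adv, mask
```

Because the perturbation is suppressed in smooth regions, where steganalysis features are most sensitive to small changes, the resulting adversarial image retains the local smoothness of the original.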