In the domain of 3D point cloud classification, deep learning based classifiers have made significant progress, yet they have also been proven vulnerable to adversarial attacks. Some recent works apply attack methods devised for image classification, such as projected gradient descent (PGD), to 3D classifiers, but their performance appears quite limited when faced with statistical operations such as point cloud denoising and point cloud upsampling. In this paper, we propose ‘SmoothAttack’, a new attack that crafts adversarial point clouds robust to statistical operations. SmoothAttack can be easily applied under both a global constraint and a pointwise constraint. In addition, we analyze the directions of the perturbations applied to the point cloud during the iterative process, showing that SmoothAttack stabilizes the perturbation direction and makes full use of the adversarial budget. Experiments validate that ‘SmoothAttack’ raises the attack success rate against statistical defenses up to 98% for untargeted attacks and 91% for targeted attacks on the ModelNet40 database when fooling the PointNet and DGCNN classifiers.
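For readers unfamiliar with the PGD baseline mentioned above, the following is a minimal, hypothetical sketch of an untargeted L-infinity PGD attack adapted to point clouds. It uses a toy mean-pooling linear classifier (weight matrix `W`) purely for illustration; it is not the paper's SmoothAttack, and a real attack would backpropagate through PointNet or DGCNN instead.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd_attack(points, label, W, eps=0.05, alpha=0.01, steps=40):
    """Untargeted L_inf PGD on a toy point-cloud classifier.

    The toy classifier mean-pools the (N, 3) point cloud and applies a
    linear layer W of shape (C, 3). Real attacks would differentiate
    through a deep network such as PointNet or DGCNN.
    """
    x0 = points.copy()
    x = x0.copy()
    for _ in range(steps):
        pooled = x.mean(axis=0)               # (3,) global feature
        logits = W @ pooled                   # (C,) class scores
        p = softmax(logits)
        # Gradient of cross-entropy w.r.t. logits: p - one_hot(label).
        g_logits = p.copy()
        g_logits[label] -= 1.0
        # Chain rule back to every point; mean pooling spreads the
        # gradient uniformly over the N points.
        g_points = np.tile((W.T @ g_logits) / x.shape[0], (x.shape[0], 1))
        x = x + alpha * np.sign(g_points)     # ascend the loss
        x = x0 + np.clip(x - x0, -eps, eps)   # project to the L_inf ball
    return x
```

Because each step perturbs every point independently by the gradient sign, the resulting clouds tend to look like noisy versions of the original, which is exactly what statistical defenses such as denoising exploit.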