Since deep neural networks (DNNs) are widely used across various domains, their robustness has become a widespread concern. Most existing research focuses on adversarial attacks in the 2D domain, while related studies in 3D scenes remain limited. In this paper, we study the adversarial robustness of 3D face reconstruction, which is widely used in real-world applications. Since 3D face reconstruction mainly relies on predicting 3D Morphable Model (3DMM) parameters, a simple way to implement adversarial attacks is to manually set weights for the different parameters when generating adversarial perturbations, which we denote the Parameter-Oriented Attack (POA). This method is highly practical but inconvenient and unreliable: because various 3D face reconstruction networks differ in structure and training strategy, POA generalizes poorly across models. We therefore propose an Adaptive Parameter-Oriented Attack (APOA), in which the parameter weights are driven by the losses themselves, so the optimal weighting is searched for automatically in both image-specific and universal attacks. Extensive experiments on four popular 3D face reconstruction models demonstrate the effectiveness of our method and the vulnerability of 3D face reconstruction to adversarial attacks.