With the rise of intelligent manufacturing, prognostics and health management (PHM) has developed rapidly as one of its important components. Existing deep learning-based PHM methods are data-dependent. However, sensor data are often noisy, redundant, and high-dimensional, making it difficult for PHM methods to learn a stable set of model parameters, so these methods are prone to errors under perturbation. Factories, in contrast, expect PHM methods to be robust enough to adapt to various perturbations, so the robustness of existing methods must be evaluated in advance to ease deployment. Although existing theoretical robustness analysis methods for neural networks can obtain tight robustness bounds, they consume substantial computing resources and are difficult to scale to large neural networks. To solve this problem, we design a benchmark for the robustness analysis of large deep learning PHM models, in which we test model robustness using a variety of perturbations that simulate the actual production environment of a factory. Specifically, Gaussian noise tests the robustness of a model to background noise, and random masking tests its robustness to data loss. We hope that our robustness benchmark can serve as a reference for designing PHM models and help improve the robustness of factory PHM models.
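The two perturbations named above can be sketched as simple transformations of a sensor signal. This is a minimal illustration, not the benchmark's actual implementation; the function names, the noise level `sigma`, and the `mask_ratio` parameter are assumptions chosen for the example.

```python
import numpy as np

def add_gaussian_noise(x, sigma=0.1, rng=None):
    """Simulate background noise: add zero-mean Gaussian noise with std sigma."""
    rng = np.random.default_rng() if rng is None else rng
    return x + rng.normal(0.0, sigma, size=x.shape)

def random_mask(x, mask_ratio=0.2, rng=None):
    """Simulate data loss: zero out a random fraction of the samples."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(x.shape) >= mask_ratio  # keep ~(1 - mask_ratio) of points
    return x * keep

# Perturb a synthetic vibration-like signal with both transformations.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 10 * np.pi, 1000))
noisy = add_gaussian_noise(signal, sigma=0.1, rng=rng)
masked = random_mask(signal, mask_ratio=0.2, rng=rng)
```

A robustness evaluation would then compare a model's predictions on `signal` against its predictions on `noisy` and `masked` to measure how much performance degrades under each perturbation.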