Neural networks are known to be vulnerable to adversarial examples, inputs obtained by adding small, imperceptible perturbations to valid inputs and crafted to be misclassified. Local robustness verification can prove that a neural network is robust with respect to any perturbation of a specific input within a certain distance; this distance is called the robustness radius. We conducted an empirical study of the robustness radii of different inputs and made two observations. First, the robustness radii of correctly classified inputs are much larger than those of naturally misclassified inputs (i.e., inputs misclassified due to model inaccuracy) and of adversarial examples, especially examples produced by strong adversarial attacks. Second, the robustness radii of correctly classified inputs often follow a normal distribution. Based on these two observations, we propose to leverage local robustness verification techniques to validate inputs to neural networks. Experiments show that our approach can protect neural networks from adversarial examples and improve their accuracy.
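To make the validation idea concrete, the sketch below shows one way such an input filter could be assembled under the stated observations. The verifier interface `robustness_radius`, the calibration set, and the quantile-based rejection threshold are illustrative assumptions introduced here for exposition, not the paper's exact procedure; any local robustness verification tool could be plugged in behind the stub.

```python
import numpy as np
from scipy.stats import norm

def robustness_radius(model, x):
    """Hypothetical verifier interface: return the certified robustness
    radius of input x under model (plug in a local robustness verifier)."""
    raise NotImplementedError("integrate a local robustness verification tool here")

def calibrate_threshold(model, calibration_inputs, quantile=0.01):
    """Fit a normal distribution to the radii of correctly classified
    calibration inputs and return a low quantile as the rejection threshold."""
    radii = np.array([robustness_radius(model, x) for x in calibration_inputs])
    mu, sigma = norm.fit(radii)           # radii of correct inputs assumed ~ Normal(mu, sigma)
    return norm.ppf(quantile, mu, sigma)  # inputs with smaller radii are treated as suspicious

def validate_input(model, x, threshold):
    """Accept an input only if its certified radius exceeds the threshold;
    rejected inputs are likely adversarial or naturally misclassified."""
    return robustness_radius(model, x) >= threshold
```

In this sketch, an incoming input is accepted only if its certified radius is at least the calibrated threshold, which operationalizes the observation that misclassified and adversarial inputs tend to have markedly smaller robustness radii.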