The problem of generating adversarial examples for text has been studied extensively; however, the detection of adversarial examples remains largely underexplored. This paper studies the use of the Local Outlier Factor (LOF) to detect and filter adversarial examples from training data. Our experiments demonstrate that removing the examples flagged by LOF restores the performance of LSTM, CNN, and transformer-based classifiers on common sentence classification tasks. Our proposed technique outperforms DISP and FGWS, two state-of-the-art techniques for detecting adversarial examples.
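To make the filtering step concrete, the following is a minimal sketch of LOF-based filtering, not the paper's actual implementation. It uses scikit-learn's LocalOutlierFactor over precomputed sentence embeddings; the neighborhood size and the assumption that examples are represented as embedding vectors are illustrative choices, not details drawn from the paper.

```python
# Minimal sketch (assumed details, not the paper's implementation):
# drop training examples that LOF flags as outliers in embedding space.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def filter_training_data(embeddings: np.ndarray, texts: list[str]) -> list[str]:
    """Keep only the examples that LOF labels as inliers.

    embeddings: (n_samples, dim) sentence embeddings (assumed input).
    texts: the corresponding training sentences.
    """
    lof = LocalOutlierFactor(n_neighbors=20)  # neighborhood size is illustrative
    labels = lof.fit_predict(embeddings)      # -1 = outlier, 1 = inlier
    return [t for t, label in zip(texts, labels) if label == 1]
```

The retained examples would then be used to retrain the downstream classifier; the hyperparameters above are placeholders, not values reported in the paper.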