Feature selection is an important data pre-processing step in classification tasks, whose main purpose is to remove negligible or redundant features, reducing the computational cost while improving the performance of the subsequent machine learning method. However, most feature selection methods work well only on complete data and perform poorly in the presence of missing information, especially at a high missing rate, probably due to the severe irrelevance and redundancy among features. To address this issue, we propose a novel feature selection method, named non-negative latent factor incorporated duplicate maximal information coefficient (NLF-DMIC), which improves the effectiveness of feature selection for the classification of incomplete data. The NLF-DMIC method proceeds in three steps: (1) coarsely select category-friendly features with MIC under a “partial sample strategy”; (2) impute the missing values of the filtered features with the NLF model; and (3) select features again on the completed dataset with an improved maximal information coefficient (i.e., low-redundancy MIC, LMIC) method. Finally, experiments on a synthetic dataset and eight real datasets show that the proposed NLF-DMIC method outperforms several state-of-the-art feature selection methods.
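The three-step pipeline can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: a histogram-based mutual information score stands in for MIC, column-mean imputation stands in for the NLF model, the final LMIC re-selection step is omitted, and all function names are hypothetical.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Mutual information between a continuous feature x and labels y,
    estimated by histogram discretization (a simple stand-in for MIC)."""
    edges = np.histogram_bin_edges(x, bins=bins)[1:-1]  # interior edges
    xd = np.digitize(x, edges)                          # bin index 0..bins-1
    classes = {c: i for i, c in enumerate(np.unique(y))}
    joint = np.zeros((bins, len(classes)))
    for xi, yi in zip(xd, y):
        joint[xi, classes[yi]] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def rank_features_partial(X, y, keep=0.5):
    """Step 1 ('partial sample strategy'): score each feature using only
    the samples where that feature is observed, then keep the top fraction."""
    y = np.asarray(y)
    scores = []
    for j in range(X.shape[1]):
        obs = ~np.isnan(X[:, j])
        scores.append(mutual_info(X[obs, j], y[obs]))
    order = np.argsort(scores)[::-1]
    return order[: max(1, int(keep * X.shape[1]))]

def impute_mean(X):
    """Step 2 (imputation): fill missing entries; the paper uses an NLF
    model here, while this sketch uses column means for brevity."""
    X = X.copy()
    col_mean = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_mean[cols]
    return X
```

For example, on data where feature 0 is shifted by the class label and 20% of entries are missing, `rank_features_partial` places feature 0 first and `impute_mean` returns a complete matrix ready for the final selection step.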