To address the curse of dimensionality in high-dimensional data, this paper proposes a new feature selection algorithm that combines kernel functions with sparse learning. First, each feature of the dataset is mapped into a high-dimensional kernel space by a kernel function, and linear feature selection is performed in that kernel space, which amounts to nonlinear feature selection in the original low-dimensional space. Second, the features mapped into the kernel space are sparsely reconstructed to obtain a sparse representation of the original dataset; at the same time, an l1-norm-based feature-scoring mechanism is constructed to select the optimal feature subset. Finally, the selected subset is used in classification experiments. Experimental results on public datasets show that the proposed algorithm performs feature selection more effectively and improves classification accuracy by about 4%.
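The per-feature kernel mapping followed by l1-based scoring can be sketched as follows. This is a minimal illustration, not the paper's implementation: the RBF kernel, the Lasso solver, the regularization strength `alpha=0.01`, the synthetic dataset, and the top-k selection rule are all assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel

# Synthetic data standing in for a public dataset (an assumption).
X, y = make_classification(n_samples=100, n_features=10,
                           n_informative=3, random_state=0)

scores = []
for j in range(X.shape[1]):
    # Map the single feature j into a kernel space via an RBF kernel
    # (kernel choice is an assumption); K has shape (n_samples, n_samples).
    K = rbf_kernel(X[:, [j]])
    # Fit an l1-penalized linear model in the kernel space; the total
    # magnitude of the sparse coefficients serves as the feature's score.
    model = Lasso(alpha=0.01, max_iter=10000).fit(K, y)
    scores.append(np.abs(model.coef_).sum())

# Keep the k features with the highest scores as the selected subset.
k = 3
selected = np.argsort(scores)[::-1][:k]
print(sorted(selected.tolist()))
```

A classifier would then be trained on `X[:, selected]` to evaluate the selected subset.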