In real-world applications, data sets are often imbalanced: samples from the majority class significantly outnumber those from the minority class. Learning from such a data set is likely to produce a biased classifier with high predictive accuracy only on the majority class. In this paper, a new sampling strategy based on SMOTE is proposed to balance the data set, and a feature selection method based on Relief is then presented and used to identify the key features of the given imbalanced learning problem. Finally, the random forest algorithm is used as the classifier. The method is evaluated on six benchmark data sets and two practical application data sets. Compared with SMOTE and the original Relief algorithm, the proposed method is more applicable and effective on imbalanced data.
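For readers unfamiliar with the two building blocks, the following is a minimal sketch of the *standard* SMOTE oversampling and Relief feature-weighting steps (not the modified variants proposed in this paper), implemented from scratch with NumPy on a hypothetical toy data set:

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_oversample(X_min, n_new, k=5, rng=rng):
    """Standard SMOTE: create synthetic minority samples by interpolating
    between each chosen minority sample and one of its k nearest
    minority-class neighbours."""
    n = len(X_min)
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                  # exclude self-distance
    nn = np.argsort(d, axis=1)[:, :k]            # k nearest minority neighbours
    synth = []
    for _ in range(n_new):
        i = rng.integers(n)
        j = nn[i, rng.integers(k)]
        gap = rng.random()                       # interpolation factor in [0, 1)
        synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synth)

def relief_weights(X, y, n_iter=100, rng=rng):
    """Standard binary Relief: reward features that separate an instance
    from its nearest miss (other class) and penalise features that
    separate it from its nearest hit (same class)."""
    span = X.max(0) - X.min(0) + 1e-12           # per-feature range for normalisation
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                            # never pick the instance itself
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span / n_iter
    return w

# Toy imbalanced data (hypothetical): 100 majority vs 10 minority samples, 4 features.
X_maj = rng.normal(0, 1, (100, 4))
X_min = rng.normal(2, 1, (10, 4))
X = np.vstack([X_maj, X_min])
y = np.array([0] * 100 + [1] * 10)

X_new = smote_oversample(X_min, n_new=90)        # synthesise 90 minority samples
Xb = np.vstack([X, X_new])                       # balanced: 100 vs 100
yb = np.concatenate([y, np.ones(90, dtype=int)])

w = relief_weights(Xb, yb)                       # one weight per feature
print(Xb.shape, np.bincount(yb), w.shape)
```

A random forest (e.g. `sklearn.ensemble.RandomForestClassifier`) would then be trained on the features ranked highest by the Relief weights, mirroring the three-stage pipeline described above.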