In recent years, many researchers have explored electroencephalogram-based (EEG-based) emotion recognition in a variety of ways, but few studies have investigated facial emotion recognition for hearing-impaired people compared with normal-hearing people. To compare and analyze the differences in facial emotion recognition between hearing-impaired and normal subjects, we established an emotional EEG dataset based on facial affective picture stimulation, covering five emotions (happiness, neutral, sadness, fear, and anger) collected from 15 hearing-impaired and 15 normal subjects. The collected EEG signals were filtered and artifacts were removed during preprocessing. Then, differential entropy (DE), power spectral density (PSD), and wavelet entropy (WE) features were extracted, and a linear support vector machine (SVM-linear), selected as the optimal classifier, was used for emotion classification with ten-fold cross-validation. The results show that the DE feature achieved the best emotion recognition accuracy for both hearing-impaired subjects (40.8%) and normal subjects (45.5%).
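The pipeline described above (band-limited DE features followed by a linear SVM evaluated with ten-fold cross-validation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, frequency-band boundaries, filter order, and the synthetic random data are all assumptions made here for demonstration. DE is computed under the usual Gaussian assumption, DE = 0.5 ln(2πeσ²), applied to each band-pass-filtered channel.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

FS = 250  # sampling rate in Hz (assumed, not specified in the abstract)
# Conventional EEG band boundaries (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(signal, fs, low, high):
    """Band-pass filter the signal, then compute DE assuming Gaussianity:
    DE = 0.5 * ln(2 * pi * e * sigma^2)."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    band = filtfilt(b, a, signal)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(band))

# Synthetic stand-in for preprocessed EEG trials: (trials, channels, samples)
rng = np.random.default_rng(0)
n_trials, n_channels = 100, 4
eeg = rng.standard_normal((n_trials, n_channels, FS * 2))
labels = rng.integers(0, 5, n_trials)  # 5 emotion classes

# Feature matrix: one DE value per channel per frequency band
X = np.array([[differential_entropy(trial[ch], FS, lo, hi)
               for ch in range(n_channels)
               for lo, hi in BANDS.values()]
              for trial in eeg])

# Linear SVM with ten-fold cross-validation, as in the abstract
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, labels, cv=10)
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```

With random labels the accuracy hovers near chance (20% for five classes); on real emotional EEG data, the DE features would carry the discriminative information reported in the results.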