An EEG-based Facial Emotion Recognition System (FERS) is a critical challenge for machine-level applications that must comprehend precise emotional changes in any human subject. Artefact-laden features fail to capture changes in expression, and current technologies make this a difficult task; as a result, facial feature extraction and separation are of growing importance to researchers. Earlier machine learning work employed deep learning techniques (GA, DET, ADA Boosting, CONN, and FUCNN), but automatic feature extraction and weight balancing were not possible with them. In this paper, the GoogleNet-7 deep learning algorithm (RDL) is proposed for accurate face recognition and emotion estimation under a variety of scenarios, including poor lighting, sunglasses, long hair or other items partially obscuring the face, and low-resolution facial photographs. Pre-processing procedures such as face identification, rotation, and data capture constrain feature extraction for FERS. GoogleNet-7 processes facial images using 165 convolutional layers, and testing, training, and scaling have all improved as a result of its development. The GoogleNet-7 design, implemented in Python 3.7.8, provides automatic monitoring of hidden facial traits. The proposed methodology uses a softmax mechanism to compute classification accuracy and distinguishes facial expressions (sadness, pleasure, fear, surprise, disgust) with greater accuracy. The model achieves 99.12 percent accuracy, 99.51 percent recall, an F1 score of 0.94, and 98.11 percent sensitivity, outperforming the compared techniques and competing with current CONN models.
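The softmax mechanism mentioned above converts the network's final-layer scores into a probability distribution over the five emotion classes, and the highest-probability class becomes the prediction. The sketch below is only illustrative (the logit values and function names are hypothetical, not taken from the paper's GoogleNet-7 implementation):

```python
import numpy as np

# The five expression classes named in the abstract.
EMOTIONS = ["sadness", "pleasure", "fear", "surprise", "disgust"]

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical final-layer scores for one face image (illustrative values).
logits = np.array([1.2, 3.5, 0.3, 0.8, -1.0])
probs = softmax(logits)               # probabilities summing to 1
predicted = EMOTIONS[int(np.argmax(probs))]
```

The predicted label is simply the class with the largest probability; a classifier's accuracy is then the fraction of test images whose predicted label matches the ground-truth label.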