Emotions play a vital role in a person's life and affect their physical well-being. Measuring the emotions of customers can therefore reveal how they perceive a place. People use facial expressions to convey their emotional states, and facial expression recognition remains an exciting and challenging research area in computer vision. In this study, seven emotional states, namely anger, disgust, fear, happiness, sadness, surprise, and neutral, are classified from facial expression images. This paper presents a novel hybrid CNN-LSVM approach for facial emotion recognition, evaluated on two datasets, CK+ and FER-2013. During CNN experimentation, texture features are extracted from the images, and these features are then classified with a linear support vector machine (LSVM). During CNN training, hyperparameters such as batch size, number of epochs, and momentum are tuned. The proposed CNN-LSVM approach achieves an average classification accuracy of 92.3% across the seven emotions.
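The pipeline described above, convolutional feature extraction followed by linear-SVM classification, can be sketched in miniature. The code below is an illustrative assumption, not the paper's implementation: it uses two hand-crafted edge filters as a stand-in for learned CNN kernels, global average pooling to produce "texture" features, and a linear SVM fit by subgradient descent on the hinge loss, demonstrated on a synthetic two-class toy problem rather than CK+ or FER-2013.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hand-crafted 3x3 edge filters standing in for learned CNN kernels
# (an assumption for illustration; the paper's CNN learns its own filters).
F_VERT = np.array([[1.0, 0.0, -1.0]] * 3)   # responds to vertical edges
F_HORZ = F_VERT.T                           # responds to horizontal edges
FILTERS = np.stack([F_VERT, F_HORZ])

def conv_features(img, filters=FILTERS):
    """Valid 2-D correlation with each filter, ReLU, then global average
    pooling -- a minimal stand-in for CNN texture-feature extraction."""
    fh, fw = filters.shape[1:]
    H, W = img.shape
    feats = []
    for f in filters:
        resp = np.empty((H - fh + 1, W - fw + 1))
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(img[i:i + fh, j:j + fw] * f)
        feats.append(np.maximum(resp, 0.0).mean())  # ReLU + avg pooling
    return np.array(feats)

def train_linear_svm(X, y, lr=0.1, lam=0.01, epochs=100):
    """Linear SVM fit by subgradient descent on the L2-regularised hinge
    loss; labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) < 1.0:     # margin violated: hinge active
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                           # only the regulariser pulls w
                w -= lr * lam * w
    return w, b

# Toy "images": class +1 has a vertical intensity edge, class -1 a
# horizontal one, so the two filter responses separate the classes.
def make_image(label):
    img = np.zeros((8, 8))
    if label > 0:
        img[:, :4] = 1.0
    else:
        img[:4, :] = 1.0
    return img + rng.normal(0.0, 0.05, img.shape)

labels = np.array([1, -1] * 20)
X = np.array([conv_features(make_image(l)) for l in labels])
w, b = train_linear_svm(X, labels)
preds = np.sign(X @ w + b)
acc = (preds == labels).mean()
```

In the real system the feature extractor would be a trained CNN (with the batch size, epoch, and momentum hyperparameters mentioned above) and the LSVM would be a multiclass classifier over the seven emotion categories; the separation of feature learning from the final linear decision boundary is the same.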