Deep Learning Based Motor Imagery Intention Classification Using Electroencephalogram Signal
- Resource Type
- Conference
- Authors
- Dip, Muhammad Sudipto Siam; Hasan, Md Anik; Kabir, Sumaiya; Motin, Mohammod Abdul
- Source
- 2023 IEEE 9th International Women in Engineering (WIE) Conference on Electrical and Computer Engineering (WIECON-ECE), pp. 143-147, Nov. 2023
- Subject
- Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Training
Deep learning
Neural activity
Artificial neural networks
Motors
Feature extraction
Brain modeling
Brain computer interface
motor imagery intention classification
sequential forward feature selection
spatial distribution
support vector machine
- Language
- English
- ISSN
- 2837-8245
- Abstract
- Recently, various machine learning and deep learning techniques have been applied to motor imagery intention classification to keep pace with rapid advances in human-machine interaction. Nevertheless, feature selection for identifying the most informative and discriminative characteristics of neural activity across different spatial locations remains poorly understood. This study proposes a deep neural network that learns to classify motor imagery intention from selected electroencephalogram (EEG) features across different channels. We extracted three sets of features from each channel and applied sequential forward feature selection with a support vector machine to choose the best features. The deep neural network was trained and tested with the selected features and evaluated on a publicly available dataset. Our model achieved an average accuracy of 85.85 ± 1.35% in classifying two motor imagery scenarios. These results demonstrate that the proposed method effectively identifies the most informative and discriminative characteristics of neural activity at different spatial locations to differentiate motor imagery intentions, and can contribute to future prosthetics and brain-computer interface (BCI) technology.
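The sequential forward feature selection step described in the abstract can be sketched as follows. This is a minimal, self-contained illustration on synthetic data: a nearest-centroid scorer stands in for the paper's SVM wrapper so the example needs only NumPy, and the data, dimensions, and function names are assumptions for demonstration, not the authors' code.

```python
# Sketch of sequential forward feature selection (SFS): greedily add the
# feature that most improves validation accuracy. The paper wraps an SVM;
# a nearest-centroid classifier stands in here to keep the sketch
# dependency-free. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class "EEG feature" matrix: 200 trials x 6 channel features.
# Only features 0 and 3 carry class-discriminative information.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[:, 0] += 1.5 * y
X[:, 3] -= 1.2 * y

def score(X, y, feats):
    """Hold-out accuracy of a nearest-centroid classifier on the chosen features."""
    tr = np.arange(n) % 2 == 0          # even trials -> train split
    te = ~tr                            # odd trials  -> test split
    Xtr, Xte = X[tr][:, feats], X[te][:, feats]
    c0 = Xtr[y[tr] == 0].mean(axis=0)   # class centroids on the train split
    c1 = Xtr[y[tr] == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == y[te]).mean()

def sfs(X, y, k):
    """Greedy forward selection: grow the feature set one best feature at a time."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda f: score(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

chosen = sfs(X, y, k=2)
print(sorted(chosen))  # the informative features (0 and 3) should be picked
```

In a real pipeline the scorer would be cross-validated SVM accuracy (e.g. scikit-learn's `SequentialFeatureSelector` with an `SVC` estimator), and the selected features would then feed the deep neural network classifier.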