Multi-selection Attention for Multimodal Aspect-level Sentiment Classification
- Resource Type
- Conference
- Authors
- Miao, Yuqing; Luo, Ronghai; Liu, Tonglai; Zhang, Wanzhen; Cai, Guoyong; Zhou, Ming
- Source
- 2022 IEEE 13th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP), pp. 1-6, Nov. 2022
- Subject
- Computing and Processing
- Deep learning
- Image recognition
- Text recognition
- Target recognition
- Bit error rate
- Programming
- Parallel architectures
- multimodal sentiment classification
- aspect-level sentiment classification
- multi-selection attention mechanism
- residual connection
- BERT
- deep learning
- Language
Multimodal aspect-level sentiment classification aims to use images to recognize the sentiment polarity of target aspects in text. To address the low utilization of inter-modal complementary information and the vanishing-gradient problem, a multimodal aspect-level sentiment classification model based on a multi-selection attention mechanism is proposed. The multi-selection attention mechanism explicitly models the contribution of each modality to an aspect and exploits both the shared and the private features of the image modality to enhance the sentiment expression of target aspects. On this basis, inspired by the residual connections in ResNet and the encoder-decoder structure of U-Net, a simple and effective residual encoder-decoder is proposed to mine deep information while avoiding vanishing gradients. Experimental results on two public sentiment datasets show that the proposed model makes better use of images to supplement the textual modality and improves sentiment classification accuracy.
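The abstract names two components: a multi-selection attention that weighs each modality's contribution to the aspect, and a residual encoder-decoder whose skip connection lets gradients bypass the bottleneck. The paper's actual layer definitions are not given here, so the following is only a minimal NumPy sketch under assumed shapes; all function names, weight matrices, and dimensions are hypothetical illustrations of the two ideas, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hypothetical hidden size

def residual_encoder_decoder(x, W_enc, W_dec):
    """Encode to a bottleneck, decode back, and add the input back
    (residual/skip connection), so gradients have a direct path."""
    h = np.tanh(x @ W_enc)   # encoder: project down
    y = np.tanh(h @ W_dec)   # decoder: project back up
    return x + y             # residual connection

def multi_selection_attention(aspect, text_feats, image_feats):
    """Toy multi-selection attention: the aspect attends over each
    modality separately, then a softmax over modality relevance
    scores decides how much each modality contributes."""
    def attend(q, K):
        scores = K @ q / np.sqrt(len(q))
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ K  # attention-weighted summary of K
    t = attend(aspect, text_feats)
    v = attend(aspect, image_feats)
    m = np.array([aspect @ t, aspect @ v])  # per-modality relevance
    a = np.exp(m - m.max())
    a /= a.sum()                            # softmax over modalities
    return a[0] * t + a[1] * v

x = rng.normal(size=d)
W_enc = rng.normal(size=(d, d // 2)) * 0.1
W_dec = rng.normal(size=(d // 2, d)) * 0.1
fused = multi_selection_attention(
    residual_encoder_decoder(x, W_enc, W_dec),
    rng.normal(size=(5, d)),  # 5 text token features (illustrative)
    rng.normal(size=(3, d)),  # 3 image region features (illustrative)
)
print(fused.shape)
```

With zero encoder/decoder weights the residual block reduces to the identity, which is the property that keeps gradients from vanishing through the skip path.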