As a key modality for affective brain-computer interfaces, electroencephalography (EEG) signals have enabled significant progress in emotion recognition thanks to their high temporal resolution and reliability. However, EEG signals vary widely across individuals and are temporally non-stationary, so a trained model often fails to maintain good classification accuracy on new individuals or new sessions at inference time. Although domain adaptation has been employed to address these issues, most approaches treat different subjects or sessions as a single source domain and thus ignore the large discrepancies among source domains, while methods that model multiple source domains must construct a separate domain adaptation branch for each one. Here, we propose a novel emotion recognition method, multi-source attention-based dynamic residual transfer (MS-ADRT). We introduce a dynamic feature extractor whose parameters are modulated by an attention module and thus vary with each sample; by adapting to the individual sample, the model performs multi-source domain adaptation implicitly, reducing the multi-source problem to single-source domain adaptation. Maximum mean discrepancy (MMD) and adversarial training based on maximum classifier discrepancy (MCD) are further used to narrow the distance between the source and target domains and to encourage the feature extractor to mine domain-invariant, emotion-discriminative features. We compared our algorithm with representative methods on the SEED and SEED-IV datasets, and the experiments verify that our method outperforms other state-of-the-art approaches. The proposed method thus provides a more effective transfer-learning pathway for EEG-based emotion recognition in multi-source scenarios.
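To make the MMD alignment term concrete, the sketch below computes a biased estimate of the squared maximum mean discrepancy between two feature sets with an RBF kernel. This is a generic illustration of the MMD statistic, not the paper's implementation: the function name `rbf_mmd2`, the kernel bandwidth `gamma`, and the synthetic "source"/"target" features are all assumptions made for the example.

```python
import math
import random

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y,
    using the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    Note: gamma and the estimator form are illustrative choices."""
    def kmean(A, B):
        # Average kernel value over all pairs (a, b) in A x B.
        s = 0.0
        for a in A:
            for b in B:
                d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
                s += math.exp(-gamma * d2)
        return s / (len(A) * len(B))
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return kmean(X, X) + kmean(Y, Y) - 2.0 * kmean(X, Y)

# Synthetic stand-ins for extracted EEG features (hypothetical data).
rng = random.Random(0)
src      = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(60)]
tgt_near = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(60)]  # similar distribution
tgt_far  = [[rng.gauss(2.0, 1.0) for _ in range(4)] for _ in range(60)]  # shifted distribution
```

In a domain-adaptation loss, a term like `rbf_mmd2(src_features, tgt_features)` is minimized so that the feature extractor maps source and target domains to similar distributions; the shifted set `tgt_far` yields a larger discrepancy than the matched set `tgt_near`.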