Multimodal sentiment analysis is an important area of artificial intelligence. It integrates multiple modalities such as text, audio, video, and images into a compact multimodal representation and extracts sentiment information from them. In this paper, we improve two modules, i.e., feature extraction and feature fusion, to enhance multimodal sentiment analysis, and propose an attention-based two-layer bidirectional GRU (AB-GRU, gated recurrent unit) multimodal sentiment analysis method. For the feature extraction module, we use a two-layer bidirectional GRU network with two attention layers to strengthen the extraction of important information. The feature fusion part uses low-rank multimodal fusion, which reduces the dimensionality of the multimodal data and improves both computational speed and accuracy. The experimental results demonstrate that the AB-GRU model achieves 80.9% accuracy on the CMU-MOSI dataset, exceeding models of the same type by at least 2.5%. The AB-GRU model also exhibits strong generalization capability and solid robustness.
DOI: http://dx.doi.org/10.3934/mbe.2023822
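The architecture described in the abstract lends itself to a short sketch. Below is a minimal, illustrative PyTorch rendering of the two ideas named there: a two-layer bidirectional GRU with attention pooling per modality, and low-rank multimodal fusion (LMF) over the pooled vectors. Hidden sizes, the fusion rank, and all class and variable names are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch only; dimensions and rank are assumed, not from the paper.
import torch
import torch.nn as nn

class AttentiveBiGRU(nn.Module):
    """Two-layer bidirectional GRU with additive attention pooling."""
    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hid_dim, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hid_dim, 1)

    def forward(self, x):                        # x: (batch, seq, in_dim)
        h, _ = self.gru(x)                       # h: (batch, seq, 2*hid_dim)
        w = torch.softmax(self.attn(h), dim=1)   # attention over time steps
        return (w * h).sum(dim=1)                # pooled: (batch, 2*hid_dim)

class LowRankFusion(nn.Module):
    """Low-rank multimodal fusion (Liu et al., 2018): approximates the
    full outer product of modality vectors with rank-R factors."""
    def __init__(self, dims, out_dim, rank=4):
        super().__init__()
        self.factors = nn.ParameterList(
            nn.Parameter(0.1 * torch.randn(rank, d + 1, out_dim)) for d in dims)
        self.fusion_weights = nn.Parameter(torch.randn(1, rank))
        self.bias = nn.Parameter(torch.zeros(1, out_dim))

    def forward(self, feats):                    # list of (batch, dim) tensors
        fused = None
        for f, factor in zip(feats, self.factors):
            ones = torch.ones(f.size(0), 1, device=f.device)
            fa = torch.cat([f, ones], dim=1) @ factor      # (rank, batch, out)
            fused = fa if fused is None else fused * fa    # elementwise product
        fused = fused.permute(1, 0, 2)                     # (batch, rank, out)
        return (self.fusion_weights @ fused).squeeze(1) + self.bias
```

A forward pass would encode each modality with its own AttentiveBiGRU and pass the pooled vectors to LowRankFusion; the rank trades expressiveness against parameter count, which is where LMF gets its dimensionality and speed benefits.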
Eur J Pain
October 2025
Postgraduate Program in Rehabilitation Sciences, Nove de Julho University, São Paulo, Brazil.
Background: Chronic nonspecific neck pain (CNSNP) is a prevalent and complex condition. Although many studies have evaluated the effectiveness of transcutaneous electrical nerve stimulation (TENS), interferential current (IFC), therapeutic exercise (TE), and manual therapy (MT) individually, few have tested these interventions in combination. This study aimed to determine whether adding IFC and/or TENS to a Multimodal Therapeutic Intervention Program (MTIP) would produce better outcomes than the MTIP alone for functional capacity, pain intensity, pain catastrophising, kinesiophobia, and overall perceived effect in individuals with CNSNP.
Methods: Seventy-five individuals with CNSNP were randomly assigned to one of three groups: MTIP, MTIP + IFC, or MTIP + TENS.
IEEE J Biomed Health Inform
September 2025
As the impact of chronic mental disorders grows, multimodal sentiment analysis (MSA) has emerged as a way to improve diagnosis and treatment. In this paper, our approach leverages disentangled representation learning to address modality heterogeneity, with self-supervised learning as guidance. The self-supervised component generates pseudo unimodal labels that guide modality-specific representation learning, preventing the model from acquiring meaningless features.
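As a rough illustration of the pseudo-label idea described above, the sketch below shows one common way such labels are generated in the literature (in the style of Self-MM-type methods): the multimodal ground-truth label is shifted by how much closer a unimodal representation sits to the positive versus the negative class centre. The update rule, network sizes, and names here are assumptions, not this paper's exact formulation.

```python
# Simplified Self-MM-style heuristic; an assumption, not this paper's method.
import torch
import torch.nn as nn

class UnimodalBranch(nn.Module):
    """Modality-specific encoder plus a unimodal sentiment head."""
    def __init__(self, in_dim, hid_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, x):
        z = self.encoder(x)                   # modality-specific representation
        return z, self.head(z).squeeze(-1)    # representation, unimodal score

def pseudo_unimodal_labels(z, y_multi, pos_center, neg_center, eps=1e-8):
    """Shift the multimodal label toward the class whose centre the
    unimodal representation z lies closer to."""
    d_pos = (z - pos_center).norm(dim=1)
    d_neg = (z - neg_center).norm(dim=1)
    shift = (d_neg - d_pos) / (d_pos + d_neg + eps)
    return (y_multi + shift).detach()         # targets carry no gradient
```

Training each unimodal head against its pseudo labels, while the fused head trains against the true label, is what pushes each modality-specific encoder toward features that are meaningful on their own.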
Entropy (Basel)
August 2025
School of Mathematical Sciences, Capital Normal University, Beijing 100048, China.
Multimodal sentiment analysis (MSA) benefits from integrating diverse modalities (e.g., text, video, and audio).
Cureus
July 2025
Anesthesiology and Perioperative Medicine, Medical College of Georgia, Augusta University, Augusta, USA.
Natural language processing (NLP) has become an essential tool in healthcare, enabling sentiment analysis to extract insights from patient reviews, clinician notes, and medical research. This study evaluates the effectiveness of three NLP models - Bidirectional Encoder Representations from Transformers (BERT), Valence Aware Dictionary and sEntiment Reasoner (VADER), and Flair - in analyzing patient sentiment from physician reviews. A total of 1,486 reviews of 30 pain management specialists in Atlanta, GA, were collected from Healthgrades, with sentiment scores derived from each model and compared to patient-provided numerical ratings.
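All three models named in this abstract have off-the-shelf Python implementations, so the comparison can be sketched in a few lines. This is an illustrative setup, not the study's code: the review text is a placeholder, and the BERT pipeline loads the transformers library's default sentiment checkpoint (a DistilBERT fine-tuned on SST-2) rather than whatever model the authors used.

```python
# Illustrative comparison; the review text is a placeholder, not study data.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from transformers import pipeline
from flair.data import Sentence
from flair.models import TextClassifier

reviews = ["Dr. X listened carefully and explained every option."]  # placeholder

vader = SentimentIntensityAnalyzer()
bert = pipeline("sentiment-analysis")           # default DistilBERT SST-2 model
flair_clf = TextClassifier.load("en-sentiment")

for text in reviews:
    v = vader.polarity_scores(text)["compound"]   # lexicon score in [-1, 1]
    b = bert(text)[0]                              # {'label': ..., 'score': ...}
    s = Sentence(text)
    flair_clf.predict(s)
    f = s.labels[0]                                # POSITIVE/NEGATIVE + confidence
    print(f"VADER={v:+.3f}  BERT={b['label']}:{b['score']:.3f}  Flair={f}")
```

Each model's output would then be mapped to a common scale and correlated with the patient-provided numerical rating, which is the comparison the study reports.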
Sci Rep
August 2025
Beijing Institute of Graphic Communication, Beijing, 102600, China.
In existing multimodal sentiment analysis methods, only the output of BERT's last layer is typically used for feature extraction, neglecting the rich information in its intermediate layers. This paper proposes an Aspect-level Multimodal Sentiment Analysis Model with Multi-scale Feature Extraction (AMSAM-MFE). The model conducts sentiment analysis on both text and images.
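For context, the standard way to reach those intermediate layers with the Hugging Face transformers library is shown below. The specific layers pooled (4, 8, and 12) and the mean-pooling are illustrative assumptions, not the AMSAM-MFE design.

```python
# Generic multi-scale BERT feature extraction; layer choices are assumed.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased",
                                  output_hidden_states=True)

inputs = tok("the food was great but the service was slow",
             return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.hidden_states is a tuple of 13 tensors: embeddings + 12 encoder layers.
layers = [out.hidden_states[i] for i in (4, 8, 12)]
multi_scale = torch.cat([h.mean(dim=1) for h in layers], dim=-1)  # (1, 3*768)
print(multi_scale.shape)
```

Pooling several layers like this is what gives a "multi-scale" text feature: lower layers tend to carry more surface and syntactic information, higher layers more semantic information.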