The released CMRxRecon2024 dataset is currently the largest and most protocol-diverse publicly available cardiac k-space dataset, comprising multimodality and multiview cardiac MRI data from 330 healthy volunteers, each acquired under standardized, commonly used clinical protocols.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950877
DOI: http://dx.doi.org/10.1148/ryai.240443
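To make the k-space nature of such a dataset concrete, the sketch below reconstructs a magnitude image from fully sampled multi-coil Cartesian k-space with a zero-filled inverse FFT and root-sum-of-squares coil combination. The array shape, centering convention, and coil combination are illustrative assumptions, not the CMRxRecon2024 loading code or an official baseline.

```python
# Minimal zero-filled reconstruction sketch for multi-coil Cartesian k-space.
# Array layout and coil combination are illustrative assumptions only.
import numpy as np

def zero_filled_recon(kspace: np.ndarray) -> np.ndarray:
    """kspace: complex array of shape (coils, ky, kx); returns a magnitude image."""
    # Inverse 2D FFT per coil (k-space assumed centered, hence the shifts).
    coil_images = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1),
    )
    # Root-sum-of-squares combination across coils.
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

if __name__ == "__main__":
    dummy = np.random.randn(10, 256, 256) + 1j * np.random.randn(10, 256, 256)
    image = zero_filled_recon(dummy)
    print(image.shape)  # (256, 256)
```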
Sci Rep
August 2025
College of Physical Education, Baicheng Normal University, Baicheng, Jilin, 137000, China.
This paper presents a novel system for optimizing Tai Chi movement training using computer vision and deep learning technologies. We developed a comprehensive framework incorporating multi-view pose estimation, temporal feature extraction, and real-time movement assessment to address the challenges of traditional Tai Chi instruction. The system employs spatial-temporal graph convolutional networks enhanced with attention mechanisms for accurate movement evaluation, combined with personalized feedback generation through augmented reality and multi-modal interfaces.
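The sketch below illustrates the kind of spatial-temporal graph convolution block with joint attention that the abstract describes, applied to skeleton sequences obtained from pose estimation. The joint count, adjacency parameterization, and layer sizes are illustrative assumptions, not the paper's architecture.

```python
# Toy spatial-temporal graph convolution block with joint attention,
# in the spirit of ST-GCN-style skeleton models. Sizes are assumptions.
import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, num_joints: int):
        super().__init__()
        # Learnable adjacency over joints for the spatial graph step.
        self.adj = nn.Parameter(torch.eye(num_joints))
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        # Temporal convolution over the frame axis.
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        # Scores joints so attention can re-weight them before pooling.
        self.joint_attn = nn.Linear(out_ch, 1)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, joints)
        adj = torch.softmax(self.adj, dim=-1)
        x = torch.einsum("nctv,vw->nctw", x, adj)   # aggregate neighboring joints
        x = self.relu(self.spatial(x))
        x = self.relu(self.temporal(x))
        pooled = x.mean(dim=2).permute(0, 2, 1)      # (batch, joints, channels)
        weights = torch.softmax(self.joint_attn(pooled), dim=1)  # attention over joints
        return (pooled * weights).sum(dim=1)         # (batch, channels)

# Example: 3D joint coordinates for 25 joints over 64 frames, batch of 2.
block = STGCNBlock(in_ch=3, out_ch=64, num_joints=25)
print(block(torch.randn(2, 3, 64, 25)).shape)  # torch.Size([2, 64])
```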
Med Biol Eng Comput
August 2025
School of Electrical Engineering, Shenyang University of Technology, Shenyang, 110870, Liaoning, China.
Electroencephalography (EEG)-based emotion recognition has garnered significant interest in brain-computer interface (BCI) research. Nevertheless, building an effective emotion-recognition model requires extracting features from EEG data across multiple views. To tackle the problems of multi-feature interaction and domain adaptation, we propose a novel network, IF-MMCL, which leverages multi-modal data in a multi-view representation and integrates an individual-focused network.
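As one concrete reading of "multi-view" EEG features, the sketch below computes per-band log power with Welch's method, yielding one feature view per frequency band. The band boundaries, sampling rate, and choice of views are illustrative assumptions, not IF-MMCL's actual feature pipeline.

```python
# Sketch: complementary "views" of raw EEG as per-band log power features.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power_views(eeg: np.ndarray, fs: float = 128.0) -> dict:
    """eeg: (channels, samples). Returns one feature vector per band (view)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(256, eeg.shape[-1]), axis=-1)
    views = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        # Log band power per channel is a common, compact EEG feature.
        views[name] = np.log(psd[:, mask].mean(axis=-1) + 1e-12)
    return views

if __name__ == "__main__":
    signal = np.random.randn(32, 1280)             # 32 channels, 10 s at 128 Hz
    feats = band_power_views(signal)
    print({k: v.shape for k, v in feats.items()})  # each view: (32,)
```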
PLoS One
August 2025
Research Center for Biomedical Engineering, Medical Innovation and Research Division, Chinese PLA General Hospital, Beijing, People's Republic of China.
We introduce PiCCL (Primary Component Contrastive Learning), a self-supervised contrastive learning framework that uses a multiplex Siamese network consisting of many identical branches, rather than two, to maximize learning efficiency. PiCCL is simple and lightweight: it does not rely on asymmetric networks, intricate pretext tasks, hard-to-compute loss functions, or multimodal data, all of which are common in multiview contrastive learning frameworks and can hinder performance, simplicity, generalizability, and explainability. PiCCL obtains multiple positive samples by applying the same image-augmentation paradigm to the same image numerous times, and the network loss is calculated with a custom-designed loss function named PiCLoss (Primary Component Loss) that exploits PiCCL's structure while remaining computationally lightweight.
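The sketch below shows the multi-positive idea in isolation: encode several augmentations of each image with a shared-weight encoder and pull their embeddings together while pushing apart embeddings of different images. The loss shown is a generic multi-positive InfoNCE-style objective standing in for PiCLoss, whose exact form the abstract does not give.

```python
# Generic multi-positive contrastive loss over N augmented views per image.
# This is an assumption illustrating the multi-branch idea, not PiCLoss itself.
import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(z: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z: (batch, n_views, dim) embeddings of n_views augmentations per image."""
    b, v, d = z.shape
    z = F.normalize(z.reshape(b * v, d), dim=-1)
    sim = z @ z.t() / tau                                   # scaled cosine similarities
    # Positives: embeddings that come from the same original image (excluding self).
    img_id = torch.arange(b).repeat_interleave(v)
    pos_mask = (img_id[:, None] == img_id[None, :]).float()
    self_mask = torch.eye(b * v)
    pos_mask = pos_mask - self_mask
    denom = torch.logsumexp(sim.masked_fill(self_mask.bool(), -1e9), dim=1, keepdim=True)
    log_prob = sim - denom
    # Average log-likelihood of the positives for each anchor.
    return -(pos_mask * log_prob).sum(dim=1).div(pos_mask.sum(dim=1)).mean()

# Example: 8 images, 4 augmented views each, 128-d embeddings.
print(multi_positive_contrastive_loss(torch.randn(8, 4, 128)).item())
```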
Neural Netw
August 2025
College of Computer Science and Technology, National University of Defense Technology, Changsha, 410073, China. Electronic address:
The Segment Anything Model (SAM) has gained significant attention for its impressive performance in image segmentation. However, it is not well suited to referring video object segmentation (RVOS), because it relies on precise, user-interactive prompts and has a limited understanding of other modalities, such as language. This paper presents the RefSAM model, which explores the potential of SAM for RVOS by incorporating multi-view information from diverse modalities and successive frames at different timestamps in an online manner.
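The toy sketch below illustrates one way a sentence embedding can be projected into prompt tokens that condition a frozen segmentation backbone, with a small running memory carried across frames for online processing. The module names, dimensions, and memory update rule are illustrative assumptions; this is not SAM's or RefSAM's actual prompt-encoder interface.

```python
# Toy cross-modal prompting with a per-frame memory; all details are assumptions.
import torch
import torch.nn as nn

class CrossModalPrompter(nn.Module):
    def __init__(self, text_dim: int = 512, prompt_dim: int = 256, n_tokens: int = 4):
        super().__init__()
        self.proj = nn.Linear(text_dim, prompt_dim * n_tokens)
        self.n_tokens, self.prompt_dim = n_tokens, prompt_dim
        # Running memory of previous-frame prompts for online propagation.
        self.register_buffer("memory", torch.zeros(1, n_tokens, prompt_dim))

    def forward(self, text_emb: torch.Tensor, momentum: float = 0.9) -> torch.Tensor:
        # text_emb: (batch, text_dim) -> prompt tokens: (batch, n_tokens, prompt_dim)
        prompts = self.proj(text_emb).view(-1, self.n_tokens, self.prompt_dim)
        # Blend with the memory from earlier frames, then update it.
        prompts = momentum * self.memory + (1 - momentum) * prompts
        self.memory = prompts.detach().mean(dim=0, keepdim=True)
        return prompts

prompter = CrossModalPrompter()
for frame_idx in range(3):                        # three successive frames
    tokens = prompter(torch.randn(1, 512))
    print(frame_idx, tokens.shape)                # (1, 4, 256)
```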
IEEE Trans Pattern Anal Mach Intell
August 2025
Human action understanding serves as a foundational pillar in the field of intelligent motion perception. Skeletons serve as a modality- and device-agnostic representation for human modeling, and skeleton-based action understanding has potential applications in humanoid robot control and interaction. However, existing works often lack the scalability and generalization required to handle diverse action understanding tasks.