Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

The released CMRxRecon2024 dataset is currently the largest and most protocol-diverse publicly available k-space dataset, comprising multimodality and multiview cardiac MRI data from 330 healthy volunteers, each acquired with standardized, commonly used clinical protocols.
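Because the dataset provides raw k-space rather than images, a reconstruction step is needed before the data can be inspected or used to train models. The sketch below shows one common baseline, a zero-filled inverse FFT with root-sum-of-squares coil combination; the array shapes and the synthetic input are illustrative assumptions, not the dataset's actual file layout.

# Minimal, hypothetical sketch: image reconstruction from fully sampled
# multi-coil k-space of the kind found in datasets like CMRxRecon2024.
# Shapes and the synthetic input are illustrative assumptions.
import numpy as np

def ifft2c(kspace: np.ndarray) -> np.ndarray:
    """Centered 2D inverse FFT over the last two (phase, readout) axes."""
    return np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1),
    )

def rss_reconstruction(kspace: np.ndarray) -> np.ndarray:
    """Root-sum-of-squares coil combination.

    kspace: complex array of shape (coils, height, width).
    Returns a real-valued magnitude image of shape (height, width).
    """
    coil_images = ifft2c(kspace)                      # per-coil complex images
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# Usage with synthetic data standing in for one slice of multi-coil k-space.
kspace = np.random.randn(10, 192, 256) + 1j * np.random.randn(10, 192, 256)
image = rss_reconstruction(kspace)
print(image.shape)  # (192, 256)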


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950877 (PMC)
http://dx.doi.org/10.1148/ryai.240443 (DOI Listing)

Publication Analysis

Top Keywords

multimodality multiview (8)
k-space dataset (8)
cardiac mri (8)
cmrxrecon2024 multimodality (4)
multiview k-space (4)
dataset boosting (4)
boosting universal (4)
universal machine (4)
machine learning (4)
learning accelerated (4)

Similar Publications

This paper presents a novel system for optimizing Tai Chi movement training using computer vision and deep learning technologies. We developed a comprehensive framework incorporating multi-view pose estimation, temporal feature extraction, and real-time movement assessment to address the challenges of traditional Tai Chi instruction. The system employs spatial-temporal graph convolutional networks enhanced with attention mechanisms for accurate movement evaluation, combined with personalized feedback generation through augmented reality and multi-modal interfaces.
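As a rough illustration of the spatial-temporal graph convolution with attention that this abstract mentions, the sketch below processes skeleton sequences shaped (batch, channels, frames, joints); the joint count, adjacency matrix, and layer sizes are assumptions for illustration, not the paper's actual architecture.

# Hypothetical sketch of one spatial-temporal graph convolution block with a
# learned attention weighting on the skeleton adjacency. Sizes are illustrative.
import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, adjacency: torch.Tensor):
        super().__init__()
        self.register_buffer("A", adjacency)          # (joints, joints), normalized
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        self.attn = nn.Parameter(torch.ones_like(adjacency))  # learned edge weights
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, joints)
        x = torch.einsum("bctv,vw->bctw", x, self.A * self.attn)  # graph message passing
        x = self.relu(self.spatial(x))                            # per-joint feature mixing
        return self.relu(self.temporal(x))                        # temporal convolution

# Example: 17-joint skeletons over 64 frames, 3 input channels (x, y, confidence).
A = torch.eye(17)                                   # placeholder adjacency
block = STGCNBlock(3, 64, A)
out = block(torch.randn(2, 3, 64, 17))              # -> (2, 64, 64, 17)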


Electroencephalography (EEG) has garnered significant interest for emotion recognition in brain-computer interface (BCI) research. Nevertheless, developing an effective emotion-identification model requires extracting features from EEG data across multiple views. To tackle the problems of multi-feature interaction and domain adaptation, we propose an innovative network, IF-MMCL, which leverages multi-modal data in a multi-view representation and integrates an individual-focused network.


PiCCL: A lightweight multiview contrastive learning framework for image classification.

PLoS One

August 2025

Research Center for Biomedical Engineering, Medical Innovation and Research Division, Chinese PLA General Hospital, Beijing, People's Republic of China.

We introduce PiCCL (Primary Component Contrastive Learning), a self-supervised contrastive learning framework that uses a multiplex Siamese network structure consisting of many identical branches rather than two to maximize learning efficiency. PiCCL is simple and lightweight: it does not use asymmetric networks, intricate pretext tasks, hard-to-compute loss functions, or multimodal data, all of which are common in multiview contrastive learning frameworks and can hinder performance, simplicity, generalizability, and explainability. PiCCL obtains multiple positive samples by applying the same image augmentation paradigm to the same image numerous times; the network loss is computed with a custom-designed loss function named PiCLoss (Primary Component Loss), which exploits PiCCL's unique structure while remaining computationally lightweight.
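As a hedged illustration of the multi-positive setup described above, the sketch below treats the several augmented views of each image as mutual positives in a generic InfoNCE-style loss; it is an assumed stand-in for the idea, not the paper's actual PiCLoss formulation.

# Generic multi-positive contrastive loss: N augmented views of each image act
# as mutual positives, views of other images as negatives. Illustrative only.
import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(embeddings: torch.Tensor,
                                    temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (batch, views, dim) -- `views` augmentations per image."""
    b, v, d = embeddings.shape
    z = F.normalize(embeddings.reshape(b * v, d), dim=1)       # unit-norm features
    sim = z @ z.t() / temperature                               # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                           # drop self-pairs

    # Positives are the other views of the same source image.
    labels = torch.arange(b).repeat_interleave(v)               # image id per row
    pos_mask = (labels[:, None] == labels[None, :]) & ~torch.eye(b * v, dtype=torch.bool)

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # log-softmax over all pairs
    return -(log_prob[pos_mask].reshape(b * v, v - 1)).mean()   # average over positives

# Example: 4 images, 8 views each, 128-dim projections from a shared encoder.
loss = multi_positive_contrastive_loss(torch.randn(4, 8, 128))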


The Segment Anything Model (SAM) has gained significant attention for its impressive performance in image segmentation. However, it lacks proficiency in referring video object segmentation (RVOS) due to the need for precise user-interactive prompts and a limited understanding of different modalities, such as language and vision. This paper presents the RefSAM model, which explores the potential of SAM for RVOS by incorporating multi-view information from diverse modalities and successive frames at different timestamps in an online manner.


Human action understanding serves as a foundational pillar in the field of intelligent motion perception. Skeletons serve as a modality- and device-agnostic representation for human modeling, and skeleton-based action understanding has potential applications in humanoid robot control and interaction. However, existing works often lack the scalability and generalization required to handle diverse action understanding tasks.
