Deep learning framework for interpretable quality control of echocardiography video.

Med Phys

Department of Ultrasound Medicine at the Affiliated Hospital of Medical School, Nanjing University, Nanjing, China.

Published: June 2025



Citations

20

Article Abstract

Background: Echocardiography (echo) has become an indispensable tool in modern cardiology, offering real-time imaging that helps clinicians evaluate heart function and identify abnormalities. Despite these advantages, the acquisition of high-quality echo is time-consuming, labor-intensive, and highly subjective.

Purpose: The objective of this study is to introduce a comprehensive system for the automated quality control (QC) of echo videos. This system focuses on real-time monitoring of key imaging parameters, reducing the variability associated with manual QC processes.

Methods: Our multitask network analyzes cardiac cycle integrity, anatomical structures (AS), depth, cardiac axis angle (CAA), and gain. The network consists of a shared convolutional neural network (CNN) backbone for spatial feature extraction, along with three additional modules: (1) a bidirectional long short-term memory (Bi-LSTM) phase analysis (PA) module for detecting cardiac cycles and QC targets; (2) an oriented object detection head for AS analysis and depth/CAA quantification; and (3) a classification head for gain analysis. The model was trained and tested on a dataset of 1331 echo videos. Through model inference, a comprehensive score is generated, offering easily interpretable insights.
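The abstract states that model inference produces a single, easily interpretable comprehensive score from the five QC criteria (cycle integrity, anatomical structures, depth, cardiac axis angle, gain). The paper's actual scoring rule is not given here; as a purely illustrative sketch, a weighted aggregation of per-criterion sub-scores might look like:

```python
# Hypothetical aggregation of per-criterion QC sub-scores into one
# interpretable quality score. The criterion names, weights, and 0-100
# scale are illustrative assumptions, not the paper's published formula.

CRITERIA = ("cycle_integrity", "anatomy", "depth", "cardiac_axis", "gain")

def comprehensive_score(sub_scores, weights=None):
    """Combine per-criterion scores in [0, 1] into a 0-100 quality score."""
    if weights is None:
        weights = {c: 1.0 for c in CRITERIA}  # equal weighting by default
    total_w = sum(weights[c] for c in CRITERIA)
    score = sum(weights[c] * sub_scores[c] for c in CRITERIA) / total_w
    return round(100.0 * score, 1)

# Example: a clip with good anatomy but slightly mis-set gain.
demo = {"cycle_integrity": 1.0, "anatomy": 0.95, "depth": 0.9,
        "cardiac_axis": 0.85, "gain": 0.7}
print(comprehensive_score(demo))  # → 88.0
```

Because each sub-score maps to one QC criterion, a low overall score can be traced back to the offending criterion, which is what makes such a decomposition interpretable.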

Results: The model achieved a mean average precision of 0.962 for AS detection, with PA yielding average frame errors of 1.603 ± 1.181 (end-diastolic) and 1.681 ± 1.332 (end-systolic). The gain classification model demonstrated robust performance (area under the curve > 0.98), and the overall processing speed reached 112.4 frames per second. On 203 randomly collected echo videos, the model achieved a kappa coefficient of 0.79 for rating consistency compared to expert evaluations.

Conclusions: Given the model's performance on the clinical dataset and its consistency with expert evaluations, our results indicate that the model not only delivers real-time, interpretable quality scores but also demonstrates strong clinical reliability.
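The reported rating consistency of 0.79 is a Cohen's kappa, which corrects raw agreement between two raters for agreement expected by chance. As a reminder of how the statistic is computed (the standard definition, not code from the paper; the rating labels below are made up), a small pure-Python implementation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), agreement corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_chance = sum(counts_a[k] * counts_b[k] for k in labels) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical example: model vs. expert ratings on six clips.
model  = ["good", "good", "fair", "poor", "good", "fair"]
expert = ["good", "fair", "fair", "poor", "good", "fair"]
print(round(cohens_kappa(model, expert), 3))  # → 0.739
```

A kappa of 0.79, as reported in the abstract, is conventionally read as substantial agreement.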


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12149689
DOI: http://dx.doi.org/10.1002/mp.17722

Publication Analysis

Top Keywords

echo videos (12)
interpretable quality (8)
quality control (8)
videos model (8)
model achieved (8)
expert evaluations (8)
model (6)
echo (5)
deep learning (4)
learning framework (4)

Similar Publications

Introduction: Segmentation of echocardiograms plays a crucial role in clinical diagnosis. Beyond accuracy, a major challenge of video echocardiogram analysis is the temporal consistency of consecutive frames. Stable and consistent segmentation of cardiac structures is essential for a reliable fully automatic echocardiogram interpretation.

View Article and Find Full Text PDF

Background: In 2019, NHS England launched the second version of the Saving Babies' Lives Care Bundle (SBLCBv2), a set of recommendations that maternity providers are expected to implement in full, in an ongoing effort to reduce stillbirths and preterm births. Although stillbirth rates have seen a significant overall reduction since the inception of the SBLCB, experiences of maternity care in England are deteriorating. This study aimed to explore service users' experiences of SBLCBv2-informed maternity care to help understand the aspects of care they perceived positively and those needing improvement.


Marine ecosystems are facing pressures from climate change and anthropogenic activities. While the resulting impacts are widely observed, studied, and modelled to produce projections and management advice, the evolution of marine biodiversity still needs to be described and understood at local scales. The northern part of the Bay of Biscay is of particular concern, as it lies at the boundary between two marine provinces that separate Lusitanian and Boreal species, and it experiences intense fishing pressure due to the presence of many commercial species.


Objective: The segmentation of ultrasound video objects aims to delineate specific anatomical structures or areas of injury in sequential ultrasound imaging data. Current methods exhibit promising results, but struggle with key aspects of ultrasound video analysis. They insufficiently capture inter-frame object motion, resulting in unsatisfactory segmentation for dynamic or low-contrast scenarios.


Echocardiographic video-driven multi-task learning model for coronary artery disease diagnosis and severity grading.

Front Bioeng Biotechnol

July 2025

Medical Ultrasound Image Computing (MUSIC) Laboratory, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China.

Introduction: Echocardiography is a first-line noninvasive test for diagnosing coronary artery disease (CAD), but it depends on time-consuming visual assessments by experts.

Methods: This study constructed an echocardiographic video-driven multi-task learning model, denoted Intelligent echo for CAD (IE-CAD), to facilitate CAD screening and stenosis grading. A 3D DeepLabV3+ backbone and multi-task learning were jointly incorporated into the core framework of the IE-CAD model to capture dynamic myocardial contours.
