Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Objective: The increasing number of coronary computed tomography angiography (CCTA) requests has raised concerns about dose exposure. New dose-reduction strategies based on artificial intelligence have been proposed to overcome the limitations of iterative reconstruction (IR) algorithms. Our prospective study sought to explore the added value of deep-learning image reconstruction (DLIR) in comparison with a hybrid IR algorithm (adaptive statistical iterative reconstruction-Veo [ASiR-V]) in CCTA, even in clinically challenging scenarios such as obesity, heavily calcified vessels, and coronary stents.

Methods: We prospectively included 103 consecutive patients who underwent CCTA. Data sets were reconstructed with both ASiR-V and DLIR. For each reconstruction, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated, and image quality was rated on a four-point Likert scale by two independent, blinded radiologists with different levels of expertise.
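The abstract does not specify how SNR and CNR were measured; the sketch below follows a common CCTA convention (mean attenuation of a vessel ROI over image noise, and vessel-to-perivascular-fat contrast over noise). The ROI choices and noise definition are assumptions for illustration, not the authors' protocol.

```python
import numpy as np

def snr_cnr(vessel_roi: np.ndarray, fat_roi: np.ndarray) -> tuple[float, float]:
    """Compute SNR and CNR from two regions of interest (HU samples).

    Assumed conventions (not stated in the abstract):
      SNR = mean vessel attenuation / SD of vessel attenuation (image noise)
      CNR = (mean vessel - mean perivascular fat) / SD of fat attenuation
    """
    snr = vessel_roi.mean() / vessel_roi.std(ddof=1)
    cnr = (vessel_roi.mean() - fat_roi.mean()) / fat_roi.std(ddof=1)
    return snr, cnr

# Example with synthetic HU samples for one reconstruction
rng = np.random.default_rng(0)
aorta = rng.normal(450, 30, 200)   # contrast-enhanced lumen
fat = rng.normal(-90, 15, 200)     # perivascular fat
print(snr_cnr(aorta, fat))
```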

Results: Both SNR and CNR were significantly higher with DLIR (median [interquartile range] SNR-DLIR 25.42 [22.46-32.22] vs SNR-ASiR-V 13.89 [11.06-16.35], P < 0.001; CNR-DLIR 16.84 [9.83-27.08] vs CNR-ASiR-V 10.09 [5.69-13.5], P < 0.001). The median qualitative score was 4 for DLIR images versus 3 for ASiR-V (P < 0.001), with good interreader reliability [intraclass correlation coefficients ICC(2,1) and ICC(3,1) of 0.60 for DLIR and 0.62 and 0.73 for ASiR-V]. In the obese and "calcifications and stents" groups, DLIR showed significantly higher SNR (24.23 vs 11.11, P < 0.001 and 24.55 vs 14.09, P < 0.001, respectively), CNR (16.08 vs 8.04, P = 0.008 and 17.31 vs 10.14, P = 0.003), and image quality.
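The results are reported as medians with interquartile ranges and P values for paired DLIR and ASiR-V reconstructions of the same patients. The sketch below assumes a Wilcoxon signed-rank test, since the abstract does not name the specific test; all values in the example are synthetic.

```python
import numpy as np
from scipy.stats import wilcoxon

def summarize_paired(dlir: np.ndarray, asir_v: np.ndarray) -> None:
    """Report median [IQR] per reconstruction and a paired nonparametric test."""
    for name, x in (("DLIR", dlir), ("ASiR-V", asir_v)):
        q1, med, q3 = np.percentile(x, [25, 50, 75])
        print(f"{name}: {med:.2f} [{q1:.2f}-{q3:.2f}]")
    stat, p = wilcoxon(dlir, asir_v)   # assumed test; not stated in the abstract
    print(f"Wilcoxon signed-rank: statistic={stat:.1f}, P={p:.4f}")

# Synthetic SNR values for 103 paired reconstructions (illustrative only)
rng = np.random.default_rng(1)
snr_asir = rng.normal(14, 3, 103)
snr_dlir = snr_asir + rng.normal(11, 3, 103)
summarize_paired(snr_dlir, snr_asir)
```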

Conclusions: Deep-learning image reconstruction in CCTA yields higher SNR and CNR and better qualitative scores than ASiR-V, with an added value in the most challenging clinical scenarios.


Source: http://dx.doi.org/10.1097/RCT.0000000000001537

Publication Analysis

Top Keywords

image reconstruction (12), deep-learning image (8), qualitative assessment (8), snr cnr (8), dlir (6), image (5), reconstruction (5), ccta (5), deep learning (4), learning image (4)

Similar Publications

Significance: Melanoma's rising incidence demands automatable, high-throughput approaches for early detection, such as total body scanners integrated with computer-aided diagnosis. High-quality input data is necessary to improve diagnostic accuracy and reliability.

Aim: This work aims to develop a high-resolution optical skin imaging module and the software for acquiring and processing raw image data into high-resolution dermoscopic images using a focus stacking approach.
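As a rough illustration of the focus-stacking idea mentioned above (not the cited work's pipeline), the sketch below picks, for each pixel, the frame with the highest local Laplacian energy; the window size and sharpness measure are assumptions.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(stack: np.ndarray, window: int = 9) -> np.ndarray:
    """Fuse a (N, H, W) stack of grayscale frames at different focal depths
    into one all-in-focus image.

    Generic approach (an assumption): per-pixel sharpness is the locally
    averaged squared Laplacian response; each output pixel is taken from
    the frame where that sharpness is highest.
    """
    sharpness = np.stack([uniform_filter(laplace(f.astype(float)) ** 2, window)
                          for f in stack])
    best = sharpness.argmax(axis=0)        # (H, W) index of sharpest frame
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Usage: fused = focus_stack(frames) where frames has shape (N, H, W)
```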


Significance: The spatial and temporal distribution of fluorophore fractions in biological and environmental systems contains valuable information about the interactions and dynamics of these systems. To access this information, fluorophore fractions are commonly determined by means of their fluorescence emission spectrum (ES) or lifetime (LT). Combining both dimensions in temporal-spectral multiplexed data enables more accurate fraction determination while requiring advanced and fast analysis methods to handle the increased data complexity and size.
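For orientation only, a minimal linear-unmixing sketch: it assumes the measured signal is a non-negative combination of known per-fluorophore reference signatures (emission spectra, lifetime decays, or a flattened combination of both) and recovers the fractions by non-negative least squares. This is a generic approach, not the method proposed in the cited work.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fractions(references: np.ndarray, measurement: np.ndarray) -> np.ndarray:
    """Estimate fluorophore fractions from a measured signal.

    `references` has one column per fluorophore signature; the measurement
    is assumed to be a non-negative linear combination of these columns.
    """
    coeffs, _ = nnls(references, measurement)   # non-negative least squares
    total = coeffs.sum()
    return coeffs / total if total > 0 else coeffs

# Two synthetic reference spectra and a 30/70 mixture
channels = np.linspace(0, 1, 64)
ref = np.stack([np.exp(-(channels - 0.3) ** 2 / 0.01),
                np.exp(-(channels - 0.6) ** 2 / 0.02)], axis=1)
mixed = ref @ np.array([0.3, 0.7])
print(unmix_fractions(ref, mixed))              # ~[0.3, 0.7]
```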


Background: In contrast-enhanced digital mammography (CEDM) and contrast-enhanced digital breast tomosynthesis (CEDBT), low-energy (LE) and high-energy (HE) images are acquired after injection of iodine contrast agent. Weighted subtraction is then applied to generate dual-energy (DE) images, where normal breast tissues are suppressed, leaving iodinated objects enhanced. Currently, clinical systems employ a dual-shot (DS) method, where LE and HE images are acquired with two separate exposures.
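The weighted subtraction described above can be sketched generically as a log-domain subtraction with a tissue-cancellation weight; the weight value and function names below are illustrative assumptions, not the clinical systems' actual parameters.

```python
import numpy as np

def dual_energy_subtraction(low_energy: np.ndarray,
                            high_energy: np.ndarray,
                            weight: float = 0.5) -> np.ndarray:
    """Weighted log subtraction of LE and HE images (generic form).

    DE = ln(HE) - weight * ln(LE); `weight` is chosen so that normal breast
    tissue cancels, leaving iodinated structures enhanced. The default 0.5
    is a placeholder; in practice it depends on the spectra and detector.
    """
    eps = 1e-6                               # avoid log(0)
    return np.log(high_energy + eps) - weight * np.log(low_energy + eps)

# Usage: de_image = dual_energy_subtraction(le_image, he_image, weight=0.55)
```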


Lightweight hybrid Mamba2 for unsupervised medical image registration.
Med Phys, September 2025. School of Computer, Electronics and Information, Guangxi University, Nanning, China.

Background: Deformable medical image registration is a critical task in medical imaging-assisted diagnosis and treatment. In recent years, deep learning-based medical image registration methods have achieved significant success by leveraging prior knowledge, greatly improving registration accuracy and computational efficiency. Transformer-based models have outperformed convolutional neural network (ConvNet) methods in image registration.


Background: Four-dimensional magnetic resonance imaging (4D-MRI) holds great promise for precise abdominal radiotherapy guidance. However, current 4D-MRI methods are limited by an inherent trade-off between spatial and temporal resolutions, resulting in compromised image quality characterized by low spatial resolution and significant motion artifacts, hindering clinical implementation. Despite recent advancements, existing methods inadequately exploit redundant frame information and struggle to restore structural details from highly undersampled acquisitions.
