Background: Motion during data acquisition leads to artifacts in computed tomography (CT) reconstructions. In cases such as cardiac imaging, motion is not only unavoidable, but evaluating the motion of the object is itself of clinical interest. Reducing motion artifacts has typically been achieved by developing systems with faster gantry rotation or by algorithms that measure and/or estimate the displacement. However, these approaches have had limited success due to both physical constraints and the challenge of estimating non-rigid, temporally varying, and patient-specific motion fields.
Purpose: To develop a novel reconstruction method which generates time-resolved, artifact-free images without estimation or explicit modeling of the motion.
Methods: We describe an analysis-by-synthesis approach which progressively regresses a solution consistent with the acquired sinogram. Our method focuses on the movement of object boundaries: not only are boundaries the source of image artifacts, they can also be used to represent both the object and its motion over time without an explicit motion model. We represent the object boundaries via a signed distance function (SDF), which can be efficiently modeled using neural networks. As a result, optimization can be performed under spatial and temporal smoothness constraints without the need for explicit motion estimation.
Results: We illustrate the utility of DiFiR-CT in three imaging scenarios with increasing motion complexity: translation of a small circle, heart-like change in an ellipse's diameter, and a complex topological deformation. Compared to filtered backprojection, DiFiR-CT provides high-quality image reconstruction for all three motions without hyperparameter tuning or changes to the architecture. We also evaluated DiFiR-CT's robustness to noise in the acquired sinogram and found its reconstruction to be accurate across a wide range of noise levels. Lastly, we demonstrate how the approach can be used for multi-intensity scenes and illustrate the importance of a realistic initialization provided by the initial segmentation. Code and supplemental movies are available at https://kunalmgupta.github.io/projects/DiFiR-CT.html.
Conclusions: Using a neural implicit representation and an analysis-by-synthesis approach, projection data can be used to accurately recover a temporally evolving scene without explicit motion estimation.
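
The Methods section above describes representing the time-varying object boundary as a neural signed distance function (SDF) and fitting it by analysis-by-synthesis against the acquired sinogram. The sketch below is a minimal, illustrative PyTorch version of that idea, not the authors' DiFiR-CT implementation: the network size, the sigmoid soft-occupancy mapping, the toy parallel-beam projector, and the single temporal-smoothness penalty are all simplifying assumptions.

```python
# Minimal sketch (assumptions, not the authors' code): a time-conditioned SDF
# network fit by analysis-by-synthesis against measured projections.
import torch
import torch.nn as nn


class SDFNet(nn.Module):
    """MLP mapping (x, y, t) -> signed distance to the object boundary."""

    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyt):
        return self.net(xyt)


def soft_occupancy(sdf, sharpness=50.0):
    """Smooth indicator: ~1 inside the object (sdf < 0), ~0 outside."""
    return torch.sigmoid(-sharpness * sdf)


def project_parallel(model, angle, t, n_rays=64, n_samples=64):
    """Toy differentiable parallel-beam projection at one gantry angle/time."""
    angle = torch.as_tensor(angle, dtype=torch.float32)
    s = torch.linspace(-1.0, 1.0, n_rays)       # detector coordinate
    u = torch.linspace(-1.0, 1.0, n_samples)    # position along each ray
    S, U = torch.meshgrid(s, u, indexing="ij")
    x = S * torch.cos(angle) - U * torch.sin(angle)
    y = S * torch.sin(angle) + U * torch.cos(angle)
    xyt = torch.stack([x, y, torch.full_like(x, float(t))], dim=-1).reshape(-1, 3)
    occ = soft_occupancy(model(xyt)).reshape(n_rays, n_samples)
    return occ.mean(dim=1)                      # approximate line integral per bin


def fit(model, angles, times, measured, iters=2000, lam=1e-2):
    """Analysis-by-synthesis: regress the SDF so rendered projections match."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(iters):
        opt.zero_grad()
        sino = torch.stack([project_parallel(model, a, t)
                            for a, t in zip(angles, times)])
        data_loss = ((sino - measured) ** 2).mean()
        # Toy temporal-smoothness prior: penalize change of the SDF over a
        # small time step at randomly sampled points.
        pts = torch.rand(1024, 3) * 2.0 - 1.0
        shifted = pts + torch.tensor([0.0, 0.0, 0.05])
        temporal = ((model(pts) - model(shifted)) ** 2).mean()
        (data_loss + lam * temporal).backward()
        opt.step()
    return model


# Usage with a synthetic sinogram of shape (n_views, n_rays):
#   angles = torch.linspace(0, torch.pi, 180)
#   times = torch.linspace(0.0, 1.0, 180)
#   model = fit(SDFNet(), angles, times, measured_sinogram)
```

A practical version would replace the toy projector with a calibrated fan-beam or cone-beam geometry and add the spatial smoothness term mentioned in the abstract, but the loop structure (render projections from the SDF, compare to the measured sinogram, backpropagate) stays the same.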
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10684274 | PMC |
| http://dx.doi.org/10.1002/mp.16157 | DOI Listing |