In bone cancer imaging, positron emission tomography (PET) is ideal for the diagnosis and staging of bone cancers due to its high sensitivity to malignant tumors. The diagnosis of bone cancer requires tumor analysis and localization, for which accurate and automated whole-body bone segmentation (WBBS) is often needed. Current WBBS for PET imaging relies on paired Computed Tomography (CT) images. However, mismatches between CT and PET images often occur due to patient motion, leading to erroneous bone segmentation and, in turn, inaccurate tumor analysis. Furthermore, in some instances CT images are unavailable for WBBS. In this work, we propose a novel multimodal fusion network (MMF-Net) for WBBS of PET images that does not require CT images. Specifically, the tracer activity ($\lambda$-MLAA), attenuation map ($\mu$-MLAA), and synthetic attenuation map ($\mu$-DL) images are introduced into the training data. We first design a multi-encoder structure that fully learns modality-specific representations of the three PET modality images through independent encoding branches. Then, we propose a multimodal fusion module in the decoder to further integrate the complementary information across the three modalities. Additionally, we introduce revised convolution units, SE (Squeeze-and-Excitation) Normalization, and deep supervision to improve segmentation performance. Extensive comparison and ablation experiments, using 130 whole-body PET image datasets, show promising results. We conclude that the proposed method can achieve WBBS with moderate to high accuracy using PET information only, which can potentially overcome the current limitations of CT-based approaches while minimizing exposure to ionizing radiation.
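The abstract lists SE (Squeeze-and-Excitation) Normalization among the components used to improve segmentation. As a rough illustration of the channel-recalibration idea behind SE blocks only (this is not the authors' implementation; the function name, the reduction ratio, the NumPy setting, and the random weights are all assumptions for the sketch), the squeeze-then-excite gating can be written as:

```python
import numpy as np

def se_recalibrate(x, w1, w2):
    """Illustrative Squeeze-and-Excitation channel gating.

    x  : feature map of shape (C, H, W)
    w1 : bottleneck weights, shape (C // r, C)   (r = reduction ratio)
    w2 : expansion weights, shape (C, C // r)
    """
    # Squeeze: global average pool each channel to a single descriptor.
    z = x.mean(axis=(1, 2))                     # shape (C,)
    # Excite: bottleneck FC -> ReLU -> FC -> sigmoid gate per channel.
    h = np.maximum(w1 @ z, 0.0)                 # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))         # shape (C,), values in (0, 1)
    # Recalibrate: scale each channel map by its learned gate.
    return x * s[:, None, None]

# Toy usage with random weights (illustrative only).
rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = se_recalibrate(x, w1, w2)   # same shape as x, channels rescaled
```

Because the gate is a per-channel sigmoid, the block can only attenuate or (nearly) pass through channels, which is what lets the network emphasize informative modalities or features while suppressing others.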
Download full-text PDF | Source
---|---
http://dx.doi.org/10.1109/JBHI.2024.3501386 | DOI Listing