Hepatocellular carcinoma (HCC) can be potentially discovered from abdominal computed tomography (CT) studies under varied clinical scenarios (e.g., fully dynamic contrast-enhanced [DCE] studies, noncontrast [NC] plus venous phase [VP] abdominal studies, or NC-only studies).
IEEE Trans Med Imaging
October 2022
Radiological images such as computed tomography (CT) and X-rays render anatomy with intrinsic structures. Being able to reliably locate the same anatomical structure across varying images is a fundamental task in medical image analysis. In principle, it is possible to use landmark detection or semantic segmentation for this task, but to work well these require large amounts of labeled data for each anatomical structure and sub-structure of interest.
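A label-free alternative to per-structure landmark detectors is to match dense per-voxel embeddings across images. The sketch below is a minimal, hypothetical illustration of that matching step only: it assumes an embedding extractor (e.g. a CNN) already exists and simply finds, for a query point's embedding, the most similar location in another image by cosine similarity. The function name `locate_correspondence` and the toy embeddings are illustrative, not from the paper.

```python
import numpy as np

def locate_correspondence(query_emb, target_embs):
    """Return the index into target_embs (N x D) whose embedding is most
    similar to query_emb (D,) under cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    t = target_embs / np.linalg.norm(target_embs, axis=1, keepdims=True)
    return int(np.argmax(t @ q))

# Toy example: four candidate locations in the target image, 3-D embeddings.
target = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.7, 0.7, 0.0],
                   [0.0, 0.0, 1.0]])
query = np.array([0.6, 0.8, 0.0])
best = locate_correspondence(query, target)
print(best)  # -> 2, the embedding pointing in the most similar direction
```

In practice the candidate set would be every voxel of the target volume and the embeddings would come from a trained network; the nearest-neighbour search itself is unchanged.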
Purpose: Accurate prognostic stratification of patients with oropharyngeal squamous cell carcinoma (OPSCC) is crucial. We developed an objective and robust deep learning-based fully automated tool called the DeepPET-OPSCC biomarker for predicting overall survival (OS) in OPSCC using [18F]fluorodeoxyglucose (FDG)-PET imaging.
Experimental Design: The DeepPET-OPSCC prediction model was built and tested internally on a discovery cohort (n = 268) by integrating five convolutional neural network models for volumetric segmentation and ten models for OS prognostication.
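One common way to integrate several prognostication models, and a plausible reading of "ten models for OS prognostication", is to average their per-patient risk scores. The sketch below shows only that ensembling step; it is not the DeepPET-OPSCC code, and the function name `ensemble_risk` is hypothetical.

```python
import numpy as np

def ensemble_risk(model_outputs):
    """Average per-patient risk scores over models: shape (M, P) -> (P,)."""
    return np.mean(np.asarray(model_outputs), axis=0)

# Toy example: three models scoring two patients.
risks = [[0.2, 0.8],
         [0.4, 0.6],
         [0.3, 0.7]]
print(ensemble_risk(risks))  # -> [0.3 0.7]
```

Averaging reduces the variance of any single network's prediction, which is one reason ensembles are popular for survival models trained on modest cohorts.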
IEEE Trans Med Imaging
October 2021
Large-scale datasets with high-quality labels are desired for training accurate deep learning models. However, due to the annotation cost, datasets in medical imaging are often either partially-labeled or small. For example, DeepLesion is such a large-scale CT image dataset with lesions of various types, but it also has many unlabeled lesions (missing annotations).
Annu Int Conf IEEE Eng Med Biol Soc
July 2020
Karyotyping, consisting of single-chromosome segmentation and classification, is widely used in cytogenetic analysis for chromosome abnormality detection. Many studies have reported automatic chromosome classification with high accuracy. Nevertheless, they usually require manual chromosome segmentation beforehand.
IEEE Trans Med Imaging
January 2021
The acquisition of large-scale medical image data, necessary for training machine learning algorithms, is hampered by the associated expert-driven annotation costs. Mining hospital archives can address this problem, but the resulting labels are often incomplete or noisy.
Med Image Anal
October 2020
Although having achieved great success in medical image segmentation, deep learning-based approaches usually require large amounts of well-annotated data, which can be extremely expensive to obtain in the field of medical image analysis. Unlabeled data, on the other hand, is much easier to acquire. Semi-supervised learning and unsupervised domain adaptation both take advantage of unlabeled data, and they are closely related to each other.
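A simple way to see how unlabeled data can help is self-training, one standard semi-supervised strategy (the paper's actual method may differ). The toy sketch below uses a 1-nearest-neighbour "model": unlabeled points whose nearest labeled neighbour lies within a confidence radius receive that neighbour's label and join the labeled set. The function name `self_train` and the radius criterion are illustrative assumptions.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, radius=1.0):
    """Grow the labeled set with confident pseudo-labels.

    An unlabeled point is pseudo-labeled with its nearest labeled
    neighbour's class only if that neighbour is within `radius`."""
    X_lab, y_lab = list(X_lab), list(y_lab)
    for x in X_unlab:
        dists = [np.linalg.norm(x - xl) for xl in X_lab]
        i = int(np.argmin(dists))
        if dists[i] <= radius:          # confident -> accept pseudo-label
            X_lab.append(x)
            y_lab.append(y_lab[i])
    return np.array(X_lab), np.array(y_lab)

# Two labeled points, three unlabeled; the ambiguous midpoint is skipped.
X2, y2 = self_train(np.array([[0.0], [10.0]]), [0, 1],
                    np.array([[0.5], [9.5], [5.0]]), radius=1.0)
print(y2)  # -> [0 1 0 1]
```

Consistency-regularization methods used in modern semi-supervised segmentation replace the hard radius test with agreement between perturbed predictions, but the underlying idea of extracting a training signal from unlabeled data is the same.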
Pattern Recognit
February 2019
The muscular dystrophies are a diverse group of rare genetic diseases characterized by progressive loss of muscle strength and muscle damage. Since there is no cure for muscular dystrophy (MD) and clinical outcome measures are limited, it is critical to assess the progression of MD objectively. Imaging muscle replacement by fibrofatty tissue has been shown to be a robust biomarker to monitor disease progression in Duchenne muscular dystrophy (DMD).
Synthesized medical images have several important applications. For instance, they can be used as an intermedium in cross-modality image registration or as augmented training samples to boost the generalization capability of a classifier. In this work, we propose a generic cross-modality synthesis approach with the following targets: 1) synthesizing realistic-looking 2D/3D images without needing paired training data; 2) ensuring consistent anatomical structures, which could be changed by geometric distortion in cross-modality synthesis; and 3) more importantly, improving volume segmentation by using synthetic data for modalities with limited training samples.
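Synthesis without paired training data is commonly made possible by a cycle-consistency constraint: translating an image from modality A to modality B and back should recover the input, so no aligned A/B pairs are needed. The sketch below illustrates only that loss term, with toy linear maps standing in for the generator networks; the names `g_ab`/`g_ba` and the L1 formulation are assumptions, not the paper's exact objective.

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 reconstruction error after a round trip A -> B -> A."""
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))

g_ab = lambda x: 2.0 * x + 1.0     # toy "modality A -> B" generator
g_ba = lambda x: (x - 1.0) / 2.0   # toy inverse generator
x = np.array([0.0, 1.0, 2.0])
print(cycle_consistency_loss(x, g_ab, g_ba))  # -> 0.0 for a perfect inverse
```

During training this term is minimized jointly with adversarial losses in each modality; the abstract's second target (anatomical consistency) adds further constraints so that geometry survives the round trip.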
Med Image Comput Comput Assist Interv
October 2016
Automated pancreas segmentation in medical images is a prerequisite for many clinical applications, such as diabetes inspection, pancreatic cancer diagnosis, and surgical planning. In this paper, we formulate pancreas segmentation in magnetic resonance imaging (MRI) scans as a graph-based decision fusion process combined with deep convolutional neural networks (CNN). Our approach conducts pancreas detection and boundary segmentation with two types of CNN models, respectively: 1) a tissue detection step to differentiate pancreas and non-pancreas tissue using spatial intensity context; and 2) a boundary detection step to localize the semantic boundaries of the pancreas.
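The paper fuses the two CNN outputs through a graph-based decision process; as a much simpler stand-in that conveys the decision-fusion idea only, the sketch below combines a tissue-detection probability map and a boundary probability map by a weighted average and thresholds the result. The function `fuse_predictions` and the weighting scheme are hypothetical, not the paper's formulation.

```python
import numpy as np

def fuse_predictions(p_tissue, p_boundary, w=0.5, thresh=0.5):
    """Fuse two per-voxel probability maps into a binary mask.

    w weights the tissue-detection map against the boundary map;
    voxels whose fused probability reaches `thresh` are kept."""
    p = w * np.asarray(p_tissue) + (1.0 - w) * np.asarray(p_boundary)
    return (p >= thresh).astype(np.uint8)

# Toy example: three voxels scored by both CNNs.
p_tissue = np.array([0.9, 0.4, 0.2])
p_bound  = np.array([0.8, 0.8, 0.1])
print(fuse_predictions(p_tissue, p_bound))  # -> [1 1 0]
```

A graph-based fusion additionally lets neighbouring voxels influence each other (e.g. via a CRF-style smoothness term), which this purely per-voxel combination omits.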
Med Image Comput Comput Assist Interv
October 2016
In order to deal with ambiguous image appearances in cell segmentation, high-level shape modeling has been introduced to delineate cell boundaries. However, shape modeling usually requires sufficient annotated training shapes, which are often labor-intensive to obtain or simply unavailable. Meanwhile, when applying the model to different datasets, it is necessary to repeat the tedious annotation process to generate enough training data, which significantly limits the applicability of the model.