J Med Imaging (Bellingham)
July 2025
Purpose: 3D ultrasound delivers high-resolution, real-time images of soft tissue that are essential for pain research, but manually distinguishing the various tissues for quantitative analysis is labor-intensive. We aimed to automate multilayer segmentation in 3D ultrasound volumes using minimal annotated data by developing the generative reinforcement network plus (GRN+), a semi-supervised, multi-model framework.
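GRN+'s architecture is not described in this abstract, so the following is only a minimal sketch of the general semi-supervised idea it gestures at: combine a supervised loss on the few annotated slices with a pseudo-label loss on confident predictions for unannotated slices. The toy network, confidence threshold, and loss weighting below are assumptions for illustration, not GRN+ itself.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy encoder-decoder standing in for the actual segmentation model."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        return self.head(self.enc(x))

def train_step(model, opt, labelled, unlabelled, conf_thresh=0.9):
    """One semi-supervised step: cross-entropy on annotated slices plus a
    pseudo-label term on confident predictions for unannotated slices."""
    x_l, y_l = labelled                      # (B,1,H,W) images, (B,H,W) masks
    x_u = unlabelled                         # (B,1,H,W) images, no masks
    loss = F.cross_entropy(model(x_l), y_l)  # supervised term

    with torch.no_grad():                    # generate pseudo-labels
        conf, pseudo = F.softmax(model(x_u), dim=1).max(dim=1)
    keep = conf > conf_thresh                # trust only confident pixels
    if keep.any():
        loss_u = F.cross_entropy(model(x_u), pseudo, reduction='none')
        loss = loss + (loss_u * keep).sum() / keep.sum()

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage (illustrative): model = TinySegNet(); opt = torch.optim.Adam(model.parameters(), 1e-3)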
Background: Available studies on chronic lower back pain (cLBP) typically focus on one or a few specific tissues rather than conducting a comprehensive layer-by-layer analysis. Since three-dimensional (3-D) images often contain hundreds of slices, manual annotation of these anatomical structures is both time-consuming and error-prone.
Objectives: We aim to develop and validate a novel approach called InterSliceBoost to enable the training of a segmentation model on a partially annotated dataset without compromising segmentation performance.
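InterSliceBoost's training procedure is not detailed here; the snippet below only illustrates what a partially annotated 3-D dataset can look like in practice, assembling training pairs from a volume in which only every k-th slice carries a mask. The slice-skipping scheme, function name, and array shapes are illustrative assumptions, not the method itself.

import numpy as np

def annotated_slice_pairs(volume, masks, step=5):
    """volume, masks: (D, H, W) arrays; masks are valid only on every
    `step`-th slice. Returns the annotated (image, label) pairs."""
    idx = range(0, volume.shape[0], step)
    images = np.stack([volume[i] for i in idx])
    labels = np.stack([masks[i] for i in idx])
    return images, labels

# Example: a 300-slice volume annotated on every 5th slice yields 60 pairs.
vol = np.random.rand(300, 256, 256).astype(np.float32)
msk = np.zeros((300, 256, 256), dtype=np.int64)
imgs, lbls = annotated_slice_pairs(vol, msk, step=5)
print(imgs.shape, lbls.shape)    # (60, 256, 256) (60, 256, 256)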
Background: The capsulorhexis is one of the most important and challenging maneuvers in cataract surgery. Automated analysis of the anterior capsulotomy could aid surgical training by providing objective feedback and guidance to trainees.
Purpose: To develop and evaluate a deep learning-based system for the automated identification and semantic segmentation of the anterior capsulotomy in cataract surgery video.
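The abstract does not specify the network or its weights, so the sketch below shows only the surrounding video plumbing: frames are read one by one with OpenCV and passed to an arbitrary per-frame segmentation callable. The file name and the placeholder thresholding "model" are purely hypothetical.

import cv2
import numpy as np

def segment_video(path, segment_frame):
    """Apply a per-frame segmentation callable to every frame of a video.
    segment_frame maps an HxWx3 uint8 BGR frame to an HxW label map."""
    cap = cv2.VideoCapture(path)
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        masks.append(segment_frame(frame))
    cap.release()
    return masks

# Placeholder "model" (illustration only): threshold the red channel.
dummy_model = lambda f: (f[:, :, 2] > 128).astype(np.uint8)
# masks = segment_video("capsulotomy_case.mp4", dummy_model)   # hypothetical file name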