Semantic segmentation of medical images is pivotal in applications like disease diagnosis and treatment planning. While deep learning automates this task effectively, it struggles in ultra-low-data regimes due to the scarcity of annotated segmentation masks. To address this, we propose a generative deep learning framework that produces high-quality image-mask pairs as auxiliary training data. Unlike traditional generative models that separate data generation from model training, ours uses multi-level optimization for end-to-end data generation. This allows segmentation performance to guide the generation process, producing data tailored to improve segmentation outcomes. Our method demonstrates strong generalization across 11 medical image segmentation tasks and 19 datasets, covering various diseases, organs, and imaging modalities. It improves performance by 10-20% (absolute) in both same-domain and out-of-domain settings and requires 8-20 times less training data than existing approaches, greatly enhancing the feasibility and cost-effectiveness of deep learning in data-limited medical imaging scenarios.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12260076 | PMC |
| http://dx.doi.org/10.1038/s41467-025-61754-6 | DOI Listing |
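The abstract above describes letting segmentation performance guide a data generator through multi-level optimization. The sketch below is only a minimal conceptual illustration of that idea, not the paper's implementation: the tiny models, the binary-cross-entropy and MSE losses, the one-step hypergradient approximation of the outer level, and the dummy data are all assumptions, and it relies on `torch.func.functional_call` (PyTorch ≥ 2.0).

```python
# Minimal sketch of segmentation-guided data generation via a one-step
# hypergradient approximation (illustrative assumptions throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class Generator(nn.Module):
    """Maps a noise tensor to a synthetic image-mask pair."""
    def __init__(self, z_ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(z_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),  # ch 0: image, ch 1: mask logit
        )
    def forward(self, z):
        out = self.net(z)
        return out[:, :1], torch.sigmoid(out[:, 1:])

class Segmenter(nn.Module):
    """Tiny stand-in for a segmentation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

gen, seg = Generator(), Segmenter()
opt_gen = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_seg = torch.optim.Adam(seg.parameters(), lr=1e-3)

# Dummy labelled data and a small held-out validation set.
real_x, real_y = torch.rand(4, 1, 64, 64), (torch.rand(4, 1, 64, 64) > 0.5).float()
val_x, val_y = torch.rand(2, 1, 64, 64), (torch.rand(2, 1, 64, 64) > 0.5).float()

for step in range(100):
    # Inner level: train the segmenter on real plus generated pairs.
    fake_x, fake_y = gen(torch.rand(4, 8, 64, 64))
    x = torch.cat([real_x, fake_x.detach()])
    y = torch.cat([real_y, fake_y.detach()])
    seg_loss = F.binary_cross_entropy_with_logits(seg(x), y)
    opt_seg.zero_grad(); seg_loss.backward(); opt_seg.step()

    # Outer level (crude one-step approximation): simulate one SGD step of
    # the segmenter on freshly generated data, then update the generator so
    # that this hypothetical segmenter does better on held-out real data.
    fake_x, fake_y = gen(torch.rand(4, 8, 64, 64))
    params = dict(seg.named_parameters())
    inner = F.mse_loss(torch.sigmoid(functional_call(seg, params, (fake_x,))), fake_y)
    grads = torch.autograd.grad(inner, list(params.values()), create_graph=True)
    fast = {k: p - 1e-2 * g for (k, p), g in zip(params.items(), grads)}
    outer = F.binary_cross_entropy_with_logits(functional_call(seg, fast, (val_x,)), val_y)
    opt_gen.zero_grad(); outer.backward(); opt_gen.step()
```

The outer step is what ties generation to downstream segmentation quality: gradients of the held-out segmentation loss flow through the simulated segmenter update and back into the generator.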
J Ultrasound Med
September 2025
Department of Ultrasound, Donghai Hospital Affiliated to Kangda College of Nanjing Medical University, Lianyungang, China.
Objective: To evaluate the prognostic performance of a nomogram integrating clinical parameters with deep learning radiomics (DLRN) features derived from ultrasound and multi-sequence magnetic resonance imaging (MRI) for predicting survival, recurrence, and metastasis in patients with triple-negative breast cancer (TNBC) undergoing neoadjuvant chemotherapy (NAC).
Methods: This retrospective, multicenter study included 103 patients with histopathologically confirmed TNBC across four institutions. The training group comprised 72 cases from the First People's Hospital of Lianyungang, while the validation group included 31 cases from three external centers.
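Prognostic nomograms of this kind are typically derived from a Cox proportional-hazards model that combines clinical covariates with deep-learning radiomics signature scores. The sketch below is purely illustrative: the covariate names (`age`, `dlr_us`, `dlr_mri`), the synthetic survival data, and the use of lifelines' `CoxPHFitter` are assumptions, not the study's actual pipeline.

```python
# Illustrative Cox model over clinical + deep-learning radiomics covariates
# (synthetic data; not the study's cohort or feature set).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
age = rng.normal(52, 9, n)
dlr_us = rng.uniform(0, 1, n)        # hypothetical ultrasound DLR signature score
dlr_mri = rng.uniform(0, 1, n)       # hypothetical multi-sequence MRI DLR signature score
risk = 0.02 * (age - 52) + 1.0 * dlr_us + 1.2 * dlr_mri
months = rng.exponential(36 * np.exp(-risk))
event = (months < 60).astype(int)    # administrative censoring at 60 months
months = np.minimum(months, 60)

df = pd.DataFrame({"age": age, "dlr_us": dlr_us, "dlr_mri": dlr_mri,
                   "months": months, "event": event})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()   # hazard ratios; a nomogram maps these coefficients to point scales
```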
Comput Methods Biomech Biomed Engin
September 2025
School of Medicine, Tzu Chi University, Hualien, Taiwan.
This study explores deep feature representations from photoplethysmography (PPG) signals for coronary artery disease (CAD) identification in 80 participants (40 with CAD). Finger PPG signals were processed using multilayer perceptron (MLP) and convolutional neural network (CNN) autoencoders, with performance assessed via 5-fold cross-validation. The CNN autoencoder model achieved the best results (recall 96.
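As a rough sketch of the kind of model described above, the fragment below builds a 1-D convolutional autoencoder whose bottleneck could serve as a deep PPG feature vector for downstream CAD classification. Layer sizes, the 16-dimensional latent space, and the 256-sample window are assumptions, not the study's architecture.

```python
# Minimal 1-D CNN autoencoder for PPG windows (illustrative sizes).
import torch
import torch.nn as nn

class PPGAutoencoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
            nn.Linear(16 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 8), nn.Unflatten(1, (16, 8)),
            nn.Upsample(size=256), nn.Conv1d(16, 8, 7, padding=3), nn.ReLU(),
            nn.Conv1d(8, 1, 7, padding=3),
        )
    def forward(self, x):
        z = self.encoder(x)            # deep features for a downstream CAD classifier
        return self.decoder(z), z

model = PPGAutoencoder()
ppg = torch.randn(4, 1, 256)           # 4 dummy PPG windows of 256 samples
recon, feats = model(ppg)
loss = nn.functional.mse_loss(recon, ppg)  # reconstruction objective
print(recon.shape, feats.shape)        # torch.Size([4, 1, 256]) torch.Size([4, 16])
```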
Transl Vis Sci Technol
September 2025
Department of Ophthalmology, University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania, USA.
Purpose: To evaluate choroidal vasculature using a novel three-dimensional algorithm in fellow eyes of patients with unilateral chronic central serous chorioretinopathy (cCSC).
Methods: Patients with unilateral cCSC were retrospectively included. Automated choroidal segmentation was conducted using a deep-learning ResUNet model.
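The abstract names a ResUNet but gives no architectural detail here; the fragment below only sketches the residual-block-plus-skip-connection pattern such models follow, with assumed depths and channel counts.

```python
# Illustrative residual U-Net ("ResUNet")-style fragment; not the study's model.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convolutions with a 1x1 shortcut (residual connection)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1)
    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class TinyResUNet(nn.Module):
    """One down/up level with a skip connection, enough to show the pattern."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc = ResBlock(in_ch, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = ResBlock(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = ResBlock(32, 16)           # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, out_ch, 1)  # per-pixel choroid logits
    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.head(d)

net = TinyResUNet()
print(net(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])
```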
J Integr Neurosci
August 2025
School of Computer Science, Guangdong Polytechnic Normal University, 510665 Guangzhou, Guangdong, China.
Background: Emotion recognition from electroencephalography (EEG) can play a pivotal role in the advancement of brain-computer interfaces (BCIs). Recent developments in deep learning, particularly convolutional neural networks (CNNs) and hybrid models, have significantly enhanced interest in this field. However, standard convolutional layers often conflate characteristics across various brain rhythms, complicating the identification of distinctive features vital for emotion recognition.
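One common way to keep brain rhythms from being conflated (not necessarily the approach proposed in this paper) is to bandpass-filter the EEG into canonical frequency bands and feed each band to the network as a separate channel. In the sketch below, the band limits, 128 Hz sampling rate, and 4th-order Butterworth filter are assumptions.

```python
# Generic band-wise EEG decomposition before a CNN (illustrative settings).
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_decompose(eeg, fs=128):
    """eeg: (channels, samples) -> (bands, channels, samples)."""
    out = []
    for low, high in BANDS.values():
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        out.append(filtfilt(b, a, eeg, axis=-1))
    return np.stack(out)

eeg = np.random.randn(32, 128 * 4)   # 32 channels, 4 s at 128 Hz (dummy data)
banded = band_decompose(eeg)
print(banded.shape)                  # (5, 32, 512): one rhythm per input channel group
```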
J Pharm Anal
August 2025
Shanghai Key Laboratory of Regulatory Biology, Institute of Biomedical Sciences and School of Life Sciences, East China Normal University, Shanghai, 200241, China.
Current experimental and computational methods have limitations in accurately and efficiently classifying ion channels within vast protein spaces. Here we have developed a deep learning algorithm, GPT2 Ion Channel Classifier (GPT2-ICC), which effectively distinguishes ion channels in a test set containing approximately 239 times more non-ion-channel proteins than ion channels. GPT2-ICC integrates representation learning with a large language model (LLM)-based classifier, enabling highly accurate identification of potential ion channels.
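GPT2-ICC itself is not reproduced here; the sketch below only shows the general shape of a GPT-2-based sequence classifier using Hugging Face's `GPT2ForSequenceClassification`, with toy spaced residue strings and labels as placeholders rather than the paper's tokenizer or training data.

```python
# Sketch of a GPT-2 sequence classifier in the spirit of an ion-channel
# vs. non-channel task (toy inputs; not the published GPT2-ICC model).
import torch
from transformers import GPT2TokenizerFast, GPT2ForSequenceClassification

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token            # GPT-2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Toy protein sequences, spaced so the byte-level BPE sees residue "words".
seqs = ["M K T L L V A A G", "M A S T Q P R L D"]
labels = torch.tensor([1, 0])                        # 1 = ion channel, 0 = non-channel

batch = tokenizer(seqs, return_tensors="pt", padding=True)
out = model(**batch, labels=labels)
print(out.loss.item(), out.logits.softmax(-1))       # classification loss and class probabilities
```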