Thyroid nodule classification and segmentation in ultrasound images are crucial for computer-aided diagnosis; however, they face limitations owing to insufficient labeled data. In this study, we proposed a multi-view contrastive self-supervised method to improve thyroid nodule classification and segmentation performance with limited manual labels. Our method aligns the transverse and longitudinal views of the same nodule, thereby enabling the model to focus more on the nodule area. We designed an adaptive loss function that eliminates the limitations of the paired data. Additionally, we adopted a two-stage pre-training strategy to exploit pre-training on both ImageNet and thyroid ultrasound images. Extensive experiments were conducted on a large-scale dataset collected from multiple centers. The results showed that the proposed method significantly improves nodule classification and segmentation performance with limited manual labels and outperforms state-of-the-art self-supervised methods. The two-stage pre-training also significantly outperformed ImageNet-only pre-training.
DOI: http://dx.doi.org/10.1016/j.compbiomed.2024.108087
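The view-alignment idea above can be illustrated with a standard symmetric InfoNCE contrastive loss, where embeddings of the transverse and longitudinal views of the same nodule form the positive pairs. This is a minimal numpy sketch under that assumption; the function name, temperature value, and loss form are illustrative and are not the paper's adaptive loss function.

```python
import numpy as np

def _xent_on_diagonal(logits):
    # Cross-entropy where the correct "class" for row i is column i
    # (i.e. the paired view of the same nodule).
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE over paired view embeddings.

    z_a, z_b: (N, D) embeddings of the transverse and longitudinal
    views of the same N nodules. Row i of each matrix is a positive
    pair; all other rows act as negatives.
    """
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sims = z_a @ z_b.T / temperature  # (N, N); diagonal = positive pairs
    # Average both directions: transverse->longitudinal and back.
    return (_xent_on_diagonal(sims) + _xent_on_diagonal(sims.T)) / 2
```

When the two view embeddings of each nodule are close, the diagonal similarities dominate and the loss is small; mismatched pairings drive it up, which is what pushes the encoder to attend to the nodule region shared by both views.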
Neural Netw
August 2025
College of Communication Engineering, Jilin University, Changchun, China. Electronic address:
To address the problems of low signal-to-noise ratio, significant individual differences between subjects, and class imbalance in P300-based brain-computer interface (BCI), this paper proposes a novel Inception-based two-stage ensemble framework (ITSEF) to improve detection accuracy. Firstly, an Inception-based convolutional neural network (ICNN) is designed to extract multi-scale features and conduct cross-channel learning. In addition, a two-stage ensemble framework (TSEF) combined with a pre-training and fine-tuning strategy is developed, aiming to enhance the classification performance of the minority class and improve the generalization ability of the model.
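The minority-class handling described above can be sketched generically: stage one trains several base learners, each on a class-balanced resample so the rare target class (e.g. P300 trials) is not swamped; stage two fits a meta-learner on the stacked base scores. This numpy sketch uses plain logistic regression as a stand-in base learner; it is an assumption-laden illustration of a two-stage ensemble, not the paper's ICNN or TSEF.

```python
import numpy as np

def fit_logreg(X, y, lr=0.5, steps=300):
    """Plain logistic regression via batch gradient descent (illustrative)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y                      # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict_proba(X, w, b):
    return 1 / (1 + np.exp(-(X @ w + b)))

def two_stage_ensemble(X, y, n_base=5, seed=0):
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    base = []
    # Stage 1: each base learner sees a class-balanced resample,
    # so the minority class is not swamped by the majority.
    for _ in range(n_base):
        idx = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
        base.append(fit_logreg(X[idx], y[idx]))
    # Stage 2: a meta-learner is fit on the stacked base probabilities.
    Z = np.column_stack([predict_proba(X, w, b) for w, b in base])
    meta = fit_logreg(Z, y)
    return base, meta

def ensemble_predict(X, base, meta):
    Z = np.column_stack([predict_proba(X, w, b) for w, b in base])
    return predict_proba(Z, *meta)
```

In this sketch the meta-learner is fit on the same data as the base learners for brevity; in practice the stage-two features would come from held-out predictions, analogous to the pre-training/fine-tuning split the paper describes.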
Comput Med Imaging Graph
September 2025
School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China. Electronic address:
Introduction: Unsupervised deep learning methods can improve the image quality of positron emission tomography (PET) images without the need for large-scale datasets. However, these approaches typically require training a distinct network for each patient, making the reconstruction process extremely time-consuming and limiting their clinical applicability. In this paper, our research objective is to develop an efficient unsupervised learning framework for PET image reconstruction that fulfills the clinical requirement for real-time imaging.
J Neural Eng
August 2025
Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy.
AI-based neural decoding reconstructs visual perception by leveraging generative models to map brain activity, measured through functional magnetic resonance imaging (fMRI), into the observed visual stimulus.
Brief Bioinform
July 2025
Shanghai Key Laboratory of Maternal Fetal Medicine, Clinical and Translational Research Center of Shanghai First Maternity and Infant Hospital, Frontier Science Center for Stem Cell Research, School of Life Sciences and Technology, Tongji University, Shanghai 200092, China.
Designing high-affinity molecules for protein targets (especially novel protein families) is a crucial yet challenging task in drug discovery. Recently, there has been tremendous progress in structure-based 3D molecular generative models that incorporate structural information of protein pockets. However, the capacity for molecular representation learning and the ability to generalize across interaction patterns still require substantial development.
Recent fluorescence diagnostic tools have demonstrated effectiveness in detecting early-stage neoplastic tissue and monitoring therapy, allowing rapid non-invasive live imaging diagnosis. However, varying light conditions in environments and modalities of observation systems introduce multi-level noise to acquired images, causing degraded image quality. Deep learning (DL) has shown great potential in improving image quality, but its performance may be limited when dealing with insufficient labeled training data and the challenges of acquiring high-quality multi-modality fluorescence images in specific biomedical tasks.