The Microscopic Agglutination Test (MAT) is widely recognized as the gold standard for diagnosing the zoonosis leptospirosis. However, the MAT relies on subjective evaluation by human experts, which can lead to inconsistencies and inter-observer variability. In this study, we aimed to emulate expert assessments using deep learning and convert them into reproducible numerical outputs to achieve greater objectivity. By leveraging a pre-trained DenseNet121, the network benefits from better initialization, facilitating more effective training. We validated our approach on an in-house dataset, and the experimental results demonstrate that the proposed network produced accurate agglutination-rate estimates. In addition, we employed UMAP, a dimensionality-reduction technique, to visualize the learned feature representations, revealing that the network captured image features indicative of Leptospira abundance. Overall, our findings suggest that deep learning can consistently estimate agglutination rates in a manner that approximates expert evaluations, and that improved interpretability provides visual cues for understanding model behavior, potentially facilitating future clinical integration.
DOI: http://dx.doi.org/10.1016/j.mimet.2025.107249
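A minimal sketch of the approach the abstract above describes: a pre-trained DenseNet121 backbone with a small regression head that maps a MAT field image to an agglutination rate, with UMAP available for inspecting the learned features. The head size, sigmoid output, loss, and UMAP settings are illustrative assumptions, not the authors' published configuration.

```python
# Sketch, assuming PyTorch/torchvision: DenseNet121 pre-trained on ImageNet,
# repurposed to regress an agglutination rate in [0, 1].
import torch
import torch.nn as nn
from torchvision import models

class AgglutinationRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        in_features = backbone.classifier.in_features   # 1024 for DenseNet121
        backbone.classifier = nn.Identity()              # keep pooled features
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(in_features, 1), nn.Sigmoid())

    def forward(self, x):
        feats = self.backbone(x)                          # (B, 1024) pooled features
        return self.head(feats).squeeze(1), feats

model = AgglutinationRegressor()
images = torch.randn(4, 3, 224, 224)                      # dummy batch of MAT images
rates, features = model(images)
loss = nn.functional.mse_loss(rates, torch.rand(4))       # regress against expert rates

# The learned features can then be projected with UMAP (umap-learn package),
# as in the abstract; shown commented out because a 4-image dummy batch is
# too small for a meaningful embedding:
# import umap
# embedding = umap.UMAP(n_components=2).fit_transform(features.detach().numpy())
```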
J Ultrasound Med
September 2025
Department of Ultrasound, Donghai Hospital Affiliated to Kangda College of Nanjing Medical University, Lianyungang, China.
Objective: The aim of this study is to evaluate the prognostic performance of a nomogram integrating clinical parameters with deep learning radiomics (DLRN) features derived from ultrasound and multi-sequence magnetic resonance imaging (MRI) for predicting survival, recurrence, and metastasis in patients diagnosed with triple-negative breast cancer (TNBC) undergoing neoadjuvant chemotherapy (NAC).
Methods: This retrospective, multicenter study included 103 patients with histopathologically confirmed TNBC across four institutions. The training group comprised 72 cases from the First People's Hospital of Lianyungang, while the validation group included 31 cases from three external centers.
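As a rough illustration of how clinical parameters and deep-learning radiomics (DLRN) features can be combined for outcome prediction, the sketch below fits a Cox proportional-hazards model with lifelines. The column names, the synthetic data, and the use of lifelines are assumptions standing in for the nomogram construction described above.

```python
# Hedged sketch: clinical variables + a DLRN signature in a Cox PH model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 103                                           # cohort size from the abstract
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),                 # clinical parameter (assumed)
    "ki67": rng.uniform(0, 1, n),                 # clinical parameter (assumed)
    "dlrn_score": rng.normal(0, 1, n),            # deep-learning radiomics signature
    "time_months": rng.exponential(30, n),        # follow-up time
    "event": rng.integers(0, 2, n),               # 1 = recurrence/metastasis/death
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()                               # hazard ratio per predictor
```

A nomogram is essentially a graphical rendering of such a fitted model's coefficients, so the same fitted predictors would feed the published nomogram.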
Comput Methods Biomech Biomed Engin
September 2025
School of Medicine, Tzu Chi University, Hualien, Taiwan.
This study explores deep feature representations from photoplethysmography (PPG) signals for coronary artery disease (CAD) identification in 80 participants (40 with CAD). Finger PPG signals were processed using multilayer perceptron (MLP) and convolutional neural network (CNN) autoencoders, with performance assessed via 5-fold cross-validation. The CNN autoencoder model achieved the best results (recall 96.
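A compact sketch of the autoencoder idea described above: a 1D CNN compresses a finger PPG segment into a low-dimensional code, and a simple classifier on those codes is then assessed with 5-fold cross-validation. Signal length, layer sizes, and the logistic-regression classifier are illustrative assumptions.

```python
# Sketch, assuming PyTorch + scikit-learn: CNN autoencoder codes -> 5-fold CV.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

class PPGAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(8, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),        # -> 16 * 8 = 128-d code
        )
        self.decoder = nn.Sequential(
            nn.Linear(128, 500), nn.Unflatten(1, (1, 500)),  # reconstruct the segment
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

signals = torch.randn(80, 1, 500)                    # 80 participants, 500-sample PPG segments
labels = np.repeat([0, 1], 40)                       # 40 CAD, 40 non-CAD (per the abstract)

model = PPGAutoencoder()
recon, codes = model(signals)
recon_loss = nn.functional.mse_loss(recon, signals)  # autoencoder training objective

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000),
                         codes.detach().numpy(), labels, cv=cv, scoring="recall")
print(scores.mean())                                 # recall across the 5 folds
```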
Transl Vis Sci Technol
September 2025
Department of Ophthalmology, University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania, USA.
Purpose: To evaluate choroidal vasculature using a novel three-dimensional algorithm in fellow eyes of patients with unilateral chronic central serous chorioretinopathy (cCSC).
Methods: Patients with unilateral cCSC were retrospectively included. Automated choroidal segmentation was conducted using a deep-learning ResUNet model.
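The building block behind a "ResUNet" segmentation network of the kind mentioned above is a residual convolution block stacked in the encoder and decoder. The sketch below shows one such block; channel sizes and normalization choices are assumed and this is not the authors' model.

```python
# Illustrative residual block for a ResUNet-style segmentation network.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the output channels
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

block = ResidualBlock(1, 32)                  # e.g. a single-channel OCT B-scan
bscan = torch.randn(1, 1, 256, 256)
print(block(bscan).shape)                     # torch.Size([1, 32, 256, 256])
```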
J Integr Neurosci
August 2025
School of Computer Science, Guangdong Polytechnic Normal University, 510665 Guangzhou, Guangdong, China.
Background: Emotion recognition from electroencephalography (EEG) can play a pivotal role in the advancement of brain-computer interfaces (BCIs). Recent developments in deep learning, particularly convolutional neural networks (CNNs) and hybrid models, have significantly enhanced interest in this field. However, standard convolutional layers often conflate characteristics across various brain rhythms, complicating the identification of distinctive features vital for emotion recognition.
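One common way to keep brain rhythms from being conflated, as the abstract above notes standard convolutional layers tend to do, is to band-pass the raw EEG into canonical frequency bands so each rhythm becomes its own input channel. The band edges, sampling rate, and use of SciPy filters below are assumptions for illustration.

```python
# Sketch: decompose EEG into canonical rhythm bands before feeding a CNN.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128                                        # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_decompose(eeg, fs=FS):
    """eeg: (channels, samples) -> (bands, channels, samples)."""
    out = []
    for low, high in BANDS.values():
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        out.append(filtfilt(b, a, eeg, axis=-1))
    return np.stack(out)

eeg = np.random.randn(32, FS * 4)               # 32 channels, 4 s of EEG
rhythms = band_decompose(eeg)
print(rhythms.shape)                            # (5, 32, 512): band-separated CNN input
```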
J Pharm Anal
August 2025
Shanghai Key Laboratory of Regulatory Biology, Institute of Biomedical Sciences and School of Life Sciences, East China Normal University, Shanghai, 200241, China.
Current experimental and computational methods have limitations in accurately and efficiently classifying ion channels within vast protein spaces. Here we have developed a deep learning algorithm, GPT2 Ion Channel Classifier (GPT2-ICC), which effectively distinguishes ion channels from a test set containing approximately 239 times more non-ion-channel proteins. GPT2-ICC integrates representation learning with a large language model (LLM)-based classifier, enabling highly accurate identification of potential ion channels.
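A hedged sketch of a GPT-2 sequence classifier of the general kind the abstract describes, built with Hugging Face transformers. Using the stock "gpt2" checkpoint and its byte-level tokenizer directly on amino-acid strings is an illustrative assumption, not the published GPT2-ICC pipeline.

```python
# Sketch: GPT-2 with a classification head for ion-channel vs. non-ion-channel.
import torch
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token             # GPT-2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id    # needed for padded batches

sequences = ["MKTLLLTLVVVTIVCLDLGYT", "MSDNGPQNQRNAPRITFGGP"]   # toy protein sequences
batch = tokenizer(sequences, padding=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits                    # shape (2, 2)
print(logits.softmax(dim=-1))                         # class probabilities per sequence
```

In practice the classification head would be fine-tuned on labeled ion-channel and non-ion-channel sequences before the probabilities are meaningful.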