Numerous COVID-19 diagnostic imaging Artificial Intelligence (AI) studies exist. However, none of their models were of potential clinical use, primarily owing to methodological defects and the lack of implementation considerations for inference. In this study, all development processes of the deep-learning models are performed based on strict criteria of the "KAIZEN checklist", which is proposed based on previous AI development guidelines to overcome the deficiencies mentioned above. We develop and evaluate two binary-classification deep-learning models to triage COVID-19: a slice model that examines a Computed Tomography (CT) slice for COVID-19 lesions, and a series model that examines a series of CT images to identify an infected patient. We collected 2,400,200 CT slices from twelve emergency centers in Japan. Area Under the Curve (AUC) and accuracy were calculated for classification performance. The inference time of the system that includes these two models was measured. For validation data, the slice and series models recognized COVID-19 with AUCs of 0.989 and 0.982 and accuracies of 95.9% and 93.0%, respectively. For test data, the models' AUCs were 0.958 and 0.953 and their accuracies 90.0% and 91.4%, respectively. The average inference time per case was 2.83 s. Our deep-learning system achieves accuracy and inference speed high enough for practical use. The system has already been implemented in four hospitals, and implementation at eight more is in progress. We have released the application software and implementation code free of charge in a highly usable state to allow its use in Japan and globally.
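As a minimal illustrative sketch (not from the paper), the two metrics the abstract reports, AUC and accuracy, can be computed for a binary classifier as follows; the labels and scores below are hypothetical, not the study's data:

```python
def auc_score(labels, scores):
    """Area Under the ROC Curve via the rank-sum (Mann-Whitney U) formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Fraction of positive/negative pairs the classifier ranks correctly,
    # with ties counted as half-correct.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, scores, threshold=0.5):
    """Accuracy after thresholding scores into hard 0/1 predictions."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Hypothetical per-slice COVID-19 probabilities and ground-truth labels
labels = [1, 1, 1, 0, 0, 0]
scores = [0.95, 0.80, 0.30, 0.35, 0.20, 0.10]
print(round(auc_score(labels, scores), 3))  # 0.889
print(round(accuracy(labels, scores), 3))   # 0.833
```

Note that AUC is threshold-free (it depends only on the ranking of scores), while accuracy depends on the chosen operating threshold, which is why the abstract reports both.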
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10799049
DOI: http://dx.doi.org/10.1038/s41598-024-52135-y
J Ultrasound Med
September 2025
Department of Ultrasound, Donghai Hospital Affiliated to Kangda College of Nanjing Medical University, Lianyungang, China.
Objective: The aim of this study is to evaluate the prognostic performance of a nomogram integrating clinical parameters with deep learning radiomics (DLRN) features derived from ultrasound and multi-sequence magnetic resonance imaging (MRI) for predicting survival, recurrence, and metastasis in patients diagnosed with triple-negative breast cancer (TNBC) undergoing neoadjuvant chemotherapy (NAC).
Methods: This retrospective, multicenter study included 103 patients with histopathologically confirmed TNBC across four institutions. The training group comprised 72 cases from the First People's Hospital of Lianyungang, while the validation group included 31 cases from three external centers.
Comput Methods Biomech Biomed Engin
September 2025
School of Medicine, Tzu Chi University, Hualien, Taiwan.
This study explores deep feature representations from photoplethysmography (PPG) signals for coronary artery disease (CAD) identification in 80 participants (40 with CAD). Finger PPG signals were processed using multilayer perceptron (MLP) and convolutional neural network (CNN) autoencoders, with performance assessed via 5-fold cross-validation. The CNN autoencoder model achieved the best results (recall 96.
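The 5-fold cross-validation protocol mentioned above can be sketched as follows; this is an illustrative pure-Python fold split for the study's 80 participants, with the autoencoder training step left as a placeholder comment:

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices into k disjoint (train, validation) index pairs."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        val_set = set(val)
        train = [i for i in range(n_samples) if i not in val_set]
        folds.append((train, val))
        start += size
    return folds

# 80 participants, as in the study; each fold holds out 16 for validation
folds = k_fold_indices(80, k=5)
for train_idx, val_idx in folds:
    # Here one would fit the MLP or CNN autoencoder on train_idx
    # and evaluate recall/precision on val_idx.
    assert len(train_idx) == 64 and len(val_idx) == 16
```

Each participant appears in exactly one validation fold, so every recall figure is computed on data the model never saw during training.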
Transl Vis Sci Technol
September 2025
Department of Ophthalmology, University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania, USA.
Purpose: To evaluate choroidal vasculature using a novel three-dimensional algorithm in fellow eyes of patients with unilateral chronic central serous chorioretinopathy (cCSC).
Methods: Patients with unilateral cCSC were retrospectively included. Automated choroidal segmentation was conducted using a deep-learning ResUNet model.
J Integr Neurosci
August 2025
School of Computer Science, Guangdong Polytechnic Normal University, 510665 Guangzhou, Guangdong, China.
Background: Emotion recognition from electroencephalography (EEG) can play a pivotal role in the advancement of brain-computer interfaces (BCIs). Recent developments in deep learning, particularly convolutional neural networks (CNNs) and hybrid models, have significantly enhanced interest in this field. However, standard convolutional layers often conflate characteristics across various brain rhythms, complicating the identification of distinctive features vital for emotion recognition.
J Pharm Anal
August 2025
Shanghai Key Laboratory of Regulatory Biology, Institute of Biomedical Sciences and School of Life Sciences, East China Normal University, Shanghai, 200241, China.
Current experimental and computational methods have limitations in accurately and efficiently classifying ion channels within vast protein spaces. Here we have developed a deep learning algorithm, GPT2 Ion Channel Classifier (GPT2-ICC), which effectively distinguishes ion channels in a test set containing approximately 239 times more non-ion-channel proteins than ion channels. GPT2-ICC integrates representation learning with a large language model (LLM)-based classifier, enabling highly accurate identification of potential ion channels.
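The 239:1 class imbalance mentioned above is what makes this test set demanding: even a highly specific classifier yields many false positives when negatives vastly outnumber positives. A minimal sketch (with hypothetical operating points, not GPT2-ICC's actual numbers) of how precision depends on that ratio:

```python
def precision_at_imbalance(sensitivity, specificity, neg_per_pos):
    """Precision (PPV) when negatives outnumber positives neg_per_pos : 1.

    Per positive example: sensitivity true positives are found, while
    (1 - specificity) * neg_per_pos negatives are misclassified as positive.
    """
    tp = sensitivity
    fp = (1.0 - specificity) * neg_per_pos
    return tp / (tp + fp)

# Hypothetical operating points on a 239:1 test set like GPT2-ICC's
print(round(precision_at_imbalance(0.95, 0.990, 239), 3))  # 0.284
print(round(precision_at_imbalance(0.95, 0.999, 239), 3))  # 0.799
```

At 99% specificity, fewer than a third of flagged proteins would actually be ion channels; pushing specificity to 99.9% is what makes precision usable, which is why plain accuracy is uninformative on such imbalanced protein spaces.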