Astronomy is entering an unprecedented era of big-data science, driven by missions like ESA's Gaia telescope, which aims to map the Milky Way in three dimensions. Gaia's vast dataset presents a monumental challenge for traditional analysis methods: its sheer scale exceeds the capabilities of manual exploration, necessitating advanced computational techniques. In response to this challenge, we developed a novel approach leveraging deep learning to estimate the metallicity of fundamental mode (ab-type) RR Lyrae stars from their light curves in the Gaia optical G-band. Our study explores deep-learning techniques, particularly advanced neural-network architectures, for predicting photometric metallicity from time-series data. Our deep-learning models demonstrated notable predictive performance, with a mean absolute error (MAE) of 0.0565, a root mean square error (RMSE) of 0.0765, and an R² score of 0.9401, measured by cross-validation. The weighted mean absolute error (wMAE) is 0.0563, and the weighted root mean square error (wRMSE) is 0.0763. These results showcase the effectiveness of our approach in accurately estimating metallicity values. Our work underscores the importance of deep learning in astronomical research, particularly with large datasets from missions like Gaia. By harnessing deep-learning methods, we can analyze vast datasets with precision, contributing to more comprehensive insights into complex astronomical phenomena.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11358884 | PMC |
| http://dx.doi.org/10.3390/s24165203 | DOI Listing |
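As a concrete illustration of the metrics quoted in the abstract, here is a minimal NumPy sketch of how MAE, RMSE, R², and their weighted variants (wMAE, wRMSE) might be computed; the metallicity values and per-star weights below are placeholders, not the study's data.

```python
import numpy as np

def regression_metrics(y_true, y_pred, weights=None):
    """Compute MAE, RMSE, R^2 and, if weights are given, wMAE and wRMSE."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    metrics = {
        "MAE": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "R2": 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2),
    }
    if weights is not None:
        w = np.asarray(weights, float)
        metrics["wMAE"] = np.sum(w * np.abs(err)) / np.sum(w)
        metrics["wRMSE"] = np.sqrt(np.sum(w * err ** 2) / np.sum(w))
    return metrics

# Placeholder example: predicted vs. reference [Fe/H] with illustrative weights
# (e.g., inverse variance of the reference metallicity).
y_true = [-1.50, -1.20, -0.80, -1.00]
y_pred = [-1.45, -1.25, -0.85, -0.95]
print(regression_metrics(y_true, y_pred, weights=[1.0, 0.8, 1.2, 1.0]))
```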
J Ultrasound Med
September 2025
Department of Ultrasound, Donghai Hospital Affiliated to Kangda College of Nanjing Medical University, Lianyungang, China.
Objective: The aim of this study is to evaluate the prognostic performance of a nomogram integrating clinical parameters with deep learning radiomics (DLRN) features derived from ultrasound and multi-sequence magnetic resonance imaging (MRI) for predicting survival, recurrence, and metastasis in patients diagnosed with triple-negative breast cancer (TNBC) undergoing neoadjuvant chemotherapy (NAC).
Methods: This retrospective, multicenter study included 103 patients with histopathologically confirmed TNBC across four institutions. The training group comprised 72 cases from the First People's Hospital of Lianyungang, while the validation group included 31 cases from three external centers.
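The DLRN nomogram itself is not detailed in this excerpt; as a rough, hedged sketch of the underlying idea (combining a clinical covariate with a deep-learning radiomics score in a survival model, which a nomogram then renders graphically), the snippet below fits a Cox proportional-hazards model with the lifelines library. All column names and values are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table: one row per patient with follow-up time (months),
# event indicator (1 = recurrence/metastasis/death), a clinical parameter,
# and a deep-learning radiomics (DLR) score derived from US/MRI features.
df = pd.DataFrame({
    "time":       [12, 30, 24, 8, 36, 18],
    "event":      [1,  0,  1,  1, 0,  0],
    "tumor_size": [3.1, 1.8, 2.5, 4.0, 1.2, 2.0],
    "dlr_score":  [0.72, 0.31, 0.55, 0.88, 0.20, 0.40],
})

# Cox model combining the clinical covariate with the DLR signature;
# a nomogram is essentially a graphical rendering of such a fitted model.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```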
Comput Methods Biomech Biomed Engin
September 2025
School of Medicine, Tzu Chi University, Hualien, Taiwan.
This study explores deep feature representations from photoplethysmography (PPG) signals for coronary artery disease (CAD) identification in 80 participants (40 with CAD). Finger PPG signals were processed using multilayer perceptron (MLP) and convolutional neural network (CNN) autoencoders, with performance assessed via 5-fold cross-validation. The CNN autoencoder model achieved the best results (recall 96.
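The exact architectures are not described in this excerpt; as a rough sketch of the idea, the PyTorch module below is a small 1-D convolutional autoencoder whose bottleneck could serve as a deep feature representation of a PPG segment. The layer sizes and the 500-sample window length are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class PPGConvAutoencoder(nn.Module):
    """Minimal 1-D CNN autoencoder; the bottleneck acts as the PPG feature vector."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 125), nn.ReLU(),
            nn.Unflatten(1, (32, 125)),
            nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):            # x: (batch, 1, 500) PPG window
        z = self.encoder(x)          # deep feature representation
        return self.decoder(z), z

model = PPGConvAutoencoder()
x = torch.randn(8, 1, 500)           # 8 illustrative PPG segments
recon, features = model(x)
print(recon.shape, features.shape)   # (8, 1, 500) and (8, 32)
```

In a setup like the one described, features such as these would then be passed to a classifier and evaluated with 5-fold cross-validation.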
Transl Vis Sci Technol
September 2025
Department of Ophthalmology, University of Pittsburgh, School of Medicine, Pittsburgh, Pennsylvania, USA.
Purpose: To evaluate choroidal vasculature using a novel three-dimensional algorithm in fellow eyes of patients with unilateral chronic central serous chorioretinopathy (cCSC).
Methods: Patients with unilateral cCSC were retrospectively included. Automated choroidal segmentation was conducted using a deep-learning ResUNet model.
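The segmentation network is identified only as a ResUNet; as a minimal sketch of the core idea (a U-Net-style encoder-decoder built from residual blocks), here is an illustrative PyTorch residual block. The channel counts and input size are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-BN-ReLU x2 with a skip connection, the building block of a ResUNet."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection when channel counts differ, otherwise an identity skip
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# In a ResUNet, blocks like this replace the plain double-conv units of a U-Net:
# the encoder downsamples between blocks, and the decoder upsamples and concatenates
# the matching encoder feature map before the next residual block.
block = ResidualBlock(1, 64)                      # e.g. a single-channel OCT B-scan input
print(block(torch.randn(2, 1, 128, 128)).shape)   # torch.Size([2, 64, 128, 128])
```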
J Integr Neurosci
August 2025
School of Computer Science, Guangdong Polytechnic Normal University, 510665 Guangzhou, Guangdong, China.
Background: Emotion recognition from electroencephalography (EEG) can play a pivotal role in the advancement of brain-computer interfaces (BCIs). Recent developments in deep learning, particularly convolutional neural networks (CNNs) and hybrid models, have significantly enhanced interest in this field. However, standard convolutional layers often conflate characteristics across various brain rhythms, complicating the identification of distinctive features vital for emotion recognition.
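One common way to keep brain rhythms from being conflated is to decompose the EEG into canonical frequency bands before (or inside) the network, so each rhythm gets its own feature maps. The filter-bank sketch below uses SciPy; the 128 Hz sampling rate, channel count, and band edges are conventional but illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}   # Hz, conventional ranges

def band_decompose(eeg, fs=128, order=4):
    """Split (channels, samples) EEG into per-rhythm signals: (bands, channels, samples)."""
    out = []
    for low, high in BANDS.values():
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        out.append(filtfilt(b, a, eeg, axis=-1))  # zero-phase band-pass filtering
    return np.stack(out)

# Illustrative 32-channel, 2-second segment; the per-band stack can then be fed
# to a CNN so that each rhythm is represented separately.
eeg = np.random.randn(32, 2 * 128)
print(band_decompose(eeg).shape)  # (5, 32, 256)
```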
J Pharm Anal
August 2025
Shanghai Key Laboratory of Regulatory Biology, Institute of Biomedical Sciences and School of Life Sciences, East China Normal University, Shanghai, 200241, China.
Current experimental and computational methods have limitations in accurately and efficiently classifying ion channels within vast protein spaces. Here we have developed a deep learning algorithm, GPT2 Ion Channel Classifier (GPT2-ICC), which effectively distinguishes ion channels in a test set containing approximately 239 times more non-ion-channel proteins. GPT2-ICC integrates representation learning with a large language model (LLM)-based classifier, enabling highly accurate identification of potential ion channels.
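GPT2-ICC's exact pipeline is not shown in this excerpt; as a rough sketch of an LLM-based sequence classifier in the same spirit, the snippet below uses Hugging Face's GPT2ForSequenceClassification on a toy protein sequence. The base "gpt2" checkpoint and the space-separated residue tokenization are illustrative assumptions, not the authors' setup.

```python
import torch
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

# Binary classification head on top of GPT-2:
# class 1 = ion channel, class 0 = non-ion-channel protein.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token           # GPT-2 has no pad token by default
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Toy amino-acid sequence, space-separated so each residue becomes its own token.
seq = "M K T V L A G L I F S"
inputs = tokenizer(seq, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits                 # shape (1, 2); head is untrained here
print(torch.softmax(logits, dim=-1))
```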