Most current generative adversarial networks (GANs) cannot simultaneously consider the quality and diversity of generated samples under limited data and variable working conditions. To solve this problem, a Transformer-based conditional GAN transfer learning network is proposed. First, a Transformer-based conditional GAN (TCGAN) generative network is constructed using sample label information, enhancing the quality of generated data while retaining the diversity of generated signals. Second, a transfer learning network based on TCGAN is established, and a "generation-transfer" collaborative training strategy based on expectation maximization is introduced to update the parameters of the generative network and the transfer network in parallel. Finally, the effectiveness of the proposed method is verified on bearing datasets from CWRU and the self-made KUST-SY. The results show that the proposed method generates higher-quality data than comparative methods such as TTS-GAN and CorGAN, providing a new solution for improving cross-domain fault diagnosis performance.
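The core conditioning idea behind a conditional GAN such as TCGAN can be sketched as follows. This is a minimal illustrative example, not the paper's architecture: the class label is encoded as a one-hot vector and concatenated with the latent noise, so each generated signal carries its label information. All shapes and the single-layer "generator" are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, noise_dim, signal_len = 4, 16, 64

# Toy single-layer "generator": one linear map followed by tanh.
# A real conditional GAN would use a deep (here, Transformer-based) network.
W = rng.standard_normal((noise_dim + n_classes, signal_len)) * 0.1

def generate(label: int) -> np.ndarray:
    z = rng.standard_normal(noise_dim)          # latent noise
    y = np.eye(n_classes)[label]                # one-hot label condition
    return np.tanh(np.concatenate([z, y]) @ W)  # label-conditioned sample

sample = generate(label=2)
print(sample.shape)  # (64,)
```

Conditioning on the label lets one generator cover all fault classes while keeping class identity controllable, which is what allows quality and diversity to be balanced.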
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11861598 | PMC
http://dx.doi.org/10.1038/s41598-025-91424-y | DOI Listing
Nucleic Acids Res
August 2025
School of Biotechnology and Key Laboratory of Industrial Biotechnology of Ministry of Education, Jiangnan University, Wuxi 214122, China.
Core promoters are essential regulatory elements that control transcription initiation, but accurately predicting and designing their strength remains challenging due to complex sequence-function relationships and the limited generalizability of existing AI-based approaches. To address this, we developed a modular platform integrating rational library design, predictive modelling, and generative optimization into a closed-loop workflow for end-to-end core promoter engineering. The conserved and spacer regions of core promoters exert distinct effects on transcriptional strength, with the former driving large-scale variation and the latter enabling finer gradation.
Comput Biol Med
August 2025
Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, and Emory University, 55 Park Pl NE, Atlanta, GA 30303, USA.
Generative AI for image synthesis has progressed significantly with the advent of advanced diffusion models, which have set new benchmarks in creating high-quality and meaningful visual information. In this paper, we introduce TransUNET-DDPM, a novel framework that fuses transformer-based architectures with denoising diffusion probabilistic models (DDPMs) to generate high-quality 2D and 3D intrinsic connectivity networks (ICNs).
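The DDPM machinery that such frameworks build on can be sketched briefly. The closed-form forward (noising) process below is the standard DDPM formulation, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε; the schedule values and toy data here are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear variance schedule (illustrative)
alpha_bar = np.cumprod(1.0 - betas)     # cumulative signal fraction ᾱ_t

def q_sample(x0: np.ndarray, t: int) -> np.ndarray:
    """Draw x_t from q(x_t | x_0) in closed form, without iterating steps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.standard_normal((8, 8))        # toy 2D "image"
x_early = q_sample(x0, 10)              # mostly signal
x_late = q_sample(x0, T - 1)            # almost pure noise
```

A denoising network (here, a transformer-fused U-Net) is then trained to predict the added noise ε at each step, and sampling runs this process in reverse.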
ACM Trans Knowl Discov Data
November 2024
Electrical and Computer Engineering Department, Rutgers University, New Brunswick, NJ, USA.
Process data constructed from event logs provides valuable insights into procedural dynamics over time. The confidential information in process data, together with the data's intricate nature, makes such datasets unshareable and challenging to collect. Consequently, research using process data and analytics in the process mining domain is limited.
PLoS One
August 2025
Center for Cyberphysical Systems, Department of Computer Science, Khalifa University, United Arab Emirates.
Hyperspectral data consists of continuous narrow spectral bands. As a result, it carries high spectral but comparatively low spatial information. Convolutional neural networks (CNNs) have emerged as a highly contextual information model for remote sensing applications.
IEEE Trans Pattern Anal Mach Intell
July 2025
The success of deep learning in computer vision over the past decade has hinged on large labeled datasets and strong pretrained models. In data-scarce settings, the quality of these pretrained models becomes crucial for effective transfer learning. Image classification and self-supervised learning have traditionally been the primary methods for pretraining CNNs and transformer-based architectures.