Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

The stochastic synthesis of extreme, rare climate scenarios is vital for climate-change-aware risk and resilience models, directly impacting society across sectors. However, creating high-quality variations of under-represented samples remains a challenge for many generative models. This paper investigates quantizing reconstruction losses to help variational autoencoders (VAEs) better synthesize extreme weather fields from conventional historical training sets. Building on the classical VAE formulation with reconstruction and latent-space regularization losses, we propose several histogram-based penalties on the reconstruction loss that explicitly encourage the model to better synthesize under-represented values. We evaluate our work on precipitation weather fields, where models usually struggle to synthesize extreme precipitation samples well. We demonstrate that bringing histogram awareness to the reconstruction loss substantially improves standard VAE performance, especially for extreme weather events.
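The abstract does not spell out the penalty itself, so the following is only a minimal sketch of one way a histogram-based reconstruction penalty could look, assuming a PyTorch VAE: pixels whose target values fall in rare histogram bins are up-weighted in the squared error. The function name, binning scheme, and inverse-frequency weighting are illustrative assumptions, not the authors' formulation.

```python
import torch

def histogram_weighted_mse(x, x_hat, n_bins=16, eps=1e-6):
    # Hypothetical sketch: weight each pixel's squared error by the
    # inverse frequency of its value bin, so rare (extreme) values
    # contribute more to the reconstruction loss.
    lo, hi = x.min(), x.max()
    bins = ((x - lo) / (hi - lo + eps) * n_bins).long().clamp(0, n_bins - 1)
    counts = torch.bincount(bins.flatten(), minlength=n_bins).float()
    freq = counts / counts.sum()          # empirical value histogram
    w = 1.0 / (freq[bins] + eps)          # up-weight rare bins
    w = w / w.mean()                      # keep the loss scale stable
    return (w * (x - x_hat) ** 2).mean()

# Plugged into the usual VAE objective (reconstruction + beta * KL):
# loss = histogram_weighted_mse(x, x_hat) + beta * kl_divergence
```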

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11292023
DOI: http://dx.doi.org/10.1038/s41598-024-52773-2

Publication Analysis

Top Keywords

quantizing reconstruction (8)
reconstruction losses (8)
extreme weather (8)
weather fields (8)
reconstruction loss (8)
losses improving (4)
weather (4)
improving weather (4)
weather data (4)
data synthesis (4)

Similar Publications

Fault identification for rolling bearing based on ITD-ILBP-Hankel matrix.

ISA Trans

August 2025

School of Automation, Shenyang Aerospace University, Shenyang, Liaoning Province 110136, China.

When a failure occurs in a bearing, the vibration signals are strongly non-stationary and nonlinear, making it difficult to extract fault features adequately. The 1D local binary pattern (1D-LBP) has the advantage of effectively extracting local information from signals.
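For readers unfamiliar with 1D-LBP, here is a toy sketch of the basic operator the snippet refers to, assuming NumPy; the function name, radius, and bit ordering are illustrative choices, not the paper's ITD-ILBP-Hankel pipeline.

```python
import numpy as np

def lbp_1d(signal, radius=4):
    # Toy 1D local binary pattern: compare each sample's 2*radius
    # neighbours against the centre value and pack the comparison
    # bits into an integer code (hypothetical bit ordering).
    signal = np.asarray(signal, dtype=float)
    weights = 2 ** np.arange(2 * radius)          # bit weights
    codes = np.empty(len(signal) - 2 * radius, dtype=np.int64)
    for i in range(radius, len(signal) - radius):
        neighbours = np.concatenate(
            (signal[i - radius:i], signal[i + 1:i + 1 + radius]))
        codes[i - radius] = (neighbours >= signal[i]).astype(np.int64) @ weights
    return codes

# A histogram of the codes is a typical local-texture feature:
# feats = np.bincount(lbp_1d(vibration), minlength=2 ** 8)
```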

Retinal vessel segmentation driven by structure prior tokens.

Med Phys

August 2025

Laboratory of Advanced Theranostic Materials and Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China.

Background: Accurate retinal vessel segmentation from Optical Coherence Tomography Angiography (OCTA) images is vital in ophthalmic medicine, particularly for the early diagnosis and monitoring of diseases such as diabetic retinopathy and hypertensive retinopathy. The retinal vascular system exhibits complex characteristics, including branching, crossing, and continuity, which are crucial for precise segmentation and subsequent medical analysis. However, traditional pixel-wise vessel segmentation methods focus on classifying each pixel independently, relying mainly on local features such as intensity and texture and often neglecting the intrinsic structural properties of vessels.
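To make the contrast concrete, this is a minimal sketch of the pixel-wise baseline the paragraph criticizes (not the paper's structure-prior-token method), assuming a PyTorch segmentation network: each pixel is classified independently, so branching and continuity are never modeled explicitly.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: logits from any segmentation backbone and a
# per-pixel vessel/background label map.
logits = torch.randn(2, 2, 64, 64)           # (batch, classes, H, W)
labels = torch.randint(0, 2, (2, 64, 64))    # (batch, H, W)

# Plain pixel-wise cross-entropy: the loss is a sum of independent
# per-pixel terms, blind to vessel connectivity between pixels.
loss = F.cross_entropy(logits, labels)
```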

DiffRaman: A conditional latent denoising diffusion probabilistic model for enhancing bacterial identification via Raman spectra generation under limited data.

Anal Chim Acta

October 2025

State Key Laboratory of Precision Measurement Technology and Instruments, Tsinghua University, Beijing, 100084, China.

Raman spectroscopy has attracted significant attention in various biochemical detection fields, especially for the rapid identification of pathogenic bacteria. Integrating this technology with deep learning to automate bacterial Raman spectroscopy diagnosis has become a key focus of recent research. However, the diagnostic performance of existing deep learning methods depends heavily on a sufficiently large dataset; when Raman spectroscopy data are limited, they are inadequate for fully optimizing the numerous parameters of deep neural networks.

Post-training quantization (PTQ) for transformer-based large foundation models (LFMs) significantly accelerates model inference and relieves memory constraints without incurring model training. However, existing methods face three main issues: 1) the scaling factors commonly used in scale-reparameterization-based weight-activation quantization to mitigate quantization errors are mostly hand-crafted, which may lead to suboptimal results; 2) the current formulation of quantization error via the L2 norm ignores directional shifts after quantization; 3) most methods are tailored to a single scenario, i.e.
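The second issue is easy to see numerically. The sketch below, a NumPy illustration under assumed max-abs scaling (not any particular method from the paper), quantizes a weight vector and shows that the L2 error says nothing about how far the quantized vector has rotated away from the original.

```python
import numpy as np

def quantize(w, scale, n_bits=8):
    # Uniform symmetric quantization with a single scaling factor.
    qmax = 2 ** (n_bits - 1) - 1
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024)
scale = np.abs(w).max() / (2 ** 7 - 1)   # a common hand-crafted choice

w_q = quantize(w, scale)
l2_err = np.linalg.norm(w - w_q)         # error magnitude only
cos_sim = w @ w_q / (np.linalg.norm(w) * np.linalg.norm(w_q))
print(l2_err, 1.0 - cos_sim)             # the directional shift L2 ignores
```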

Lensless cameras have emerged as a common approach to extending depth of field (DoF) in computational imaging thanks to their simple, compact structure. Current lensless extended depth-of-field (EDoF) cameras are primarily designed to produce a depth-invariant point spread function (PSF), a strategy that often sacrifices diffraction efficiency to keep the PSF consistent across depths.
