Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Motivated by questions arising at the intersection of information theory and geometry, we compare two dissimilarity measures between finite categorical distributions. One is the well-known Jensen-Shannon divergence, which is easy to compute and whose square root is a proper metric. The other is what we call the minmax divergence, which is harder to compute. Just like the Jensen-Shannon divergence, it arises naturally from the Kullback-Leibler divergence. The main contribution of this paper is a proof showing that the minmax divergence can be tightly approximated by the Jensen-Shannon divergence. The bounds suggest that the square root of the minmax divergence is a metric, and we prove that this is indeed true in the one-dimensional case. The general case remains open. Finally, we consider analogous questions in the context of another Bregman divergence and the corresponding Burbea-Rao (Jensen-Bregman) divergence.
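
The divergences named here are standard, so a small illustration may help. Below is a minimal sketch (Python is assumed, since the page carries no code) of the Jensen-Shannon divergence between two finite categorical distributions, built from the Kullback-Leibler divergence to the midpoint mixture; its square root is the proper metric the abstract refers to. The minmax divergence is defined in the paper itself and is not reproduced here.

import numpy as np

def kl_divergence(p, q):
    # Kullback-Leibler divergence D(p || q) for finite categorical
    # distributions, using the convention 0 * log(0/q) = 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def jensen_shannon(p, q):
    # Average KL divergence of p and q to their midpoint m = (p + q) / 2.
    # With base-2 logs the value lies in [0, 1]; its square root is a metric.
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = (p + q) / 2.0
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Example: two categorical distributions over three outcomes.
p, q = [0.7, 0.2, 0.1], [0.1, 0.5, 0.4]
jsd = jensen_shannon(p, q)
print(f"JS divergence: {jsd:.4f}  JS distance (metric): {jsd ** 0.5:.4f}")

The Jensen-Shannon divergence is the Burbea-Rao (Jensen-Bregman) divergence generated by the Shannon entropy, which is the connection the abstract's final sentence generalizes to other Bregman divergences.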

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12386000
DOI: http://dx.doi.org/10.3390/e27080854

Publication Analysis

Top Keywords

jensen-shannon divergence (16)
minmax divergence (16)
divergence (11)
square root (8)
tight bounds (4)
jensen-shannon (4)
bounds jensen-shannon (4)
minmax (4)
divergence minmax (4)
divergence motivated (4)

Similar Publications

Predicting the binding affinity between proteins and ligands is a critical task in drug discovery. Although various computational methods have been proposed to estimate ligand-target affinity, the method of Yasuda et al. (2022) ranks affinities based on the dynamic behavior observed in molecular dynamics (MD) simulations, without requiring structural similarity among ligand substituents.


Deep learning models rely heavily on extensive training data, but obtaining sufficient real-world data remains a major challenge in clinical fields. To address this, we explore methods for generating realistic synthetic multivariate fall data to supplement limited real-world samples collected from three fall-related datasets: SmartFallMM, UniMib, and K-Fall. We apply three conventional time-series augmentation techniques, a Diffusion-based generative AI method, and a novel approach that extracts fall segments from public video footage of older adults.
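
The excerpt does not say which conventional augmentation techniques were applied, so the following is only a hedged illustration rather than the authors' method: jittering (additive Gaussian noise) and magnitude scaling are two widely used time-series augmentations for multivariate sensor windows. The array shape and parameter values are assumptions.

import numpy as np

def jitter(x, sigma=0.03, rng=None):
    # Additive Gaussian noise applied pointwise to the window.
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=None):
    # Random per-channel magnitude scaling around 1.0.
    rng = rng or np.random.default_rng()
    return x * rng.normal(1.0, sigma, size=(1, x.shape[1]))

# Hypothetical window: 128 timesteps x 3 accelerometer axes.
rng = np.random.default_rng(0)
window = rng.standard_normal((128, 3))
augmented = [jitter(window, rng=rng), scale(window, rng=rng)]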


Machine learning (ML) has become a standard tool for the exploration of the chemical space. Much of the performance of such models depends on the chosen database for a given task. Here, this aspect is investigated for "chemical tasks" including the prediction of hybridization, oxidation, substituent effects, and aromaticity, starting from an initial "restricted" database (iRD).


Breed-specific gut microbiota and enterotype divergence in Chinese indigenous ducks.

Front Microbiol

July 2025

Key Laboratory of Natural Microbial Medicine Research of Jiangxi Province, College of Life Sciences, Jiangxi Science and Technology Normal University, Nanchang, China.

The gut microbiota of domestic ducks plays an important role in digestion and absorption, immune regulation, and overall health. However, knowledge of gut microbial composition across ducks of different phylogenetic backgrounds is limited, especially for ducks raised in the same farm environment. In this study, 260 fecal samples were collected from 15 Chinese indigenous duck breeds kept on a single farm under uniform conditions, and 16S rRNA gene sequencing was performed.
