Publications by authors named "Michal Borsky"

Introduction: The field of automatic respiratory analysis focuses mainly on breath detection in signals such as audio recordings or nasal flow measurements, which suffer from background noise and other disturbances. Here we introduce a novel algorithm that isolates individual respiratory cycles in the non-invasive thoracic respiratory inductance plethysmography (RIP) belt signal.

Purpose: The algorithm locates breaths by applying signal processing and statistical methods to the thoracic respiratory inductance plethysmography belt signal, enabling the analysis of sleep data at the individual-breath level.
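As a rough illustration of breath-level segmentation on a RIP trace, the sketch below splits the signal at successive troughs (taken here as inspiration onsets) with a minimum-length constraint. This is an assumed, simplified scheme for illustration only; the published algorithm's statistical criteria are not described in this listing and are not reproduced.

```python
import numpy as np

def segment_breaths(rip, fs, min_breath_s=1.5):
    """Split a thoracic RIP signal into individual respiratory cycles.

    Illustrative only: each cycle runs from one signal trough to the
    next, and a minimum cycle length rejects spurious local minima.
    """
    min_gap = int(min_breath_s * fs)
    troughs = []
    for i in range(1, len(rip) - 1):
        # local minimum, at least min_gap samples after the previous one
        if rip[i] < rip[i - 1] and rip[i] <= rip[i + 1]:
            if not troughs or i - troughs[-1] >= min_gap:
                troughs.append(i)
    # consecutive troughs delimit one breath each: (start, end) indices
    return list(zip(troughs[:-1], troughs[1:]))

# synthetic 0.25 Hz (15 breaths/min) breathing sampled at 25 Hz
fs = 25
t = np.arange(0, 60, 1 / fs)
cycles = segment_breaths(np.sin(2 * np.pi * 0.25 * t), fs)
```

On this clean synthetic trace the function recovers one 4 s cycle between each pair of adjacent troughs; real RIP data would first need filtering and artifact handling.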

The genetic basis of the human vocal system is largely unknown, as are the sequence variants that give rise to individual differences in voice and speech. Here, we couple data on diversity in the sequence of the genome with voice and vowel acoustics in speech recordings from 12,901 Icelanders. We show how voice pitch and vowel acoustics vary across the life span and correlate with anthropometric, physiological, and cognitive traits.

The diagnosis of sleep disordered breathing depends on the detection of respiratory-related events in sleep studies: apneas, hypopneas, snores, or respiratory event-related arousals. While a number of automatic detection methods have been proposed, their reproducibility has been an issue, in part due to the absence of a generally accepted protocol for evaluating their results. In sleep measurements this is usually framed as a classification problem, and the accompanying issue of localization is not regarded as equally critical.
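One common way to make an evaluation sensitive to localization, not just detection, is to count a predicted event as a true positive only when it overlaps a reference event sufficiently, e.g. by intersection-over-union. The sketch below illustrates that general idea; it is an assumed example, not the evaluation protocol proposed in the paper.

```python
def match_events(pred, truth, iou_thresh=0.5):
    """Score predicted (start, end) intervals against reference events.

    A prediction counts as a true positive only if it matches an
    as-yet-unmatched reference event with intersection-over-union
    at or above the threshold, coupling detection with localization.
    """
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0

    matched = set()
    tp = 0
    for p in pred:
        for j, g in enumerate(truth):
            if j not in matched and iou(p, g) >= iou_thresh:
                matched.add(j)
                tp += 1
                break
    return tp, len(pred) - tp, len(truth) - tp  # TP, FP, FN

# times in seconds; the third prediction overlaps no reference event
tp, fp, fn = match_events([(0, 10), (20, 28), (50, 60)],
                          [(1, 11), (20, 30), (80, 90)])
```

Raising `iou_thresh` makes the score demand tighter localization, which is exactly the dimension a pure per-event classification metric ignores.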

The goal of this study was to investigate the performance of different feature types for voice quality classification using multiple classifiers. The study compared the COVAREP feature set, which includes glottal source features, frequency-warped cepstral features, and harmonic model features, against mel-frequency cepstral coefficients (MFCCs) computed from the acoustic voice signal, the acoustic-based glottal inverse filtered (GIF) waveform, and the electroglottographic (EGG) waveform. Our hypothesis was that MFCCs can capture the perceived voice quality from any of these three voice signals.
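For readers unfamiliar with the MFCC pipeline the abstract refers to, a minimal NumPy-only sketch is shown below: frame the waveform, take a power spectrum, pool it through a triangular mel filterbank, and apply a DCT to the log filterbank energies. All parameter values here are illustrative defaults, not the settings used in the study or in the COVAREP toolkit.

```python
import numpy as np

def mfcc(signal, fs, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    """Minimal MFCC computation (a sketch, not a toolkit implementation)."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # triangular mel filterbank over the rfft bins
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[m - 1, k] = (r - k) / max(r - c, 1)

    # frame, window, power spectrum
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * np.hamming(n_fft)
        frames.append(np.abs(np.fft.rfft(frame)) ** 2)
    power = np.array(frames)

    log_energy = np.log(power @ fbank.T + 1e-10)

    # type-II DCT of the log filterbank energies
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_mels)
    return log_energy @ dct.T

# 0.5 s of a 440 Hz tone at 16 kHz -> one 13-dim vector per frame
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
feats = mfcc(np.sin(2 * np.pi * 440 * t), fs)
```

The same function can in principle be applied to an acoustic, GIF, or EGG waveform, which is what makes MFCCs a convenient common representation for comparing those signals.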
