Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Ultrasonic vocalization (USV) analysis is a well-recognized tool for investigating animal communication, and it can be used for behavioral phenotyping of murine models of different disorders. USVs are usually recorded with a microphone sensitive to ultrasound frequencies and analyzed with dedicated software. Different call typologies exist, and each ultrasonic call can be classified manually, but this qualitative analysis is highly time-consuming. Within this framework, we proposed and evaluated a set of supervised learning methods for automatic USV classification, which could provide both a sustainable procedure for in-depth analysis of ultrasonic communication and a standardized analysis. We used manually built datasets obtained by segmenting the USV audio tracks with the Avisoft software and labelling each segment into one of 10 representative classes. For the automatic classification task, we designed a Convolutional Neural Network trained on the spectrogram images associated with the segmented audio files. We also tested other supervised learning algorithms, such as Support Vector Machines, Random Forests, and Multilayer Perceptrons, exploiting informative numerical features extracted from the spectrograms. The results showed that considering the whole time/frequency information of the spectrogram leads to significantly higher performance than considering a subset of numerical features. In the authors' opinion, the experimental results may represent a valuable benchmark for future work in this research field.
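As a self-contained sketch of the pipeline the abstract describes — spectrograms fed to a classifier that uses the whole time/frequency plane — the following NumPy-only toy substitutes a nearest-centroid rule for the paper's CNN. This is not the authors' code: the signal parameters, the two synthetic "call" classes (constant tone vs. upward sweep), and the window sizes are all illustrative assumptions.

```python
import numpy as np

def spectrogram(signal, win=128, hop=64):
    """Magnitude spectrogram from a short-time FFT with a Hann window."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))  # shape: (time, freq)

def nearest_centroid(train_x, train_y, x):
    """Predict the class whose mean training vector is closest to x."""
    centroids = {c: train_x[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Two synthetic "call" classes (illustrative, not real USVs):
# class 0 is a constant tone, class 1 an upward frequency sweep.
rng = np.random.default_rng(0)
n = 2048
t = np.linspace(0.0, 1.0, n)

def make_call(kind):
    freq = np.full(n, 200.0) if kind == 0 else 200.0 + 150.0 * t
    phase = 2.0 * np.pi * np.cumsum(freq) / n
    return np.sin(phase) + 0.05 * rng.standard_normal(n)

# Train on flattened spectrograms, i.e. the whole time/frequency plane.
train_x = np.array([spectrogram(make_call(k)).ravel()
                    for k in (0, 1) for _ in range(5)])
train_y = np.array([k for k in (0, 1) for _ in range(5)])

test_set = [(spectrogram(make_call(k)).ravel(), k) for k in (0, 1) for _ in range(2)]
accuracy = np.mean([nearest_centroid(train_x, train_y, x) == y for x, y in test_set])
```

With clearly separated synthetic classes the full-spectrogram representation classifies perfectly; the paper's point is that such whole-plane inputs outperformed hand-picked numerical feature subsets on real USV data.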


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7815145
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0244636

Publication Analysis

Top Keywords

automatic classification (8)
convolutional neural (8)
supervised learning (8)
numerical features (8)
performance considering (8)
classification mice (4)
mice vocalizations (4)
vocalizations machine (4)
machine learning (4)
learning techniques (4)

Similar Publications

Leveraging GPT-4o for Automated Extraction and Categorization of CAD-RADS Features From Free-Text Coronary CT Angiography Reports: Diagnostic Study.

JMIR Med Inform

September 2025

Departments of Radiology, The Third Affiliated Hospital, Sun Yat-Sen University, 600 Tianhe Road, Guangzhou, Guangdong, 510630, China, 86 18922109279, 86 20852523108.

Background: Despite the Coronary Artery Reporting and Data System (CAD-RADS) providing a standardized approach, radiologists continue to favor free-text reports. This preference creates significant challenges for data extraction and analysis in longitudinal studies, potentially limiting large-scale research and quality assessment initiatives.

Objective: To evaluate the ability of the generative pre-trained transformer (GPT)-4o model to convert real-world coronary computed tomography angiography (CCTA) free-text reports into structured data and automatically identify CAD-RADS categories and P categories.


This study aimed to develop a deep-learning model for the automatic classification of mandibular fractures using panoramic radiographs. A pretrained convolutional neural network (CNN) was used to classify fractures based on a novel, clinically relevant classification system. The dataset comprised 800 panoramic radiographs obtained from patients with facial trauma.


Given the significant global health burden caused by depression, numerous studies have utilized artificial intelligence techniques to objectively and automatically detect depression. However, existing research primarily focuses on improving the accuracy of depression recognition while overlooking the explainability of detection models and the evaluation of feature importance. In this paper, we propose a novel framework named Enhanced Domain Adversarial Neural Network (E-DANN) for depression detection.


Accurate differentiation between persistent vegetative state (PVS) and minimally conscious state and estimation of recovery likelihood in patients in PVS are crucial. This study analyzed electroencephalography (EEG) metrics to investigate their relationship with consciousness improvements in patients in PVS and developed a machine learning prediction model. We retrospectively evaluated 19 patients in PVS, categorizing them into two groups: those with improved consciousness (n = 7) and those without improvement (n = 12).


Significance: Melanoma's rising incidence demands automatable high-throughput approaches for early detection such as total body scanners, integrated with computer-aided diagnosis. High-quality input data is necessary to improve diagnostic accuracy and reliability.

Aim: This work aims to develop a high-resolution optical skin imaging module and the software for acquiring and processing raw image data into high-resolution dermoscopic images using a focus stacking approach.
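The "focus stacking approach" named in the aim can be illustrated with a minimal NumPy sketch: per pixel, keep the value from the frame with the strongest local Laplacian response, a common sharpness proxy. This is an assumption-laden toy, not the study's software — real pipelines add frame alignment, windowed sharpness measures, and blending.

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian, used as a simple per-pixel sharpness measure."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img

def focus_stack(frames):
    """Per pixel, keep the value from the frame with the strongest response."""
    frames = np.asarray(frames, dtype=float)
    sharpness = np.abs([laplacian(f) for f in frames])  # (n_frames, H, W)
    best = np.argmax(sharpness, axis=0)                 # sharpest frame per pixel
    return np.take_along_axis(frames, best[None], axis=0)[0]

# Synthetic stack: each frame is "in focus" (a checkerboard) on one half
# and defocused (flat grey) on the other.
checker = np.indices((8, 8)).sum(axis=0) % 2
frame_a = np.full((8, 8), 0.5)
frame_a[:, :4] = checker[:, :4]
frame_b = np.full((8, 8), 0.5)
frame_b[:, 4:] = checker[:, 4:]
fused = focus_stack([frame_a, frame_b])
```

On this synthetic stack the fused result recovers the full checkerboard, taking each half from the frame where it is sharp.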
