The objective of this trial was to evaluate the behavioral patterns and performance of lactating sows and their litters under the effect of artificial vocalization. Twenty-eight sows and their litters were distributed in a completely randomized design in a 2 × 2 factorial scheme (artificial vocalization × lactation week). The behavior of the animals was monitored for 24 hours on the 7th and 15th days of lactation, analyzing the number, interval, and frequency of nursings. The body condition and performance of the sows were also evaluated. Artificial vocalization promoted higher frequencies of eating in sows and nursing in piglets (P < 0.05), increased inactive sow behavior (P < 0.05), and reduced sow alertness during activity (P < 0.05). The number and duration of suckling sessions on the 15th day of lactation were reduced (P < 0.05). The use of artificial vocalization did not affect the body condition or milk production of the lactating sows, or the performance of the litter during lactation (P > 0.05). The use of maternal artificial vocalization during lactation promoted greater lactation efficiency and longer rest time, favoring the sows' welfare.
DOI: http://dx.doi.org/10.1590/0001-3765201820180340
Encephale
September 2025
Speech and Language Pathology Department of Nice, Faculty of Medicine, Campus Pasteur, université Côte d'Azur, 28, avenue de Valombrose, 06107 Nice, France; Cognition Behaviour Technology Laboratory (CoBTeK), institut Claude-Pompidou, université Côte d'Azur, 10, rue Molière, 06000 Nice, France.
Introduction: Apathy, commonly observed in neurocognitive disorders, is characterized by a reduction in goal-directed behavior, with diminished initiative, interest, and emotion. This article presents the case of Mrs. B.
Laryngoscope
September 2025
Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA.
Objectives: Major advancements have been made in applying artificial intelligence and computer vision to analyze videolaryngoscopy data. These models are limited to post hoc analysis and are aimed at research settings. In this work, we assess the feasibility of a real-time solution for automated vocal fold tracking during in-office laryngoscopy.
Front Psychol
August 2025
School of the Arts, Universiti Sains Malaysia, Penang, Malaysia.
Introduction: Metacognition plays a vital role in enhancing learning outcomes and has received increasing attention in recent years. Studies have shown that accomplished musicians typically demonstrate high levels of metacognition, and that reflection and feedback are effective strategies for promoting metacognitive development. This study explores the impact of integrating artificial intelligence (AI) and e-learning tools into vocal music training.
PLoS Comput Biol
August 2025
Elmore School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana, United States of America.
For static stimuli or at gross (∼1-s) time scales, artificial neural networks (ANNs) that have been trained on challenging engineering tasks, like image classification and automatic speech recognition, are now the best predictors of neural responses in primate visual and auditory cortex. It is, however, unknown whether this success can be extended to spiking activity at fine time scales, which are particularly relevant to audition. Here we address this question with ANNs trained on speech audio, and acute multi-electrode recordings from the auditory cortex of squirrel monkeys.
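The standard way ANNs are used as "predictors of neural responses" in this literature is a linear readout: a regularized regression maps a network layer's activations for each stimulus to the recorded firing rates, and prediction accuracy is scored by correlation on held-out-style data. The sketch below illustrates that mapping on synthetic data; the activations, dimensions, and noise level are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 stimuli, 10 "ANN layer" features per stimulus.
X = rng.standard_normal((200, 10))
# Simulated neural response: a linear function of the features plus noise.
w_true = rng.standard_normal(10)
y = X @ w_true + 0.1 * rng.standard_normal(200)

# Fit the linear readout and score predictivity as a Pearson correlation.
w = fit_ridge(X, y, alpha=0.1)
pred = X @ w
r = np.corrcoef(pred, y)[0, 1]
```

In practice the correlation is computed with cross-validation and compared across layers and time scales; the open question the abstract raises is whether this readout still works for spiking activity at fine (sub-second) resolution.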
Poult Sci
August 2025
College of Veterinary Medicine, Nanjing Agricultural University, Nanjing 210095, China. Electronic address:
With the advancement of precision livestock farming (PLF), acoustic technology has emerged as a key tool for tracking the health and well-being of laying hens, owing to its non-invasive, real-time and cost-effective nature. In this study, continuous audio data were collected from commercial chicken houses over a period of 15 days, in addition to temperature and humidity index (THI) analysis, to develop a convolutional neural network (CNN)-based model for classifying chicken squawks. This approach enabled the investigation of the relationship between environmental adaptability and acoustic traits in a mixed-sex rearing system.
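The temperature and humidity index (THI) mentioned above combines ambient temperature and relative humidity into a single heat-stress score. Formulations vary by species and source; the sketch below uses one common variant (an assumption — the study does not state which formula it used) to show the shape of the calculation.

```python
def thi(temp_c, rh_pct):
    """Temperature-humidity index from air temperature (deg C) and
    relative humidity (%). One common formulation; coefficients
    differ across species and publications."""
    return 0.8 * temp_c + (rh_pct / 100.0) * (temp_c - 14.4) + 46.4

# Example: 25 deg C at 50% relative humidity.
value = thi(25.0, 50.0)  # ~71.7 on this scale
```

In a pipeline like the one described, per-interval THI values would be aligned with the CNN's squawk-class counts over the 15-day recording period to relate vocal behavior to thermal conditions.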