Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Sensory systems are constantly bombarded with complex stimuli. Cognitive processing of such complex stimuli may be facilitated by accentuation of important elements. In the case of music listening, alteration of surface features, such as volume and duration, may facilitate the cognitive processing of otherwise high-level information, such as melody and harmony. Hence, musical accents are often aligned with intrinsically salient elements in the stimuli, such as highly unexpected notes. We developed a novel listening paradigm based on an artificial Markov-chain melodic grammar to probe the hypothesis that listeners prefer structurally salient events to coincide with salient surface properties such as musical accents. We manipulated two types of structural saliency: one driven by Gestalt principles (a note at the peak of a melodic contour) and one driven by statistical learning (a note with high surprisal, or information content [IC], as defined by the artificial melodic grammar). Results suggest that, for all listeners, aesthetic preferences in terms of surface properties are well predicted by Gestalt principles of melodic shape. In contrast, despite demonstrating good knowledge of the novel statistical properties of the melodies, participants did not show a preference for accentuation of high-IC notes. This work is a first step toward elucidating the interplay between intrinsic (Gestalt-like) and acquired (statistical) properties of melodies in the development of expressive musical properties, with a focus on the appreciation of dynamic accents (i.e., a transient increase in volume). Our results shed light on the implementation of domain-general and domain-specific principles of information processing during music listening.
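
The information content referenced in the abstract is the standard surprisal of a note under the grammar's transition probabilities: IC(x_t) = -log2 P(x_t | x_{t-1}). The following sketch illustrates this computation for a first-order Markov-chain grammar; the pitch alphabet, transition matrix, and example melody are illustrative assumptions, not the materials used in the study.

```python
import numpy as np

# Minimal sketch (not the authors' code): per-note information content (IC)
# under a hypothetical first-order Markov-chain melodic grammar.
pitches = ["C4", "D4", "E4", "G4", "A4"]           # hypothetical pitch alphabet
idx = {p: i for i, p in enumerate(pitches)}

# Hypothetical transition probabilities P(next | current); each row sums to 1.
transition = np.array([
    [0.10, 0.40, 0.30, 0.15, 0.05],
    [0.30, 0.10, 0.40, 0.15, 0.05],
    [0.20, 0.30, 0.10, 0.30, 0.10],
    [0.10, 0.15, 0.35, 0.10, 0.30],
    [0.25, 0.15, 0.20, 0.30, 0.10],
])

def information_content(melody):
    """Return IC (surprisal, in bits) of each note given its predecessor."""
    ics = []
    for prev, cur in zip(melody, melody[1:]):
        p = transition[idx[prev], idx[cur]]
        ics.append(-np.log2(p))
    return ics

melody = ["C4", "D4", "E4", "A4", "G4"]
print(information_content(melody))   # high values mark surprising (high-IC) notes
```

In this toy example the E4-to-A4 transition has low probability and therefore high IC; in the paradigm described above, such high-IC notes are the statistically salient events whose accentuation was tested against listeners' preferences.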

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11588220
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0312883

Publication Analysis

Top Keywords

melodic contour: 8
statistical learning: 8
complex stimuli: 8
cognitive processing: 8
music listening: 8
musical accents: 8
melodic grammar: 8
surface properties: 8
Gestalt principles: 8
statistical properties: 8

Similar Publications

Melodic Contour Identification by Cochlear-Implant Listeners With Asymmetric Phantom Pulses Presented to Apical Electrodes.

Ear Hear

July 2025

Cambridge Hearing Group, Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK.

Objectives: (a) to compare performance by cochlear-implant listeners on a melodic contour identification task when the fundamental frequency (F0) is encoded explicitly by single-pulse-per-period (SPP) pulse trains presented to an apical channel, by amplitude modulation of high-rate pulse trains presented to several electrodes, and by these two methods combined; (b) to measure melodic contour identification as a function of the range of F0s tested; and (c) to determine whether so-called asymmetric phantom stimulation improves melodic contour identification relative to monopolar stimulation, as has been shown previously using pitch-ranking tasks.

Design: Three experiments measured melodic contour identification by cochlear-implant listeners with two different methods of encoding fundamental frequency (F0), both singly and in combination. One method presented SPP pulse trains at the F0 rate to an apical channel in either partial-bipolar or monopolar mode.
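
The two F0-encoding strategies described in the Design can be pictured with a short NumPy sketch. The sample rate, F0, carrier pulse rate, and modulation depth below are illustrative assumptions, not parameters reported in the study.

```python
import numpy as np

# Minimal illustrative sketch (not the study's stimulation software): two ways
# of carrying a note's F0 on a cochlear-implant channel.
fs = 16000                        # sample grid for this sketch (samples/s)
f0 = 100.0                        # assumed fundamental frequency (Hz)
dur = 0.5                         # stimulus duration (s)
n = int(fs * dur)
t = np.arange(n) / fs

# (1) Single-pulse-per-period (SPP): one pulse per F0 period on one channel.
spp = np.zeros(n)
spp[::int(round(fs / f0))] = 1.0

# (2) High-rate pulse train (assumed 1000 pulses/s) amplitude-modulated at F0.
carrier = np.zeros(n)
carrier[::int(round(fs / 1000))] = 1.0
am = carrier * (0.5 + 0.5 * np.sin(2 * np.pi * f0 * t))   # 100% modulation depth

# spp encodes F0 explicitly in pulse timing; am encodes it in the temporal envelope.
```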

Tonotopic effects on temporal-based pitch perception of transposed tones: Insights from Holo-Hilbert Spectral Analysis.

Biol Psychol

July 2025

Institute of Cognitive Neuroscience, National Central University, Taoyuan, Taiwan; Cognitive Intelligence and Precision Healthcare Center, National Central University, Taoyuan, Taiwan. Electronic address:

Natural sounds, whether music or conspecific communications, frequently contain multiple amplitude modulation (AM) components. AM, the temporal envelope of sounds, plays a critical role in pitch perception. However, how multiple AM components distribute across tonotopic regions of the human cochlea to form pitch percepts remains unclear.

Purpose: Speech-in-noise performance of cochlear implant (CI) users varies considerably, and understanding speech in a complex auditory environment remains challenging. It is still unclear which auditory skill is causing this difficulty. This study aimed to evaluate spectral resolution, temporal resolution, and melodic contour identification (MCI) skills to determine which of these skills is most closely related to speech understanding in noise and to investigate whether these three skills differ among CI users with varying performances in speech-in-noise tasks.

Singing is a universal human attribute. Previous studies suggest that the ability to produce words through singing can be preserved in poststroke aphasia (PSA) and that this is mainly subserved by the spared parts of the left-lateralized language network. However, it remains unclear to what extent the production of rhythmic-melodic acoustic patterns in singing is preserved in aphasia, and which neural networks and hemisphere(s) are involved.

Cochlear implants improve auditory function in individuals with severe hearing loss, yet cochlear implant users often struggle with tasks such as identifying speaker characteristics and musical elements. While music therapy shows promise in addressing these deficits, standardized rehabilitation protocols, especially those focusing on music-based sound recognition, remain underdeveloped. This study evaluates the efficacy of EARPLANTED, a free platform developed at the Faculty of Physics, University of Bialystok, which includes a melodic contour identification test accessible on personal computers and mobile devices.
