
Where is the melody? Spontaneous attention orchestrates melody formation during polyphonic music listening

Abstract

Humans seamlessly process multi-voice music into a coherent perceptual whole. Yet the neural strategies supporting this experience remain unclear. One fundamental component of this process is the formation of melody, a core structural element of music. Previous work on monophonic listening has provided strong evidence for the neurophysiological basis of melody processing, for example indicating predictive processing as a foundational mechanism underlying melody encoding. However, considerable uncertainty remains about how melodies are formed during polyphonic music listening, as existing theories (e.g., divided attention, figure–ground model, stream integration) fail to unify the full range of empirical findings. Here, we combined behavioral measures with non-invasive electroencephalography (EEG) to probe spontaneous attentional bias and melodic expectation while participants listened to two-voice classical excerpts. Our uninstructed listening paradigm eliminated a major experimental constraint, creating a more ecologically valid setting. We found that attention bias was significantly influenced by both the high-voice superiority effect and intrinsic melodic statistics. We then employed transformer-based models to generate next-note expectation profiles and test competing theories of polyphonic perception. Drawing on our findings, we propose a weighted-integration framework in which attentional bias dynamically calibrates the degree of integration of the competing streams. In doing so, the proposed framework reconciles previously divergent accounts by showing that, even under free-listening conditions, melodies emerge through an attention-guided statistical integration mechanism.

Highlights

- EEG can be used to track spontaneous attention during uninstructed listening to polyphonic music.
- Behavioural and neural data indicate that spontaneous attention is influenced by both high-voice superiority and melodic contour.
- Attention bias impacts the neural encoding of the polyphonic streams, with the strongest effects within 200 ms after note onset.
- Strong attention bias leads to melodic expectations consistent with a monophonic music transformer, in line with a figure–ground model.
- Weak attention bias leads to melodic expectations consistent with a stream integration model.
- We propose a bi-directional influence between attention and prediction mechanisms, with horizontal statistics impacting attention (i.e., salience), and attention impacting melody extraction.

Related Results

Owner Bound Music: A study of popular sheet music selling and music making in the New Zealand home 1840-1940
From 1840, when New Zealand became part of the British Empire, until 1940 when the nation celebrated its Centennial, the piano was the most dominant instrument in domestic...
Listening Modes in Concerts
While the use of music in everyday life is much studied, the ways of listening to music during live performances have hardly been considered. To fill this gap and provide a starting...
Incidental Collocation Learning from Different Modes of Input and Factors That Affect Learning
Collocations, i.e., words that habitually co-occur in texts (e.g., strong coffee, heavy smoker), are ubiquitous in language and thus crucial for second/foreign language (L2) learners...
Welcome to the Robbiedome
One of the greatest joys in watching Foxtel is to see all the crazy people who run talk shows. Judgement, ridicule and generalisations slip from their tongues like overcooked lamb ...
Active music listening: Promoting music and movement in early childhood music education
This article’s premise is that listening can be an engaging and important way for children to interact with music, using movement and their entire bodies, leading to music learning...
Advancing knowledge in music therapy
It is now over 20 years since Ernest Boyer – an educator from the US and, amongst other posts, President of the Carnegie Foundation for the Advancement of Teaching – published his ...
Music Video
Music video emerged as the object of academic writing shortly after the introduction in the United States of MTV (Music Television) in 1981. From the beginning, music video was classed...