Abstract
Do speech and music understanding share common neural mechanisms? Here, brain bioelectrical activity was recorded in healthy participants listening to melodies obtained by digitally transforming speech into viola music. The original sentences had a positive or negative affective prosody, and the aim was to investigate whether the emotional content of the resulting music was processed similarly to the affective prosody of speech. EEG was recorded from 128 electrodes in 20 healthy students, who had to detect rare neutral piano sounds while ignoring the viola melodies. Negative affective valence of the stimuli increased the amplitude of the frontal P300 and N400 components of the ERPs, while positive valence enhanced a late inferior frontal positivity. Similar markers were previously found for the processing of positive versus negative music, vocalizations, and speech. Source reconstruction showed that negative music activated the right superior temporal gyrus and cingulate cortex, while positive music activated the left middle and inferior temporal gyri and the inferior frontal cortex. An integrated model is proposed of a possible common network for processing the emotional content of music, vocalizations, and speech, which might explain some universal and relatively innate brain reactions to music.