Abstract
Does the music that we know have a language-like semantics? I argue that mere agreement among auditors about their cognitive representations and descriptions of music does not give grounds for attributing meaning to music. I also argue that music lacks a language-like semantics not because it fails to be robustly referential, but because musical structures are not genuine grammars. The reason is that while music typically has very elaborate and regular structures - much like language - these structures apparently neither originate from nor serve the need to encode meanings - exactly unlike language. Nonetheless, the difference between languages and music is more a matter of degree than of kind. In other words, we can imagine transforming what we now call music into a language; if, by some strange necessity, music were pressed into service on a day-to-day basis for the purposes of comprehension and communication, then it could without much trouble become a language. But we would very likely no longer regard the strangely melodious utterances of such a language as real music once it came to serve, in a quite transparent way, its pragmatic communicative and cognitive functions. I explain these views as a consequence of an old-fashioned aesthetic theory of music cognition as newly formulated by Raffman (1993).