Abstract
Personality factors, typically determined by the Big Five Inventory (BFI), have been a primary method for investigating individual preferences in music. While these studies have yielded a number of insights into musical choices, weaknesses exist, owing to the methods by which music is characterized and categorized. For example, musical genre, music-preference dimensions (e.g., reflective and complex), and musical attributes (e.g., strong and mellow), reported within the literature, have arguably produced inconsistent and thus difficult-to-interpret results. We attempt to circumvent these inconsistencies by classifying music using objectively quantifiable acoustic features that are fundamental to Western music, such as tempo and register. Moreover, it is our contention that the link between musical preference and personality may operate primarily at the level of acoustic features and not at broader categorization levels, such as genre. This study attempts to address this issue. Ninety participants listened to and indicated preference for stimuli that were systematically manipulated by dynamics (attack rate), mode, register, and tempo. Personality was measured using the BFI, allowing for analysis of personality traits alongside preference for acoustic features. Results supported the link between personality and preference for certain acoustic features. Preference with respect to dynamics was related to openness and extraversion; mode to conscientiousness and extraversion; register to extraversion and neuroticism; and tempo to conscientiousness, extraversion, and neuroticism. Though significant, these associations were relatively weak; therefore, future research could expand the number of manipulated acoustic features. Future studies should also aim to disentangle the effects of genre versus acoustic features on musical preference. Personality–preference relationships at the acoustic-feature level are discussed with respect to music recommender systems and other aspects of the literature.
Introduction
Music plays a functional role in human behavior: individuals may use music to regulate their emotions, maintain interpersonal relationships, and express their identities (Hargreaves & North, 1999). These personal uses can be extended into forms of communication, and thus help to develop social relationships. Simply through the sharing of playlists, information such as personality traits, individual values, and emotional states can be conveyed between strangers (Rentfrow & Gosling, 2006). Furthermore, the type and amount of information communicated with respect to musical preference differs from that of other stimuli, such as photographs or video recordings. Indeed, such is the communicative power of our musical choices that Rentfrow and Gosling (2006, p. 241) have suggested that there is an “intuitive understanding of the links between music preferences and personality.” Based on prior research (e.g., Barone, Bansal, & Woolhouse, 2017), our contention is that this link may operate at the level of acoustic features in addition to broader categorization levels, such as genre. While, in this study, we did not seek to establish the relative strengths of genre versus acoustic features in relation to musical preference, we did aim to investigate possible relationships between personality factors and acoustic features in music.
Over the past two decades, many methodological approaches have been employed to investigate the link between personality and music preference. Generally, the measurement of personality has been consistent across studies, while the classification of music itself has been subject to a number of approaches. Studies have classified music by genre, such as rock, pop, and classical (Dunn, de Ruyter, & Bouwhuis, 2012; Zweigenhaft, 2008); musical dimensions, such as reflective and complex (Rentfrow & Gosling, 2003); and musical attributes, such as sad and danceable (Greenberg et al., 2016). In turn, this has led researchers to question which particular aspects of music individuals respond to. Preference for genre would indicate that personality is linked with relatively high-level conceptualizations of music, while preference for musical dimensions or musical attributes might imply that personality is linked with music that conveys a particular experience or mood.
Music Classification
The inherent ambiguity of genre classification has been a persistent concern in the field of music preference and personality research. Aucouturier and Pachet (2003, p. 83) have stated that genre is “intrinsically ill-defined”, describing genre in terms of intensional and extensional concepts that are mismatched in the real world: the defining properties we attribute to a genre (its intension) and the actual body of music grouped under its label (its extension) do not align. Genre classifications may refer to music by the period in which it was written (e.g., baroque), associated location or culture of origin (e.g., Indian, Balinese), topic of lyrical content (e.g., love song), type of dance (e.g., waltz), instrumentation (e.g., piano), language of lyrics, and so on. However, the music of an artist, album, song, or even part of a song frequently does not conform to a single genre category, belonging instead to a combination of definable categories (e.g., folk rock) or specialized subcategories (e.g., drumstep). This phenomenon highlights the abstract and subjective (and possibly inaccurate) nature of music as classified by genre.
Furthermore, the conception of musical genre can be influenced by age, gender, and culture, among a variety of other factors. For instance, the frequency of listening to pop and folk music increases with age (Dunn et al., 2012). Is this a result of changes in preference with age, or of differing conceptions of genre between older and younger listeners? We do not know the answer to this question. Using principal component analysis, however, Rentfrow and Gosling (2003) determined four music-preference dimensions along which music is categorized, while Zweigenhaft (2008) suggested that there are five dimensions, noting that different populations and the timing of the studies may have contributed to the discrepancies between their analyses. Subsequently, Rentfrow, Goldberg, and Levitin (2011) included a fifth factor in what they refer to as the MUSIC model; the music-preference dimensions in the MUSIC model are mellow, unpretentious, sophisticated, intense, and contemporary.
While genre classification provided a starting point for the analyses cited, other researchers have begun to explore alternative methods. Acknowledging the constraints of “broad and illusive genres or styles,” Greenberg et al. (2016, p. 597) assessed preference for perceived attributes of music. These attributes described emotional and cognitive aspects of music (e.g., aggressive, intelligent, or dreamy) and were organized into three dimensions: arousal, valence, and depth. In turn, each dimension was correlated with the Big Five personality factors, which we discuss next (John, Donahue, & Kentle, 1991). In contrast, Dunn et al. (2012, p. 426) noted that “preference for objectively measured music characteristics might provide greater accuracy for measuring music preferences.” This approach led Eerola, Friberg, and Bresin (2013) to manipulate musical cues (e.g., mode and tempo) in their study of emotional expression in music; such cues were important for the expression of scary, happy, sad, and peaceful sentiments in music. Arguably, this type of systematic manipulation of musical cues has, to date, received relatively little attention in the music-preference literature.
Personality
In general, the assessment of personality attempts to determine variation among individuals with respect to specific dimensions (Buss, 1987). Although a number of methods have been advanced to define personality dimensions (e.g., the Myers–Briggs Type Indicator, NEO Personality Inventory (NEO-PI), and Minnesota Multiphasic Personality Inventory), consensus has converged, in many fields of research, on the Big Five taxonomy (John, Naumann, & Soto, 2008). With development rooted in lexical analyses, the Big Five categorizes personality traits based on commonly used language and, after many variations, has arrived at the following factors: (1) extraversion, (2) agreeableness, (3) conscientiousness, (4) neuroticism, and (5) openness.
Instruments designed to measure Big Five factors, such as the lengthy 240-item NEO-PI questionnaire, have been extensively tested and validated. The NEO-PI questionnaires break each personality factor into six unique facets. For example, extraversion can be broken down into warmth, gregariousness, assertiveness, activity, excitement-seeking, and positive emotions. While the full NEO-PI questionnaire is recommended, the Big Five Inventory (BFI) is a shorter, efficient alternative. In sum, the BFI reliably determines an individual’s position along each of the five personality factors using only 44 questionnaire items and a five-point Likert scale; however, facet scores per dimension are not produced.
Although preference for music has been linked to personality a number of times in the literature, interpreting the extent to which these factors correlate depends on the classification methodology used in each study. Rentfrow and Gosling (2003) used the BFI to investigate preference for various music dimensions extrapolated from 14 genres. For example, the reflective and complex dimension describes music that is slower in tempo, using acoustic instruments, and with little singing; the intense and rebellious dimension describes music of a faster tempo, using electric instruments, and with moderate singing. Zweigenhaft’s (2008) study used the NEO-PI and investigated preference for 21 musical genres. While his results overlapped with Rentfrow and Gosling’s (2003, 2006) analyses using music dimensions, differences between these studies prompted Zweigenhaft to continue his analysis at the genre level, as opposed to using solely music dimensions. Lastly, using the International Personality Item Pool, Greenberg et al. (2016) found that preference was linked to 38 perceived psychological attributes in music. These attributes, in contrast to music dimensions and genres, instead refer to the level of arousal, type of valence, and emotional depth that is perceived in music. The foregoing underscores the point that many approaches and interpretations have been undertaken by researchers with respect to music preference and personality, leading to a plethora of interpretations and conclusions.
In addition to the categorization methods used in these studies, the interpretation of results must consider the manner in which preference ratings are recorded. Rentfrow and Gosling (2003) measured self-reported preferences in the absence of auditory stimuli. They developed the Short Test of Music Preference (STOMP), a questionnaire that used a seven-point Likert-type scale to rate 14 music genres. In contrast, Greenberg et al. (2016) presented participants with musical excerpts to evaluate. While each approach may be internally consistent, categorizing music via questionnaires is susceptible to subjective interpretation and may not be consistent with studies that measure preference for heard auditory stimuli. Rating auditory stimuli in terms of low-level features may represent preference for music more accurately, since participants are presumably not conflating their responses with high-level subjective genre knowledge. That said, as with all music-personality preference studies, participants’ responses may be prone to short-term fluctuating influences, such as mood and environment.
Behavioral analyses offer additional insight into this issue. Music streaming services acquire large amounts of data, such as the frequency and duration of played songs on users’ playlists, allowing the preferred songs and genres of listeners to be inferred. Since these data are often recorded over many months or years, preferences may reflect personality more reliably as the variability of mood is averaged over time. Corroboration of self-reported preferences and preferences inferred from behavioral data has been shown in the literature. Dunn et al. (2012) measured self-reported preferences and analyzed listening behavior from a music streaming database over a 3-month period. Their results showed a positive correlation between these two measures, an important finding that supports the validity of existing research (that used self-report measures of music preference) and inferences made in studies, such as that of Bansal, Flannery, and Woolhouse (2020), who deduced musical preference based on users’ music collections.
Despite the thoroughness of the investigations, results from these studies have been somewhat mixed, with only a few consistent relationships found. In general, openness often correlates with reflective and complex and intense and rebellious music dimensions, consisting of blues, classical, folk, jazz, alternative, heavy metal, and rock genres. These dimensions and genres include a broad range of tempi, and acoustic and electric instruments, with little to moderate singing. Extraversion correlates with upbeat and conventional and energetic and rhythmic dimensions, and the genres of country, pop, religious, soundtracks, dance, rap, and soul. These dimensions and genres are more narrowly focused on music that is of moderate tempo, using electric instruments, and with moderate singing (Dunn et al., 2012; Rentfrow & Gosling, 2003; Zweigenhaft, 2008). In relation to perceived attributes, agreeableness and conscientiousness correlate with preference for low arousal attributes, which are gentle, calming, and mellow, and negatively with music that is intense, forceful, abrasive, or thrilling. Neuroticism is correlated with negative valence attributes, which are depressing and sad, and negatively with fun, happy, lively, enthusiastic, and joyful music. In addition, openness correlates with attributes of positive valence and depth, attributes that are perceived as intelligent, sophisticated, inspiring, complex, poetic, deep, emotional, and thoughtful, and negatively correlates with party and danceable music (Greenberg et al., 2016). While these studies provide insight into how personality relates to music, in sum, the use of abstract categorization methods, such as genre and musical attributes, limits the conclusions that can be made about the specific features of music responsible for our likes and dislikes. Thus, there is a need for further research.
The Study
This study aims to provide a deeper understanding of how features of music, independent of genres, dimensions (e.g., reflective and complex), or psychological attributes, are preferred by individuals. While factor analysis can determine dimensions that are more or less orthogonal to one another, the precise composition of the resulting factors, in terms of musical attributes, remains unclear. Or, to put it another way, if an individual has a preference for a specific genre or perceived psychological attribute of music, how can we describe that preference objectively? To address this issue, and in contrast to the studies cited previously, we systematically varied musical attributes of the stimuli in a quantifiable manner. Referred to as music acoustic features (MAFs), these comprised dynamics, mode, register, and tempo, applied across three musical excerpts (pieces).
Much research has explored the use of acoustic-feature manipulation and analysis, and its links to affect-related responses in music listening. For instance, in order of importance, Eerola et al. (2013) found that mode, tempo, register, dynamics, articulation, and timbre significantly contribute to the perception of emotional expression. Furthermore, extracted acoustic features have been used in data analysis with respect to people’s playlist choices. Barone et al. (2017), for example, analyzed acoustic features that were algorithmically defined and extracted by Spotify’s Web API; broadly speaking, these features are high-level algorithmic descriptors of a track’s audio content.
The acoustic features in our experiment were presented as audible stimuli, which participants responded to directly using a preference rating. Therefore, in contrast to other studies that ask for self-reported genre preference (e.g., the STOMP), measurement of preference for acoustic features did not require the listeners to have a conceptual understanding of the features they were rating, in turn, minimizing the myriad subjective factors that could affect their interpretations.
Method
Participants
Undergraduate students (N = 90) participated in the study.
Materials
The open-source music composition and notation software MuseScore (Version 3.3.4) was used to create the stimuli. First, three major-key excerpts of solo keyboard music were selected from pre-existing compositions: (1) Mozart, Piano Sonata No. 9 in D Major, K. 311, measures 17–24; (2) Beethoven, Piano Sonata No. 1, Op. 2 No. 1, Adagio, measures 1–8; and (3) Bach, Prelude No. 3 in C-sharp Major, BWV 848, measures 1–16. These excerpts were transcribed into MuseScore and then modified to create two levels within each of the following four factors: dynamics, mode, register, and tempo. Crossing the three pieces with the two levels of each factor yielded the 48 stimuli (3 × 2 × 2 × 2 × 2 = 48).
Piece (Bach, Beethoven, Mozart)
The three excerpts were selected owing to the ubiquity and popularity of the composers (i.e., they are quintessential exemplars) and because their medium tempi and registers allowed them to be effectively manipulated. All the excerpts were well-formed from a music-theoretic perspective, contained regular four- or eight-measure phrases, and followed established principles of functional harmony (Figures 1, 2, and 3).

Figure 1. Example of the register manipulation applied to the Bach excerpt (Bach/high and Bach/low).

Figure 2. Example of the mode manipulation applied to the Beethoven excerpt (Beethoven/minor).

Figure 3. Example of the tempo manipulation applied to the Mozart excerpt (Mozart/fast and Mozart/slow).
The reason for selecting these particular MAFs relates in part to the ecological validity of the stimuli. For example, the Mozart excerpt occurs in both major and minor mode versions within the larger piece, an example of the compositional practice of modal mixture (Aldwell & Cadwallader, 2018, p. 435). Register, too, is frequently manipulated within real-world compositions: in the recapitulation of a sonata-form movement, for instance, second-subject material is invariably transposed up or down a perfect fifth or fourth in accordance with the unfolding key structure of the piece. Moreover, exposition material is often subjected to registral and mode manipulation within the development sections of sonatas. And, naturally, performers vary the tempi and dynamics of their playing, resulting in any one piece being interpreted differently along these parameters. As a result, the MAF manipulations that we deployed (mode, register, tempo, and dynamics) are all subject to a high degree of variability within music; none of the manipulations created stimuli that sound stylistically unrealistic. These factors are described in detail next.
Dynamics (Piano, Forte)
Perceptual differences in dynamics were varied by changing MIDI velocity values (a representation of how much force is used when playing a note). Velocity values of 49 were used for the piano (p) stimuli; the forte (f) stimuli used a correspondingly higher velocity value.
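To make the manipulation concrete, the following Python sketch (using the music21 toolkit; not the authors' actual workflow) applies a uniform MIDI velocity across a score. Only the piano value of 49 is reported above; the forte value of 96 below is an assumption based on MuseScore's default dynamic-to-velocity mapping, and the file name is illustrative.

```python
# Minimal sketch of the dynamics manipulation (assumed workflow).
from music21 import converter

def set_uniform_velocity(score_path, velocity):
    """Return a parsed score with every note set to one MIDI velocity."""
    score = converter.parse(score_path)      # e.g., MusicXML exported from MuseScore
    for n in score.recurse().notes:          # iterates over notes and chords
        n.volume.velocity = velocity
    return score

piano_version = set_uniform_velocity("mozart_k311_excerpt.musicxml", 49)  # p (reported)
forte_version = set_uniform_velocity("mozart_k311_excerpt.musicxml", 96)  # f (assumed)
forte_version.write("midi", fp="mozart_forte.mid")
```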
Mode (Major, Minor)
The original excerpts were all in the major mode: Bach, C-sharp major; Beethoven, F major; and Mozart, D major. Manipulations into minor used the parallel mode and were constructed to maintain melodic and harmonic conventions. For example, in the minor excerpt in Figure 2 (Beethoven/minor), where necessary the leading tone has been raised from E flat to E natural, ensuring that voice-leading and harmonic function remain well-formed.
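As an illustration of this manipulation (an assumed approach, not the authors' procedure), lowering scale degrees 3 and 6 of a major-key score by a semitone yields the parallel harmonic minor, which keeps the leading tone raised and so preserves the conventions described above:

```python
# Sketch of a major -> parallel (harmonic) minor conversion. Enharmonic
# spelling and any chromatic notes may need manual cleanup afterwards.
from music21 import converter, key

def to_parallel_minor(score_path, tonic):
    score = converter.parse(score_path)
    k = key.Key(tonic)                          # e.g., key.Key("F") for F major
    to_lower = {k.pitchFromDegree(3).name,      # A in F major
                k.pitchFromDegree(6).name}      # D in F major
    for n in score.recurse().notes:
        for p in n.pitches:                     # covers single notes and chords
            if p.name in to_lower:
                p.transpose(-1, inPlace=True)   # lower by one semitone
    return score

beethoven_minor = to_parallel_minor("beethoven_op2no1_adagio.musicxml", "F")
```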
Register (High, Low)
Stimuli were divided into high and low registers, where high was transposed one octave above the original register, and low transposed one octave below the original register. The resulting pitch heights ranged from C2 to A4 for low register stimuli, and C4 to A6 for high register stimuli. For example, Figure 1 shows register manipulations in relation to the Bach excerpt: Bach/high begins in a register starting on C#4 (left hand) and E#6 (right hand), and Bach/low begins two octaves lower, on C#2 and E#4.
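In code, the register manipulation is a straightforward octave transposition; a brief sketch (illustrative file name):

```python
# Register sketch: high versions sit one octave (12 semitones) above the
# original and low versions one octave below, two octaves apart overall.
from music21 import converter

original = converter.parse("bach_bwv848_excerpt.musicxml")
bach_high = original.transpose(12)    # up one octave
bach_low = original.transpose(-12)    # down one octave
```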
Tempo (Slow, Fast)
The fast tempi were twice the speed of the slow tempi: Bach, 90 or 180 bpm; Beethoven, 45 or 90 bpm; and Mozart, 60 or 120 bpm. To compensate for the difference in duration that this manipulation would otherwise create, the slow stimuli were edited to contain half as many measures as the fast stimuli, resulting in stimuli that were each 16 seconds in duration. The original excerpts contained repeating phrases, which facilitated this editing process. Examples of fast and slow versions of the Mozart excerpt are shown in Figure 3: Mozart/fast is 8 measures long at 120 bpm; Mozart/slow is 4 measures long at 60 bpm. The approximate number of note onsets per second (slow and fast versions, respectively) was: Bach, 2.6 and 3.9; Beethoven, 1.6 and 3.1; and Mozart, 1.6 and 3.3.
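The duration arithmetic can be checked directly; the sketch below assumes the Mozart excerpt is in 4/4, under which the reported figures are internally consistent:

```python
# Duration check for the tempo manipulation (4/4 metre assumed for Mozart).
def duration_seconds(measures, beats_per_measure, bpm):
    return measures * beats_per_measure * 60.0 / bpm

assert duration_seconds(8, 4, 120) == 16.0   # Mozart/fast: 8 measures at 120 bpm
assert duration_seconds(4, 4, 60) == 16.0    # Mozart/slow: 4 measures at 60 bpm

# Reported onset densities imply total onset counts over the 16-s stimuli:
print(3.3 * 16)   # Mozart/fast: ~53 note onsets
print(1.6 * 16)   # Mozart/slow: ~26 note onsets
```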
Procedure
Participants attended this experiment in person (rather than online) and were compensated with partial course credit. Prior to the experiment, an outline of the study was explained, and participants voluntarily provided written consent before continuing with the rest of the procedure. Participants completed a demographic questionnaire, followed by the BFI (John et al., 1991, 2008). The inventory consists of 44 items rated on a five-point Likert scale. Participants responded to items phrased as “I am someone who…[statement]” (e.g., “is talkative”), with answers ranging from 1 (disagree strongly) to 5 (agree strongly).
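As an illustration of how BFI responses of this kind are typically scored (a sketch, not the authors' code), reverse-keyed items are recoded as 6 minus the response and each factor score is the mean of its items. The extraversion item numbers below follow the commonly published BFI-44 key but should be verified against John et al. (1991, 2008); file and column names are hypothetical.

```python
# BFI scoring sketch (assumed item key; verify against the published BFI-44).
import pandas as pd

EXTRAVERSION_ITEMS = [1, 6, 11, 16, 21, 26, 31, 36]   # assumed key
REVERSE_KEYED = {6, 21, 31}                           # assumed reverse-keyed items

def score_factor(responses, items, reverse_keyed):
    """Mean of a factor's items, with reverse-keyed items recoded as 6 - x."""
    cols = [6 - responses[f"item_{i}"] if i in reverse_keyed
            else responses[f"item_{i}"] for i in items]
    return pd.concat(cols, axis=1).mean(axis=1)

df = pd.read_csv("bfi_responses.csv")   # hypothetical: columns item_1..item_44
df["extraversion"] = score_factor(df, EXTRAVERSION_ITEMS, REVERSE_KEYED)
```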
Stimuli consisted of the 48 manipulations, examples of which are shown in Figures 1, 2, and 3. After each presentation, participants were asked to rate the stimulus based on the following prompt: “You will hear different versions of several short piano pieces. Using the slider on the screen, please tell us how much you like or dislike each version.” The sliding scale was anchored at 0 (dislike) and recorded the degree to which each stimulus was liked.
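The 48 stimuli correspond to the full factorial crossing of the manipulations; a quick sketch of the design (not experiment code):

```python
# Stimulus design: 3 pieces x 2 dynamics x 2 modes x 2 registers x 2 tempi.
from itertools import product

pieces = ["Bach", "Beethoven", "Mozart"]
dynamics = ["piano", "forte"]
modes = ["major", "minor"]
registers = ["high", "low"]
tempi = ["slow", "fast"]

stimuli = list(product(pieces, dynamics, modes, registers, tempi))
assert len(stimuli) == 48   # 3 * 2 * 2 * 2 * 2
```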
Data Analysis
To measure the effects of MAF manipulation on the dependent variable of preference ratings, separate mixed-factorial analyses of variance (ANOVAs) were performed (using R statistical software; Bates, Mächler, Bolker, & Walker, 2015; R Core Team, 2020) for each personality factor. The between-subject factor was personality (agreeableness, conscientiousness, extraversion, neuroticism, or openness, depending on the analysis), with two levels, high and low. These levels were determined by splitting participants at the median score of each factor, resulting in two groups with equal numbers of participants: the high level consisted of participants above the median, and the low level of participants below the median. This split did not skew demographic attributes between the groups; for example, male participants were not disproportionately low in agreeableness. Within-subject factors were the five MAFs: dynamics (forte, piano), mode (major, minor), piece (Bach, Beethoven, Mozart), register (high, low), and tempo (fast, slow).
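The analyses were run in R; the following Python sketch (pandas plus pingouin, with hypothetical file and column names) illustrates the shape of one such model. Because pingouin's mixed_anova accepts a single within-subject factor, the sketch shows only an extraversion-by-tempo analysis rather than the full five-way design:

```python
# Median split on a BFI score, then a 2 (group) x 2 (tempo) mixed ANOVA.
# Long-format data assumed: one row per participant x stimulus rating.
import pandas as pd
import pingouin as pg

df = pd.read_csv("preference_ratings.csv")   # hypothetical file

# Between-subject factor: high/low split at the median extraversion score.
median = df.groupby("participant")["extraversion"].first().median()
df["extraversion_group"] = df["extraversion"].gt(median).map(
    {True: "high", False: "low"})

# Average over the other MAFs, then fit the mixed-factorial ANOVA.
agg = df.groupby(["participant", "extraversion_group", "tempo"],
                 as_index=False)["rating"].mean()
aov = pg.mixed_anova(data=agg, dv="rating", within="tempo",
                     between="extraversion_group", subject="participant")
print(aov.round(3))
```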
Results
Data from all participants were complete and used for analyses. Personality factors were scored according to the BFI procedure described by John et al. (1991, 2008). Agreeableness had the highest mean rating, with relatively low variance and with scores ranging from 2.444 to 5.000.
Music Acoustic Features
There were significant main effects on preference ratings for four of the five MAFs, as shown in Figure 4; these included a main effect of dynamics.

Figure 4. Notched box plots of preference ratings for the two levels within each MAF and the three levels within piece. Notches represent the standard error around the median; ***p < .001.
Agreeableness
There was a main effect for agreeableness (Figure 5).

Figure 5. Notched box plots of preference ratings for the two levels within each personality factor. Notches represent standard error around the median; ***p < .001.

Figure 6. Significant interactions between personality and MAFs. Personality factors are grouped into high and low levels on the x-axis.
Conscientiousness
There was no significant main effect of conscientiousness, but there were significant interactions of conscientiousness with mode, piece, and tempo. For mode and conscientiousness (Figure 6, row 1, column 3), the major mode was rated highest by participants low in conscientiousness, while there was no difference between groups in ratings for the minor mode.
Extraversion
There was no significant main effect of extraversion. Significant interactions were found, however, with dynamics, mode, register, and tempo. For extraversion and dynamics (Figure 6, row 1, column 1), forte was rated lowest by participants low in extraversion, while there was a difference in ratings for piano.
Neuroticism
There was no significant main effect of neuroticism. Interactions were found between neuroticism and register, and between neuroticism and tempo. For register and neuroticism (Figure 6, row 2, column 2), the low register was rated lowest by participants high in neuroticism, while there was no difference in register ratings among participants low in neuroticism.
Openness
There was no significant main effect of openness, but there were interactions between openness and dynamics, and between openness and piece. For dynamics and openness (Figure 6, row 1, column 2), the piano dynamic was rated disproportionately higher by participants high in openness.
Discussion
Our results support previous research into relationships between personality and music preference. Furthermore, via the systematic manipulation of MAFs, this study provides novel insight into the nature of these relationships. In general, the personality factor of agreeableness related to music preference independently of MAF manipulation, as shown by the main effect in Figure 5. Preference was also related to four of the five MAFs themselves, as shown by the main effects in Figure 4.
The collection of studies by Rentfrow et al. (2011, 2012) provides context in which to compare our results with the musical feature-sets related to genres. Organized within the MUSIC model, each preference dimension is associated with characteristic musical attributes.
In addition to this research, Nave et al. (2018) found that musical preferences, measured both through active listening and through Facebook Likes, could predict Big Five personality traits.
Openness was an additional factor in Bansal et al. (2020) that positively correlated with breadth of musical taste. In our study, openness interacted with dynamics and piece, with participants high in openness rating the piano dynamic disproportionately higher.
The number of MAFs we chose to manipulate in our study is obviously limited relative to the number of features that vary in music. While our results demonstrate that the manipulation of MAFs can result in particular responses based on personality, there is equal potential for other MAF manipulations to explain additional variance in music preference. Timbre, in particular, is likely to play a significant role, as shown by the “music-specific attributes” (e.g., brass, piano, woodwind, and synthesizer) of the MUSIC model in Rentfrow et al. (2012) and its importance as a factor (flute, horn, and trumpet) in Eerola et al. (2013). A second limitation is that our study’s sample was largely homogeneous, with a narrow range of ages, socioeconomic backgrounds, and education. Furthermore, our sample was skewed toward female participants. In turn, the generalizability of our experiment is limited, and future research should expand to a much wider demographic range to allow for in-depth analysis of these factors. Furthermore, the BFI measure, while practical, did not allow us to consider facets of personality in individuals, which may be significant. In other words, the main BFI factors may be too broad, eclipsing smaller but potentially significant traits. Other studies, such as Zweigenhaft (2008) and Greenberg et al. (2016), have found that personality facets explain aspects of music preference; one example is the facet excitement-seeking within extraversion.
With respect to further investigation, a potential approach could be to identify the frequency of acoustic features in standard genre classifications, music-preference dimensions, or psychological attributes. If, for example, dance music is frequently characterized by major mode, fast tempo, and high register, then results consistent in other literature—that preference for dance music is correlated with extraversion—could be re-examined as preference for these underlying acoustic features.
In summary, the contribution that our research brings to the debate outlined above is one of specificity. In Rentfrow et al. (2011, 2012), the MUSIC factors are, by their very nature, general, catch-all categories into which many different styles and musical attributes are subsumed. We show in our article that there is a direct link between preference and objectively quantifiable musical attributes. The manipulation of these attributes interacted with various personality traits, which further nuances our understanding of the relationship between people and their music.
Conclusion
The results of this study provide support for the link between personality and music preference. Though primary associations between personality factors and acoustic features were relatively weak, significant trends did emerge from the results. Conscientiousness interacted with preference for mode, piece, and tempo; extraversion with dynamics, mode, register, and tempo; neuroticism with register and tempo; and openness with dynamics and piece. The personality–preference relationships found in this study are increasingly relevant as music consumption has become an integral part of everyday human behavior (Clarke, Dibben, & Pitts, 2010; North, Hargreaves, & Hargreaves, 2004). Users have access to more music than they could possibly listen to, and the ability of music providers to connect users with music they will enjoy is important for music creators, distributors, and listeners. The functional aspects of music (the regulation of emotion, the maintenance of relationships, and the expression of identity) can be better served when users are connected to music they prefer in an informed manner.
Footnotes
Contributorship
Conception and design of study by MF and MW. Participant recruitment, data analysis, and first draft of manuscript by MF. Ethical approval by MW. The manuscript was reviewed, edited, and approved by MF and MW.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Action Editor
David Meredith, Aalborg University, Department of Architecture, Design and Media Technology.
Peer Review
Rebecca Schaefer, Leiden University, Institute for Psychology; Health, Medical & Neuropsychology Unit.
One anonymous reviewer.
