Abstract
To investigate individual differences in creativity as measured with a complex problem-solving task, we developed a computational model of the remote associates test (RAT). For 50 years, the RAT has been used to measure creativity. Each RAT question presents three cue words that are linked by a fourth word, which is the correct answer. We hypothesized that individuals perform poorly on the RAT when they are biased to consider high-frequency candidate answers. To assess this hypothesis, we tested individuals with 48 RAT questions and required speeded responding to encourage guessing. Results supported our hypothesis. We generated a norm-based model of the RAT using a high-dimensional semantic space, and this model accurately identified correct answers. A frequency-biased model that included different levels of bias for high-frequency candidate answers explained variance for both correct and incorrect responses. Providing new insight into the nature of creativity, the model explains why some RAT questions are more difficult than others, and why some people perform better than others on the RAT.
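The abstract describes two model components: a norm-based model that scores candidate answers by their semantic similarity to the three cue words in a high-dimensional semantic space, and a frequency-biased variant that inflates the scores of high-frequency candidates. The sketch below illustrates that idea in miniature; the toy 3-dimensional vectors, the frequency counts, and the additive log-frequency bias term are illustrative assumptions, not the paper's actual semantic space or scoring rule.

```python
import math

# Hypothetical toy word vectors (the paper's model uses a learned,
# high-dimensional semantic space; these values are for illustration only).
VECTORS = {
    "cream": [0.9, 0.1, 0.3],
    "skate": [0.2, 0.8, 0.1],
    "water": [0.7, 0.2, 0.6],
    "ice":   [0.8, 0.5, 0.4],  # intended answer for the cues above
    "cold":  [0.6, 0.3, 0.9],  # plausible but incorrect competitor
}

# Hypothetical corpus frequencies, used to model a bias toward
# high-frequency candidate answers.
FREQUENCY = {"ice": 120, "cold": 400}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def score(candidate, cues, freq_bias=0.0):
    """Mean cosine similarity of the candidate to the cue words,
    plus an optional log-frequency bias term (freq_bias = 0 gives
    the unbiased, norm-based scoring)."""
    sim = sum(cosine(VECTORS[candidate], VECTORS[c]) for c in cues) / len(cues)
    return sim + freq_bias * math.log(FREQUENCY.get(candidate, 1))

def answer(cues, candidates, freq_bias=0.0):
    """Pick the highest-scoring candidate answer."""
    return max(candidates, key=lambda w: score(w, cues, freq_bias))

cues = ["cream", "skate", "water"]
print(answer(cues, ["ice", "cold"]))                  # unbiased model → "ice"
print(answer(cues, ["ice", "cold"], freq_bias=0.15))  # biased model → "cold"
```

With no frequency bias the semantically closest candidate ("ice") wins; with a sufficiently strong bias toward high-frequency words, the more common but incorrect "cold" overtakes it, mirroring the paper's hypothesis about why frequency-biased individuals perform poorly.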
