Abstract
Concerns about models of cultural adaptation as analogs of genetic selection have led cognitive game theorists to explore learning-theoretic specifications. Two prominent examples, the Bush-Mosteller stochastic learning model and the Roth-Erev payoff-matching model, are aligned and integrated as special cases of a general reinforcement learning model. Both models predict stochastic collusion as a backward-looking solution to the problem of cooperation in social dilemmas, based on a random walk into a self-reinforcing cooperative equilibrium. The integration uncovers hidden assumptions that constrain the generality of the theoretical derivations. Specifically, Roth and Erev assume a “power law of learning”: the curious but plausible tendency for learning to diminish with success and intensify with failure. Computer simulation is used to explore the effects of this assumption on stochastic collusion in three social dilemma games. The analysis shows how the integration of alternative models can uncover underlying principles and lead to a more general theory.
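To make the two update rules referred to above concrete, the following is a minimal Python sketch, not the authors' published code: it pairs a Bush-Mosteller aspiration-based probability update with a Roth-Erev payoff-matching rule and lets two Bush-Mosteller learners play a repeated Prisoner's Dilemma. The payoff values, aspiration level, learning rate, and all function names are illustrative assumptions rather than parameters taken from the article.

```python
import random

# Illustrative sketch only: payoffs, aspiration level, and learning rate are
# assumptions chosen for demonstration, not the article's parameterization.

# Prisoner's Dilemma payoffs: T > R > P > S.
R, S, T, P = 3.0, 0.0, 4.0, 1.0
ASPIRATION = 2.0          # reference point separating "success" from "failure"
LEARNING_RATE = 0.5
MAX_DEVIATION = max(abs(T - ASPIRATION), abs(S - ASPIRATION))  # scales stimuli into [-1, 1]


def bush_mosteller_update(p_coop, chose_coop, payoff):
    """Bush-Mosteller stochastic learning: move the probability of the chosen
    action toward 1 after a satisfying payoff (above aspiration) and toward 0
    after a dissatisfying one, in proportion to the scaled stimulus."""
    stimulus = (payoff - ASPIRATION) / MAX_DEVIATION
    p_action = p_coop if chose_coop else 1.0 - p_coop
    if stimulus >= 0:
        p_action += (1.0 - p_action) * LEARNING_RATE * stimulus
    else:
        p_action += p_action * LEARNING_RATE * stimulus
    return p_action if chose_coop else 1.0 - p_action


def roth_erev_probs(propensities):
    """Roth-Erev payoff matching (shown for comparison): each action is chosen
    with probability proportional to its accumulated propensity. Because
    propensities accumulate, later payoffs shift choice probabilities less than
    earlier ones, one reading of the 'power law of learning'."""
    total = sum(propensities.values())
    return {action: q / total for action, q in propensities.items()}


def pd_payoffs(coop1, coop2):
    """Payoffs for one round of the Prisoner's Dilemma."""
    if coop1 and coop2:
        return R, R
    if coop1:
        return S, T
    if coop2:
        return T, S
    return P, P


def simulate_bush_mosteller(rounds=500, seed=None):
    """Two Bush-Mosteller learners in a repeated Prisoner's Dilemma. Runs that
    wander into mutual cooperation have both cooperation probabilities
    reinforced toward 1: the random walk into a self-reinforcing cooperative
    equilibrium that the abstract calls stochastic collusion."""
    rng = random.Random(seed)
    p1 = p2 = 0.5
    for _ in range(rounds):
        c1, c2 = rng.random() < p1, rng.random() < p2
        pay1, pay2 = pd_payoffs(c1, c2)
        p1 = bush_mosteller_update(p1, c1, pay1)
        p2 = bush_mosteller_update(p2, c2, pay2)
    return p1, p2


if __name__ == "__main__":
    print(simulate_bush_mosteller(seed=42))
```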
