Abstract
Social learning (learning from others) is evolutionarily adaptive under a wide range of conditions and is a long-standing area of interest across the social and biological sciences. One social-learning mechanism derived from cultural evolutionary theory is prestige bias, which allows a learner in a novel environment to quickly and inexpensively gather information about who the best teachers may be, thus maximizing his or her chances of acquiring adaptive behavior. Learners provide deference to high-status individuals in order to ingratiate themselves with, and gain extended exposure to, those individuals. We examined prestige-biased social transmission in a laboratory experiment in which participants designed arrowheads and attempted to maximize hunting success, measured in caloric return. Our main findings are that (1) participants preferentially learned from prestigious models (defined as those models at whom others spent longer times looking), and (2) prestige information and success-related information were used to the same degree, even though the former was less useful in this experiment than the latter. We also found that (3) participants were most likely to use social learning over individual (asocial) learning when they were performing poorly, in line with previous experiments, and (4) prestige information was not used more often following environmental shifts, contrary to predictions. These results support previous discussions of the key role that prestige-biased transmission plays in social learning.
Introduction
Social learning—learning by observing or interacting with others (Heyes, 1994)—is a long-standing area of interest in the social sciences, particularly in psychology and anthropology (e.g., Boyd and Richerson, 1985; Henrich and McElreath, 2003; Laland, 2004; Mesoudi, 2011a; Rendell, Fogarty, and Laland, 2010; Rendell et al., 2011). Humans participate in social learning for a variety of adaptive reasons, such as reducing uncertainty (Kameda and Nakanishi, 2002), learning complex skills and knowledge that could not have been invented by a single individual alone (Richerson and Boyd, 2000; Tomasello, Kruger, and Ratner, 1993), and passing on beneficial cultural traits to offspring (Palmer, 2010).
One proposed social-learning mechanism is prestige bias (Henrich and Gil-White, 2001), defined as the selective copying of certain “prestigious” individuals to whom others freely show deference or respect in order to increase the amount and accuracy of information available to the learner. Prestige bias allows a learner in a novel environment to quickly and inexpensively choose from whom to learn, thus maximizing his or her chances of acquiring adaptive behavioral solutions to a specific task or enterprise without having to assess directly the adaptiveness of every potential model's behavior. Learners provide deference to teachers in order to ingratiate themselves with a chosen model, thus gaining extended exposure to that model (Henrich and Gil-White, 2001). New learners can then use that information—who is paying attention to whom—to increase their likelihood of choosing a good teacher.
Consider a simple example of prestige bias, where a woman has married into a patrilocal society and her new community has a different specialization than her home location. In fact, her new community is one of the few in the world where women rather than men are responsible for making stone arrowheads. A woman in this community can enhance her new family's survival prospects by creating arrowheads that not only help her husband kill more game but that can be traded to other communities. As our transplanted woman goes about learning the task of arrowhead making, she has several pathways to success. She could engage exclusively in individual (or asocial) learning, where she tries to figure out how to make arrowheads entirely on her own with no social influence whatsoever. Given that projectile-point technology culturally evolved over tens of thousands of years through the efforts of countless generations of innovators, each making small improvements on what went before, her chances of reinventing projectile-point technology from scratch, using purely individual learning, seem slim.
Alternatively, our novice flintknapper could engage in some form of biased social learning, where she tries to copy either the object itself, if it is simple, or, more likely, the manner in which others are making their arrowheads (Boyd and Richerson, 1985). In this example, social learning is superior to individual learning because of the high costs of the latter (Boyd and Richerson, 1985). One does not become a flintknapper, let alone an accomplished one, overnight (Olausson, 2008; Pigeot, 1990). Instead of trying to reinvent the wheel, it seems more cost effective to buy a ready-made “package” off the shelf through copying, but the question becomes: Which package does she buy? Our learner could “copy the majority” (Henrich and Boyd, 1998) and attempt to make arrowheads the way most women seem to be doing it, but she doesn't have more than passing access to any of those women. Besides, conformity is a time-consuming and cognitively challenging task, as the learner would need to survey the whole group to determine the most frequently used technique (Eriksson, Enquist, and Ghirlanda, 2007).
A quicker option might be to copy the single most skilled arrowhead maker (Henrich and Broesch, 2011; Mesoudi, 2008)—the one whose arrowheads kill the most game (Mesoudi, 2011b; Mesoudi and O'Brien, 2008a). This would be an example of "success bias," in this case using information about the hunting success of a model as a guide to whom to copy. Success information, however, might also be difficult to gather under real-life conditions. Hunting success can fluctuate with random variables such as availability of prey or weather and can be confounded by other factors such as the motor skills or hand–eye coordination of the hunter rather than the quality of the arrowhead. And again, this would require our learner to assess and compare the hunting success of all or most hunters in the community to identify the most successful.
Our novice flintknapper, however, sees another way to gather information quickly. The first thing she noticed when she started making arrowheads was that anytime someone had difficulty with the steps involved, that person always sought out a specific woman in the community for help. Perhaps the master flintknapper was someone older and presumably more knowledgeable (Henrich and Gil-White, 2001; Henrich and Henrich, 2010 [but see Reyes-Garcia et al., 2008]), although our learner could not know this for sure, having no direct access to the hunting success of this woman's husband. All she knows is that everyone in the community pays this woman more attention and generally confers upon her more respect. From this, our novice decides that she, too, should pay special attention to this other woman. As such, she is able to learn the intricacies of successful arrowhead creation, allowing her husband to kill more game and herself to receive more in trade for her arrowheads.
This process might occur not only when a flintknapper enters a new group, but also whenever the environment changes such that a new kind of arrowhead becomes optimal. Little theoretical work has considered the dynamics of prestige-biased transmission directly after an environmental shift, and there are reasons to support both the increased and decreased use of prestige-biased transmission. For example, it may be the case that prestigious people continue to be imitated after an environmental shift because their reputation comes from something other than the task itself. If, for instance, prestige is related to some kind of general problem-solving ability or general intelligence (the “g” factor), then prestigious individuals will be most likely, and quickest, to discover the new optimum after the environmental shift. Even if the optimal strategy is to forego the use of prestige-biased transmission directly after an environmental shift, the continued use of this biasing mechanism by underperforming individuals may cause it to persist as a behavioral strategy. We make a first attempt to generate empirical data to address this issue.
It is intuitive that prestige bias can be cost effective, but how can we measure it? To begin to answer this question, we can divide prestige bias into four constituent parts: the bias to produce information that can be copied by others, the bias to confer deference, the bias to value having prestige (on the part of the teacher), and the bias to pay attention to prestige information (on the part of the learner). Each part is worth analyzing in detail, but here we report the results of an experiment aimed only at the bias to pay attention to prestige information.
In the first explicit experimental test of this component of prestige bias, Chudek et al. (2012) showed that 3–4-year-old children preferentially learn from adults at whom other adults have spent longer looking. Our study represents the first experimental test of prestige bias with adult participants. Further, whereas the task used by Chudek and colleagues—choosing which of two kinds of food to eat—was relatively simple, we employ a more complex task that is more representative of actual technology acquired by real-life human populations. Although complex tasks can make the interpretation of results more difficult, these tasks also better reflect the environments in which people actually make decisions (Mesoudi and O'Brien, 2008a). As the goal of this paper is to determine whether individuals choose to engage in certain learning strategies, we consider the additional complication of a multivariate task to be worthwhile. We also, for the first time, present an explicit comparison between prestige bias (where people use cues independent of success, such as eye-gaze, as a guide to model selection) and success bias (where people use direct measures of fitness or success, such as hunting yield, as a guide to model selection). If the notion that prestige information serves as an inexpensive proxy for success is correct, then people should prefer to use the more reliable success information when given both success and prestige information.
Hypotheses
The above discussion leads us to generate the following hypotheses:
H1: The amount of time spent looking at an individual will determine the likelihood of that individual being chosen as a model by a learner when direct information about an individual's skill is unavailable.
H2: The effect of the amount of time spent looking at an individual will increase immediately after an environmental shift.
H3: When learners are provided with both the amount of attention paid to potential models and the direct success of potential models, the effect of H1 (prestige bias) will decrease and we will see the use of success-biased strategies.
Task Outline
To test these hypotheses, we employed an experimental task used previously to study individual and social learning (Mesoudi, 2008, 2011b; Mesoudi and O'Brien, 2008a). In this task, participants design an arrowhead that may vary in several dimensions (length, width, thickness, shape, and color), then use their arrowhead to go on a series of “hunts.” The closer their design is to hidden optimal designs, the higher their payoff, expressed in terms of caloric return. Over successive hunts, participants can improve their design either through individual learning (trial and error) or through social learning—that is, copying the design of one or more other participants. Although previous studies (Mesoudi, 2008, 2011b; Mesoudi and O'Brien, 2008a) have examined success-biased copying—allowing participants to view the cumulative payoff of other players and then preferentially copy the most successful (highest-scoring) player—here we added the possibility of prestige bias, indicated by participants preferentially copying those models whom other participants had looked at for longer periods of time. We initially sought to test this when prestige information was the only social cue available (to test H1); we then introduced periodic environmental shifts where the hidden optimal arrowhead design changed (to test H2); and finally, we introduced direct success information to see whether participants preferentially employed prestige or success bias, if given the choice (to test H3).
Materials and Methods
Participants
One hundred thirteen participants took part in the experiment. All were enrolled at the University of Missouri and received course credit for participating in the experiment, in addition to monetary payment ranging from $2 to $8 (see below for payment scheme). Participants spent 45–60 minutes completing the experiment.
Design
All participants engaged in three seasons of hunting, each of which comprised 30 hunts. In all three seasons, participants could view prestige information relating to a series of models (see next section for details of how prestige was represented) in order to test hypothesis H1. The environment shifted between each season. Within a season, the environment of season 1 remained constant, whereas seasons 2 and 3 each contained a mid-season change of environment in order to test hypothesis H2. Season 3 presented success information alongside prestige information in order to test hypothesis H3. In all seasons, the dependent variable is the model chosen.
We can divide social learning into two components: “observation” and “copying.” In our design, participants could choose to view an arrowhead designed by another person but not actually copy it, i.e., change their arrowhead to match the model's; this would be observation but not copying. Participants who viewed an arrowhead and then copied it exhibited both observation and copying (copying cannot occur without observation).
Procedure / Task
Participants were told to imagine they were prehistoric hunters in the American Great Basin and that they needed to design the best arrowhead in order to maximize caloric return (see Figure 1). Three seasons of 30 hunts each were conducted, and after each hunt participants had the opportunity (1) to modify their arrowhead by learning either individually or socially, or (2) to hunt again with the same arrowhead. Individual learning cost 167 calories out of a total of 1,000 potential calories (see below). Social learning or hunting again with the same arrowhead imposed no caloric reduction. In season 1, the adaptive landscape stayed constant throughout the 30 hunts. In both seasons 2 and 3, however, the adaptive landscape was changed at hunt 15 (see Table 1). Participants were warned before season 1 that this might happen. To encourage participants to perform well, a $2 reward was provided for each 2,100 calories over 13,000 calories that a participant averaged over the three seasons (e.g., a score of 13,000 resulted in no payment, a score of 15,100 resulted in a payment of $2, and so on). The average payment was $4, with a minimum of $2 and a maximum of $8.
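The payment rule can be expressed as a simple step function. The sketch below is a hypothetical reconstruction; the assumption that only full 2,100-calorie steps count is ours, though it matches the worked examples in the text.

```python
def payment_dollars(avg_calories, base=13000, step=2100, rate=2):
    """Sketch of the incentive scheme: $2 for each full 2,100 calories by
    which a participant's three-season average exceeds 13,000 calories.
    Rounding down to whole steps is an assumption, not stated in the text."""
    if avg_calories <= base:
        return 0
    return int((avg_calories - base) // step) * rate
```

For example, `payment_dollars(13000)` gives 0 and `payment_dollars(15100)` gives 2, matching the examples above.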
Experimental manipulations conducted in each session

Flowchart showing the decisions that participants can make through a season of 30 hunts
So that everyone received the same information, participants in each group were told that they were interacting with individuals in other groups when in fact all information they were shown was determined beforehand. Prior to beginning, participants were instructed to wait for the other groups to get ready. After 1–2 minutes, the researcher instructed the participants that the other groups were ready and thus they could begin. To give an air of reality to the deceit, participants were assigned randomly generated wait times after each hunt (see Appendix 1 for details of wait-time rules). A manipulation check was conducted at the end of the experiment so that we could exclude from the results those participants who did not believe in the "other group" scenario. The manipulation was assessed using a simple question after completion of the study: "Did you believe you were interacting with a group of real people?" Seventy-three percent of the subjects indicated a belief in the manipulation; the remaining 27% were excluded from all analyses.
Based on pilot data, we imposed a 16.7% cost, corresponding to the “low cost” condition of Kameda and Nakanishi (2002), each time a participant learned individually (see Table 1). This corresponds to a cost of 167 calories (out of a potential 1000) levied against a participant's hunting success after each hunt during which the participant learned individually. Kameda and Nakanishi argue that imposing this cost better reflects the reality of engaging in risky but potentially rewarding individual learning in a real-world environment.
Participants could change their arrowheads by manipulating any or all of five variables—three continuous (width, length, and thickness) and two discrete (shape and color), the latter containing four states each. All modifications, except those to color, resulted in changes in fitness. After each hunt, participants were given a score ranging from 1 to 1,000. The values specified for each variable were compared to an underlying function specifying optimal values, which was changed, following the rules below, at the beginning of each season and at hunt 15 during seasons 2 and 3 (see Appendix 2 for details of fitness functions).
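As an illustration of the general shape of such a scoring rule (the actual fitness functions are specified in Appendix 2; everything below, including the Gaussian form and the shape penalty, is our assumption), a hidden-optimum score could be computed like this:

```python
import math

def continuous_fitness(value, optimum, width=20.0):
    # Gaussian-style payoff that peaks at the hidden optimal value
    return math.exp(-((value - optimum) ** 2) / (2 * width ** 2))

def hunt_score(arrowhead, optima, max_score=1000):
    """Hypothetical scoring sketch: the three continuous attributes are
    scored against hidden optima, shape is scored as match/mismatch, and
    color is ignored because changes to it carried no fitness effect."""
    parts = [continuous_fitness(arrowhead[a], optima[a])
             for a in ("width", "length", "thickness")]
    parts.append(1.0 if arrowhead["shape"] == optima["shape"] else 0.25)
    return round(max_score * sum(parts) / len(parts))
```

A design matching the optima on every scored attribute earns the maximum of 1,000 calories; any deviation lowers the return.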
On the first hunt of each season, participants were presented with a choice of five arrowheads that had been used to hunt on a previous day, along with the hunting “success” of each arrowhead (see Table 1). We emphasize that the arrowheads are proxies for the individuals who made them, and when we say that a participant “learned” from an arrowhead, we mean that he or she learned from the individual who had created the arrowhead the participant was copying.
Participants were required to select one of the five arrowheads as their starting point. The characteristics of these arrowhead models were generated through simulation using agent-based models (Mesoudi and O'Brien, 2008b). Fitness (hunting success) was derived from a different adaptive landscape than the one used in each round so that copying any one arrowhead would not result in an unfair advantage. After selecting an arrowhead, a participant was directed to a screen on which he or she could see the picture of the arrowhead design and the values for each variable. On the first hunt, participants were required to use the arrowhead they had chosen previously. During all subsequent hunts, once a participant made a decision to learn individually, to learn socially, or to hunt, he or she could not perform either of the other actions on that hunt.
Participants who chose to learn individually were directed to a screen that contained a picture of the selected arrowhead and the values for each variable. Participants could change each of the three continuous variables to any value between 1 and 100 and could change the two discrete variables to any of four states. They could change as many of the values as desired, or none, while still incurring the penalty for individual learning. After accepting the changes, participants were returned to the hunt screen and notified of the penalty. After hunting, participants were shown their scores for the last hunt and told to wait for the other members of their group to finish, after which the process could be repeated.
Participants who chose to learn socially were taken to a screen that showed the five arrowheads. After clicking on an arrowhead (this step did not involve actually copying the arrowhead's values), they received information regarding prestige (attentional information). In seasons 1 and 2, this information consisted of the names of the other four individuals and the amount of time that each potential learner spent examining the arrowhead he or she had highlighted. For example, a participant looking at arrowhead 1 might see that both individuals 3 and 5 spent no time looking at arrowhead 1, whereas individual 2 spent 7 seconds and individual 4 spent 11 seconds. In season 3, this information was expanded to contain the hunting success, averaged over the last three turns, of the individual looking at the arrowhead of interest (see Table 1). After examining the information, participants needed to click a button to view the characteristics of the arrowhead of interest. After clicking, participants could not choose to learn from any other arrowhead. A participant could then choose to copy any, all, or none of the given characteristics. After choosing which characteristics to copy, participants were returned to the hunt screen, after which the process could be repeated.
All viewing times were products of a random-number generator, but they were constrained in order to be realistic. Viewing times were chosen randomly from between zero and 20 seconds. This did not result in a consistently prestigious individual, and the prestige of individuals could vary each turn. In addition, conflicting information, as given in the example above, was present. Viewing times were the same for all participants. Viewing times were uncorrelated with any aspects of the arrowheads so as to avoid conflating success and prestige in an experimental setting. In the analysis, the time spent viewing a variable was a sum of all time spent looking at information for a particular arrowhead.
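The generation of viewing times described above can be sketched as follows. This is a hypothetical reconstruction (function and parameter names are ours); the key property it reproduces is that times are drawn independently of the arrowheads, so the prestige cue carries no information about actual success.

```python
import random

def generate_viewing_times(n_models=5, max_seconds=20, seed=None):
    """For each of the five arrowheads, draw each other individual's
    viewing time uniformly at random from 0-20 seconds, independently of
    any property of the arrowheads themselves. An individual is never
    listed as viewing his or her own arrowhead."""
    rng = random.Random(seed)
    return {model: {obs: rng.randint(0, max_seconds)
                    for obs in range(1, n_models + 1) if obs != model}
            for model in range(1, n_models + 1)}
```

Because each draw is independent, no individual is consistently prestigious, and conflicting cues (e.g., two observers ignoring an arrowhead that a third watches for 11 seconds) arise naturally.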
Analytical Methods
Multinomial logistic regression was used to predict whether any one arrowhead was “learned from”—again, a shorthand way of referring to the individual who produced a particular arrowhead. This type of analysis allows us to use the full information on all of the variables in the analysis. In multinomial regression, one of the predicted categories must be left out of the analysis and treated as the reference category; we used arrowhead 5 as the default. The necessity of having a reference category in multinomial regression should not qualitatively impact the results, even if a different arrowhead were selected as the default (see Hendrickx and Ganzeboom [1998] for a readable explanation of multinomial logistic-regression models). Further, because the data are multiple turns of a game conducted by an individual, they are organized hierarchically by individual. All analyses were conducted using subject as a clustering variable in MPlus. This procedure computes a random intercept for each subject as implemented in MPlus using TYPE=TWOLEVEL (Muthén and Muthén, 2007).
Results
Did Participants Engage in Social Learning?
A prerequisite for examining the effects of prestige bias is that participants engaged in social learning (prestige-biased or otherwise). Analyses confirmed that participants did indeed frequently engage in social learning. Figure 2, which shows the distribution of social learning—both "observation" and "copying"—in each season, indicates that the frequency of social learning increased each season. Season 3, in particular, had fewer participants who never engaged in social learning or did so only once. The number of arrowhead views that resulted in copying in each season is shown in Figure 3. The proportion of arrowhead views that resulted in copying rose throughout the experiment, from 80% in season 1, to 88% in season 2, to 92% in season 3.

The mean number of hunts, by season, during which participants engaged in social learning

The number of model views that resulted in copying compared to those that did not
Did Participants Use Prestige Information in Seasons 1 and 2?
H1 posits that individuals use prestige information when choosing a model. Results of the multinomial logistic regression are presented in Table 2. Conceptually, this table presents the results of four different logistic regressions: comparing choosing arrowhead 1 to arrowhead 5, arrowhead 2 to arrowhead 5, and so on. Each row presents the results for one such comparison. Row one, for example, shows the effect of attention paid to each arrowhead model on the likelihood, expressed as odds ratios, of selecting arrowhead 1 instead of arrowhead 5. The columns give the parameter estimates for each independent variable. Again using row one as an example, each unit increase in the time that arrowhead 1 was looked at predicts a 2.3% greater likelihood of selecting arrowhead 1 as a teacher; each unit increase in the time that arrowhead 2 was looked at predicts a 0.8% lower likelihood of selecting arrowhead 1 as a teacher; and so on. If individuals use prestige information to decide from whom to learn, we would expect to see odds ratios significantly greater than 1.0 on the diagonal in Table 2. Note that the expected results are found for arrowheads 3 and 4 (significant parameter estimates >1, highlighted in green) and that there is a strong trend toward the expected result for arrowheads 1 and 2 (trending parameter estimates >1, highlighted in yellow).
Effect of attention paid on selection of an arrowhead from which to learn
Note: All values given as odds ratios; cells shaded yellow indicate a trend in the expected direction; cells shaded green indicate a significant result in the expected direction; 1532 observations clustered within 80 individuals.
An example might be useful for understanding the change in likelihood of choosing an arrowhead model that is implied by the results in Table 2. We'll use arrowhead 4 as the example. Assume that on a particular hunt arrowhead 4 was the one least looked at (3 seconds) and arrowhead 2 was the most looked at (15 seconds). Assume that on another hunt the time spent looking at arrowhead 4 is now the highest (16 seconds). The learner would be 62.4% (13 units of time at an increased probability of 4.8% per unit of time) more likely to select arrowhead 4 on the second hunt than on the first.
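As a check on this arithmetic: the 62.4% figure treats the 4.8% per-second effect as additive across the 13-second difference. Because odds ratios compound multiplicatively, the exact change in the odds is somewhat larger; both calculations fit in a few lines.

```python
per_second_odds_ratio = 1.048  # arrowhead 4's diagonal estimate from Table 2
seconds_gained = 16 - 3        # difference in viewing time between the two hunts

# The linear approximation used in the text: 13 units times 4.8% per unit
linear_increase = seconds_gained * (per_second_odds_ratio - 1)  # 0.624, i.e., 62.4%

# The exact multiplicative change in the odds of selecting arrowhead 4
exact_odds_change = per_second_odds_ratio ** seconds_gained     # about 1.84
```

The linear figure is thus a conservative summary; the compounded odds nearly double.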
Note also that participants could look at the information for as many or as few of the five arrowheads as they desired. These results may therefore be attenuated by a participant's failure to look at the prestige information for all arrowheads. For example, a participant may have looked only at arrowheads 2 and 3 and chosen as his or her teacher the maker with the higher prestige or success, even though that teacher was only the third most-prestigious or most successful arrowhead maker overall. Any effects detected here would therefore be stronger if this attenuation were accounted for.
Was Prestige Information More Likely to Be Used After an Environmental Shift?
To test H2, whether there was an increased reliance on the attentional information directly after an environmental shift, the 10 hunts directly after the shift were separated from the other hunts. This cut point was selected through qualitative observation of subjects as they were interacting with the game: players appeared to have recognized and responded to the environmental shift by 10 turns after the shift. The logistic regressions predicting arrowhead selection were then conducted for both sets of hunts from seasons 2 and 3. A z-test was conducted to determine whether the beta values differed from each other (Paternoster et al., 1998). The results indicate little to no difference (z-score range: 0.91–2.24, with only one test significant at p < .05).
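The coefficient-comparison test takes the standard form given by Paternoster et al. (1998); a minimal sketch (the values in the usage note are hypothetical, not the fitted estimates):

```python
import math

def coefficient_z(b1, se1, b2, se2):
    """z-test for the equality of two regression coefficients estimated
    on separate subsets of the data (Paternoster et al., 1998):
    z = (b1 - b2) / sqrt(se1^2 + se2^2)."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
```

For instance, `coefficient_z(0.30, 0.10, 0.10, 0.10)` is about 1.41, below the 1.96 cutoff for p < .05 (two-tailed), so two such betas would not be judged reliably different.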
When Given the Choice in Season 3, Did Participants Prefer to Use Prestige Information or Success Information?
We tested H3, the relative importance of attention paid and arrowhead success in a learner choosing to learn from a specific teacher, by examining information criteria for three statistical models predicting the arrowhead selected. One model used prestige information (equation 1, Appendix 3), a second used success information (equation 2, Appendix 3), and a third used both prestige and success information (equation 3, Appendix 3).
All three models were tested only in season 3, providing an equal sample size for each model. We followed a model-selection paradigm using the Bayesian information criterion (BIC)—a measure of parsimony that rewards a model for explaining more variance and penalizes it for incorporating more variables. A lower BIC value indicates a better fit of the model to the data. The Akaike information criterion (AIC) was also investigated, and all results are qualitatively identical to the BIC results. The difference between models was insubstantial (see Table 3), indicating that attentional information and success information are equally good predictors.
Model criterion comparisons for attention paid and success information in season 3
Note: For BIC and BIC difference, lower numbers are better; BIC weight gives the likelihood of the model being the model truly underlying the distribution given the models provided.
Further, these effects are not additive, given that the full model (incorporating both prestige and success information) does not have a lower BIC than the other models. This indicates that even though prestige information was random, individuals were using prestige and success information in similar ways. The BIC weight tells us the likelihood of each of the provided models being the true model underlying the distribution of the data. For this statistic, higher values are better. In this case, the model incorporating only prestige has a 65% chance of being the model, out of the three provided, that underlies the distribution of the data.
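BIC weights of this kind are computed from the BIC differences across the candidate set; a minimal sketch (the BIC values in the test are illustrative, not those in Table 3):

```python
import math

def bic_weights(bics):
    """Convert raw BIC values into model weights: each model's weight is
    proportional to exp(-deltaBIC / 2), where deltaBIC is its difference
    from the best (lowest) BIC, normalised so the weights sum to 1."""
    best = min(bics.values())
    raw = {m: math.exp(-(b - best) / 2.0) for m, b in bics.items()}
    total = sum(raw.values())
    return {m: r / total for m, r in raw.items()}
```

Three models with identical BICs each receive a weight of one-third; a model 2 BIC units worse than the best receives roughly 0.37 times the best model's weight.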
When Was Social Learning Used, If Not After an Environmental Shift?
Although we showed above that social information was no more likely to be used after an environmental shift, further analyses revealed that the use of social learning was more closely tied to a participant's own performance in the task. The rolling average (calculated over the last three hunts) of participants engaging in copying in each season is shown in Figure 4. Individuals who engaged in copying had lower rolling averages than those who merely observed; in effect, they had been performing worse over the past three hunts than they were used to. In other words, individuals opt to use social information when they are performing poorly.

The rolling average (calculated over the last three hunts) of individuals who engaged in copying compared to those who did not
Figure 5 shows even more clearly the impact of an individual's score on choosing whether or not to copy. The seasonal average was derived by taking the total score of the individual at any point in the season and dividing it by the number of hunts thus far in that season. We examined differences of means in these clustered data by testing the null hypothesis of no difference of means with the Wald test of parameter constraints, using subject as the clustering variable; this can be interpreted as analogous to an independent-samples t-test. Individuals who engaged in social learning had lower rolling averages, corrected for the seasonal average, than individuals who did not learn socially (mean difference = 19.96, Wald chi-square = 12.33, df = 1, p < .001). Those who chose to copy had lower seasonally corrected rolling averages than those who merely observed (mean difference = 37.07, Wald chi-square = 14.86, df = 1, p < .001). The seasonally corrected rolling average for those who copied became much lower as the game progressed (μ1 = 93.78, μ2 = 28.24, μ3 = −8.49). This should not be interpreted as evidence that social learning does not pay but rather as showing that people become better able to determine when to persist in a strategy and when to learn from others.

The rolling average (calculated over the last three hunts) after correcting for the individual's seasonal average
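The seasonally corrected rolling average used in Figures 4 and 5 can be computed as follows. This is a hypothetical helper reconstructing the description above, not the original analysis code.

```python
def corrected_rolling_average(scores, window=3):
    """For each hunt in a season, return the mean of the last `window`
    scores minus the season-to-date mean. Negative values indicate that
    the participant is currently doing worse than usual for that season."""
    out = []
    for i in range(1, len(scores) + 1):
        rolling = sum(scores[max(0, i - window):i]) / min(window, i)
        seasonal = sum(scores[:i]) / i
        out.append(rolling - seasonal)
    return out
```

A flat run of scores yields zeros; a recent dip yields negative values, which is the signature of the copy-when-unsuccessful pattern described above.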
Discussion
Prestige-biased transmission relies on several components, all of which are necessary and together sufficient to give rise to prestige bias (Henrich and Gil-White, 2001): the production and sharing of information, the conferment of deference, the bias to value having prestige (on the part of the potential teacher), and the bias to attend to prestige (on the part of the learner). The theoretical benefits of using prestige-biased transmission have previously been outlined (Henrich and Gil-White, 2001) and used in studies of behavior (e.g., Kirkpatrick and Ellis, 2006; Plourde, 2008, 2009), but the predictions of the impact of prestige on social transmission have not been well examined in the laboratory. Following on from Chudek et al.'s (2012) recent demonstration that children use prestige information in model selection, the experiment reported here was designed to understand the impact of attentional (prestige) information on the likelihood of an adult learner selecting one individual over another as a teacher, and in the context of a complex and ecologically relevant technology-design-based task.
Our findings are consistent with the idea that attention paid to an individual affects the likelihood of that individual being chosen by a learner. In the portion of the experiment where learners had access only to attentional information about each individual (seasons 1 and 2), we found that attention paid to an individual increased the likelihood of that individual being selected (see Table 2). Our experiment also addressed which strategy is preferred, prestige bias or success bias, when both sets of information are present. We found that when participants were presented with both, there was no difference (see Table 3). This is all the more surprising given that the prestige information was randomly generated and thus provided a poor indirect cue of adaptive behavior, whereas the success information provided direct and accurate information about adaptive behavior. This perhaps suggests that our participants are accustomed to using prestige cues in their everyday lives and that this habit carries over into the laboratory, even when it is a suboptimal strategy.
This study allows us to make some broader statements regarding social learning and transmission. First, as people become integrated into a novel environment, such as in the arrowhead game, they become better at deciding when to learn from other individuals. Note that in Figure 2 there is a significant trend toward a higher proportion of learning events resulting in copying as the game progresses through the seasons. Second, people are more likely to engage in social learning when they are performing poorly. This is seen in the difference in both the rolling average (see Figure 4) and the seasonally corrected rolling average (see Figure 5) between those who copy and those who do not. This “copy-when-unsuccessful” strategy has been found in previous experiments (Kameda and Nakanishi, 2003; Mesoudi, 2008) and is thought to be a generally adaptive social-learning rule found across the animal kingdom (Laland, 2004). Third, as individuals spend time in a novel environment, perfecting the techniques associated with that environment, they become more likely both to persist with a technique known to be successful and to recognize more quickly when a technique is performing poorly and change it. This is evident in the fact that the seasonally corrected rolling average for those who learn socially drops each season (see Figure 5), indicating that, once integrated into a novel environment, people are more sensitive to failure and more likely to persist with a successful technique. If they did not become more sensitive to failure, their seasonally corrected averages would not have changed between seasons. Similarly, if they were not so likely to persist with a given technique, their seasonally corrected rolling average when they chose to learn would be higher, at least greater than zero.
Finally, one interesting finding was that in the context of a shifting adaptive landscape, such as the one introduced in this study, there was no evidence that individuals use prestige-biased transmission at higher rates than in stable environments. One might have expected that if prestige information is a reliable indicator of individual performance or ability, its use would increase after the environment changed, but this was not the case. It may be that the shift was not dramatic enough to be noticed or accommodated by our participants, or that the aforementioned effect of performing poorly overwhelmed this manipulation of the environment.
The precise factors that constitute prestige in natural environments have not been identified. Indeed, the factors that affect prestige are likely to be highly variable across societies (Henrich and Gil-White, 2001). For example, the aged and knowledgeable do not enjoy greater prestige in the selection of medicinal plants among the Tsimane' (Reyes-Garcia et al., 2008), but they do enjoy greater prestige in the diffusion of food taboos during pregnancy and lactation among Fijians (Henrich and Henrich, 2010). In a sample of American undergraduates, however, we found that attentional information, a valid proxy for prestige in this context, does affect an individual's choice of whom to select as a teacher. We anticipate that future cross-cultural experimental and ethnographic studies will shed light on the social and ecological factors that shape the use of prestige information in the human species.
Footnotes
1
The protocol for deceit suggested by the American Psychological Association was followed, and an ethics board approved the manipulation.
Acknowledgements
We thank David Geary for allowing us to use his computer lab; Mark Flinn, Lee Lyman, Elizabeth Cashdan, and two anonymous reviewers for extremely helpful comments on how to strengthen the manuscript; Devin Smittle for assistance with computer programming; and Melody Galen for preparing the figures and tables.
Appendix 1: Wait-time Rules for the Arrowhead Game
To give an air of reality to the deceit, participants were assigned randomly generated wait times after each hunt according to the following rules:
Appendix 2: Fitness Functions for the Arrowhead Game
The score from a single arrowhead design was shown to participants as W (0 < W ≤ 1000) and was calculated from the weighted fitness contributions of the four functional attributes according to the following function, where the subscripts λ, w, t, and s denote length, width, thickness, and shape, respectively, and each of the four fitness components ranges from 0 to 1:
Participants were informed only of their overall score, not of the individual contribution of each attribute to their score. Note that this equation specifies the relative importance of the variables in computing the final score in descending order of importance as thickness, length, width, and finally shape (although this was not a crucial detail in the present study).
As in Mesoudi and O'Brien (2008a), the three continuous variables—length, width, and thickness—each had bimodal fitness distributions. Fitness was calculated from two normally distributed functions for each attribute, W1 and W2, where W1 is centered around a global optimum value—O1 (10 < O1 < 90)—and W2 is centered around a local optimum value—O2 (10 < O2 < 90). If a participant's attribute value is at the global optimum, O1, then the participant receives the maximum possible fitness for that attribute (Wλ = 1, Ww = 1, or Wt = 1), prior to being subject to random noise. If a participant's value is at the local optimum, O2, then the participant receives two-thirds of the fitness of the global optimum (Wλ = .66, Ww = .66, or Wt = .66), prior to being subject to random noise. Any deviation from either optimum decreases the feedback from that attribute's fitness and hence the overall feedback, W, according to equation (1). Equations (2) and (3) give the ultimate functions W1 and W2 for length (hence Wλ1 and Wλ2), adapted from Boyd and Richerson (1985):
where Xλ is the participant's length value; Oλ1 and Oλ2 are the specific values of the two optima in terms of the arbitrary length units; P1 and P2 are the maximum fitness values given by the global and local optima, respectively; and s is a measure of the intensity of selection. In this study, P1, P2, and s were constant at P1 = 1, P2 = .66, s = .025. The overall Wλ is then given by the larger of the two values, Wλ1 and Wλ2 (equation 4):
The same process was repeated for the other variables.
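The bimodal fitness calculation for a continuous attribute (equations 2–4) can be sketched as follows. The Gaussian form P · exp(−s(X − O)²) is an assumption consistent with the description of normally distributed functions centered on each optimum; the published equations should be consulted for the exact form.

```python
import math

# Hedged sketch of the bimodal fitness function for one continuous
# attribute (length, width, or thickness). The exact curve shape is
# assumed, not taken from the published equations.

P1, P2 = 1.0, 0.66   # maximum fitness at the global and local optima
S = 0.025            # intensity of selection (constant in the study)

def attribute_fitness(x, o1, o2, p1=P1, p2=P2, s=S):
    """Fitness of attribute value x given the global optimum o1 and
    local optimum o2; the larger of the two curves is returned
    (equation 4)."""
    w1 = p1 * math.exp(-s * (x - o1) ** 2)  # curve around global optimum
    w2 = p2 * math.exp(-s * (x - o2) ** 2)  # curve around local optimum
    return max(w1, w2)
```

At the global optimum this returns 1.0, at the local optimum approximately 0.66, and fitness falls off with deviation from either peak, matching the description above.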
One variable, shape, was discrete and had four possible values, which were randomly rank ordered to determine their fitness. The first-ranked value received the maximum, Ws = 1, followed by Ws = .9, Ws = .66, and Ws = .33 in descending rank order. The fitness scores shown to the participants were subject to random error (McElreath et al., 2005), used to represent random environmental factors such as prey availability. The normal distribution used to draw the fitness value had a mean of W and a standard deviation of E, set at a constant E = 5 in this case.
Appendix 3: Equations Specifying the Statistical Models Used for One of the Logistic Regressions Contained within the Multinomial Logistic Regression
α denotes the intercept for each individual; β, χ, δ, γ, η, κ, λ, μ, ν, and θ are cluster-level (individual-level) random effects. p1–p5 represent the attention paid to each arrowhead variable, and s1–s5 represent the success of each arrowhead variable.
