Abstract
The mnemonic effects of animacy and threat were explored with photographic stimuli. After studying labeled photographs of animate and inanimate items that were either threatening or nonthreatening, participants recalled significantly more animate items than inanimate items and more threatening items than nonthreatening items. However, a recognition test of the photographs showed higher accuracy for inanimate photographs. Eyetracking was used in the second experiment to determine whether participants’ eye movements were affected by the animacy or threat status of the stimuli and whether the pattern of eye movements was similar to the pattern of memory results. The free recall patterns followed the typical effects of animacy and threat, but a reverse animacy effect was again found in photograph recognition. Further, eyetracking measures revealed patterns similar to those of the free recall data, with more fixations and more time spent viewing animate items and threatening items. The data present a dichotomy between memory for the specific details of the studied stimuli (i.e., the details of the studied photographs) and memory for the more general semantic information of the studied stimuli (i.e., the ability to recall more animate items than inanimate items). The eyetracking results show that animate items and threatening items are more likely than inanimate items and nonthreatening items to capture participants’ visual attention, which could be at least partly responsible for the memory advantage for these item types in recall, but not necessarily in recognition.
Is a (Threatening) Picture (of an Animate Stimulus) Worth a Thousand Words?
The current studies provide evidence for the animacy effect, the finding that animate items are better remembered than inanimate items (Nairne et al., 2013), using eyetracking and photographic stimuli. Myriad studies provide evidence of this animacy effect with a variety of stimuli and under a variety of conditions in adults, in retrospective memory (see Nairne et al., 2017, for a review) as well as prospective memory (Félix et al., 2024). The effect is also present in young children (Aslan & John, 2016). The focus on studying animacy in memory followed a surge of interest in the evolutionary underpinnings of human memory, sparked in part by research on the survival processing advantage in memory (Nairne et al., 2007), as well as a more recent focus on the apparent priority that cognitive processing gives to animate stimuli. For example, animate items are more likely to be detected in change detection tasks (Altman et al., 2016) and inattentional blindness tasks (Calvillo & Hawkins, 2016; Calvillo & Jackson, 2014), and they are more likely to be reported in a serial visual presentation task (Guerrero & Calvillo, 2016; Hagen & Laeng, 2017). Further, participants took longer to identify the font color of animate items than inanimate items in a modified Stroop task, suggesting that animate items were processed differently than inanimate items (Bugaiska et al., 2019).
Although it is clear that animacy is prioritized in human memory, the proximate mechanism that supports the animacy effect is not yet known. Recent research suggests that the animacy effect is due to increased item-specific memory for the animate items, as opposed to intentional storage or retrieval strategies utilized by participants (Serra, 2021; VanArsdall et al., 2017). The increased item-specific memory strength for animate items could have a number of causes, and several proximate mechanisms for the animacy effect in memory have been suggested, including elaboration and imagery (e.g., Bonin et al., 2015, 2022; Gelin et al., 2019; Meinhardt et al., 2020), valence of the items (Popp & Serra, 2018), emotional or mental arousal of the items (e.g., Meinhardt et al., 2018; Popp & Serra, 2016), mortality salience induced by the items (Popp & Serra, 2016), and attention capture (e.g., Bonin et al., 2014; Leding, 2020; Popp & Serra, 2016; VanArsdall et al., 2013). Some of these possibilities have been discredited, such as emotional or mental arousal (e.g., Meinhardt et al., 2018; Popp & Serra, 2018) and valence (Popp & Serra, 2018). Although clear evidence in support of one proximate mechanism over the others has not been found, several studies have provided evidence for the role of attention capture in the animacy effect.
For example, the animacy effect persists under conditions that manipulate participants’ attention and memory ability, such as being under a cognitive load when categorizing animate and inanimate items (Bonin et al., 2015) and being in a divided attention condition while studying animate and inanimate words for a later memory test (e.g., Leding, 2019; Rawlinson & Kelley, 2021). The animacy effect persists in survival processing conditions (Gelin et al., 2017) as well as in conditions manipulating shallow and deep processing of studied items (Leding, 2018). Animacy does not always lead to a consistent advantage across memory tasks, but in cases where it does not, the evidence suggests attention capture could still be partly responsible for the results. For example, paired-associate learning often shows a reverse animacy effect, where words paired with animate cues are less likely to be remembered than words paired with inanimate cues, suggesting that the animate cues might capture attention, leaving less attention available to form an association with the paired word (e.g., Kazanas et al., 2020; Popp & Serra, 2016; although see VanArsdall et al., 2015 for evidence of the animacy effect in paired-associate learning). Further, in recognition memory tests, false alarms for animate items are often higher than for inanimate items (e.g., Félix & Pandeirada, 2024; Leding, 2020), perhaps suggesting that unstudied animate items are more likely to capture attention during the test portion of the experiment, causing them to be falsely recognized.
The likelihood that animate stimuli capture attention could be due, at least in part, to the perceived threat of the animate stimuli compared to the inanimate stimuli used in studies examining the animacy effect. To determine whether this was the case, animacy and threat were manipulated separately, with participants presented with animate threatening, animate nonthreatening, inanimate threatening, and inanimate nonthreatening stimuli (Leding, 2019). Animate items were remembered better than inanimate items, and threatening items were remembered better than nonthreatening items, in both recall (Leding, 2019) and recognition memory (Leding, 2020), suggesting independent influences of these two variables. Further, for recognition memory, false alarms were more likely for animate items and threatening items than for inanimate items and nonthreatening items. These effects persisted across a manipulation of response signal delay (RSD), which should affect participants’ ability to use strategic processes to correctly identify studied items (Leding, 2020). In studies examining location memory, animacy and threat each separately enhanced location memory in adults, with the emotional intensity of the stimuli being related to memory performance for both effects (Lhoste et al., 2025). Together, these results provide further evidence that attention capture might be partly responsible for the animacy and threat effects in memory found with these materials.
The current studies were designed to explore the possibility that attention capture is related to the effects of animacy and threat in memory found in Leding (2019, 2020). The first study used the same word stimuli from Leding (2019) along with simultaneously presented photographs to establish whether the pattern of recall results for animacy and threat would remain consistent when photographs were used. Bonin et al. (2014) found that recall for animates was better than recall for inanimates when photographs and verbal labels were presented simultaneously to participants. Further, this experiment explored whether participants would be more likely to recognize photographs of certain item types, such as animate stimuli or threatening stimuli. Because recall relies on the semantic labels associated with the stimuli, whereas recognition of photographs could rely more on familiarity or on the perceptual details in the photographs, a within-subjects design was used to detect possible differences between the patterns of results on the two memory tests. Based on prior research, it was predicted that animate items and threatening items would be more likely to be recalled than inanimate items and nonthreatening items. In line with previous studies examining animacy and threat in recognition memory for verbal information, it was also predicted that a similar pattern of results would occur for targets on the recognition memory test of the photographs and that false alarms would be higher for animate items than inanimate items and for threatening items than nonthreatening items (Leding, 2020).
The second experiment utilized eyetracking technology to give insight into whether animate items or threatening items were more likely than inanimate items or nonthreatening items to capture the visual attention of the participants. Recent evidence suggests that there are advantages to using eyetracking to examine memory, including that the oculomotor system and hippocampal memory system are well connected, suggesting that eye movements are functional in the formation of memory in human and nonhuman primates (Ryan & Shen, 2020). For example, free eye movements while participants studied scenes were predictive of memory performance on a later recognition task of those scenes, and in a more controlled experiment testing for a causal relationship, participants whose eye movements were restricted during the study phase had reduced hit rates (Damiano & Walther, 2019). Further, a study utilizing eyetracking and fMRI showed that gaze fixations for novel stimuli and hippocampal activation were positively correlated (Liu et al., 2017). In the second experiment, participants viewed displays of four stimuli while an eyetracker measured how quickly they first viewed an image of a certain item type, the number of eye fixations, how long they viewed certain item types, and how many times they revisited a certain item type during the stimulus presentation. Participants’ recall memory for the words and recognition memory for the photographs were then tested. It was predicted that the eyetracking measures would favor the animate items, such that animate items would be viewed for the first time more quickly than inanimate items and that more time would be spent viewing the animate items. If the eyetracking measures favored animate items, this would suggest that these items are more likely to capture participants’ visual attention, which could be related to an increased likelihood that those items are remembered.
It was predicted that results for the recall memory test of the verbal labels and the recognition memory test of the photographs would replicate the patterns found in Experiment 1.
Experiment 1
Method
Participants
Participants were 42 students 1 (31 indicated that their current gender identity was female, 11 indicated that their current gender identity was male; mean age = 20.29, SD = 3.15). The participants were recruited through an online experiment sign-up system and were told that they should only participate if English was their first language. Four participants indicated that English was not their native language. When the data were analyzed with these participants excluded, the significant effects and pattern of results remained the same as when they were included, so the data from these participants were included in the reported analyses. Participants received partial course credit for participation.
Materials
The stimulus list comprised 112 words, with an equal number of animate threatening, animate nonthreatening, inanimate threatening, and inanimate nonthreatening items (Leding, 2019). Each word on the list was represented by two photographs: one was used during the study portion of the experiment and the other was used as a distractor on the recognition test. The use of the photographs as the target and distractor was counterbalanced across conditions. The photographs were chosen so that they would be similar (e.g., for the stimulus word ‘lion,’ each photograph included a male lion lying on the ground with its head up, surrounded by grasses in its natural environment, and for the stimulus word ‘towels,’ each photograph included a different set of white towels sitting on a counter with tiles in the background). Visual stimuli for the study portion of the experiment were created that were 500 pixels (horizontally) by 400 pixels (vertically). The photographs were sized to be 500 pixels (horizontally) by 333 pixels (vertically), and the remaining portion of the stimulus (500 pixels by 67 pixels) was a white rectangle at the bottom of the photograph with the stimulus word centered and typed in 24-pt Times New Roman black font. The verbal label was included with the photograph, as in Bonin et al. (2014), to ensure that participants used the intended names for the photographs. The study stimuli were presented in a random order through DirectRT (Jarvis, 2014) at a rate of one stimulus every 2000 ms with a 250 ms interstimulus interval on a black background.
The stimuli for the recognition test were the originally studied stimuli plus an additional 112 unstudied stimuli. The white rectangle with the stimulus word was replaced with a black rectangle. The test stimuli were presented to participants in a random order on a black background with a scale at the bottom of the screen to remind participants which key represented an ‘old’ response and which key represented a ‘new’ response. Participants were able to work through the recognition test at their own pace. At the end of the DirectRT program, participants were asked their age, current gender identity, and whether English was their native language.
Procedure
Participants completed the study individually or in groups of up to four people. Participants sat at computers that were separated by dividers. After signing the consent form, participants were told that they would be viewing a series of stimuli that would include a word and a photograph representing the word. They were told that they should pay attention to the pictures and words because their memory for the items would be tested. The participants viewed the 112 stimuli on the computer screen and then completed a 10-min distractor task in which they worked on a word search that included the names of the 50 states in the United States as the items to be found. After the distractor task, the participants were given five minutes to write down as many words from the list as they could remember. After four minutes had passed, the participants were told that they had one minute remaining and that they should continue trying to remember words. At the end of the recall test, instructions for the recognition test were given. Although the act of completing the recall memory test could affect participants’ subsequent recognition memory for the photographs, this order was chosen to avoid participants viewing each photograph a second time during the recognition test before they were able to complete the free recall test. Participants were instructed that they would be presented with photographs and that they should press the key labeled “old” (the z key) if they had previously seen the photograph and the key labeled “new” (the / key) if they had not previously seen the photograph. The instructions stated that there would be photographs that were similar to the ones that had previously been studied and that participants should indicate that a photograph was old only if they had previously seen that exact photograph.
Participants then completed the 224-item recognition test at their own pace and then completed a short demographic questionnaire that asked their age, whether English was their native language, and their current gender identity. Participants were then thanked and debriefed.
Results
The purpose of the experiment was to test whether threat and animacy affected recall rates of the word stimuli and recognition rates of photographs. A series of 2 (Animacy: Animate, Inanimate) by 2 (Threat: Threatening, Nonthreatening) repeated-measures ANOVAs were conducted on proportion of recall for the word stimuli, as well as target recognition, false alarms, d’ scores, and response bias (C) for the recognition test on photographs. See Table 1 for means and standard deviations.
Table 1. Proportion Recall, Target Recognition, False Recognition, d’ Scores, and Response Bias Scores for Experiment 1.
Note. Standard deviations are presented in parentheses.
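Because both factors have only two levels, each effect in the 2 (Animacy) by 2 (Threat) repeated-measures ANOVA reduces to a paired contrast: the F for an effect equals the squared one-sample t on the per-participant contrast scores. The sketch below illustrates this equivalence with fabricated recall proportions; the participant values and the Python implementation are illustrative assumptions, not the study’s data or analysis code.

```python
# Each 1-df within-subjects effect in a 2x2 repeated-measures ANOVA equals
# the squared paired t on per-participant contrast scores.
from statistics import mean, stdev
from math import sqrt

def paired_f(scores):
    """F (= t^2) and dfs for testing that contrast scores average zero."""
    n = len(scores)
    t = mean(scores) / (stdev(scores) / sqrt(n))
    return t * t, 1, n - 1

# Fabricated recall proportions, one row per participant:
# (animate-threat, animate-nonthreat, inanimate-threat, inanimate-nonthreat)
data = [
    (0.50, 0.39, 0.43, 0.29),
    (0.46, 0.36, 0.39, 0.32),
    (0.57, 0.43, 0.46, 0.36),
    (0.54, 0.46, 0.43, 0.32),
    (0.43, 0.32, 0.36, 0.25),
]

# Per-participant contrast scores for each effect.
animacy  = [(at + an) / 2 - (it + inn) / 2 for at, an, it, inn in data]
threat   = [(at + it) / 2 - (an + inn) / 2 for at, an, it, inn in data]
interact = [(at - an) - (it - inn) for at, an, it, inn in data]

for name, contrast in [("Animacy", animacy), ("Threat", threat),
                       ("Animacy x Threat", interact)]:
    f, df1, df2 = paired_f(contrast)
    print(f"{name}: F({df1}, {df2}) = {f:.2f}")
```

This contrast-based shortcut applies only to 1-df effects; designs with more than two levels per factor require the full ANOVA decomposition.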
Recall
There was a significant main effect of Animacy, F(1, 41) = 21.70, MSE = 0.01, p < .001, ηp2 = .346, with participants recalling more animate items than inanimate items. There was also a significant main effect of Threat, F(1, 41) = 45.43, MSE = 0.01, p < .001, ηp2 = .526, with participants recalling more threatening items than nonthreatening items. The Animacy by Threat interaction was not significant (p = .194), providing evidence for the independent influences of animacy and threat on recall memory. The results of the recall analysis replicated those found previously with word stimuli, with similar ranges of correct recall (e.g., Leding, 2019), showing independent effects of animacy and threat.
Target Recognition
For the proportion of studied photographs recognized, there was a significant main effect of Animacy, F(1, 41) = 53.67, MSE = 0.01, p < .001, ηp2 = .567. Unlike the typical animacy effect, the photographs of the inanimate items were recognized more often than photographs of animate items. There was also a significant main effect of Threat, F(1, 41) = 8.06, MSE = 0.01, p = .007, ηp2 = .164, with threatening items being recognized more often than nonthreatening items. The Animacy by Threat interaction was not significant (p = .945). Thus, when considering correct recognition of the photographs, the animacy effect was reversed, with animate items being recognized less frequently than the inanimate items, while the effect of threat was maintained. These results suggest that completing the recall test did not necessarily affect performance on the recognition memory test, since opposite patterns for animate items were found for recall and recognition.
False Alarm Recognition, d' Scores, and Response Bias
The false alarm rate was calculated as the proportion of new items that were incorrectly reported as ‘old’, or previously studied. In addition, d’ scores were calculated; scores were corrected using instructions from Macmillan and Kaplan (1985), where target recognition proportions of 1.0 were corrected using (1 − (1/2N)), where N was the number of targets, and false recognition proportions of 0 were corrected using (1/(2N)), where N was the maximum number of false alarms. 2 The d’ scores give a measure of participants’ ability to discriminate between the old and new items, with higher scores indicating a better ability to discriminate between new and previously studied items (Snodgrass & Corwin, 1988). Response bias (C) scores were calculated for each item type with C = −0.5*(Zhits + Zfalse alarms), where positive C values indicate conservative biases, or a tendency to say no to items on the recognition test, and negative C values indicate liberal biases, or a tendency to say yes to items on the recognition test (Snodgrass & Corwin, 1988).
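These signal-detection computations can be sketched in code. The sketch below is a minimal illustration, assuming 28 targets and 28 lures per item type (112 studied and 112 unstudied photographs split evenly across the four item types) and using the standard convention C = −0.5(Zhits + Zfalse alarms), under which positive values indicate conservative responding; it is not the study’s analysis code.

```python
# Sketch of the d' and response bias (C) computations; the per-item-type
# N values (28 targets, 28 lures) are assumptions for illustration.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def corrected_rates(hit_rate, fa_rate, n_targets=28, n_lures=28):
    """Macmillan and Kaplan (1985) corrections for extreme proportions."""
    if hit_rate >= 1.0:
        hit_rate = 1.0 - 1.0 / (2 * n_targets)  # 1.0 -> 1 - 1/(2N)
    if fa_rate <= 0.0:
        fa_rate = 1.0 / (2 * n_lures)           # 0.0 -> 1/(2N)
    return hit_rate, fa_rate

def d_prime_and_c(hit_rate, fa_rate):
    """Return (d', C) from hit and false-alarm proportions.

    d' = z(H) - z(F) indexes discriminability; C = -0.5 * (z(H) + z(F)),
    positive for conservative and negative for liberal responding.
    """
    h, f = corrected_rates(hit_rate, fa_rate)
    return z(h) - z(f), -0.5 * (z(h) + z(f))
```

For example, a hit rate of 1.0 with 28 targets is corrected to 1 − 1/56 ≈ .982 before the z-transformation, keeping d’ finite.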
For false alarms, there was a significant main effect of Animacy, F(1, 41) = 7.32, MSE = 0.01, p = .010, ηp2 = .152, with false alarms being higher for animate items than inanimate items. The main effect of Threat and the interaction were not significant (both ps > .220). The d’ scores were calculated for the four item types to assess participants’ ability to discriminate between the old and new items (Snodgrass & Corwin, 1988). The ANOVA for the d’ scores indicated a significant main effect of Animacy, F(1, 41) = 73.93, MSE = 0.12, p < .001, ηp2 = .643, with higher d’ scores for the inanimate items than the animate items, indicating that participants were better able to discriminate new and old photographs for inanimate items than for animate items. The main effect of Threat and the interaction were not significant (both ps > .475). The response bias (C) scores were calculated to determine whether participants were more conservative or liberal in their responses for the four item types. The main effects of Animacy and Threat were significant, F(1, 41) = 12.45, MSE = 0.06, p = .001, ηp2 = .233, and F(1, 41) = 10.32, MSE = 0.03, p = .003, ηp2 = .201, respectively. Participants were more liberal in their responses for animate items (M = −0.243) than inanimate items (M = −0.115) and more liberal in their responses for nonthreatening items (M = −0.224) than threatening items (M = −0.133). The interaction was not significant. Thus, the results of the recognition test indicate that participants had fewer correct recognitions and more false recognitions for animate items than inanimate items, and these results were corroborated by the d’ scores and response bias scores, which showed higher discriminability and more conservative responding for the inanimate items.
When considering threat, participants recognized threatening items more often than nonthreatening items, but there was no difference in false alarms, likely because participants were more conservative in their responses for threatening items compared to nonthreatening items.
Discussion
When participants studied words that were accompanied by photographic representations of those words, the animacy effect and threat effect both persisted in free recall of the words. The animacy effect is similar to that found by Bonin et al. (2014), in which participants had higher recall for animate items than inanimate items when photographs were presented with their verbal labels. Although the effect of threat persisted in the recognition test, the animacy effect was not replicated for recognition of photographic stimuli and was, in fact, reversed: participants recognized photographs of inanimate items at a significantly higher rate than photographs of animate items. Thus, the recall data cannot be attributed to the photographs of animate stimuli being more distinctive, because those photographs were less likely to be correctly recognized. The data present a dichotomy between memory for the specific details of the studied stimuli (i.e., the details of the studied photographs) and memory for the more general semantic information of the content of the studied stimuli (i.e., the ability to recall more animate items than inanimate items). From an evolutionary perspective, it makes sense that humans would not need to remember the specific details and characteristics of animate stimuli that would allow identification at an individual level. For example, it would be unnecessary to know which specific tiger was seen; it would suffice to know that a tiger was present. Therefore, it seems reasonable that recall for the animate items and the threatening items would be higher even when the animacy effect did not persist in recognition of photographic stimuli.
Experiment 2
The second study was designed to investigate whether animate items or threatening items would be more likely to capture visual attention than inanimate and nonthreatening items. An eyetracker was used to determine whether certain item types were viewed longer and revisited more often than other item types and whether those items were more likely to be remembered.
Method
Participants
Participants were 71 students 3 (56 indicated that their current gender identity was female, 14 indicated their current gender identity was male, and 1 person did not disclose; mean age = 21.29, SD = 5.66). The participants were recruited through an online experiment sign-up system and were told that they should only participate if English was their first language. Six students indicated that English was not their first language. When the data were analyzed with these participants excluded, the significant effects and pattern of results remained the same as when they were included, so the data from these participants were included in the reported analyses. Data from three additional participants were excluded because one decided to withdraw participation during the study and two did not view over 10% of the pictures during the eyetracking portion of the study.
Materials
A Gazepoint GP3 Eyetracking device utilizing Gazepoint Control software was used to collect the eyetracking data.
The same 112-item word list from Experiment 1 was used in this study, including animate threatening, animate nonthreatening, inanimate threatening, and inanimate nonthreatening items. Whereas participants in Experiment 1 viewed the 112 items individually, participants in Experiment 2 viewed the 112 items across 28 displays of four items each. Thus, two sets of 28 displays were created that each included four target stimuli. The four target stimuli on each display were the photographs with their corresponding words, as in Experiment 1. Each of the four item types was represented on each of the 28 displays. The displays were 1280 × 1080 pixels, and each of the four target stimuli was 500 × 400 pixels, arranged in the four corners of the display. For each target stimulus, the photograph was 500 × 333 pixels and the label below the picture was 500 × 67 pixels. The target stimuli were surrounded by a black background, with 240 pixels separating the target stimuli across both the height and width of the display. When constructing the stimulus displays, one item from each of the four item types was randomly selected to be included, and the four item types were assigned to the four corners of the displays in a counterbalanced manner, so that all possible combinations of locations were used. Each of the four target stimuli in a display was designated as an area of interest (AOI) in the Gazepoint Control software. The software measured how long after stimulus onset it took for participants to first view each AOI, how many eye fixations occurred in each AOI, the total time participants spent viewing each AOI, and how many times during stimulus presentation the participants revisited each AOI.
The two different sets of displays utilized different pictures so that the pictures from the display set not viewed by the participants could be used as distractors on the recognition test. A fixation display was also created, which was a 1280 × 1080 pixel white background with a black cross measuring 105 × 105 pixels located in the center. The recognition test from Experiment 1 was used.
Procedure
Participants completed the study individually. For the first part of the experiment, participants sat at a computer that was equipped with a Gazepoint GP3 eyetracking device and Gazepoint Control software. The participants were told that the experiment was interested in their memory for picture and word stimuli and that during the stimulus presentation their eye movements would be tracked with the eyetracking device located below the computer monitor. Before the stimulus presentation began, each participant completed a calibration session with the eyetracking device and Gazepoint Control software to ensure that the device was calibrated to their eye movements. Participants were asked to complete the nine-point default calibration process in the Gazepoint Control software, in which they were instructed to visually follow a dot that moved to nine different locations on the screen. Upon completion of this, the calibration was tested using the software's default screen, on which eleven circles were displayed with crosses in each circle. The research assistant used a laser pointer to point to the crosses in the circles and asked the participants to look at the laser pointer. The Gazepoint Control software showed where the participant's eye movements were currently fixated on the screen during the calibration testing. If the eye movements were not detected at the location of the laser pointer, the participant completed the calibration session again.
After the calibration session, the research assistant told the participant that they would begin the experiment and reminded them that their eye movements would be tracked. Participants were asked to try to stay in the same seating position, to continue to focus on the computer screen, and to try to avoid touching their face. The experiment session started and the participants then viewed the 28 displays. A black screen was presented for 2 s, then participants viewed the fixation screen for 1 s, and then viewed a stimulus display with four pictures on it for 10 s. A 1 s interstimulus interval of a black screen occurred before the next fixation screen appeared. After the presentation of the 28 displays, the participants completed a 5-min distractor task where they completed the same word search as in Experiment 1. After the distractor task period, the participants were asked to write down as many of the studied words as they could remember. Participants were given five minutes to do this. After four minutes had passed, the research assistant told participants that they had one minute remaining and to continue trying to remember words during the final minute. At the end of the recall test, participants were given instructions for the recognition portion of the experiment. The participants were asked to move to a different computer so that they knew their eye movements were no longer being tracked. The instructions for the recognition test were the same as in Experiment 1. Participants completed the recognition test at their own pace and then completed a short demographic questionnaire. Participants were then thanked and debriefed.
Results
Eyetracking
The eyetracking software recorded how long it took participants to first view each of the four AOIs on the display, the amount of time they spent viewing each AOI, the number of fixations in each AOI, and the number of times each AOI was revisited during the 10 s interval that the stimulus was displayed. The average for each of these dependent variables was calculated for each participant for the four item types. 4 A series of 2 (Animacy: Animate, Inanimate) by 2 (Threat: Threatening, Nonthreatening) repeated-measures ANOVAs were conducted on these dependent variables. See Table 2 for means and standard deviations.
Table 2. Descriptive Statistics for Eyetracking Data for Experiment 2.
Note. Standard deviations are presented in parentheses.
For the time to first view, if participants did not view a picture on the display during the 10 s interval, this was recorded as a −1. For the purpose of this analysis, those scores were deleted and the averages were adjusted accordingly. 5 This occurred very rarely in the data set, with only 18 instances of a picture not being viewed out of 7952 possible instances for the 71 participants. The main effect of Animacy was not significant (p = .207), but the main effect of Threat was significant, F(1, 70) = 9.50, MSE = 0.04, p = .003, ηp2 = .120, with threatening items being viewed for the first time more quickly than nonthreatening items. The Animacy by Threat interaction was significant, F(1, 70) = 5.26, MSE = 0.04, p = .025, ηp2 = .070. The interaction was explored by examining the difference between animate and inanimate items separately for the threatening and nonthreatening items. There was a significant difference in animacy for the threatening items, t(70) = 2.29, p = .025, with animate threatening items being viewed for the first time more quickly than the inanimate threatening items. The difference for the nonthreatening items was not significant (p = .547). Thus, when considering the amount of time it took participants to first view an item, the animate threatening items were viewed more quickly than any of the other item types, suggesting that animate threatening items initially captured participants’ attention when the four different types of stimuli were presented.
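The treatment of the −1 codes can be sketched as follows: unviewed AOIs are excluded from, rather than averaged into, a participant's time-to-first-view score. The values below are hypothetical and purely illustrative.

```python
# Sketch of the missing-data handling for time to first view: -1 codes
# (AOI never viewed during the 10 s interval) are dropped before averaging
# rather than being treated as real latencies. Values are illustrative.
MISSING = -1.0

def mean_first_view_time(times):
    """Average time-to-first-view, excluding -1 missing-data codes."""
    valid = [t for t in times if t != MISSING]
    if not valid:
        return None  # participant never viewed this item type
    return sum(valid) / len(valid)

# One participant's hypothetical first-view times (s) for one item type:
times = [0.42, 0.58, MISSING, 0.50]
print(mean_first_view_time(times))  # averages only the three viewed displays
```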
For the amount of time that participants spent viewing each item type, there was a significant effect of Animacy, F(1, 70) = 4.29, MSE = 0.05, P = .042, ηp2 = .058, with animate items viewed longer than inanimate items. There was also a significant effect of Threat, F(1, 70) = 16.92, MSE = 0.03, P < .001, ηp2 = .195, with threatening items viewed longer than nonthreatening items. The interaction was also significant, F(1, 70) = 4.95, MSE = 0.03, P = .029, ηp2 = .066. This interaction was further explored by comparing the average time spent viewing the animate and inanimate items separately for the threatening and nonthreatening items. For the threatening items, participants spent significantly more time viewing the animate items than the inanimate items, t(70) = 2.61, P = .011, but there was no difference in time spent viewing the animate and inanimate pictures for the nonthreatening items (P = .832). Thus, participants spent more time viewing the animate threatening items than any of the other items, with patterns of main effects that were similar to those of the recall data in Experiment 1.
For fixations, there was a significant effect of Animacy, F(1, 70) = 6.58, MSE = 0.27, P = .012, ηp2 = .086, with animate items receiving more fixations than inanimate items. There was a significant effect of Threat, F(1, 70) = 20.22, MSE = 0.19, P < .001, ηp2 = .224, with threatening items receiving more fixations than nonthreatening items. The interaction was also significant, F(1, 70) = 7.08, MSE = 0.18, P = .010, ηp2 = .092. This interaction was further explored by comparing the average number of fixations for the animate and inanimate items separately for the threatening and nonthreatening items. For the threatening items, there were significantly more fixations for the animate items than the inanimate items, t(70) = 3.20, P = .002, with no significant difference for the nonthreatening items (P = .707). Not surprisingly, the analysis on fixations mirrors that of average time viewed, with a higher number of fixations for animate threatening items compared to the other item types.
For the number of times participants revisited each item type, there was a significant effect of Animacy, F(1, 70) = 16.21, MSE = 0.05, P < .001, ηp2 = .188, with animate items receiving more revisits than inanimate items. The main effect of Threat and the interaction were not significant (both Ps > .116), suggesting that animate items were more likely than inanimate items to have attention returned to them during the 10 s presentation of the display.
Taken together, the analyses on the eyetracking data suggest that participants’ attention was likely to be captured by the animate threatening items, as these were the items that were first viewed most quickly and that received more fixations and more total viewing time. Further, the animate items were revisited more often than the inanimate items, again suggesting that animate items, and especially animate threatening items, captured the attention of participants during the study portion of the experiment.
Free Recall
After presentation of the stimuli, participants completed a free recall test for the words studied. A 2 (Animacy: Animate, Inanimate) by 2 (Threat: Threatening, Nonthreatening) repeated-measures ANOVA was conducted on the proportion of word stimuli recalled. See Table 3 for means and standard deviations. There was a significant effect of Animacy, F(1, 70) = 50.93, MSE = 0.01, P < .001, ηp2 = .421, with animate items being recalled at higher rates than inanimate items. The main effect of Threat was also significant, F(1, 70) = 163.71, MSE = 0.004, P < .001, ηp2 = .700, with threatening items being recalled at higher rates than nonthreatening items. The interaction was not significant (P = .727). Thus, the results for recall were similar to those in Experiment 1 as well as those of previous experiments using these word stimuli without the accompanying photographs (e.g., Leding, 2019), with animate and threatening items being better remembered in recall tests compared to inanimate and nonthreatening items.
Proportion Recall, Target Recognition, False Recognition, d’ Scores, and Response Bias Scores for Experiment 2.
Note. Standard deviations are presented in parentheses.
Target Recognition
To test the effects of animacy and threat on picture recognition, separate 2 (Animacy: Animate, Inanimate) by 2 (Threat: Threatening, Nonthreatening) repeated-measures ANOVAs were conducted on the proportion of studied pictures recognized, false alarms, d' scores, and response bias (C). See Table 3 for means and standard deviations. For target recognition, the main effect of Animacy was significant, F(1, 70) = 24.20, MSE = 0.02, P < .001, ηp2 = .257, with inanimate items being recognized more often than animate items. The main effect of Threat was also significant, F(1, 70) = 15.64, MSE = 0.01, P < .001, ηp2 = .183, with threatening items recognized more often than nonthreatening items. The Animacy by Threat interaction was not significant (P = .978).
False Alarm Recognition, d' Scores, and Response Bias
For false alarms, there was a significant main effect of Animacy, F(1, 70) = 8.04, MSE = 0.01, P = .006, ηp2 = .103, with false alarms being higher for animate items than inanimate items. The main effect of Threat was not significant (P = .125) and the interaction was marginally significant, F(1, 70) = 3.52, MSE = 0.01, P = .065, ηp2 = .048, with the false alarm rate for inanimate nonthreatening items being the lowest of the four item types. The ANOVA on d' scores produced results similar to those of Experiment 1. There was a significant effect of Animacy, F(1, 70) = 46.40, MSE = 0.17, P < .001, ηp2 = .399, with inanimate items having higher d' scores than animate items, indicating that participants were better able to discriminate new and old photographs for the inanimate items. The main effect of Threat and the interaction were not significant (both Ps > .206). For response bias, the main effect of Threat was significant, F(1, 70) = 13.62, MSE = 0.03, P < .001, ηp2 = .163: participants were more liberal in their responses for nonthreatening items (M = −0.141) than for threatening items (M = −0.067). The main effect of Animacy and the interaction were not significant.
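The signal detection measures reported above follow the standard formulas d' = z(H) − z(F) and C = −(z(H) + z(F)) / 2, where z is the inverse normal CDF, H the hit rate, and F the false alarm rate. A minimal sketch, with hypothetical rates for illustration:

```python
# Sketch of the standard signal detection measures: d' indexes how well
# old and new photographs are discriminated; C indexes response bias,
# with negative values indicating a liberal bias (a tendency to
# respond "old"). The example rates below are hypothetical.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse normal CDF

def d_prime(hit_rate, fa_rate):
    """Discriminability: z(H) - z(F)."""
    return z(hit_rate) - z(fa_rate)

def response_bias(hit_rate, fa_rate):
    """Criterion C: -(z(H) + z(F)) / 2."""
    return -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical rates: high hits with fairly high false alarms
# yield a negative C, i.e., a liberal bias.
print(round(d_prime(0.80, 0.30), 3))        # 1.366
print(round(response_bias(0.80, 0.30), 3))  # -0.159
```

In practice, hit and false alarm rates of exactly 0 or 1 must be adjusted (e.g., with a log-linear correction) before applying z, since the inverse CDF is undefined at those values.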
The results for target recognition, false recognition, and d' scores followed the same patterns as those in Experiment 1: participants had better target recognition for inanimate items and threatening items than for animate items and nonthreatening items, respectively, and showed more false recognition for animate items than for inanimate items. The d' scores in both studies indicated that inanimate items were more easily distinguishable than animate items. The pattern of response bias scores differed slightly from Experiment 1: the main effect of Animacy was not significant, but there was a significant effect of Threat, with participants responding more conservatively to threatening items than to nonthreatening items. These results contrast with those of the free recall test, which showed the typical animacy effect of better recall for animate items than inanimate items along with the threat effect in memory. Further, the results for picture recognition contrast with those of the eyetracking data, which showed that, compared with inanimate items, animate items were viewed longer, received more fixations, and were more likely to be revisited.
General Discussion
The present studies add to the growing evidence in favor of the role of attention being at least partly responsible for the animacy effect in memory. When presented with the photographic stimuli along with the word stimuli in Experiment 1, participants were more likely to recall animate items compared to inanimate items and threatening items compared to nonthreatening items. These results are consistent with past studies using these materials examining the influence of animacy and threat on memory (e.g., Leding, 2019). Recognition memory for the photographs did not follow this same pattern of results. Although threatening photographs were recognized more often than nonthreatening photographs, the inanimate photographs were correctly recognized more often than the animate photographs and animate photographs were more likely to have false alarms. Thus, the first experiment replicated the typical animacy and threat effects for recall memory of the verbal labels, but not for recognition of the photographs, going against the original predictions of the study and studies examining the animacy effect in recognition memory with verbal stimuli (e.g., Félix & Pandeirada, 2024). Interestingly, although Félix and Pandeirada (2024) found that animate items were more likely to be recognized than inanimate items, they did not find an animacy effect in the discriminability measure (A′) across the animacy status of the items.
The second experiment utilized eyetracking technology to determine whether certain item types were more likely to capture participants’ visual attention. When participants were presented with stimuli consisting of four photographs – one representing each of the item types – the time to first view a stimulus was shortest for the animate threatening items. Further, for the total amount of time spent viewing each item type as well as the number of fixations, participants viewed animate items longer than inanimate items and threatening items longer than nonthreatening items. A significant interaction revealed that the animate threatening items were viewed the longest and received the most fixations. Animate items were also more likely than inanimate items to be revisited during the stimulus presentation. Thus, the eyetracking data present evidence that both animacy and threat are related to where participants direct their visual attention, with the animate threatening items viewed most quickly for the first time and viewed for longer periods of time with more fixations. The eyetracking data follow patterns similar to the recall results, with animate items recalled more often than inanimate items and threatening items recalled more often than nonthreatening items, although the Animacy by Threat interaction was significant in some eyetracking measures but not in the recall data. However, the recognition results for the photographs did not follow these patterns and instead matched those of Experiment 1, with threatening items correctly recognized more often than nonthreatening items and inanimate items recognized more often than animate items. This pattern does not correspond to the eyetracking data, which indicated that inanimate items were less likely to capture the visual attention of the participants.
Using eyetracking technology, Yorzinski et al. (2014) found that threatening animals were detected more quickly and were more likely to be viewed than nonthreatening animals, suggesting that dangerous animals are likely to capture and maintain attention. Similarly, the present study found that animate items and threatening items are more likely to capture and maintain visual attention, with the animate threatening items often being the most likely to do so. Although this pattern of results in the eyetracking data corresponded closely to the recall rates of the items, it did not correspond with the recognition rates of the photographs. In fact, participants were better at recognizing inanimate photographs than animate photographs even though the eyetracking data did not follow this pattern. The data from the two experiments present a dichotomy between the ability to recall that an item was presented and memory for the specific details of that item. They also further demonstrate that the threatening status of items makes them more likely to be attended to and, consistent with prior research, more likely to be remembered in both recall (Leding, 2019) and recognition (Leding, 2020) tests of memory.
From an evolutionary standpoint, it would not typically be necessary to remember the specific details of a predator that you saw, although remembering that you saw a predator would be imperative. Popp and Serra (2016) suggested that the animacy effect would be strengthened when additional processing of individual items leads to enhanced memory, such as in free recall, but that the attention given to animate items could detract attention from other information, such as in their study using cued recall tests where associations must be learned, or as in learning the details of a photograph that corresponds with an animate entity. Thus, although the animate items were more likely to capture the visual attention of participants, as evidenced by the eyetracking data, that visual attention could lead to additional cognitive processing of the animate items, which could increase the likelihood that the item is recalled while decreasing the likelihood that participants engage in effortful encoding of the details of the photograph itself. Knowing that you have seen a predator (e.g., a tiger) or prey (e.g., an antelope) is enough detail to recall that you saw the stimulus, and encoding the specific details of that stimulus (e.g., the specific pattern of stripes on the animal) is likely unnecessary for survival. However, encoding the specific details of inanimate objects, such as weapons or tools, so that they can be distinguished from items that belong to other individuals, might be more important from an evolutionary standpoint.
Another possibility is that the inclusion of the verbal labels with the photographs caused participants’ attention to be better captured by animate words, prompting participants to generate their own mental imagery that did not match the photographs and thereby decreasing recognition for those items. Furthermore, the backgrounds of the photographs might have played a role in the likelihood that the photographs were correctly recognized in the test portion of the experiment. Although neutral backgrounds were chosen, the background types available for photographs of inanimate stimuli likely had more variation than those available for photographs of animate stimuli. That is, it could be that the inanimate photographs had more distinctive backgrounds that led to better recognition memory, either through increased familiarity for the photograph or increased memory for the specific details of the image, or it could be that the animate stimuli drew attention to the stimuli themselves, leading to less focus on the backgrounds of those photographs. Future studies could use photographic stimuli that have had the backgrounds removed, without the additional verbal label, to help control for these possible confounds.
Conclusion
The current studies were conducted to determine whether the animacy effect persisted when photographic stimuli were used and to provide an initial exploration of using eyetracking to determine whether participants’ visual attention was captured by animate stimuli. In addition, the studies further tested the effect of threat in recall and recognition memory that has been previously demonstrated (e.g., Leding, 2019, 2020). The animacy effect and threat effect in recall memory were present in both experiments, even with the photographic stimuli being presented during the study portion. However, although recognition memory for threatening items was higher than for nonthreatening items, the photographs of animate items were not better recognized than those of inanimate items in either experiment, even though the eyetracking data from Experiment 2 suggest that participants’ visual attention was more likely to be captured by the animate items. These results provide further evidence for the robustness of the animacy effect, as the effect in recall memory persisted even when stimuli were used that led to improved recognition memory of the photographs for the inanimate items. The results also show that the threatening status of items leads them to capture visual attention and to be more likely to be remembered in both recall and recognition memory tests.
Although the specific mechanisms underlying the animacy effect in memory are still unknown, the current studies present evidence that there might be a dissociation between memory for the specific details of the studied stimuli (i.e., the perceptual details of the particular studied photographs) and memory for the more general semantic information of the content of the studied stimuli (i.e., the ability to recall more animate items than inanimate items) such that the animacy effect in memory might not extend to specific details of animate items. The current studies also provide evidence that the animacy effect and threat effect in recall memory might be due, at least in part, to the visual attention capture of participants when presented with various types of stimuli. The dissociation between the pattern of results for recall, recognition, and the eyetracking measures across the two studies demonstrates the necessity of examining multiple memory measures and outcomes to better understand the mechanisms underlying the animacy effect in memory.
Footnotes
Ethics Approval
Approval was obtained from the IRB of the University of North Florida. The procedures used in this study adhere to the tenets of the Declaration of Helsinki.
Consent to Participate
Informed consent was obtained from all individual participants included in the study.
Consent for Publication
Not applicable.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: the paper was supported by a University of North Florida Faculty Publishing Grant.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Availability of Data and Materials
The materials and data that support the findings of these studies are available from the author, upon reasonable request.
