Abstract
Some evolutionary psychologists have hypothesized that animals have priority in human attention. That is, they should be detected and selected more efficiently than other types of objects, especially man-made ones. Such a priority mechanism should automatically deploy more attentional resources and dynamic monitoring toward animal stimuli than nonanimals. Consequently, we postulated that variations of the
Keywords
Introduction
According to the
While the above hypothesis appears plausible from an evolutionary perspective, it is relevant to point out that a fundamentally similar distinction in the human semantic system between animate and inanimate (e.g., man-made artifacts like tools) has also been documented as patterns of selective impairments in neuroanatomically damaged patients (e.g., Capitani, Laiacona, Mahon, & Caramazza, 2003; Caramazza & Shelton, 1998; Gainotti, 2000, 2010; Hillis & Caramazza, 1991; Mahon & Caramazza, 2009; Warrington & Shallice, 1984). That is, in several neuropsychological studies, it has been shown that some patients can show a striking deficit for identifying animals while having a nearly intact ability to identify artifacts, whereas other patients show the reverse dissociation. Such findings are, however, not limited to patients, as a
While it remains unclear whether the neuropsychological observations described earlier stem from an innate or acquired distinction (e.g., Gainotti, 2015), they do suggest a strong relevance of visual and semantic classifications of animate and inanimate objects within, at least, the cognitive system of the primate brain. However, the animate monitoring hypothesis specifically proposes the presence of a low-level innate and adaptive mechanism for the classification of animate and inanimate objects. More specifically, according to New et al. (2007), animals should spontaneously and preferentially recruit more visual attention than artifacts regardless of their relevance to the task. Another critical aspect of the hypothesis is that, as animals in a natural setting can rapidly change their trajectory or position in a fraction of a second, the system should be geared not only toward detecting animals but also toward actively monitoring them in an ongoing manner through frequent inspections of their status (New et al., 2007). Accordingly, we expected that animals would bias the spatial distribution of attention and cause stronger spontaneous recruitment of attention. One area of category-specific attentional biases that has received some consideration and which, intuitively, is similar in nature to an attentional bias toward animals is the case of visual tracking of human faces (e.g., Li, Oksama, Nummenmaa, & Hyönä, 2017). Hence, one could anticipate that paradigms sensitive to human faces would also be sensitive to the presence of animals.
The seminal study by New et al. (2007) showed that animals were more readily detected in a
Indeed, in daily life, objects frequently change positions within our visual field, either because they move physically or because we move our bodies and eyes. Consequently, to monitor objects in our environment, we face the challenge of continuously and dynamically updating their positions. A task or experimental paradigm frequently used to study this ability is the
As a bias toward automatically monitoring animals for changes in position or state should have had significant survival value for human ancestors, New et al. (2007) specified that the system did not just evolve to detect animals but also to autonomously monitor animals in an ongoing manner. In essence, the system should be sensitive to moving objects that look like animals if these aspects played a role in the natural selection of the system.
We believe that documenting an attentional bias toward animals with dynamic tracking tasks would greatly benefit our understanding of the extent and limits of animacy’s ability to influence dynamic, distributed, and sustained visual attention. Hence, the goal of this study is to attempt to document the presence and extent of such a bias. Specifically, estimates of effect sizes, a sufficient level of power, and Bayesian approaches appear to be necessary, as potentially nonsignificant results cannot be used as conclusive evidence for a particular effect being absent (Dienes, 2014). Furthermore, this approach should help narrow down the set of situations where animate monitoring can have a sizable influence on the perceptual and attentional system.
More specifically, in line with the animate monitoring hypothesis, we expected to find that attention prioritizes animals in an automatic manner. Associating a task-relevant object with animacy (i.e., an image of an animal) should promote strong attentional allocation and vigilance toward that object, and this effect should be measurable as prioritized responses as well as improved tracking and monitoring ability compared with objects that are not associated with animacy. Likewise, task-irrelevant animal distractors should be particularly capable of diverting attention away from task-relevant objects, which should be measurable as an increase in animal distractors being incorrectly reported as targets. In other words, participants should report animals more frequently than artifacts, irrespective of their status as targets or distractors.
Experiment 1
One model of how tracking takes place in MOT proposes that the observer allocates one focus of attention per target (Cavanagh & Alvarez, 2005) and, consequently, a limited pool of neural resources gets divided among them (Alnæs et al., 2014; Kahneman, 1973). The aim of this initial experiment was to attempt to influence this assignment process by presenting one of the targets as an animal. According to the animate monitoring hypothesis, animals should automatically capture attention more strongly than other objects. Hence, our hypothesis was that a target displayed as an animal during the target assignment phase of an MOT task would bias the amount of resources assigned to it and thereby result in improved tracking accuracy. In addition, we predicted that this process of prioritization should also lead to a bias in the order in which targets are reported, and that animal distractors should be reported as targets more frequently than artifact distractors.
More specifically, in the first experiment, two randomly positioned images of objects were used as targets while another set of 10 objects were used as distractors. After viewing the targets for a period of time sufficient to identify each of them uniquely, we occluded the objects with black disks during the tracking period such that the images were only visible during target assignment. We made the straightforward prediction that animal targets would be tracked more successfully than artifact targets. Second, as participants were free to report the targets in any order they liked, we predicted that animal targets would be reported (clicked on) before targets presented as artifacts due to the presumed prioritization process. Third, as targets’ localization errors would be dependent on target–distractor confusions, we predicted that when participants made erroneous responses, that is, a distractor was reported as a target, it would be more likely for such a distractor to be an animal rather than an artifact. This was expected from the supposedly automatic prioritization of animals in attention, which should make it more likely for targets’
Power analysis based on an estimated mean
To be able to quantify the evidence for and against a given hypothesis, we used JASP (https://jasp-stats.org/) to calculate Bayes factors (BFs; with JASP’s default prior). We report BF01 in favor of the null hypothesis, expressing the probability of the data given the null hypothesis relative to the alternate hypothesis (e.g., a value of 7 would suggest that the observed data are seven times more likely to have occurred under the null hypothesis than under the alternate hypothesis). Specifically, as the value of BF01 increases above 1, there is more evidence in support of the null hypothesis (e.g., that an effect is likely to be absent). Conversely, as the value decreases below 1, there is more evidence in support of the alternate hypothesis (e.g., that an effect is likely to be present). Inverting BF01 (1/BF01) yields BF10, which expresses how likely the data are under the alternate hypothesis relative to the null hypothesis (Dienes, 2014; Jarosz & Wiley, 2014). The BFs can be further interpreted or categorized based on the obtained value, for example, a BF01 in the range 1 to 3 (or BF10 within 1–0.33) can be viewed as anecdotal (i.e., weak, inconclusive) evidence (see Andraszewicz et al., 2015; Wetzels, Ravenzwaaij, & Wagenmakers, 2015, for the interpretations adopted here).
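As an illustration, the relationship between BF01 and BF10 and the evidence categories described above can be sketched in a few lines (a sketch of our own, not JASP code; the function names are ours, and the thresholds follow the interpretation scheme cited in the text):

```python
# Sketch of the BF01/BF10 relationship and evidence labels used in the text.

def bf10_from_bf01(bf01: float) -> float:
    """BF10 is simply the reciprocal of BF01."""
    return 1.0 / bf01

def evidence_label(bf01: float) -> str:
    """Rough labels for BF01, following the cutoffs cited in the text
    (1-3 anecdotal for the null; the mirrored range 1-1/3 anecdotal
    for the alternate hypothesis)."""
    if bf01 > 3:
        return "moderate-or-stronger evidence for the null"
    if bf01 >= 1:
        return "anecdotal evidence for the null"
    if bf01 > 1 / 3:
        return "anecdotal evidence for the alternative"
    return "moderate-or-stronger evidence for the alternative"

# Example from the text: BF01 = 7 means the data are seven times more
# likely under the null, i.e., BF10 = 1/7.
assert abs(bf10_from_bf01(7) - 1 / 7) < 1e-12
```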
Methods
Participants
We recruited 68 (20 women) participants with a mean age of 33 years (range: 18–57 years, standard deviation [
Apparatus
The experiment was implemented with JavaScript and each participant ran the experiment on their own computer, as is typically the case with crowdsourcing experiments (Crump, McDonnell, & Gureckis, 2013).
Stimuli
We used the Snodgrass and Vanderwart’s (1980) stimulus set of line drawings to select 20 animal and 20 artifact images that were balanced on complexity ratings, animals:
Procedure
The task started with the presentation of 12 randomly positioned and nonoverlapping objects for 200 milliseconds before two of them were designated as targets by enclosing them in red circles for 1,500 milliseconds (see Figure 1(a)). Next, the red circles flashed for 1,500 milliseconds before the objects were occluded by black disks and started moving around the screen. The tracking period lasted for a random duration between 5 and 7 seconds, but durations were the same between identical paths. The display was redrawn at a rate of 30 frames per second, and the objects moved with a speed of 16 pixels per frame in a display measuring 1,200 × 800 pixels. The displays were scaled dynamically to accommodate differences in screen resolutions by resizing the display area to fit within the browser window of devices not supporting the full resolution. Participants were instructed to click on the target objects as soon as their movement stopped. Feedback was given by indicating the number of correctly identified objects. Each participant was required to complete five practice trials with at least 75% correct responses prior to starting the experiment. The practice trials contained a different set of images than those used in the main experiment (Op de Beeck & Wagemans, 2001). Task instructions were presented with textual stepwise descriptions and illustrations as well as a video demonstration of the task.
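For concreteness, the motion and scaling parameters described above (30 frames per second, 16 pixels per frame, a 1,200 × 800 pixel display scaled to fit the browser window) can be sketched as follows. This is a Python illustration, not the original JavaScript; the edge-bounce rule and the object radius are assumptions, as they are not specified in the text:

```python
# Illustrative sketch of per-frame object motion and display scaling.
WIDTH, HEIGHT, SPEED = 1200, 800, 16  # display px, display px, px per frame

def step(x, y, dx, dy, radius=20):
    """Advance one object by one frame; dx, dy are unit direction
    components. Bouncing off the display edges is an assumed rule."""
    x, y = x + dx * SPEED, y + dy * SPEED
    if not radius <= x <= WIDTH - radius:
        dx, x = -dx, min(max(x, radius), WIDTH - radius)
    if not radius <= y <= HEIGHT - radius:
        dy, y = -dy, min(max(y, radius), HEIGHT - radius)
    return x, y, dx, dy

def scale_factor(win_w, win_h):
    """Uniform factor that fits the 1200 x 800 area into a smaller
    browser window, never enlarging beyond full resolution."""
    return min(win_w / WIDTH, win_h / HEIGHT, 1.0)
```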
Illustration of a trial in Experiment 1. First, the targets were indicated by enclosing them in red circles (a), then all objects were hidden behind black disks (b) before they started moving around the screen (c). Participants indicated the positions of the targets when their movement stopped by clicking on them (d), which also made the objects visible, to provide feedback.
Results
Before conducting the statistical analysis, we removed data from five participants for having a mean accuracy that was below 50% (1.5

Combined bar and scatter plots on mean accuracy, response orders (lower numbers indicate earlier responses), and percentage of incorrect responses (distractors reported as targets) over target category in Experiment 1 (i.e., percentages of animal and artifact distractors that were selected during the response phase). Error bars show standard errors, and the superimposed scatterplots show mean values of each participant.
To investigate whether participants reported animal targets before artifact targets, we conducted a
To investigate whether animal distractors were reported as targets more frequently than artifact distractors, we conducted a
Discussion
The present results did not reveal statistically significant support for the hypothesis that presenting objects as animals and artifacts during the target assignment phase in an MOT task should lead to (a) improved tracking accuracy for animal targets, (b) earlier responses for animal targets, or (c) more animal distractors being reported as targets. Similarly, the BFs consistently showed moderate support for the null hypothesis of no effect of images of animals across measures. Given these results, it seems unlikely that our measures are substantially different between the two types of images used. As with any experimental report, the research community should decide whether the potential for even smaller effect sizes than what our study was powered for is deemed interesting and worthwhile pursuing in larger samples.
One reason for the present findings could be that simply presenting targets as animals during assignment is not sufficient to evoke a measurable bias. That is, the hypothesis rests on the assumption that prioritization is assigned to a token location of an animal and that this can be maintained as long as an attentional locus is assigned to the object, irrespective of the fact that the object no longer depicts the figure that could lead to such prioritization. Although several studies with MOT provide evidence that the visual system can track such items independently of their original identity (e.g., color), it is possible that the supposed attentional bias assigned to targets cannot be easily maintained as the objects turn into black disks and change positions during the several seconds of the tracking phase. Essentially, the images presented during the target assignment phase might have been mostly irrelevant to participants and, perhaps consequently, became irrelevant for tracking performance as well. Hence, in the next experiment, we maintained the visibility of the images.
To prevent ceiling performance in this task, we had set a speed deemed to be sufficient for yielding errors. One possibility is that such a relatively high speed was not appropriate to uncover an advantage for animals. Thus, in the next experiment, we increased the number of targets while lowering the speed in an attempt to cause more competition between multiple attentional foci and thus induce stronger priority.
Experiment 2
The first experiment failed to indicate any attentional biases toward animals. One possibility is that the previous experiment did not pose enough competition between attentional foci to bring about an advantage for animals. According to New et al. (2007), high levels of focused attention should reduce the impact of task-irrelevant nonanimals more than task-irrelevant animals. Thus, using four targets and lowering the speed of the objects (so as not to make the task too difficult), we aimed to make it more relevant for the attentional system to perform prioritizations. Moreover, most previous studies using MOT or MIT to investigate attentional biases have used three to six targets (Jin & Xu, 2015; Li et al., 2016, 2017; Liu & Chen, 2012).
In addition, most studies demonstrating improved tracking performance for particular categories of objects (Li et al., 2017; Liu & Chen, 2012) have kept the objects visible during the tracking phase. Such an experimental design may also be more suited to test the animate monitoring hypothesis, as it specifically proposes that animals should be monitored continuously, in an ongoing manner. This may presuppose that their shape is visible while they are tracked, at least for most of the time. Besides, in ecological conditions, we rarely track a subset of identical objects that previously had a visible identity and then lost it. Hence, we generated a version of the task that would seem closer to natural scenarios. For completeness, a version with the objects hidden during tracking was also conducted at an early stage of the study and is included as Supplementary Experiment 1.
As for the previous experiments, the main prediction was that animals would be tracked more successfully as targets than artifacts. Second, we predicted that animal targets would be reported before artifact targets. Third, we predicted that when participants made erroneous responses, it would be more likely for an animal distractor to be reported as a target than for an artifact distractor. Conceivably, the present design should be more sensitive to this aspect, as the continuously visible animals could grab attention at any time during tracking.
Methods
Participants
We recruited 67 participants (13 women) with a mean age of 32.3 years (range: 19–59 years,
Stimuli
We used the same images as in Experiment 1.
Procedure
This was similar to Experiment 1, except for the following: We used four targets (two animals and two artifacts), lowered the speed to 12 pixels per frame to avoid making the task too difficult, and objects were visible during the tracking period but hidden on the last frame before the movements stopped (see Figure 3(c)). Combined with a variable trial duration, the change in procedure aimed to avoid a strategy of simply remembering how the targets looked in order to report them correctly (i.e., they were required to continuously keep track of them).
Illustration of a trial in Experiment 2. Objects were visible during assignment (a) and tracking (b) but hidden at the last frame (c) before the response phase (d).
Results
Before performing statistical analysis, we removed three participants for having a mean accuracy below 50% (1.5

Combined bar and scatter plots for Experiment 2 on mean accuracy, response orders, and percentage of incorrect responses by category. Error bars show standard errors, and the superimposed scatterplots show mean values from each participant.
In contrast to the previous experiment, response orders ranged from 1 to 4, where 1 would represent the first response a subject made on a trial, while 4 would represent the last response. A
Then, to investigate whether animal distractors were reported as targets more frequently than artifact distractors, we applied a
Discussion
Despite increasing the number of targets and continuously displaying the target objects as animals and artifacts during tracking, we did not observe significantly more accurate tracking of target animals as compared with artifact targets. In addition, in line with the previous experiment, we failed to observe a significant precedence in reporting of animal targets in this experiment as well. Finally, we also failed to observe significantly more frequent erroneous reporting of animal distractors compared with artifacts. Despite the fact that each trial was structured to induce competition between animal and artifact attentional foci in the presence of hypothetically attention-grabbing animal distractors, we failed to reject any of the null hypotheses. In fact, the obtained BFs showed moderate evidence for the null hypothesis for the measures on accuracy and percentage of incorrect responses. However, the BF for response orders was only anecdotal, which does not warrant a firm conclusion on its support for the null hypothesis. Rather, the data appear insensitive in distinguishing between the null and the alternative hypothesis for the measure on response orders (Dienes, 2014).
In summary, it appears that artifacts can be tracked just as efficiently as animals, and we found no conclusive evidence for prioritizations of either category, nor did we observe that animate objects were conclusively more effective in capturing attention in target–distractor confusions, even though the images were continuously visible. Again, researchers should decide whether the potential for even smaller effect sizes than what our study was designed for is deemed relevant.
Experiment 3
In the previous two experiments, the identity of the objects was mostly irrelevant to the task; thus, in the following experiment, we made the identity of the objects explicitly relevant by using image probes during the response phase while requiring participants to localize them. Previous work using such probes has indicated more successful tracking of object properties presumed able to induce attentional biases (i.e., attractive faces and emotional expressions: Jin & Xu, 2015; Li et al., 2016; Liu & Chen, 2012). Moreover, by using such image probes during the response phase, we required participants to be aware, at all times, of which objects were tracked and where these were located.
According to the animate monitoring hypothesis, this type of explicit requirement should not be necessary for observing a bias toward animals, as the bias is supposed to behave in an automatic way regardless of current goals. However, as the previous experiments failed to bring about a substantial advantage for animals, we reasoned that the situation constructed here could increase the chance of revealing such a bias.
We expected that in such conditions, the binding and tracking of animal targets would be more successful than for artifacts. Thus, the task is similar to Experiments 1 and 2, except for making the appearance of the objects at assignment directly relevant for performance. Due to the extensive literature on category-specific deficits for animals in naming, recognition, and memory (e.g., Capitani et al., 1994; Låg, 2005; Låg et al., 2006; Laws & Hunter, 2006; Laws & Neve, 1999; Nairne, VanArsdall, & Cogdill, 2017; Nairne, VanArsdall, Pandeirada, Cogdill, & LeBreton, 2013), it seems difficult to attribute an effect of superior identity tracking accuracy for animals purely to an attentional bias. Consequently, we designed the task to yield position accuracy measures as well, making it sufficiently difficult to avoid ceiling effects.
The design of this study allowed for investigating both identity tracking performance and position tracking performance. We defined identity accuracy as the percentage correct localizations of the probe images displayed at the bottom of the screen (see Figure 5). We defined position accuracy as the percentage correct localizations of targets, irrespective of their identities. The expectation was that the identity of animal targets would be tracked more successfully than the identity of artifact targets and, consequently, we expected their positions to be tracked more successfully as well. As in the previous experiments, we also predicted that animal distractors would be reported as targets more frequently than artifact distractors.
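The distinction between the two accuracy measures can be made concrete with a small scoring sketch (hypothetical data structures of our own; `responses` and `true_positions` are names we introduce for illustration, not the study's actual code):

```python
# `responses` maps each probed image to the disk the participant clicked;
# `true_positions` maps each target image to the disk it actually occupied.

def identity_accuracy(responses, true_positions):
    """Percentage of probes localized to their own disk."""
    hits = sum(responses[img] == true_positions[img] for img in responses)
    return 100 * hits / len(responses)

def position_accuracy(responses, true_positions):
    """Percentage of clicked disks that held *some* target,
    irrespective of which identity was being probed."""
    target_disks = set(true_positions.values())
    hits = sum(disk in target_disks for disk in responses.values())
    return 100 * hits / len(responses)

# A participant who swaps two targets still scores full position accuracy:
resp = {"cat": 2, "dog": 1, "hammer": 3, "car": 4}
true = {"cat": 1, "dog": 2, "hammer": 3, "car": 4}
assert identity_accuracy(resp, true) == 50.0
assert position_accuracy(resp, true) == 100.0
```

The worked example shows why the two measures can dissociate: confusing which animal was where hurts identity accuracy but leaves position accuracy intact.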
Illustration of a trial in Experiment 3. First targets were assigned by enclosing them in red circles (a), then all objects started moving around the display (b) before being hidden at the last frame when the movements stopped (c). Probes appeared at the bottom of the display during the response phase (d), where participants indicated the position of the probes.
Another version of this experiment kept the objects hidden during tracking in an attempt to make the bindings between identities and positions more volatile. We included this experiment as Supplementary Experiment 2.
Methods
Participants
We recruited 71 participants (21 women) with a mean age of 32.2 years (range: 18–57 years,
Stimuli
We used the same set of images as in the previous experiments.
Procedure
This was similar to Experiment 2, with the exception that participants were required to indicate the position of the objects displayed at the bottom of the screen in the response phase (see Figure 5(d)). Identity accuracy was based on how accurately participants could localize the individual targets after the tracking period, while position accuracy was defined as how accurately target positions were reported irrespective of their identity. Each target was probed sequentially in a counterbalanced manner between animals and artifacts. The circles turned red when clicked on during the response phase. The incorrect and correct objects were revealed along with feedback about accuracy once the required number of objects had been reported.
Results
Before conducting the statistical analysis, we removed the data from five participants for having mean identity accuracy below 35% (1.5
A further

Combined bar and scatter plots for Experiment 3 on mean identity accuracy, position accuracy, and percentage of incorrect responses by category. Error bars show standard errors, and the superimposed scatterplots show mean values from each participant. Identity accuracy shows how accurately participants could localize the individual targets after the tracking period. Position accuracy shows percentage correct localizations of targets, irrespective of their identities.
Before analyzing the percentage of incorrect responses on distractors by category, we removed two participants for having five or fewer incorrect responses. A
Discussion
Although the images were continuously visible throughout tracking and participants were explicitly required to track their identities, we failed to observe statistically significant advantages for animal targets over artifact targets in identity and position accuracy as well as in percentages of incorrect responses. Consistently, the obtained BFs showed moderate evidence for the null hypothesis of no effect of images of animals. It thus seems unlikely that our measures were substantially different between the two types of images used.
An alternative design could have required participants to locate the objects by name rather than image, which could in turn have promoted a strategy for encoding more semantic aspects of the objects. However, according to the animate monitoring hypothesis, explicit semantic processing of animals should not be required for obtaining an attentional advantage.
Experiment 4
Some MIT variants in previous studies with facial stimuli have used designs where targets and distractors are either from the same or from different categories (Jin & Xu, 2015; Li et al., 2016, 2017; Liu & Chen, 2012). Such a design allows for the simultaneous testing of differences in the ability of the categories to hold and attract attention. Moreover, this design allows participants to separate targets and distractors categorically in a subset of trials, which may relax the need to rely on object identity during tracking. While it may not be clear why this arrangement should be more sensitive to an attentional bias for animals than our previous attempts, our primary motivation was to use a design that has a history of successfully demonstrating biases toward categories of objects.
An underlying assumption in these studies is that the binding of an identity, which may have an associated attentional bias, to its position, should improve tracking performance of that position, as it moves around the display, independently of the explicit requirement of tracking its identity (Li et al., 2017). Despite this apparent assumption, most previous studies have focused on the acquisition of identity accuracy measures. In fact, the majority of studies with facial stimuli and identity probes did not analyze position accuracies due to ceiling effects (Li et al., 2016, 2017), but one study reported an advantage for fearful over neutral faces in both position tracking accuracy and identity tracking accuracy (Jin & Xu, 2015). Thus, we specifically designed the experiment to obtain position accuracies as well. With this setup, we predicted that animals would yield an advantage in both identity accuracy and position accuracy. Moreover, we predicted that animal distractors would lead to more errors than artifact distractors.
Methods
Participants
We recruited 67 participants (17 women) with a mean age of 33 years (range: 18–67 years,
Stimuli
We used the same set of images as in the previous experiments.
Procedure
This was similar to Experiment 3, except that targets were either four animals or four artifacts, while distractors were either eight animals or eight artifacts.
Results
Before performing the statistical analysis, we removed four participants for having mean identity accuracy below 35% (1.5
An ANOVA on position accuracy over target category and distractor category revealed a significant main effect of target category,

Combined bar and scatter plots for Experiment 4 on mean identity and position accuracy by target category and distractor category. Error bars show standard errors, and the superimposed scatterplots show mean values from each participant. Identity accuracy shows how accurately participants could localize the individual targets after the tracking period. Position accuracy shows percentage correct localizations of targets, irrespective of their identities.
Discussion
In line with predictions, the identity of animal targets was reported significantly more successfully than the identity of artifact targets. However, contrary to the prediction that a similar advantage should be found in position accuracy, the results showed that artifacts were tracked significantly more successfully than animals. Based on these tendencies, it would seem that participants were better at tracking the identity of animals but not their positions, suggesting that participants were slightly better at remembering where they saw a particular animal. In addition, we found no significant effect of animal distractors, which is in line with previous studies (Li et al., 2016, 2017) as well as our previous experiments. The obtained BFs for identity accuracy were mostly in line with the significance tests. However, the BFs helped to reveal that the evidence for the alternate hypothesis of target category was only anecdotal (Wetzels et al., 2015). Thus, we cannot conclusively state that animal identities were tracked better than artifact identities. The BFs also helped to cast doubt on the statistically significant result of target category in position accuracy by showing that the null hypothesis was 1.97 times more likely than the alternate hypothesis given the data. Thus, the results indicated anecdotal evidence for no difference between the categories in position accuracy.
Although the Bayesian results did not warrant any conclusion with regard to the effect of category on identity and position accuracies, it is still interesting to consider that the indicated patterns of results might not necessarily be attributed to attention. As the results indicated that participants were not better at tracking positions associated with animals but were better at remembering what they depicted, this might suggest an advantage in memory (Nairne et al., 2013, 2017) or encoding (Hagen & Laeng, 2017). Indeed, more effective encodings of animals from brief exposures (as implied by the brief target inspections occurring in such tasks, Oksama & Hyönä, 2016) might yield the indicated advantage in reporting where particular animals were localized. Specifically, a recent study with rapid presentations of animals (Hagen & Laeng, 2017) showed that animal targets were encoded more successfully for later report than artifacts but still did not gain prioritized access to attention.
Finally, it must be stressed that the observed effect sizes in the present experiment are relatively small and far from what should be expected from the original account (New et al., 2007). In fact, the Bayesian analysis indicated that the effects of category on identity and position accuracies were weak and inconclusive. Thus, future studies should aim to test a larger sample if the potential for such small effects is deemed interesting and worthwhile.
Experiment 5
While the previous experiments largely failed to observe any clear attentional biases for animals, the animate monitoring hypothesis specifically proposed that the mechanism evolved to monitor the location and state of animate objects. Consequently, the features offered by the change detection task were deemed important by the original investigators of the hypothesis (New et al., 2007). The investigation thus far has probed more the aspect of keeping track of the changing positions of animals, but we have not yet assessed the importance of actually monitoring the state of objects. It is also possible that the original change detection design (New et al., 2007) lacked some dynamic aspects which the mechanism may be particularly sensitive to, considering that it evolved in a dynamic and noisy world (e.g., animals may be moving about, but only certain aspects of their translations in space are relevant for behavior). Another aspect of the change detection task is that it relies on disrupting visual processing by blanking the screen to mask changes in state, which may have unknown influences on the putative monitoring system. Thus, the concept of combining an MOT task with a change detection task appears to have merits worthy of an investigation despite the apparent lack of evidence so far from either type of paradigm (e.g., Hagen & Laeng, 2016).
Such combinations have been attempted in unrelated investigations (Bahrami, 2003; Oksama & Hyönä, 2008), relying on invasive disruptions of visual processing (blanking and mud splashes). A recent development, however, is the MEM task, in which participants are required to continuously monitor the state of multiple objects moving randomly around a display (Wu & Wolfe, 2016). The task is thus similar to the MOT or MIT tasks, with the notable exception that participants are to monitor all objects for a specific change and respond as fast as possible when a change occurs. Importantly, in this paradigm, the objects are continuously visible as changes in state occur. Because changes in state can induce visual transients drawing attention to their location, the paradigm relies on small clockwise and counterclockwise rotations of each stimulus throughout the tracking period to mask the signal from a single transitory change in state, forcing participants to pay close attention to the state of objects rather than relying on a transient signal from one of the objects. In the version of the task presented here, we changed the state of objects by manipulating their lateral (horizontal) orientation at a random point in time. This task thus combines the continuous distributed and dynamic attention aspects of the MOT or MIT paradigms with a change detection task requiring participants to respond as fast as possible when they detect a change in the state of animals and artifacts. This task should thus be more similar to the type of task thought to be sensitive to the attentional bias for animals (New et al., 2007), that is, engaging active monitoring of the location and state of animals while imposing vigilance toward responding to changes that are relevant for behavior.
The main prediction was that participants should detect changes to animals faster and more accurately than changes to artifacts. In addition, we used two levels of load: as previous research with this paradigm has indicated that only about two to three objects can be monitored successfully (Wu & Wolfe, 2016), we decided to have trials with two or four objects. If humans’ typical tracking capacity is two to three objects, then tracking four objects should exceed that capacity and thereby help bring forward an advantage for animals, especially if these are prioritized in attention.
Methods
Participants
For Experiment 5A, we recruited 60 participants (24 women) with a mean age of 31 years (range: 17–64 years,
Stimuli
For Experiment 5A, we selected 20 animals (alligator, ant, bear, cow, donkey, fish, fly, frog, gorilla, horse, kangaroo, lobster, monkey, mouse, penguin, pig, rabbit, rooster, seal, and snake) and 20 artifacts (airplane, baby carriage, bicycle, three cars, church, digger, gun, rocking chair, scooter, stroller, Swiss army knife, teapot, telescope, tractor, triangle ruler, trumpet, watering can, and whistle) that we judged to have clear directionality across six sets of line drawings (Bates et al., 2003; Bonin, Peereman, Malardier, Méot, & Chalard, 2003; Cycowicz, Friedman, Rothstein, & Snodgrass, 1997; Op de Beeck & Wagemans, 2001; Nishimoto, Miyawaki, Ueda, Une, & Takahashi, 2005; Snodgrass & Vanderwart, 1980). The groups of objects were matched on the degree of visual change they would induce by matching them on the number of pixels that would vary when they changed orientation from left to right,
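The pixel-based matching criterion described above can be sketched as follows. This is an illustrative reconstruction, not the authors’ actual stimulus-processing code; the binary-image representation and the `mirror_change` helper are assumptions made for the example.

```python
# Illustrative sketch (hypothetical helper, not the authors' pipeline):
# count how many pixels differ between a line drawing and its left-right
# mirror, so animal and artifact sets can be matched on this quantity.

def mirror_change(image):
    """Number of pixels that differ between `image` and its mirror.

    `image` is a 2D list of 0/1 values standing in for a binary drawing.
    """
    return sum(
        1
        for row in image
        for px, mirrored_px in zip(row, reversed(row))
        if px != mirrored_px
    )

# A left-right symmetric drawing changes no pixels when flipped ...
symmetric = [[0, 1, 0],
             [1, 1, 1]]
# ... while an asymmetric one does.
asymmetric = [[1, 0, 0],
              [1, 1, 0]]

print(mirror_change(symmetric))   # 0
print(mirror_change(asymmetric))  # 4
```

Stimuli could then be paired across categories so that the two groups show similar distributions of this change score.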
Procedure
Each trial started with the presentation of a set of objects for 3 seconds. Next, the movement phase started and the objects moved randomly around the screen for 8 seconds (see Figure 8), or until response. The objects moved with a speed of 8 pixels per frame (30 frames per second) in a display measuring 800 × 800 pixels. Objects were also randomly tilted by 30° to the left or right for short durations (233 milliseconds) in order to mask any unique transients imposed by the change in orientation of the target object (Wu & Wolfe, 2016). Participants were instructed to look for a change in lateral orientation in any of the objects and press the space bar, as quickly as possible, to indicate that they had detected a change in orientation. Black disks immediately occluded the objects once the button had been pressed. Participants were then instructed to indicate the target by clicking on it with the mouse pointer. If they failed to press the button before 8 seconds had elapsed, the trial ended with the objects occluded and the participant guessed which one had changed. A change in orientation always happened within the time range of 2 to 6 seconds. Thus, there was sufficient time to detect a change before the trial ended. Each trial contained an equal number of animals and artifacts. The experiment included 80 trials, divided over target category (animal, artifact) and number of objects (load 2, load 4). All trial movements were randomly generated for each participant, but movement patterns were matched between the target categories. Participants were required to complete six practice trials, of which at least four had to be answered correctly and responded to within 2 seconds of the change.
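The trial timeline above can be summarized in a short simulation. The parameter values (30 frames per second, 8 pixels per frame, an 800 × 800 display, a change scheduled between 2 and 6 seconds) come from the text; the straight-line bouncing motion rule is a simplification chosen for illustration, not the exact trajectory-generation algorithm used in the experiment.

```python
# Sketch of the trial timing and motion constraints described above.
# The bouncing straight-line motion is an illustrative stand-in for the
# experiment's random movement patterns.
import math
import random

FPS = 30                  # frames per second
SPEED = 8                 # pixels per frame
DISPLAY = 800             # 800 x 800 pixel display
TRIAL_FRAMES = 8 * FPS    # movement phase lasts up to 8 seconds
TILT_FRAMES = 7           # 7 frames / 30 fps ~= 233 ms masking tilt

def schedule_change(rng):
    """Frame at which the target changes orientation: uniform in 2-6 s."""
    return rng.randint(2 * FPS, 6 * FPS)

def simulate_object(rng):
    """One object's positions over a trial: straight motion with bounces."""
    x, y = DISPLAY / 2, DISPLAY / 2
    angle = rng.uniform(0, 2 * math.pi)
    dx, dy = SPEED * math.cos(angle), SPEED * math.sin(angle)
    path = []
    for _ in range(TRIAL_FRAMES):
        x, y = x + dx, y + dy
        if not 0 <= x <= DISPLAY:          # bounce off left/right edges
            dx, x = -dx, min(max(x, 0), DISPLAY)
        if not 0 <= y <= DISPLAY:          # bounce off top/bottom edges
            dy, y = -dy, min(max(y, 0), DISPLAY)
        path.append((x, y))
    return path

rng = random.Random(1)
change_frame = schedule_change(rng)
path = simulate_object(rng)
# The latest possible change (6 s) still leaves 2 s before the trial ends.
assert TRIAL_FRAMES - change_frame >= 2 * FPS
assert len(path) == TRIAL_FRAMES
```

Because the change can occur no later than 6 seconds into an 8-second trial, at least 2 seconds always remain for a response, which is why sufficient detection time was guaranteed.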
Illustration of the procedure in Experiment 5. First, targets were assigned (a), then all objects started moving around the display while frequently rotating 30° left and right (b), a change in lateral orientation occurred at a random time point between 2 and 6 seconds from start of tracking (c; notice that the church changes lateral orientation, as signaled by the position of the belfry). The movements stopped once a total of 8 seconds had elapsed or the observer pressed a button. Black disks immediately occluded the objects as the movements stopped (d). Participants then indicated the position of the changed object (e). Images adapted with permission (Snodgrass & Vanderwart, 1980, pp. 197–204).
Results
Experiment 5A
To analyze accuracy, we ran an ANOVA over Category (animal, artifact) and Load (Load 2, Load 4), which showed significant main effects of Category,
Mean accuracy and RTs in Experiment 5A. Error bars show standard errors.
For the analysis of response times (RTs), we selected trials in which a correct response was made within 2,000 milliseconds from a change in orientation. An ANOVA on RT over Category (animal, artifact) and Load (Load 2, Load 4) showed significant main effects of Category,
Experiment 5B
Before the analysis, we removed three participants for having mean accuracy below 65% (1.5
Mean accuracy and RTs in Experiment 5B. Error bars show standard errors.
An ANOVA on RT over Category (animal, artifact) and Load (Load 2, Load 4) showed that the main effect of Category was not significant,
Discussion
The results from Experiment 5A revealed that changes to animals were reported significantly more accurately and faster than changes to artifacts. This seems to be in agreement with the animate monitoring hypothesis (New et al., 2007) and to support the idea that the act of monitoring objects for changes is an important aspect for observing a bias for animals. However, there is a possibility that the animal stimuli were somehow easier to monitor for changes than artifacts due to some uncontrolled factors pertaining to the chosen set of images. It is thus interesting to consider the results from Experiment 5B, which used a different set of images. Similar to the previous experiment, this experiment also appeared to reveal significantly more accurate reporting of changes to animals as compared with artifacts, but it did not replicate the observation of faster detections of animal changes. While the BF for the effect of category on accuracy was in agreement with the significance results of Experiment 5A, it did not agree with the significance results of Experiment 5B. The significance tests of Experiment 5B showed that animals were tracked significantly more accurately than artifacts, while the BF showed anecdotal evidence for the null hypothesis for the same data. Given this set of results and the fact that we have only anecdotal evidence for the alternative and null hypotheses across experiments (5A, 5B), the results on the effect of category on accuracy appear inconclusive. The evidence for an effect of category in RT was in fact moderate in both experiments, with Experiment 5A showing evidence for the alternative hypothesis and Experiment 5B showing evidence for the null hypothesis. Thus, we are here faced with a conflicting set of results.
In summary, both experiments were inconclusive in relation to an effect of category on accuracy. Experiment 5A showed moderate evidence for an effect of category on RT, while Experiment 5B showed moderate evidence for no effect of category on RT. Specifically, the effect of images of animals on accuracy and RT would appear not to be robust or of considerable size, as well as appearing to be dependent on the stimuli used. While it is possible that humans are more sensitive to changes in the lateral orientation of animals, such a requirement appears too specific in relation to the general advantage for animals we are seeking to find (New et al., 2007). In addition, we cannot rule out the effect of some uncontrolled low-level aspects that somehow made the monitoring of the animals’ lateral orientation easier (e.g., a change to a protruding head and neck pointing in a certain direction could be easier to detect than a change to objects not suggesting such directionality). Thus, it would seem appropriate to attempt to generalize the indications observed here to another type of change. A type of change that might seem even more relevant in a survival scenario is a change in size. A change in size would intuitively signal that an animal is either getting closer to or farther away from the viewer, a situation that intuitively should be more relevant for survival than animals turning left and right. Thus, the next experiment was designed to directly address whether the advantage is dependent on the type of change participants were monitoring for.
Experiment 6
To assess whether the indication of an advantage for monitoring animals in Experiment 5 is specific to lateral (horizontal) orientation changes or can be generalized to another type of change that would be perhaps even more relevant in a survival scenario, we selected changes in size as another type of change to monitor for. Intuitively, changes in size provide visual cues for apparent distance of an object from an unmoving viewer so that a size-changing object may appear to move in depth during the change.
As for the previous experiment, it is predicted that changes to animals would be detected more accurately and faster than changes to artifacts. Similar to the previous experiment, we also chose to conduct a replication in a parallel experiment (6B) with a different set of images.
If the advantage for animals in Experiment 5 was related to an animate monitoring bias, then we would expect to find analogous effects in the present experiment. Conversely, if the animal advantage was specific to lateral changes and not related to an animate monitoring advantage, we expected no marked advantage for animals.
In addition, we checked the post hoc hypothesis that increases in size should cue for increased proximity (e.g., looming; Schiff, Caviness, & Gibson, 1962) and thereby be more salient and pertinent to yielding an animal advantage than decreases in size.
Methods
Participants
For Experiment 6A, we recruited 71 participants (10 women) with a mean age of 28 years (range: 18–58 years,
Stimuli
For Experiment 6A, we used the same set of images as in Experiment 5A. As this set was balanced on the number of pixels that would change in lateral inversions, it was not balanced on the amount of change introduced by changing their sizes. Thus, we resized the images such that the mean overall size,
Example of a trial in Experiment 6. First, targets were assigned (a), then all objects started moving around the display while frequently rotating 30° left and right (b), a change in size occurred (larger or smaller) at a random time point between 2 and 6 seconds from start of tracking (c; notice that the teapot increases in size). The movements stopped once a total of 8 seconds had elapsed or the observer pressed a button. The objects were immediately occluded behind black disks as the movements stopped (d). Participants then indicated the location of the changed object (e). Image of rabbit adapted with permission (Snodgrass & Vanderwart, 1980, pp. 197–204).
For Experiment 6B, we selected, from a pool of 61 animals and 80 artifacts, 20 new pairs of animals (bear, beetle, bug, bull, camel, cow, duck, kangaroo, moose, mouse, parrot, rabbit, raccoon, seahorse, shark, snail, swan, turtle, and two wolves) and artifacts (bathtub, bicycle, flipper, harmonica, kitchen knife, kite, lamp, light bulb, motorbike, mousetrap, purse, rocking chair, skateboard, stroller, table, teapot, telescope, tractor, violin, and watering can). The new set was chosen, as it was more appropriate for balancing on visual properties deemed important with regard to size changes (number of pixels changing when resized; in contrast, the set selected for, e.g., Experiment 5B was balanced on the number of pixels that changed when flipped left and right). The images were resized to be minimally different in number of overall pixels,
Procedure
The overall procedure was identical to Experiment 5, with the exception that participants were instructed to report when one of the objects changed size. The change in size was either 25% smaller or larger.
Results
Experiment 6A
Before analyzing the data, we removed three participants for having mean accuracy below 65% (1.5
Mean accuracy and RTs in Experiment 6A. Error bars show standard errors.
An ANOVA on RT showed a nonsignificant main effect of Category,
Experiment 6B
Before conducting the analysis, we removed six participants for having mean accuracy below 65%. An ANOVA on accuracy over Category (animal, artifact) and Load (Load 2, Load 4) showed a nonsignificant main effect of Category,
Mean accuracy and RTs in Experiment 6B. Error bars show standard errors.
Similarly, an ANOVA on RT failed to reach significance for the main effect of Category,
Additional analysis
To assess the hypothesis that an animal’s increasing size should be particularly pertinent to yield an advantage, we ran an ANOVA on accuracy over Experiment (6A, 6B), Load (Load 2, Load 4), Category (animal, artifact), and Change (larger, smaller). Only the effect of Load reached significance,
Finally, we ran the same ANOVA on RTs. Only the effect of Load reached significance,
Discussion
None of the observations made in Experiment 5 were replicated. In these new experiments, changes to animals were not reported significantly more accurately or faster than changes to artifacts in either experiment. This was further confirmed by the obtained BFs, which showed moderate evidence for the null hypothesis, supporting the conclusion that there was no effect of category on either accuracy or RTs. Moreover, the combined additional analysis revealed strong evidence for the null hypothesis. It thus appears that specific types of visual transformation can influence how well the stimuli used can be monitored for changes. As Experiments 5A and 6A used the same set of images, the advantage cannot be attributed to a particular set of images being easier to monitor for changes in general. Thus, changes in lateral orientation would seem more relevant for the hypothesized animate monitoring system than changes in size (or depth). Such a conclusion is, however, puzzling, as a change in size would intuitively signal that an animal is either getting closer to or farther away from the viewer, and it would seem that this should be a more salient event (in survival terms) than seeing the same animal changing its direction to the left or right. Indeed, this consideration leads us to think that the results of Experiment 5 might have resulted from changes in some low-level features that were more noticeable in the lateral orientation of animals. In addition, while it would seem intuitive that animals getting closer should be particularly capable of grabbing attention, we found no support for this.
In relation to all the null results of our previous experiments, as well as the findings with other paradigms also showing no special role for animals in attention (Hagen & Laeng, 2016, 2017), it seems appropriate to suggest that what we observed in Experiment 5 is more likely to be related to aspects that are not directly related to a general prioritization in attentional processes (e.g., uncontrolled low-level features). Therefore, considering the conflicting results of Experiment 5 in the context of the available evidence, we are inclined to conclude that Experiment 5 does not provide strong or conclusive evidence in favor of the animate monitoring mechanism.
An interesting observation from the above experiments is that accuracy appears to be lower in combination with faster RTs as compared with Experiment 5. This could indicate that when changes in size were noticed, this occurred quite rapidly, while changes in lateral orientations appear to require more processing of the visual features for some length of time after the event, as reflected in longer RTs and higher accuracy.
We should note that at present the MEM task is not a time-honored, standard task in the cognitive sciences, so not much is known about its limits and caveats. More research is needed to better understand the nature of the task and whether accuracy and RTs are actually indicative of attentional prioritizations of the depicted items. Such research may throw light on whether findings like those of Experiment 5 are likely to be a false positive in relation to studying the proposed innate attentional bias for animal stimuli or, conversely, whether the present experiment is likely to be a false negative.
Omnibus Analysis
To provide an overview of all results obtained in this study and reach a principled conclusion, we conducted a final omnibus analysis; whereas several of our experiments failed to reach significance or provide conclusive evidence, they could still provide more robust indications of an animal or artifact bias when taken together. Although we based sample sizes on power-based statistical inference, one cannot exclude the possibility that the present experiments could have been underpowered, especially if there is a true effect that is smaller than the effect size originally estimated in the power analysis. Importantly, nonsignificant results are mostly inconclusive when considered alone. Thus, a more stringent way to reach a conclusion is to consider multiple experiments simultaneously, as we attempt to do next.
Although it is statistically impossible to support an effect size of exactly zero (Lakens, 2017), it is possible to use an equivalence test (Lakens, 2017) to reject effect sizes above a specified limit by showing that our estimated effect sizes are statistically smaller than a specified equivalence bound (e.g., [−0.4, 0.4]). However, in the absence of a concretely defined theoretical limit for a minimum effect size, we decided to set the equivalence bounds to the range of effect sizes that we estimated to have 80% power to find equivalence for (Lakens, 2017) by simulating meta-analyses with heterogeneity similar to our combined experiments.
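The two one-sided tests (TOST) logic behind this equivalence approach can be illustrated with a minimal sketch. The numbers and the normal approximation here are assumptions made for the example; the actual analysis used t-distributed tests on meta-analytically combined effect sizes.

```python
# Illustrative TOST sketch: reject effects outside [-bound, +bound] when
# BOTH one-sided tests are significant. A normal approximation stands in
# for the t distribution used in a real analysis.
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def tost(d, se, bound):
    """Equivalence within [-bound, +bound] if both one-sided p's < .05.

    d     : observed standardized effect size
    se    : standard error of d
    bound : equivalence bound (e.g., 0.4)
    """
    p_lower = 1 - normal_cdf((d + bound) / se)   # H0: d <= -bound
    p_upper = normal_cdf((d - bound) / se)       # H0: d >= +bound
    return max(p_lower, p_upper) < 0.05

# A tiny, precisely estimated effect is statistically equivalent to zero
# within [-0.4, 0.4] ...
print(tost(d=0.02, se=0.05, bound=0.4))  # True
# ... while the same effect with a large standard error is inconclusive.
print(tost(d=0.02, se=0.30, bound=0.4))  # False
```

As the second call shows, equivalence can fail even for a near-zero point estimate when the estimate is imprecise, which is why the bounds were tied to the effect sizes the study had 80% power to find equivalence for.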
In addition, to provide some context, we sought to quantify the effect of animal targets relative to another known effect in the field. Specifically, we compared it with the effect of increasing the number of targets (tracking load) in a standard MOT task. Previous work in our lab (Alnæs et al., 2014) has shown that either adding or removing one target leads to an average change in tracking performance of about 5.8%.
To more specifically investigate the effect of category on tracking and monitoring accuracy, we ran an omnibus ANOVA across all experiments, which is similar to running a meta-analysis of raw mean differences (Bond, Wiitala, & Richard, 2003). More specifically, we combined the accuracy measures from Experiments 1 and 2 and Supplementary Experiments 1 and 3, the position accuracy measures from Experiments 3 and 4 and Supplementary Experiment 2, and the accuracy measures from Experiments 5 and 6. This omnibus analysis revealed significant main effects of Experiment,
Next, we inspected Cook’s distances for the individual effect sizes across experiments and found that the measures in Experiments 5A and 5B were likely to have comparatively large impacts on our result. As discussed previously, we have raised doubts about the appropriateness of these experiments, considering their weak evidential value and conflicting results, the null results of Experiment 6, and the general set of results. Thus, to further assess the weight of Experiment 5 in reaching a significant result in our omnibus analysis, we removed it from the analysis.
Running the same analysis without Experiment 5 showed a significant main effect of Experiment,
Next, we assessed identity accuracy across Experiments 3 and 4 and Supplementary Experiment 2. This showed significant main effects for Experiment,
When we assessed the same experiments for position accuracy, we found a significant main effect of Experiment,
We also ran an omnibus ANOVA on percentages of incorrect responses in Experiments 1 to 4 and Supplementary Experiments 1, 2, and 3. This analysis showed nonsignificant main effects of Experiment,
Finally, we conducted an omnibus ANOVA on response orders across Experiments 1 to 2 and Supplementary Experiments 1 and 3. This analysis revealed a significant main effect of Experiment,
In more concrete terms, ignoring averages per participant, we obtained 107,374 valid responses from 658 participants across all experiments (e.g., excluding responses before a change occurred in the MEM tasks). Of these, 40,620 were correctly reported animals and 40,457 were correctly reported artifacts (ignoring identity accuracies). This amounts to a difference of only 163 responses. Thus, across 81,077 correct reports, the overall observed bias in tracking and monitoring accuracy amounts to just 0.2% (or 0.67% by exchanging position accuracy for identity accuracy). A similar count on incorrect responses revealed that 11,790 animal distractors and 11,789 artifact distractors were incorrectly reported as targets, a difference of only one response.
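The count arithmetic above can be verified directly; all figures are taken from the text:

```python
# Check of the response-count arithmetic reported in the text.
animal_correct, artifact_correct = 40_620, 40_457
animal_incorrect, artifact_incorrect = 11_790, 11_789

correct_total = animal_correct + artifact_correct
difference = animal_correct - artifact_correct

print(difference)                                  # 163
print(correct_total)                               # 81077
print(round(100 * difference / correct_total, 1))  # 0.2 (% bias)
print(animal_incorrect - artifact_incorrect)       # 1
```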
Based on the above set of analyses, we cannot conclude that animals had a sizable influence on performance across tasks. Although we cannot entirely conclude that animals had no effect across tasks either, in a majority of our tests we obtained moderate to strong evidence for no effect of animals, and only inconclusive evidence for an effect in omnibus analyses that involved Experiment 5 and identity accuracy. Hence, it seems difficult to interpret these results as robustly supporting the animate monitoring hypothesis. All in all, we found evidence for no effect of images of animals in omnibus ANOVAs on tracking and monitoring accuracy (depending on the exclusion of Experiment 5), percentages of incorrect responses, and response orders. In fact, we found support for effect sizes smaller than what we had 80% power to find equivalence for. However, our data were not sensitive enough to conclude whether animals had an influence on identity accuracy or not. If the indicated effect sizes are deemed interesting and worthwhile, future studies should aim to include more participants than in this study as well as reassessing the validity of the design of Experiment 5.
General Discussion
On the basis of the animate monitoring hypothesis (New et al., 2007), we have used several versions of the MOT, MIT, and MEM tasks to investigate (a) whether images of animals can improve position and identity tracking, (b) whether they can act as more effective distractors, (c) whether they are selected prior to artifacts in the response phase, and (d) whether they are easier to monitor for changes. In the first three experiments with MOT and MIT tasks, we failed to reject the null hypothesis of no advantage or bias for animal targets and distractors. In fact, we found evidence in support of the null hypothesis. In Experiment 4, however, we did uncover a significant advantage for animal targets in identity accuracy but not in position accuracy, where, remarkably, we observed the opposite: a significant advantage for artifacts. A Bayesian test did, however, show that the evidence for such effects was only weak and inconclusive. The following two experiments (5A, 5B), in which participants were instructed to monitor for lateral orientation changes, seemingly uncovered a pattern of results consistent with the animate monitoring hypothesis: Animals were reported significantly more accurately than artifacts in both experiments, and animal targets were reported significantly faster than artifacts in Experiment 5A. However, a Bayesian test did not support such a conclusion, as the evidence for an effect of category on accuracy turned out to be only anecdotal in both experiments. The effect of category on RT was also conflicting, as the first experiment (5A) showed moderate evidence for the alternative hypothesis while the latter (5B) showed moderate evidence for the null hypothesis. To follow up on these observations and to attempt to rule out that the indications of an advantage were specific to the type of change used, we substituted the lateral orientation changes with changes in size (as an index of proximity).
This substitution appeared to be crucial for an advantage for animals in accuracy and RTs, as we consistently found moderate evidence for the null hypothesis of no effect of category. In fact, this pattern of results lends credibility to the interpretation that the results of Experiment 5 could represent a potential false positive in relation to an attentional bias for animals. More specifically, given the fact that Experiment 6A used the same set of images as Experiment 5A and failed to replicate any bias, this may support the interpretation that the specific task in combination with the image set could have had effects that were actually independent of a specific attentional bias to animals (e.g., low-level features being more salient in lateral views of animals than artifacts in general). While some of our results could potentially be viewed as false negatives or, at least, as not sensitive enough to reveal the supposed attentional bias, it seems that if a true effect of images of animals on our measures exists at all, it is likely to be small.
Based on our omnibus approach, we failed to obtain robust evidence for appreciable effect sizes across measures for position accuracy, incorrect responses (distractibility), response order (response prioritization), as well as monitoring RTs and accuracy. We did, however, find indications of improved identity accuracy for animal targets, but this effect was difficult to consider as evidence for an attentional bias given the overall context of the results. In fact, our degree of evidence in support of the presence of an effect for animals is largely dependent on the inclusion of Experiment 5; thus, it seems important that future studies assess the validity of this experimental design in studying attentional biases. In fact, we can reject small-to-medium effect sizes with equivalence bounds when disregarding Experiment 5 in an omnibus analysis. Similarly, we found support for rejecting effect sizes considerably smaller than what we expected to find for measures on position accuracy, incorrect responses, and response orders. In the end, it cannot be ruled out that if there is a true effect, it might be smaller than our equivalence bounds and what our study was powered for. Hence, studies with a considerably higher level of statistical power should be conducted in future investigations, assuming that the potential for such small effects is deemed interesting and worthwhile.
We need to point out that we have attempted to tax and challenge the attentional system responsible for keeping track of objects in various ways to bring about errors such that a category-specific prioritization may become detectable. However, it seems that a robust advantage or distracting effect of animals is not easily obtained in tasks requiring divided sustained attention. Rather, it seems that special conditions may need to be met to detect a sizable advantage, which raises the question of the validity and aptness of those conditions in revealing an attentional bias.
In fact, the observed effect sizes across experiments were considerably smaller than what the authors of the original hypothesis designed for in their seminal study (New et al., 2007). Clearly, we cannot entirely rule out the presence of a small effect of animal targets. One should, however, consider the extent to which such a weak effect has practical implications for survival as well as the validity of the measures used to obtain such an effect. In particular, it is difficult to envision that lateral orientation changes should be especially sensitive to an attentional bias for animals in relation to the other null results and the original formulation of the hypothesis. In fact, it would seem necessary to reformulate the hypothesis in order to provide a reason for why the attentional bias should be more sensitive to changes in the lateral orientation of animals, without significantly improving tracking performance, biasing response orders, or allowing animals to act as more effective distractors than artifacts. For these reasons, it seems to us difficult to argue that an evolved animal monitoring circuit should be especially sensitive to changes in lateral orientation instead of the size (or proximity) of an animal. In fact, it does not seem particularly adaptive for a putative monitoring system to be specifically sensitive to some features that do not explicitly appear as more relevant for survival than others. Moreover, we cannot assume that performance in the MEM task with lateral orientation changes can be seen as exclusively grounded in attentional ability or attentional processes. There is the possibility that an advantage in detecting the type of changes made to animals over artifacts is based on cues provided by low-level features that were present in the sets of stimuli and not due to their category per se.
However, we cannot rule out that sensitivity to specific low-level features could itself have been selected by natural selection to assist the detection of a specific category of objects (e.g., eye-like shapes suggesting different directions of movement).
In addition, it seems highly relevant to judge the present evidence in the light of other experiments and experimental paradigms that have examined the prioritization of animal stimuli in attention. In particular, in our laboratory, we have previously used tasks such as change detection (Hagen & Laeng, 2016) and attentional blink (Hagen & Laeng, 2017), which have both led us to question the animate monitoring hypothesis in relation to prioritization of attentional mechanisms. Other researchers have employed inattentional blindness tasks, which apparently showed an advantage for animals (Calvillo & Hawkins, 2016; Calvillo & Jackson, 2014). However, our previous study with images of animals in an attentional blink task showed that animals have no considerable impact on attentional blinks (Hagen & Laeng, 2017), but that they are reported more successfully regardless of their temporal position, suggesting instead an advantage in perceptual processing or encoding. That is, animal stimuli are unable to surpass the blindness of the attentional blink or spontaneously induce such blinks, while other stimuli considered biologically important (e.g., arousing words, facial expressions, or food) do seem to be able to do this.
We note that another line of work (Pratt et al., 2010) has investigated the ability of arbitrarily shaped objects (squares) to capture attention when they abruptly changed motion patterns from predictable trajectories to unpredictable animate trajectories. The changes were coupled with a type of change detection task, in which participants reported a vanishing square shortly after the change in motion pattern occurred. The experiments successfully demonstrated that such animate motion patterns are able to capture attention. An interesting question in relation to this study is whether such changes in motion patterns would be more readily detected if the objects were depicted as animals as opposed to artifacts. It is also worth noting that this study used random motion trajectories, which, according to Pratt et al. (2010), should signal animate motion, as animates rarely move in predictable or lawful ways (they are self-propelled rather than moving under Newtonian physics). While our designs are markedly different from that of Pratt et al. (2010), one may raise the concern that the random motion patterns employed in the present experiments could have signaled animacy to the animate monitoring system. Hence, participants could have assigned equal priority to all objects, irrespective of what they depicted, effectively erasing any potential bias toward images of animals. To address this concern, we included Supplementary Experiment 3, which found no discernible evidence for a bias toward objects moving with random and unpredictable directional changes as compared with objects moving predictably. Importantly, it also did not show a bias toward images of animals when they moved with predictable motions. In fact, the evidence was markedly incompatible with the aforementioned concern.
Another potential concern with the design of the present experiments is that the system responsible for assigning attentional biases to animals could have been overtaxed by the number of animals displayed simultaneously. However, considering that research employing a large number of facial stimuli in similar tasks has repeatedly been successful in showing attentional biases (Jin & Xu, 2015; Li et al., 2016, 2017; Liu & Chen, 2012), this does not appear to be a significant concern, and even less so regarding our last two experiments. Future studies could, however, include appropriately balanced trials with just one animal among multiple nonanimals to discern whether a single animal is prioritized more than a single nonanimal or than multiple animals.
Finally, we would also like to note that, in contrast to our individual controlled experiments, the actual natural environment poses a complex and varied set of challenges. The present experiments represent only a modest probe compared with a noisy environment in which animal monitoring may be relevant under different circumstances. Thus, researchers should ask whether attentional biases for animals are even measurable in laboratory settings. Perhaps using a computer monitor showing images of animals cannot yield robust evidence for an attentional bias, as these are confined to the use of
Conclusion
We have studied the role of animals in attention by challenging over 600 participants with several variations of visual tracking tasks, all requiring divided and sustained attention. Following the reasoning behind the animate monitoring hypothesis, we expected to find that associating positions with images of animals would lead to more accurate tracking, more vigilant monitoring, and prioritized responses, and that animals would function as more effective distractors than artifacts. The combined results are, however, not strongly or unequivocally supportive of these expectations. Although some observations were in favor of the animate monitoring hypothesis, these could not be regarded as more than weak and inconclusive evidence. Indeed, the estimated effect sizes across experiments were considerably smaller than what we expected to find from current theory and previous research. We found moderate to strong evidence that images of animals do not improve positional tracking, do not act as more effective distractors, are not selected before artifacts in the response phase, and are not easier to monitor for changes in size.
Supplemental Material
Supplemental material for Chasing Animals With Split Attention: Are Animals Prioritized in Visual Tracking? by Thomas Hagen, Thomas Espeseth and Bruno Laeng in i-Perception
Acknowledgements
The authors would like to thank Todd S. Horowitz and two anonymous reviewers for their insightful comments and valuable suggestions.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.