Abstract
Subjects attempted to detect a target object specified by name (e.g. "chair") in rapid serial visual presentation sequences, each consisting of 40 gray-level, 72 ms images of common objects. On the 50% of trials in which a target was present, it never appeared in the first or last eight positions. In homogeneous sequences, the images were all of the same size (all large or all small, differing by a factor of five) or scale (spatial frequency, SF; either all low-passed [2 cycles deg⁻¹] or all high-passed [10 cycles deg⁻¹], in a 1.5 octave band). There was no difference in detection accuracy between the different sizes or the different scales. In switched sequences, the target could differ in size or scale from all the other images in the sequence. A strong asymmetry then emerged: small-size or high-passed targets were much more difficult to detect than in homogeneous sequences, whereas large-size or low-passed targets were much easier! Is this coarse-to-fine tuning based on size or on scale? When size was pitted against scale, so that targets were either of small size and low SF or of large size and high SF, the large-size objects were much easier to detect, with only a slight modulating effect of scale. Size-scale tuning thus operates on space (i.e. size) rather than on SF, and requires some time for its implementation, though less than that occupied by the first eight images. Once implemented, the tuning can be completely efficient, as evidenced by the equivalent levels of performance in the homogeneous conditions.