Abstract
Over the past decade, listening comprehension tests have increasingly shifted to computer-based formats that include visual input. However, little research is available to suggest how test takers engage with different types of visuals on such tests. The present study compared a series of still images to video in academic computer-based tests to determine how test takers engage with these two test modes. The study, which employed observations, retrospective reports and interviews, used data from university-level non-native speakers of English. The findings suggest that test takers engage differently with these two modes of delivery. Specifically, while test takers engaged minimally and similarly with the still images, there was wide variation in the ways in which, and the degree to which, they engaged with the video stimulus. The implications of the study are that computer-based tests of listening comprehension could include still images while only minimally altering the construct measured by audio-only listening tests, but that the use of video in such computer-based tests may require a rethinking of the listening construct.
