Abstract
The aim of this experiment was to test the influence of target context on adaptation to scale perturbations introduced by a video display. Participants performed pointing movements without direct vision of their moving hand; instead, they saw their movements on a video display, where the displayed movements could be reduced, enlarged, or shown at their actual size. Three target contexts were compared: a dark surround, an illuminated frame, and a familiar object. Movements were executed with or without vision of hand displacement. Results showed that target context enhanced allocentric coding of the movement, which improved movement execution. However, the effect of target context differed depending on whether vision of hand displacement was available. Overall, the results suggest that target context allowed the extraction of dynamic information about the movements, which was used to program and control them. Target context could therefore be used to improve spatial accuracy and speed in teleoperation learning; potential applications include reducing the difficulties encountered during teleoperation learning through the introduction of visual context.