Abstract
The representation of haptic objects by three groups of participants (sighted, blindfolded, and congenitally blind) was studied in a mental-rotation task, and three models of representation were tested. Participants explored a standard object continuously with the left hand and tried to find its mirror image among two alternatives explored sequentially with the right hand. Sighted participants were also tested in a visual version of the task. Judgment accuracy was very high (>95%) for all groups, and the blind group had the longest identification times. Correlation analyses were performed between identification times (both single-trial and averaged) and angular differences. The identification times of the sighted and blindfolded groups increased as linear functions of the angular difference between the mirror and the standard stimuli, supporting the classical model. The identification times of the blind group changed non-monotonically and were consistent with an antiparallel image (a superimposed 180° rotation) in the mental representation. The dual-code model did not fit the data well for any participant group. The performance differences between the blindfolded and blind groups may be attributed to a modified mapping function from the object-properties-processing subsystem to the visual buffer, which was conjectured to be available to the blind group as well while processing haptic objects.
