Abstract
This paper reports an investigation into integrating speech recognition and manual input devices in a multimodal CAD system. A functional integration of system components helped to simplify the system while optimising the functionality of each input device within the multimodal framework. The system was configured by allocating CAD commands and alphanumeric data to each input device according to the frequency-of-use principle. Two alternative multimodal CAD systems were compared to an all-speech input system using data obtained from 24 subjects. A one-way mixed ANOVA design was used, with system type as the independent variable and behaviour and performance measures as dependent variables. Behaviour was quantified in terms of the frequency and duration of eye movements, hand movements and speech verbalisations, while performance was measured on product quality and production costs. The hybrid systems improved user behaviour, but the inflexible mapping of input devices to task items created memory and confusion problems for users. There were also behaviour and performance trade-offs in using the systems. The findings raised potential design issues for a multimodal CAD system integrating speech and manual input.