Abstract
This experiment integrated a speech-interactive system with a prototype automatic target recognizer. The simulated scenario was an attack helicopter pop-up maneuver in which the pilot searched for several target types. During operational control of the target screener, the task sequence comprised pilot entry of navigation coordinates, targeting mode, target type, target selection, and weapon preparation. This task sequence was evaluated experimentally under three interaction modalities: a speech recognition and speech generation dialogue, speech recognition with visual prompting, and conventional visual-manual transactions. The results indicate significantly better flight-control performance during simultaneous speech interaction for the targeting tasks. The speech-technology results indicate selective performance benefits across the entire sequence of targeting tasks that are not apparent at the individual switch-closure level. Overall, the current data support speech interaction for avionics system activities.