Abstract
For the most basic embedded systems, an integrator must assume that the system has no keyboard, mouse, or monitor support. A programmable keypad and a limited alphanumeric display are the usual interface devices. Our research examines the new generation of powerful embedded platforms and operating systems to investigate integrating automatic speech recognition (ASR) for controlling high-end diagnostic systems. Our goal is to present a working design model and identify critical issues that can aid future designers and system engineers in incorporating speech control into embedded systems. Critical issues discussed include hardware platforms, use of off-the-shelf ASR software, real-time versus near-real-time operating systems, and user system navigation and location awareness.