Abstract
The emergence of gesture-based controls such as the Microsoft Kinect provides new opportunities for creative and innovative methods of human-computer interaction. However, such devices are not without their limitations. The gross-motor movements of gestural interaction impose physical limitations that may negatively affect interaction speed, accuracy, and workload, and subsequently constrain the design of system interfaces and inputs. Conversely, interaction methods such as eye tracking require little physical effort, leveraging the unconscious and natural behaviors of human eye movements as inputs. Unfortunately, eye tracking is, in most cases, limited to a simple pointing device. However, this research shows that by combining these interactions into gaze-based gestural controls, it is possible to overcome the limitations of each method, improving interaction performance by associating gestural commands with interface elements within a user's field of view.
