Abstract
Eye-tracking technology offers a promising solution for hands-free directional control, with increasing use in assistive devices, robotics, and remote operations. However, several challenges limit its widespread adoption. This review analyzes recent developments in eye-tracking-based control systems, focusing on applications, methodologies, and challenges. Fourteen papers were selected from IEEE Xplore and the ACM Digital Library for analysis. The results indicate that applications primarily involved wheelchair navigation (50%) and UAV/drone control (29%), with additional uses in robotic vehicles, virtual reality, and computer interfaces. These systems supported both manual directional control and waypoint estimation for automatic navigation, commonly using commercial or custom camera-based eye-tracking devices. Gaze tracking was achieved through image processing techniques such as Haar cascades and Hough transforms, as well as deep learning models, to trigger control inputs. Some studies also integrated additional modalities, such as eye blinks or head gestures, for mode switching or command confirmation. Despite these advances, significant challenges were reported, including system instability, latency, user fatigue, and false triggers. In addition, inconsistent evaluation methods, including the lack of standardized testing, sufficient participant numbers, and baseline comparisons, were common limitations across the reviewed papers, hindering the reliability and scalability of eye-tracking control systems in real-world settings. Future research should prioritize standardized evaluation and larger-scale testing of usability and safety to support broader adoption of eye-tracking control.
