Abstract
This paper presents our framework for human-robot interaction, designed to allow non-experts to control and program complex robots in a way that is intuitive, reliable, and accurate. The principal mechanism behind our model is Augmented Reality (AR), which we use to provide a diagrammatic service to the user. The service offers a range of diagrammatic markers, including marker-less AR objects, which can be created, connected together, and used to command and control the robot and to engage in two-way communication. We report on two case studies in which our framework is applied to a set of command-and-control tasks, measuring performance in terms of situation awareness, task completion time, and cognitive load. Our results show that our model leads to greater situation awareness and improved task completion times compared with conventional interaction methods, such as a gamepad controller. Future work will integrate our model into multi-modal hybrids and extend the case studies to compare against other interaction methods.