Abstract
Control of the robotic arm on the International Space Station is a challenging endeavor, not only because of the high consequence of failure, but also because the limited number and arrangement of cameras greatly increases the difficulty of maneuvering the arm. Automation has great potential to reduce this effort, but determining the right kind and degree of automation is a key concern. Mismatches between the operator's perspective and the camera views of the robotic arm, and between the direction of control input and the arm's response, contribute to performance degradations. In this paper we describe the development of a computational structure that combines a set of existing human performance modules to address such issues. These modules include the Frame of Reference Transformation (FORT), the Basic Operational Robotic Instructional System (BORIS), the Man-machine Integration Design and Analysis System (MIDAS v5), and the Salience, Effort, Expectancy, and Value (SEEV) attention model, as applied in a simulation model of a robotic operator termed MORRIS.