Abstract
People with visual impairments may experience difficulties in learning new physical exercises due to a lack of visual feedback. Learning and practicing yoga is especially challenging for this population because yoga requires imitation-oriented learning. A typical yoga class requires students to observe and copy poses and movements as the instructor presents them, while maintaining postural balance throughout the practice. Without additional, nonvisual feedback, it can be difficult for students with visual impairments to understand whether they have accurately copied a pose and, if they have not, how to correct it. Therefore, there is a need for an intelligent learning system that can capture a person's physical posture and provide additional, nonvisual feedback to guide them into a correct pose. This study is a preliminary step toward the development of a wearable inertial sensor-based virtual learning system for people who are blind or have low vision. Using hierarchical task analysis, we developed a step-by-step conceptual model of yoga poses, which can be used in constructing an effective nonvisual feedback system. We also ranked sensor locations by importance by analyzing postural deviations in each pose relative to the reference starting pose.