Abstract
The spinal column requires special care through exercises focused on muscle strengthening, flexibility, and mobility to minimize the risk of developing musculoskeletal disorders that may affect the quality of life. Guidelines for spinal column exercises are commonly presented through printed and multimedia guides accompanied by demonstrations performed by a physiotherapist, occupational health expert, or physical fitness trainer. However, existing guides lack interaction, and oral explanations may not always be clear to the user, leading to decreased engagement and motivation to start, continue, or complete an exercise program. In this article, we present two interactive and engaging posture-tracking user interfaces intended to promote proper spinal column exercise form. One user interface employs a wooden manikin with an integrated inertial measurement unit to provide a tangible user interaction. The other is a mobile application that provides instructions and explanations about the exercises. Both user interfaces allow recording key postures during the exercise for reference and feedback. We compared the usability of the interfaces through a series of flexion and extension exercises, monitored with an inertial measurement unit worn around the torso and a Microsoft Kinect V2 vision-based sensor. Although no significant differences between the manikin user interface and the mobile application were found in terms of usability, the inertial measurement unit provided more accurate and reliable data than the Microsoft Kinect V2, owing to body occlusions in front of the sensor during torso flexion. Although both user interfaces provide different experiences and performed well, we believe that a combination of both will improve user engagement and motivation while providing a more accurate motion profile.
Introduction
Musculoskeletal disorders (MSDs) denote health problems of the locomotor apparatus involving muscles, tendons, cartilage, ligaments, and nerves. 1 MSDs can affect the quality of life depending on their classification (i.e. light, transitory, irreversible, or disabling). According to the World Health Organization (WHO), approximately 30% of the global population suffers from some form of MSD related to the lower back, leading to pain, discomfort, and mobility problems, among others. 2 MSDs resulting from back injuries (e.g. sciatic pain, disk degeneration, and herniation) account for approximately 60% of global work absenteeism, 1 and MSDs are among the most common causes of acute and long-term pain and physical disability. 2 In addition, MSDs are rapidly becoming the leading cause of disability, 3 resulting in increased health- and productivity-related costs4,5 and a focus on return-to-work practices. 6 Furthermore, with a growing aging population worldwide, the number of musculoskeletal health-related problems is expected to rise considerably.7,8 Among the numerous forms of MSDs, those specific to the spinal column carry the majority of the global burden. 9 Moreover, mechanical overload, repetition, exposure time, and posture in the workplace and daily activities increase the risk of acquiring an MSD.1,3
The continuous rise of MSDs is becoming a public health problem that is leading to the establishment of governmental and WHO policies focused on education, awareness, and prevention, with special attention placed on providing tools to reduce the risk of lower-back pain.1,6 The risk of developing MSDs can be reduced by performing exercises focused on the spinal column on a regular basis. Traditionally, exercise programs using printed guides, videos, and multimedia have been used to educate people about the importance of proper health care through physical activity, with particular focus on MSD prevention. 10 Currently, new technologies are providing engaging forms of physical activity using motion capture, game-based systems with achievements, social networks, and consumer-grade devices (e.g. smartphones, wearables, and gaming user interfaces (UIs) such as the Microsoft Kinect, the Sony PlayStation Move, or the Nintendo Switch motion controllers).11,12
When exercising, engagement, motivation, and enjoyment play an important role, particularly in physical activity specific to physiotherapy, occupational health exercises, and routines focused on preventing, maintaining, or recovering muscle motor functionality. 13 The repetitive nature of exercising, coupled with unclear materials and demonstrations, may discourage the execution of the physical activity and the commitment to follow through with the process without supervision. 14 Currently, commercial motion capture systems and consumer-level devices such as wearables and video game UIs are being used to provide engaging and novel forms of exercising in the form of exergames, that is, games designed to elicit physical activity.5,15,16 Motion capture technologies have become important tools for assessing movements during physical activity, providing movement tracking, posture comparison, calorie consumption, and performance metrics. 17 User metrics from motion tracking allow progression to be improved through customized exercise routines based on the gathered data and provided feedback. 18
In this article, we present the development and evaluation of two posture-tracking UIs to demonstrate proper back posture exercises. These UIs are presented as a complementary tool to traditional media based on printed guides and oral explanations with the goal of providing clearer and engaging forms of lower-back exercising. The first UI is a wooden manikin with an embedded motion sensor that allows for manipulating a virtual avatar that mirrors the manikin movements, records motion data, and allows comparing it with the user’s motion data recorded while exercising. The second UI is a mobile app that presents a virtual avatar performing the exercises while pictures of the user are taken for visual- and motion-tracked comparison purposes. To obtain a greater understanding of user preferences and the potential of the UIs, we conducted an experiment that examined the usability of the two UIs. 19 The experiment required the participants to use both UIs and then complete the System Usability Scale (SUS) questionnaire. Participants were also observed while using both UIs. In addition, we implemented a motion tracking module application employing inertial measurement units (IMUs) and a Microsoft Kinect V2 sensor to monitor the spinal column flexion and extension rotations to quantify the exercise range of movement.
The remainder of the article is organized as follows. In the “Background” section, we present the literature review. In the “UI design” section, we analyze the spinal column and its exercises, and introduce the development process of the UIs and the user motion tracking implementation. In the “Experimental design” section, we provide an overview of the experiment. Experimental results are presented in the “Results” section. Finally, a discussion of the results, concluding remarks, and plans for future work are presented in the “Discussion and conclusion” section.
Background
Currently, technology is playing an important role in our daily activities, impacting how we work, communicate, and search for information. Technology has the potential to positively improve the quality of life through robotics for rehabilitation, surgery planning, and motion assistance. 20 More recently, mobile computing technology has been allowing us to expand health coverage to those living in rural areas. 21 For example, mobile technologies provide clinicians with the opportunity to remotely monitor patients, make treatment decisions, and detect important medical occurrences.22–24
In recent years, interactive immersive technologies such as video games and virtual reality (VR) have provided the grounds for the development of games aimed at eliciting physical activity. 25 Leveraging the motivational and engaging aspects of video games, exergames couple video games and exercise, whereby playing a video game becomes a form of physical activity that can help overcome some of the limitations associated with traditional exercise routines (e.g. lack of motivation to continue an exercise program). 26 The widespread use of consumer electronic devices has seen the application of exergaming in a variety of areas including social experiences, 27 physiotherapy, 28 and fitness. 29 The use of VR emphasizes the interaction between the user and virtual environments by means of images, sound, acquisition systems, and haptic devices. VR has been employed in many applications including entertainment, military, medical, and industrial training.30–32 VR also provides the opportunity to engage and motivate the user, and when it is applied to self-healthcare activities such as daily physical exercise, rehabilitation processes, and physiotherapy-based exercises, it can positively impact quality of life, depression, and body functionality, among other factors.33–37
The employment of VR in healthcare scenarios allows for the development of secure and controlled rehabilitation processes 38 and the acquisition of metrics from sensors during the user interactions (e.g. motion tracking, position, ranges of motion, and heart rate).37,39,40 VR visual feedback through mirroring a player’s movements with a virtual avatar can provide information to better understand how the execution is being conducted, leading to increased enjoyment even as the level of difficulty increases.41,42 For example, Hassan et al. 43 developed a cloud-based platform game whose goal was to motivate obese people into performing physical exercise by employing sensory stimuli from heart rate, weight, step count, and calorie burn. Massetti et al. 44 conducted a systematic review regarding the impact of VR when monitoring a patient’s progression during multiple sclerosis rehabilitation. They highlighted the motivational benefits, improved movement accuracy, and improved task performance, which led to better and faster therapy results. Howard 45 conducted a similar literature review on the application of VR in rehabilitation. They highlight the increased motivation and movement accuracy during exercising sessions, and an increased understanding of the patient’s role in therapy, both within a clinical environment and outside of it.
Prior VR-related work has primarily focused on safety education for training to avoid occupational accidents. For example, Gonzalez et al. 46 presented a virtual scenario to train tractor users on how to avoid accidents related to tractor overturns; Pena and Ragan 47 studied the design of virtual environments to represent possible construction accidents using various strategies that included multimodal information (graphical, textual, etc.); while Jin and Nakayama 48 presented the development of a 3D virtual laboratory environment focusing on user interactions. Although prior work has focused on the design and development of virtual environments and user interaction, very little, if any, prior work has focused on the design and evaluation of virtual environments for occupational health-based exercise.
Motion tracking for gaming purposes appeared in the early 2000s in an attempt to improve user–game interactions and has become more relevant with the consumer-level consolidation of gaming hardware. 49 Such technologies are primarily based on accelerometers, infrared sensors, cameras, and pressure sensors, among other technologies.50–52 Motion tracking is gaining momentum, 53 and IMUs have been combined with the Microsoft Kinect to obtain virtual skeletal measurements and improve the tracking accuracy of smooth and short movements. However, tracking technologies may be limited in accuracy, cost, response time, and metric quality due to environmental influences (e.g. light or temperature).
UI design
The UI design and development process began with an analysis and characterization of the spinal column, which allowed us to determine the system components and subsystems, provide suitable feedback, and assess usability.
Spinal column characterization
The spinal column is comprised of five sections: (1) cervical (seven vertebrae supporting the neck and the head), (2) thoracic (12 vertebrae that, in conjunction with the rib cage, protect internal organs), (3) lumbar (protected by muscles, thus allowing mobility), (4) the sacrum, and (5) the coccyx, with the last two sections supporting the spinal column. 54 Lower-spine care requires physical activity focused primarily on flexion and extension exercises, which require the bending and straightening of a joint, respectively. Upper-body flexion and extension occur in the sagittal plane about the frontal (mediolateral) axis. 55 Each section of the spinal column allows for the following rotations: cervical 40°, thoracic 20°, and lumbar 60°, for a total of 120°. 56 Physical activity for back care additionally involves a variety of exercises aimed at building strength and flexibility, which are performed at a slow pace (see Figure 1). 57 An exercise routine that includes these exercises will focus on accuracy during the execution of the motions rather than speed. Information regarding such exercises is commonly provided in the form of printed guides (and possibly oral explanations) containing the number of repetitions, pictures/illustrations depicting what to do, and recommendations on how to properly perform the exercises.

Spinal column exercises and proper postures.
UI development
The analysis of the spinal column provided information regarding the range of movements for flexion and extension. To properly guide the design and development of the UIs, we organized a focus group session with a fitness trainer, a physician, and a physiotherapist. During the focus group session, participants were asked about the challenges and the need to better educate people with respect to physical activity associated with the spinal column. The feedback received from the three experts can be summarized as follows: (1) printed guides are being discarded in favor of online materials, but in many cases there is still a lack of clarity that leads to improper postures and movements, (2) people often forget the explanations and demonstrations provided to them on-site, and (3) people do not monitor the progress of their physical activity.
To address issues revealed in the focus group session, in addition to results of prior work, we developed two UIs: (1) a physical tangible UI based on an artist’s wooden manikin, and (2) a mobile app that includes an animated avatar that can (through an animation) outline the correct exercise motions and the ability to dynamically capture images of the user performing the exercises which can then be compared to a reference image. The goal of both UIs is to present information on how to properly perform the exercises through the manikin and the mobile app avatar, and to capture the user’s motions while they perform the exercise and provide them feedback with respect to how well they performed the exercises. To capture and verify the movements, we employed the Microsoft Kinect V2 sensor, and a wearable IMU sensor worn around the torso.
Tangible manikin UI
The manikin UI was developed with the objective of presenting a tangible interface for the user to visualize the posture movements and obtain a video to follow without any supervision or assistance. To develop this UI, the spinal column was segmented as a serial kinematic chain with cervical and lumbar joints to match the degrees of freedom and range of movement of the manikin, as presented in Figure 2. To implement the motion capture system, we chose a wooden manikin with an IMU (MPU9250) 58,59 located at the upper back to measure torso flexion movements (see Figure 2). The flexion/extension data are sent to an Arduino UNO board through the I2C protocol. The torso flexion and extension rotation angles are calculated by employing a complementary filter, which fuses readings from the IMU’s accelerometers and gyroscopes; the resulting angles are recorded and sent to a virtual scenario. 60 To visualize and reproduce the motion capture, a virtual scenario including a virtual avatar was developed using the Unity 3D game engine. The virtual avatar mirrors the spinal movements of the wooden manikin by reading the rotational data calculated on the Arduino platform and sent to the Unity 3D engine over Bluetooth. The virtual movements are then assigned to the virtual avatar to provide an animation that visually illustrates the exercise for the user to reproduce.

Wooden manikin and virtual avatar.
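As an illustration of the filtering step described above, the following Python sketch shows a single update of a complementary filter that blends an integrated gyroscope rate with a gravity-referenced accelerometer angle. This is a minimal sketch, not the authors' Arduino implementation; the function name, axis convention, and filter coefficient (alpha = 0.98 is a commonly used default) are assumptions:

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """One update of a complementary filter for the torso pitch (flexion) angle.

    pitch_prev -- previous pitch estimate in degrees
    gyro_rate  -- angular rate about the mediolateral axis in deg/s
    accel      -- (ax, ay, az) accelerometer reading in g
    dt         -- sampling period in seconds
    alpha      -- weight of the integrated gyro estimate (assumed value)
    """
    ax, ay, az = accel
    # Pitch from the gravity vector: noisy, but free of long-term drift.
    accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    # Integrated gyro estimate: smooth, but drifts over time.
    gyro_pitch = pitch_prev + gyro_rate * dt
    # Blend: the gyro dominates short-term changes, while the
    # accelerometer slowly corrects the accumulated drift.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

Calling the function once per sample keeps the estimate responsive to fast rotations while continuously pulling it back toward the accelerometer's drift-free angle.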
Mobile app UI
The mobile app UI was developed to provide visual feedback of the postures during the exercise by employing a virtual avatar animation and photos of the user performing the movements. Similarly to the manikin UI, the same virtual assets were employed here, although in this case the torso flexion and extension are configured via the touch screen by operating the virtual avatar rather than the physical manikin. The ranges of movement are configured following the instructions of the physician or the physical trainer. To provide feedback and monitor the exercise execution, the app allows for taking pictures of the exercise, which can then be compared with the virtual avatar, as presented in Figure 3.

Mobile UI screen.
Motion capture
In addition to the feedback provided by both UIs, we implemented two methods for obtaining metrics during the interactions: (1) a non-invasive approach by employing Microsoft Kinect V2 optical sensor and (2) a wearable IMU sensor.
Microsoft Kinect V2 motion capture
We developed a virtual environment with the Unity 3D game engine, which employs a Microsoft Kinect V2 to capture the user’s motion after using the manikin-based UI or the mobile app. The Microsoft Kinect V2 recognizes 22 joints, of which the spine base and the mid spine are of interest given our focus on lower-spine movements. Although the Microsoft Kinect does not require any calibration, clothing and room lighting can affect tracking accuracy. Figure 4 provides a graphical illustration of the system configuration and motion capture from the graphical UI. The motion-tracked data are stored for visualization, analysis, and comparison purposes with respect to the physical activity performed by the user.

Kinect V2 setup and motion capture.
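The flexion angle of interest can be derived from the two tracked joints by measuring how far the spine segment tilts away from vertical. The sketch below is a simplified Python illustration, assuming Kinect-style camera-space coordinates (y up, metres); the function name and coordinate convention are ours, not part of the Kinect SDK:

```python
import math

def torso_flexion_deg(spine_base, spine_mid):
    """Angle in degrees between the spine segment (spine base -> mid spine)
    and the vertical (y) axis; 0 deg = upright, larger = deeper flexion."""
    vx = spine_mid[0] - spine_base[0]
    vy = spine_mid[1] - spine_base[1]
    vz = spine_mid[2] - spine_base[2]
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    if norm == 0.0:
        raise ValueError("spine joints must not coincide")
    # Clamp to guard against floating-point values slightly outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, vy / norm))
    return math.degrees(math.acos(cos_theta))
```

An upright spine segment yields 0°, while a torso bent so the segment leans 45° forward yields 45°, matching the goniometer-style angles reported later.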
IMU-based motion capture
We additionally developed and implemented a motion tracking solution employing an Arduino UNO board, a portable lithium polymer (LiPo) battery, an HC06 Bluetooth module, and an MPU9250 IMU. The motion capture scripts were programmed using the Arduino integrated development environment (IDE), and a hyperterminal was used to export the comma-separated files gathered from the motion tracking. The motion capture information is processed using a complementary filter 53 due to the hardware limitations of the Arduino board. To properly capture the torso’s flexion and extension movements, we sewed the IMU to a stretching band with a piece of clothing to guarantee its fixation on the user’s back while performing the exercise (see Figure 5). The stretching band is light, and its flexibility allows the sensor to be worn by users of varying body structures. The IMU is wired to the Arduino board, which is enclosed in a waist pack with the battery, as presented in Figure 5. It is worth noting that during our initial testing phase we identified motion tracking loss due to a faulty connection occurring during the flexion movement. The issue was solved by soldering all wiring joints instead of employing connectors. In contrast to the Microsoft Kinect V2, the IMU requires a calibration process to detect the initial position based on the magnetometer readings. 61

IMU and waist pack containing the Arduino board.
Experimental design
The experimental design comprises two main stages: (1) usability testing and (2) motion tracking. For the usability testing stage, participants interacted with both the manikin and the mobile app UIs and then completed the SUS questionnaire. 19 For the motion tracking stage, participants performed five torso flexion and extension movements to the best of their abilities according to the directions given by the manikin-based UI or the mobile app. The back movements were performed at a slow pace and with wide ranges of flexion and extension, similar to what is expected in real life.
The SUS questionnaire 19 was employed to evaluate system usability. The SUS is comprised of 10 questions that cover various aspects of system usability, including the need for support, training, and complexity. Responses are given on a 5-point Likert-type scale, and based on these responses, a single SUS score ranging from 0 to 100 that represents the overall system usability is calculated. To calculate the total SUS score, 1 is subtracted from the response to each odd-numbered question, while the response to each even-numbered question is subtracted from 5. The resulting values for the 10 questions are then summed, and the total is multiplied by 2.5. Based on research, an SUS score above 68 is considered above average, and a score below 68 is considered below average. 62 A higher score indicates a higher degree of overall system usability.
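The scoring procedure above can be expressed compactly. The following Python helper is our own illustration (not part of the SUS instrument itself) and computes the 0–100 score from ten responses on the 1–5 scale:

```python
def sus_score(responses):
    """Compute the System Usability Scale (SUS) score from ten responses
    on a 1-5 Likert-type scale, ordered as questions 1 through 10."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered questions contribute (response - 1);
        # even-numbered questions contribute (5 - response).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # Scale the 0-40 sum to a 0-100 score.
```

For example, a participant answering 5 to every odd-numbered question and 1 to every even-numbered question scores 100, while uniformly neutral answers (all 3s) score 50.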
Participants
We recruited participants from a university environment who spend ≥8 h per day in seated computer work, regardless of age, since anyone can develop some form of MSD. A total of 32 participants (20 males and 12 females) volunteered, with an average age of 24 years. Of them, 23 were undergraduate students from engineering programs, while nine were administrative staff from Universidad Militar Nueva Granada in Bogota, Colombia. After all activities, risks, and goals of the UIs were explained and presented, participants consented orally and chose which of the two UIs they preferred to use. All data were collected anonymously, and in the case of the mobile app, all pictures were deleted after the participant completed the experimental session.
For the usability testing, 16 participants used the tangible manikin-based UI, and another 16 used the mobile app to visualize the lower-spine exercise. The participants were asked to compare their performance against the UI employed. We identified two main traits among the participants: those who engaged in fitness activities (eight) and those who did not (eight). After using the UIs and completing the SUS questionnaire, only 10 participants with sedentary habits volunteered to have their movements tracked with the Microsoft Kinect V2 sensor and the IMU.
Manikin-based UI
The procedure for the manikin-based UI consisted of the following: (1) introduction and explanation about the UI and its objective by a facilitator, (2) hands-on demo showcasing the UI functionality performed by the participant with the assistance of the experimenter, (3) manipulation of the UI to demonstrate a lower-spine flexion/extension exercise conducted by the participant, (4) visualization of the virtual avatar and animated exercise by the participant, and (5) the completion of the SUS questionnaire and providing of feedback by the participant.
Mobile app
For the mobile UI, all actions were conducted by the participant under the supervision of the experimenter as follows: (1) execution of the app on the mobile phone, (2) configuration of the virtual avatar movement ranges, (3) visualization and confirmation of the exercise animation, (4) observation of the exercise animation and taking of the reference photos, (5) comparison of the virtual avatar animation with the reference pictures, and (6) completion of the SUS questionnaire 19 and provision of feedback by the participant.
Motion tracking
For the motion tracking stage, the experiment consisted of tracking the flexion and extension of the torso after the participant observed the explanation from a facilitator, the mobile app, or the manikin-based UI. The tracking was performed using an MPU9250 IMU and a Microsoft Kinect V2 sensor. Each participant was asked to bend over and attempt to touch their toes with their fingers while maintaining the legs straight for a maximum of 10 s, while the flexion angle was measured using a digital goniometer. This posture was employed as a reference to calibrate the IMU. After calibration, participants were asked to perform five torso flexion and extension movements according to the directions presented in the manikin-based UI and the mobile app.
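The goniometer reading can serve as a span reference for rescaling the raw IMU pitch, with the upright posture as zero. The helper below is a hypothetical sketch of such a linear calibration; the function names and the mapping itself are our assumptions, not the authors' exact procedure:

```python
def make_calibrated(imu_upright_deg, imu_full_flexion_deg,
                    goniometer_full_flexion_deg):
    """Return a function mapping a raw IMU pitch reading (degrees) to a
    calibrated flexion angle, anchored at the upright posture (0 deg)
    and at the goniometer-measured maximum flexion."""
    span = imu_full_flexion_deg - imu_upright_deg
    if span == 0:
        raise ValueError("upright and full-flexion readings must differ")
    scale = goniometer_full_flexion_deg / span
    return lambda raw_deg: (raw_deg - imu_upright_deg) * scale
```

For instance, if the IMU reads 2° when the participant stands upright and 92° at the goniometer-measured 90° of full flexion, the returned function maps raw readings onto the 0–90° goniometer scale.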
Results
Results are grouped into two main sets, one presenting the usability data and the other presenting the motion tracking data.
Usability results
Participants expressed great interest in the UIs and highlighted the importance of having the manikin-based UI to demonstrate proper body movements. The simplicity of the mobile app was also well received, as participants found it easy to use. The average SUS results for each of the conditions examined are presented in Table 1, where the results are divided into two types of users: office workers and fitness center users.
Table 1. SUS comparison.

Group                   Manikin UI   Mobile app
Office workers          87.8         72.8
Fitness center users    77.8         84.8

SUS: System Usability Scale.
Both the manikin and the mobile app produced SUS scores above 70/100, indicating that both are usable, although there is room for improvement (see Table 1). According to the results, key improvement areas common to both UIs include complexity, inconsistency, and the need for technical support, which led participants to request further information and to feel uncertain about achieving the required tasks. The SUS scores show a difference between office workers and fitness center users: the first group prefers the manikin-based UI, with a score of 87.8/100 against 72.8/100 for the app, while the second group is more interested in the mobile app, with a score of 84.8/100 for the app against 77.8/100 for the manikin. This result can be attributed to the portability of the app and the possibility it gives fitness center users to evaluate and better understand whether they performed the exercises correctly or incorrectly. Office workers preferred the manikin UI since it provided a better understanding of how to perform the occupational healthcare exercises.
To further analyze the data, we conducted an analysis of variance (ANOVA) on the obtained usability data. Our null hypothesis was that there are no usability differences between the manikin UI and the mobile app when employed to demonstrate lower-spine exercises. ANOVA results showed no significant difference between groups (F = 0.038, p = 0.847) or between the types of UI (i.e. app vs manikin) (F = 1.602, p = 0.216). However, results showed a significant difference for the interaction between groups and the type of UI (F = 11.611, p = 0.002). Further pairwise comparisons showed a significant difference in usability for participants who were not engaged in regular physical fitness. Specifically, participants who did not follow a regular physical fitness program/routine rated the manikin-based UI as more usable than the app (p = 0.003). For the participants who were involved in a regular physical fitness routine, there was no significant difference in usability between the UIs (p = 0.141). Future research will investigate why there are no usability differences for the fitness-engaged group and why the non-fitness-engaged group perceived the manikin-based UI as more usable.
Although the mobile app can be used with or without the assistance of another person to take the reference pictures, 19% of the participants expressed interest in inviting others to assist during the exercise as a means of promoting cooperative or competitive physical activity. Despite the positive perception of the app, 19% of the participants preferred traditional training guides, as they believed an app would decrease their performance.
We also discussed both UIs with healthcare specialists (one physician, one fitness instructor, a team of three orthopedic surgeons, and one physiotherapist), who provided the following insights: (1) both UIs provide the potential to help people better understand how exercises should be performed, (2) experts expressed their interest in seeing further development in motion capture to keep track of user progression in comparison to the target exercises, and (3) they believed that the UIs can have applications in occupational health care, physiotherapy, and fitness training.
Motion tracking results
After obtaining the usability results, we acquired motion tracking data with the IMU and the Microsoft Kinect V2 sensor from participants who executed a back flexion and extension exercise with five repetitions after using the mobile app and the manikin-based UI. The participants were third- or fourth-year engineering students who volunteered to perform the exercise after using both the mobile app and the manikin-based UIs. The participants were distributed into groups of three, each group receiving the back exercise instruction through the oral explanation, the mobile app, or the manikin-based UI, respectively. After completing the instructions, participants executed the posture movements to the best of their abilities based on their understanding of the exercise. Before starting the motion tracking, each participant was asked to fully bend while maintaining the legs straight in order to measure the maximum flexion with a digital goniometer. This measurement provided a reference for the motion-captured data.
After obtaining the goniometer measurements, the participants were instructed on how to wear the IMU sensor, asked to stand in front of the Microsoft Kinect V2 sensor, and informed when to initiate the exercise. Motion capture synchronization between the IMU and the Microsoft Kinect V2 sensor was achieved through a start/stop button on the Arduino system, which the participant activated when ready to begin the exercise, and a conditional initialization in the Microsoft Kinect–based environment, where motion tracking is only recorded once back flexion is detected from the standing straight position.
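The conditional initialization on the Kinect side can be sketched as a simple gate that discards samples until the first flexion beyond a small threshold is observed. The threshold value and function name below are illustrative assumptions rather than the values used in the study:

```python
def recording_gate(flexion_samples, start_threshold_deg=5.0):
    """Yield flexion samples (degrees) only after the angle first exceeds
    the start threshold, i.e. once the participant leaves the standing
    straight position."""
    started = False
    for angle in flexion_samples:
        if not started and angle >= start_threshold_deg:
            started = True  # Latch: keep recording for the rest of the trial.
        if started:
            yield angle
```

Latching the gate (rather than testing every sample against the threshold) ensures that the subsequent return toward the upright position is still recorded.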
A sample of the motion-tracked data taken from one participant while performing the exercise following the oral explanation, after using the mobile app, and after using the manikin-based UI is illustrated in Figure 6. It is worth noting that participants interacting with the manikin-based UI reached an average flexion of 62.2° captured with the Microsoft Kinect V2 sensor and an average of 62.7° with the IMU; participants who interacted with the mobile app UI reached an average flexion of 68.1° with the Microsoft Kinect V2 sensor and 68.5° with the IMU.

IMU and the Microsoft Kinect V2 data compared after receiving an oral explanation, the mobile app, and the manikin-based UI.
We observed the following from the motion-captured data: (1) faster back movements introduced more noise than slower-paced ones, as a result of the limitations of the filtering supported by the Arduino platform; (2) the Microsoft Kinect V2 smooth filtering added noise to the motion capture within a zero-to-two-degree range, which was disregarded given our focus on larger flexion and extension movements; (3) the Microsoft Kinect V2 sensor motion tracking was often lost beyond a 70° flexion due to occlusion of the torso by the head getting in front of the camera, which caused concern among the participants regarding the reliability of the metrics; (4) the clothing worn by the participants negatively affected the motion tracking with the Microsoft Kinect V2 sensor, where in some cases the experimenter had to ask the participants to take off their sweaters or jackets; and (5) the elastic band used to affix the IMU to the participant did not cause any discomfort, although during full flexion an addition of five degrees to the measurement, caused by a force exerted by the wires on the sensor, was observed. However, this did not constitute a concern, as the deviation was within the mean deviation of the measurements. Overall, both sensors performed similarly, thus providing the opportunity to integrate them as a complementary motion tracking solution. We also observed hardware limitations associated with the Arduino board, whose filtering did not capture faster or finer movements appropriately, and that the Microsoft Kinect V2 sensor had a lower resolution than the IMU, with its smooth filtering often cutting data when tracking was lost, as can be seen in the data tracked during the oral explanation (see Figure 6).
Discussion and conclusion
Here, we have presented the development of two UIs (a manikin UI and a mobile app) to demonstrate proper lower-back exercise motions, and we assessed their usability in order to develop an effective tool for performing lower-spine exercises. Previous work related to occupational health care has focused on the design and development of virtual environments for training purposes. In contrast, we proposed a system to engage users and evaluate their performance of spinal column flexion and extension exercises. We also presented the results of a series of preliminary experiments comparing the usability of both UIs using the validated System Usability Scale (SUS) questionnaire.
Our preliminary results show that the manikin-based UI obtained a higher usability score than the mobile UI. We believe this result stems from the fact that the manikin allowed physical manipulation and observation of the postures, whereas with the mobile UI the size of the device's screen may hinder proper visualization of the exercise and picture. The SUS scores and the statistical analysis show a difference between office workers and fitness center users: office workers preferred the manikin UI, while fitness center users preferred the app. We believe these results relate to the utility of each UI for each group, where the app helps users evaluate their exercise performance while the manikin helps users better understand the exercises to be performed. We acknowledge that the novelty associated with using new technology and tools can wear off and thus may affect motivation and engagement with any exercising tool. However, our analysis showed no significant differences between the two UIs or between the two groups, other than a significant difference between the participants who exercise regularly and those who do not.
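For reference, the SUS scores discussed above follow Brooke's standard scoring procedure: each odd-numbered (positively worded) item contributes its response minus one, each even-numbered (negatively worded) item contributes five minus its response, and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal Python sketch:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from the
    ten 1-5 Likert responses of a single participant.
    Odd-numbered items are positively worded, even-numbered negatively."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100
```

Per-group means and significance tests would then be run over these per-participant scores.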
In addition to the usability of the two developed interfaces, we also compared the capabilities of the IMU and the Microsoft Kinect V2 sensor to track user movements. Based on the results presented here, tracking differences were minimal and within the mean deviation of the tracked movements, indicating that both the IMU and the Microsoft Kinect V2 sensor performed similarly. This presents an opportunity to implement redundant or complementary motion-tracking systems, where commercial exergames may incorporate an IMU to provide quantifiable data or, in stand-alone systems, provide motion-tracking redundancy to better represent a user's progression during exercise. At this point, the current choice of hardware and software allows for the motion quantification of slow-paced back exercises with consumer-grade hardware that can be acquired without a large investment. The Arduino hardware could also be integrated into mobile devices, computers, and gaming systems, thus providing expansion and user-impact opportunities in health care.
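A complementary tracking scheme of the kind suggested above could be sketched as follows (a hypothetical Python example, assuming synchronized per-sample angle streams and a Kinect tracking-validity flag; this is not the system implemented in the study):

```python
def fuse_angles(imu_deg, kinect_deg, kinect_tracked, w_kinect=0.5):
    """Blend IMU and Kinect flexion estimates sample by sample,
    falling back to the IMU alone whenever Kinect skeletal tracking
    is lost (e.g. past roughly 70 deg of flexion, when the head
    occludes the torso in front of the camera)."""
    fused = []
    for imu, kin, ok in zip(imu_deg, kinect_deg, kinect_tracked):
        if ok:
            # Weighted blend while both sensors agree on tracking.
            fused.append(w_kinect * kin + (1 - w_kinect) * imu)
        else:
            fused.append(imu)  # redundancy: the IMU covers the dropout
    return fused
```

Since the two sensors agreed to within the mean deviation in our measurements, even this simple weighted blend with dropout fallback would preserve a continuous motion profile through Kinect occlusions.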
Based on the feedback from the healthcare professionals and the participants, we can conclude that both UIs have the potential to impact back-posture-related physical activity from a usability point of view. Furthermore, the technology presented generated interest among all participants. Although our results are preliminary and much work remains, future work will focus on developing an exergaming system equipped with signal-processing algorithms to minimize error and capture faster and finer movements, along with a scenario creation/editing tool that allows physicians to customize ranges of movement and tracking metrics to increase exercise performance and motivation. In addition, we will explore different open electronics hardware with more processing capability than the Arduino platform. Further research will also focus on studying the effects of the UIs within an exergame over a larger participant group to determine their effectiveness.
Footnotes
Handling Editor: Tadeusz Mikolajczyk
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors thank Universidad Militar Nueva Granada’s VP of Research for the financial support of Project INV-ING-2377 during 2017.
