Abstract
Robots are used in many different ways in today’s society. Teleoperated robots are robots that can be controlled from a distance. This is commonly done with remote controllers while observing the robot through a screen, but virtual reality (VR) is another possibility. VR teleoperation has many benefits, but one of the main applications for these robots is operating in dangerous environments, where the sense of immersion that VR creates may become problematic. If operators feel that they themselves are in danger, their performance may suffer. This paper introduces Applied Adaptive VR (AAVR) as a potential solution to this problem. AAVR combines current VR technologies, such as fear modeling, that could be applied to real-world teleoperation tasks. The purpose of AAVR is to have VR environments adapt during teleoperation to become less fear-inducing, which is expected to improve performance. Relevant topics of interest, along with methods and recommendations for developing such a system, are discussed.
Introduction
The use cases for robots have been increasing since their invention, including robots that can be teleoperated. Teleoperated robots are robots that can be operated from a distance; Mars rovers are one example. Traditionally, these robots are operated using remote controllers while observing the robot’s movements through a screen. However, virtual reality (VR) is a new possibility that places the operator in the robot’s shoes (Hetrick et al., 2020; Whitney et al., 2018). In VR, the operator has a first-person perspective, and the robot mimics the movements the operator makes in the real world.
Controlling robots in VR in this way can increase immersion and presence, the feeling of actually being in the environment the robot is in (North & North, 2016). The opportunity to have a first-person immersive experience when operating robots is attractive for many domains that require detail-oriented work to be done by robots, such as robots that perform minimally invasive surgery (Xia & Lu, 2021). VR has been shown to increase spatial awareness, improve interactive task performance, and decrease information clutter (Bowman & McMahan, 2007). Another major reason teleoperated robots are used is to perform tasks that would be dangerous and potentially life-threatening to humans, as with bomb disposal robots (Hetrick et al., 2020). It stands to reason that these two purposes will overlap considerably, with dangerous yet intensive work needing to be done, for which VR operation seems like an ideal solution. However, this may not currently be the case.
It has been shown that people experience real fear when encountering frightening situations in VR, even though they themselves are physically in a safe environment (Pallavicini et al., 2018). In fact, anxiety experienced in VR can actually increase feelings of immersion (Bouchard et al., 2008). It is safe to assume that when people fear for their personal safety, they will find it difficult to focus solely on their performance. One study showed that people teleoperating a robot in VR while interacting with high-risk machinery had an increased perception of risk, and their performance was negatively impacted (Shin et al., 2021). That study mitigated some of these problems by displaying the users’ hands in the virtual environment as robotic rather than human (Shin et al., 2021). This is a valuable step toward maintaining performance, but there are countless potential situations in which users will perceive risk, and this strategy will not work for all of them. There is a need to develop universal strategies that prevent performance decrements when operators working in VR encounter dangerous situations, without losing the benefits that VR provides. The purpose of this paper is to demonstrate the needs, challenges, and opportunities for developing these strategies, referred to here as Applied Adaptive Virtual Reality (AAVR).
Virtual Reality
Immersion
Immersion has been described as tricking the user’s senses into thinking that they are in an environment other than where they actually are (Patrick et al., 2000). Immersion has been defined as having three levels (Brown & Cairns, 2004). The first and least immersive is engagement, which requires effort, attention, and appropriate game mechanics. The next level is engrossment, which requires those aspects as well as emotional reactions to the environment, such as interest in the tasks. Finally, there is total immersion, which requires a sense of flow supported by the atmosphere of the virtual environment (Brown & Cairns, 2004).
So, how does VR accomplish immersion? Users wear headsets that track their head movements and cover their whole field of view (Lin, 2017). This gives the impression that what they are seeing is real. VR also gives the user auditory and haptic responses to the virtual environment they are experiencing. Controllers allow users to interact with the virtual environment, and the user’s movement in the real world is reflected in what they are seeing (Lin, 2017). Additional ways of creating immersion include wide-angle projection, cylindrical screens, and warp-and-blend technology (North & North, 2016).
Fear
Virtual reality causes more intense emotional reactions than non-immersive desktop applications (Pallavicini et al., 2019). This has been supported with both self-report and physiological measures (Pallavicini et al., 2019). Many VR horror games, such as Resident Evil 7: Biohazard, are effective at causing feelings of fear (Pallavicini et al., 2018). Along with the virtual environment itself, individual differences can also affect the amount of fear experienced (Lin, 2017). For example, people with certain personality traits, such as neuroticism, experience more intense fear in VR (Lin, 2017).
In VR, as in the real world, people use coping mechanisms to deal with the fear they experience (Lin, 2017). Examples include avoidance, such as denial and disengagement; self-talk, such as telling themselves that what they are experiencing is not real; and closing their eyes or turning their head to avoid seeing what is scaring them (Lin, 2017). All of these coping strategies could negatively impact the user’s ability to complete their tasks. Individual differences such as gender have also been shown to influence which coping strategy is used (Lin, 2017).
Adaptive VR
Interestingly, there has been research into how to specifically adapt VR horror games to make them more fear-inducing (de Lima et al., 2022). This is an example of adaptive VR (Aarno et al., 2005; Heguy et al., 2001; Rodriguez et al., 2002). Biofeedback, such as heart rate and galvanic skin response data, can be used to identify when a player is experiencing fear (Dekker & Champion, 2007). This information can be gathered relatively noninvasively, for example with a device that clips onto the ends of the user’s fingers. From that information and gameplay behaviors, machine learning and player modeling can be used to identify what is scaring users the most, such as darkness or unknown sounds (de Lima et al., 2020; de Lima et al., 2022). When comparing predicted fears with self-reported fear ratings, one model correctly identified fears with 93% accuracy (de Lima et al., 2022). Another model was able to predict fears in around 3 ms with an 80% success rate (de Lima et al., 2020). Once fears have been identified, the game is then adapted to include more of them and therefore become scarier for that specific player.
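As a purely illustrative sketch of the general approach, a simple fear classifier could be trained on windowed biofeedback and gameplay features. The feature set, synthetic data, and logistic regression model below are assumptions made for illustration; they are not a description of the cited models.

```python
# Minimal sketch of a biofeedback-based fear classifier (illustrative only).
# Feature names, window sizes, labels, and data are hypothetical assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is one time window: [mean heart rate (bpm), heart rate variability,
# mean galvanic skin response (microsiemens), in-game darkness level 0-1].
# Synthetic stand-in data; a real system would log these from sensors and the game.
calm = np.column_stack([rng.normal(70, 5, 200), rng.normal(60, 10, 200),
                        rng.normal(2.0, 0.3, 200), rng.uniform(0.0, 0.5, 200)])
afraid = np.column_stack([rng.normal(95, 8, 200), rng.normal(35, 10, 200),
                          rng.normal(6.0, 1.0, 200), rng.uniform(0.5, 1.0, 200)])
X = np.vstack([calm, afraid])
y = np.array([0] * 200 + [1] * 200)  # 0 = calm, 1 = fearful

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimate fear probability for a new window of sensor readings.
window = np.array([[92.0, 40.0, 5.5, 0.8]])
print(f"Estimated probability of fear: {model.predict_proba(window)[0, 1]:.2f}")
```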
If the models discussed above, or similar strategies, were employed in the opposite direction from their current purpose, the least fear-inducing scenario could be tailored to each person and circumstance by identifying specific fears. Settings could be adjusted based on what makes specific stimuli frightening and on the individual differences that make particular users more susceptible to fear and to damaging coping mechanisms.
Just Noticeable Difference
When advocating for changes to VR environments during use, it is important to consider how those changes will be made. Ideally, they would happen without the user noticing. This is probably not possible for every change, but for those where it is, the concept of just noticeable differences is worth discussing. Just noticeable differences are the thresholds at which changes become detectable by humans (Lee et al., 2015). One example of this in VR is fingertip tracking: a study found that the just noticeable difference between where a user’s fingertips were in the real world and where they were displayed in VR was 5.23 cm when interacting with an object 30 cm away (Lee et al., 2015). For visual changes, a model has been created for detecting just noticeable differences in VR (Liu et al., 2018). This model takes images and predicts the maximum amount of change that can be applied to an image without participants noticing (Liu et al., 2018).
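A minimal sketch of how such a threshold could be used in practice is shown below, taking the 5.23 cm fingertip-tracking value reported by Lee et al. (2015) as the example threshold. Treating noticeability as a single fixed cutoff is a simplifying assumption, not a validated perceptual model.

```python
# Illustrative check of whether a proposed hand-offset adjustment would likely be
# noticed, assuming the fingertip-tracking just noticeable difference reported by
# Lee et al. (2015) for objects about 30 cm away.

FINGERTIP_JND_CM = 5.23  # reported detection threshold at ~30 cm interaction distance


def change_is_noticeable(proposed_offset_cm: float,
                         jnd_cm: float = FINGERTIP_JND_CM) -> bool:
    """Return True if the displayed-vs-real offset meets or exceeds the threshold."""
    return abs(proposed_offset_cm) >= jnd_cm


# Example: a 3 cm offset falls under the threshold, a 6 cm offset does not.
print(change_is_noticeable(3.0))  # False -> likely unnoticed
print(change_is_noticeable(6.0))  # True  -> likely noticed
```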
Future Directions
VR teleoperation is useful for many domains, but it may be problematic for the operators in some dangerous situations. AAVR strategies that utilize current VR modeling technology could be the answer to this problem.
Applied Adaptive VR
There are multiple opportunities for advancement in maintaining performance during VR teleoperation while mitigating fear, starting with identifying the fears of individual users. Not only can fear impact performance, but anxiety also increases feelings of immersion (Bouchard et al., 2008). This could create a cycle in which the virtual environment feels real and therefore frightening, and feeling frightened in turn makes the environment feel even more real. Any known fears should be taken into consideration when adapting the VR environment to the user. Fears could be identified beforehand through users’ self-reports, and it would also be advantageous to avoid stimuli that cause fear across the board, such as darkness and unknown sounds (de Lima et al., 2022). Fears could also be identified in real time through physiological measures, including heart rate monitoring and skin responses (Dekker & Champion, 2007), which would show when the user experiences fear as it happens.
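As a sketch of how these two sources of information could sit together, an operator profile might hold self-reported fears and calibration baselines, with a simple real-time check flagging elevated arousal. The data structure, the 20% heart-rate rule, and all thresholds below are hypothetical assumptions for illustration only.

```python
# Illustrative sketch: pairing self-reported fears with a real-time physiological flag.
from dataclasses import dataclass, field


@dataclass
class OperatorFearProfile:
    # Fears reported before the session; used downstream to choose which
    # aspects of the environment to adapt first (hypothetical example values).
    self_reported_fears: set = field(default_factory=set)  # e.g. {"darkness", "heights"}
    baseline_heart_rate: float = 70.0  # bpm, measured during a calm calibration period
    baseline_gsr: float = 2.0          # microsiemens, calibration baseline


def fear_detected(profile: OperatorFearProfile,
                  current_heart_rate: float,
                  current_gsr: float) -> bool:
    """Flag fear when heart rate or skin response rises well above baseline."""
    elevated_hr = current_heart_rate > 1.2 * profile.baseline_heart_rate
    elevated_gsr = current_gsr > 2.0 * profile.baseline_gsr
    return elevated_hr or elevated_gsr


profile = OperatorFearProfile(self_reported_fears={"darkness"}, baseline_heart_rate=68.0)
print(fear_detected(profile, current_heart_rate=90.0, current_gsr=2.1))  # True: elevated heart rate
```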
Using these fear responses, subtle and well-timed scaling down of the immersive technology could be one way to mitigate fear and hesitation, as it may create some disconnect between the robot and its operator during dangerous tasks. Less immersion has been shown to have some benefit for users performing tasks in non-complex situations, such as object manipulation (Bowman & McMahan, 2007), which implies that total immersion is not warranted in every case. It will be difficult to find the right balance of these strategies so as not to lose the benefits of immersion along the way.
It is also important to note that physical fidelity is not the same as cognitive fidelity. A simulated environment does not have to look extremely similar to the real one in order to accurately reproduce the mental processes needed to accomplish tasks in that environment (Bockelman Morrow et al., 2011; Goetz et al., 2012). This has been shown in training simulators that maintain cognitive fidelity even when the physical representation of the environment is not hyper-realistic, as long as they represent the situations and processes encountered in the real world (Bockelman Morrow et al., 2011; Goetz et al., 2012). It stands to reason that if some kind of lower-physical-fidelity visual filter were applied to a VR teleoperation environment in order to decrease feelings of immersion, operators could still function properly.
Another promising area to investigate is how techniques for increasing fear can be used to do the opposite. Multiple models tailor the game experience to users’ fears in real time (Dekker & Champion, 2007; de Lima et al., 2020, 2022). The same approach could instead be used to reduce fear in VR. If what the user specifically fears at a given moment is known, then that fear can be lessened; VR, unlike actual reality, is inherently malleable. For example, if darkness is causing the user fear, the brightness of the environment can be increased. So although VR can cause real fear, there are many possible ways to mitigate it.
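One way to picture this inversion is a simple mapping from detected fear sources to mitigation actions, as sketched below. The environment parameters, the specific mappings (e.g., raising brightness when darkness is the detected source), and the step sizes are hypothetical examples, not a validated set of interventions.

```python
# Illustrative sketch: turning a fear model's output into mitigation actions.

MITIGATIONS = {
    # Detected fear source -> action on a simple environment-settings dictionary.
    "darkness": lambda env: env.update(brightness=min(1.0, env["brightness"] + 0.2)),
    "unknown_sounds": lambda env: env.update(ambient_audio_gain=max(0.0, env["ambient_audio_gain"] - 0.3)),
}


def mitigate(detected_fear_source: str, environment: dict) -> dict:
    """Apply the mitigation associated with the detected fear source, if one exists."""
    action = MITIGATIONS.get(detected_fear_source)
    if action is not None:
        action(environment)
    return environment


env = {"brightness": 0.3, "ambient_audio_gain": 0.8}
print(mitigate("darkness", env))  # {'brightness': 0.5, 'ambient_audio_gain': 0.8}
```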
When making changes in VR that the user might notice, it may be wise to offer some explanation of what they are experiencing. For example, users could be told that the VR environment will adapt to enhance focus and filter out distractions, when in reality it is adapting to lessen fear. It is also important that the changes do not become distracting by being too drastic or happening too quickly, or they risk negatively impacting performance and task completion time. Such a system would need to be studied to determine how it affects performance, fear, and hesitation, so that modifications can be adjusted accordingly. As discussed above, known just noticeable difference thresholds should be taken into consideration, and models specifically built to detect these thresholds in VR could be a good resource for making subtle changes in real time.
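A gradual, rate-limited adjustment is one simple way to keep each change small. The sketch below moves an environment parameter toward a target in steps that stay below an assumed per-step noticeability threshold; the step size and threshold are hypothetical values, not measured just noticeable differences.

```python
# Illustrative sketch: adjusting an environment parameter gradually rather than
# all at once, keeping each step below an assumed per-step threshold.

def gradual_adjust(current: float, target: float, max_step: float = 0.05) -> float:
    """Move one step from current toward target, changing by at most max_step."""
    delta = target - current
    step = max(-max_step, min(max_step, delta))
    return current + step


# Example: brightening from 0.3 to 0.6 over several frames instead of instantly.
brightness = 0.3
for _ in range(8):
    brightness = gradual_adjust(brightness, target=0.6)
print(round(brightness, 2))  # 0.6 after six steps of 0.05, then holds at the target
```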
Accomplishing this will not be an easy feat. Although much of this technology already exists in the form of fear modeling, just noticeable difference modeling, and non-invasive biofeedback devices, significant changes are needed for it to apply to real-world, real-time VR teleoperation. This will be challenging, but worthwhile if it furthers the promising venture of robotic teleoperation.
Recommendations
The following are five recommendations for the development of applied adaptive VR systems for teleoperating in fear-inducing situations.
1. Create virtual environments tailored to individual users that can be manipulated in real time to mitigate the user’s fears.
2. Utilize biofeedback, self-report, and adaptive VR modeling to identify fears.
3. Scale down immersive technology and physical fidelity during certain fear-inducing tasks, while maintaining cognitive fidelity.
4. Avoid using totally immersive environments when they are not necessary.
5. Keep changes to the VR environment under just noticeable difference thresholds when possible, and alert users that the environment may change while they are operating in VR and that the changes are intended to improve their performance.
Following these recommendations, it is hypothesized that (a) the increased fear experienced when teleoperating a robot in VR, compared with non-immersive operation, will be mitigated by AAVR, and (b) VR teleoperation with AAVR strategies will yield better performance than VR alone or non-immersive operation.
Applications for Space Operations
Operations in space are particularly well suited to an AAVR system. VR teleoperated robots are already being utilized in space domains, such as Pilote on the International Space Station (ISS) (Guzman, 2021). Pilote tests how VR interfaces with robotic arms could be used in the future for endeavors like missions to Mars (Guzman, 2021). The appeal of VR teleoperated robots in the space domain echoes the reasons they are employed in other areas: space exploration typically involves complex, mission- or safety-critical tasks that could benefit from a first-person immersive perspective. However, the environments these robots would operate in are typically not hospitable to humans, so the challenges of operating in fear-inducing situations would apply. Examples of such situations in space include becoming stuck while completing tasks on another planet, working with volatile equipment, or becoming separated from safety equipment such as tethers. A first-person immersive experience during any of these may be good for completing tasks, but not ideal in terms of the fear, and therefore distraction, it may cause.
Fear is not the only intense emotional reaction that operators in space can expect to experience. For example, the overview effect occurs for some astronauts when they see the Earth from space; it is described as a powerful feeling of awe, or a cognitive shift, that can change an astronaut’s perspective of the Earth and humanity (Kanas, 2020). The novelty of space itself could lead to overwhelming feelings that distract operators. If changes could be made based on real-time emotions, measures such as obscuring non-task-related visuals until tasks are completed, or until operators become comfortable in previously unfamiliar environments, could be useful.
AAVR strategies could address these problems by decreasing the level of immersion, and in turn decreasing the fear and other emotional responses operators could have when operating robots in space. Preventing errors and delays while completing tasks is critical because space is an inherently dangerous environment in which mistakes can be catastrophic.
Conclusion
While the opportunities for teleoperated robots are just beginning to be explored, it is important to consider the challenges this type of operation faces. This paper addressed one type of scenario that may present difficulties: teleoperating in dangerous situations such as space. The creation of an AAVR system could mitigate fear and allow operators to focus on their performance. Several recommendations were introduced for this type of system based on current technologies that have the potential to be utilized for AAVR. An AAVR system is expected to mitigate fear and therefore improve performance compared with VR operation without the same considerations. VR has separately proven useful in detail-oriented tasks and in dangerous tasks. With Applied Adaptive Virtual Reality strategies developed, it could be a powerful tool for situations that require both.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by George Mason University’s Office of Research, Innovation, and Economic Impact (ORIEI) award 215134.
