Abstract
A vision for the future of space exploration and operation is one in which people live and work in space as members of human–multi-agent teams, composed of humans, robots, and other artificially intelligent agents. As such, people’s opinions on robotics and AI, especially the opinions of those who will likely be users of such systems, should be taken into account in system requirements, design, and implementation. This work analyzes interviews conducted with U.S. Space Force (USSF) Guardians to ascertain these opinions. Overall, USSF Guardians expressed an understanding of the limitations and benefits of what humans have to offer compared to AI and robotic systems, such as differences in emotional variability and decision-making abilities. Also discussed is the desire for limitations to be placed on these types of systems so that humans remain involved in certain tasks. Finally, USSF Guardians described which aspects of their work they believe could benefit from a robotic AI system. These themes could be translated into concrete design features of multi-agent systems, such as task allocation, making those systems not only successful but also desirable for users to work with.
Introduction
The recent establishment of the United States Space Force (USSF) reflects a larger growing effort to expand exploration and operations in space in both the public and private sectors. As humans migrate to space, they will most likely be supported by teams of non-human agents including collaborative robots, autonomous rovers, and intelligent decision aids. They may be further supported by satellites, sensors, and other intelligent technologies used to aid human decision making, support human-robot and human-machine teamwork, and meet the goals of exploration missions. For example, the NASA exploration campaign (NASA, 2018) outlines a future in which humans are supported by constellations of autonomous machines across the automation-to-autonomy spectrum. These agents and humans will work together as part of human–multi-agent system teams to succeed in the space domain. Vital to their success is the design of such systems, including what capabilities and limitations they should have. Equally important are their perceived reliability and their overall impact on organizations like the USSF (Scrapchansky et al., 2024). Prior work has revealed that in multi-agent systems, the unreliability of any one machine in a fleet has the potential to reduce the trust placed in the rest, thereby reducing reliance and diminishing the utility of the system as a whole (Rice & Geels, 2010; Walliser et al., 2023).
Thus, there is a need for research that investigates creating and preserving successful human–multi-agent system interactions for space operations (de Visser & Parasuraman, 2011; Dzindolet et al., 2003; Hambuchen et al., 2021; Ocon, 2010). The goal of this work was to contribute to this need by conducting interviews with subject matter experts (SMEs) in the space domain, in this case active duty USSF Guardians, as has been done in the past with other SMEs (Yin et al., 2022). The interviews focused on the potential role of autonomous robots and AI agents in supporting space operations, especially as they relate to the use of multi-agent systems to manage low Earth orbit and other USSF assets. The specific aim of this project was to investigate the following research questions:
• RQ1: What are potential opportunities for human–multi-agent systems in space, especially when those systems represent a constellation of technologies and distributed human team members?
• RQ2: What are some foreseeable challenges to integrating these technologies into space operations?
Methods
Six USSF Guardians (one woman, five men) stationed at the U.S. Air Force Academy (USAFA) were interviewed for this study. The interviews were conducted by USAFA undergraduate cadets, who are also authors of this paper, primarily to support their senior capstone thesis. Before the interviews began, each USSF Guardian described their job duties and the interviewers briefly described human–multi-agent systems. The process involved a one-on-one interview between the students, who read each question aloud, and the USSF Guardians, who responded. Each interview lasted between 25 and 30 minutes. De-identified transcripts of those interviews were later used to generate the secondary results reported here (Scrapchansky et al., 2024).
The primary purpose of the questions was to ascertain the potential impact of AI and robots on Space Force operations, how Guardians would like such technology to behave, and their assumptions about the reliability of this technology compared to Space Force personnel. The interview questions included:
• Do you think utilizing robots and AI would simplify Space Force operations or make them more complex? Why?
• How would you handle conflicting information coming from an artificial agent and a Space Force member?
• If an AI robotic system was going to be implemented in the Space Force, what would you like to see in such a system? What would you want to avoid?
• Would you expect a system like this to be more or less reliable than Space Force members?
The students allowed responses to be given without interruption and asked only as many questions as time permitted. As a result, two interviewees did not answer the last question. Additionally, the students asked clarifying questions when necessary, but only if a response lacked adequate context or was confusing. After the interviews were completed, recordings were transcribed and data coding began.
Data Coding
A thematic analysis was performed in five parts following the guidelines set by Clarke and Braun (2017): familiarization with the data, generating initial codes, searching for themes in responses, reviewing potential themes, and defining and naming themes. Three of the authors started the process by examining each question and each corresponding response, one by one. Next, they highlighted information that seemed especially relevant to the questions and the overarching research goals. After completing those steps separately, the authors met and collectively began grouping the highlighted sections with other similar responses. Finally, the authors completed the data coding by creating themes and descriptions for those themes.
Results
Extracted themes and example responses are given in Table 1. Overall, those interviewed expected that AI and robots would be more reliable than their human counterparts. They also believed that such technology would make USSF operations simpler. Additionally, they provided insights into what they would like a system like this to have. The Guardians specified the requirement of imposing limitations on non-human agents while still taking advantage of the superior information processing capabilities of technology. This implies a desire for a middle-ground level of automation in these systems, in which humans do not need to handle every task, but the automation does not operate unchecked (Parasuraman et al., 2000).
Table 1. Themes in Responses to Interview Questions and Representative Example Responses.
Theme 1: Acknowledging Human Limitations
The first theme highlights an understanding of the advantages that a multi-agent system, and especially AI, may have over humans. Specifically mentioned is the fact that AI lacks emotion, something that can interfere with a person’s ability to perform their job. For example, one participant commented “. . .the computer is not gonna have a bad day” (Comment #3, Table 1). Other responses echoed this attitude and also mentioned that artificial agents would likely be able to complete some tasks faster than a human could.
Theme 2: Acknowledging Machine Limitations
The second theme highlights the opposite of Theme 1: what non-human agents lack. Responses focused on human decision-making and dynamic thinking skills. One participant stated “The computer may not be able to think through the different dynamics” (Comment #5, Table 1). Decision making came up frequently as a concern regarding AI. Even though participants recognized that AI had the potential to reach a conclusion faster than they might, they expressed concerns that it lacks the ability to consider all important factors when making a decision.
Theme 3: Human Involvement/Limitations for Systems
Third is what limitations should be placed on multi-agent systems, specifically that some level of human involvement should be preserved. Similar to Theme 2, the interviewees seemed concerned about the decisions that AI will make, and therefore they want humans to be involved. As one participant expressed, “I don’t want it to make decisions” (Comment #8, Table 1). Another participant went further and discussed the possibility of an AI getting out of human control: “something. . .where we are able to put bounds on it as well where it just doesn’t run rampant and its making decisions on its own or giving it enough ability to then make the ultimate decisions where they then block out the human to protect itself” (Comment #7, Table 1). This theme is especially important to consider for the design of multi-agent systems.
Theme 4: Future Applications to Aid Humans
The final theme is about what these end users imagine multi-agent systems doing in the future, or what they would like to see them do. Most examples focused on assisting humans with information processing and taking over tasks that would impose a high workload on humans. For example, one participant commented “we could use AI to take care of the simple things, so that we can focus on more complex things” (Comment #11, Table 1). Participants did not seem to believe that a multi-agent system could, or should, replace humans, but rather that it should work for them to make their jobs less taxing.
Discussion
The purpose of these interviews and analyses was to uncover potential challenges and opportunities for multi-agent systems in space. In part, these efforts were performed to inform the design of such systems in the future, specifically for USSF operations. As the USSF is the newest branch of the U.S. military, it is important to understand USSF Guardians’ novel expertise and opinions on this matter. Four themes came to light through this process: (a) human limitations of emotion, (b) machine limitations of decision making, (c) requests for purposely imposed machine limitations, and (d) end users’ wants and preferences.
There are three major takeaways from these themes. First, the benefits of implementing a multi-agent system in the USSF are clear. Guardians discussed the benefits that non-human agents could have on their work. They do not have bad days, they do not get emotional, they are faster than humans when it comes to tasks such as information processing, and they can be given monotonous tasks that could save Guardians’ valuable time.
Interestingly, although Guardians recognized some benefits of unemotional machines, prior research supports the role that emotions and emotional intelligence can play in facilitating human-human teamwork. Emotions can build social capital among team members and facilitate effective team processes (Druskat & Wolff, 1999). And when robots that express emotions are integrated into human-robot teams, they can facilitate higher levels of trust in the team (Correia et al., 2018). For human–multi-agent systems envisioned for space operations, future work could explore when machine emotional expression could support teamwork or how best to use emotion in these contexts.
The second major takeaway is the potential shifting of responsibilities of USSF Guardians. If multi-agent systems were implemented in a way that is in line with the expectations discussed in the interviews, Guardians could be freed up to work on more complex tasks, especially those that require decision making and dynamic thinking. USSF operations could expand if the same number of people could accomplish more work, which could lead to overall improvement in efficiency and innovation in the USSF.
And third, humans are a nonnegotiable part of multi-agent systems. While many comments were made about the trouble with human emotions, there was still overwhelming support for limiting non-human agents and keeping humans at the forefront of decision making. Requiring a human to be involved in a multi-agent system leads to many design considerations. For example, there needs to be continued research on subjects such as communication between human and non-human agents and long-duration interactions between agents, within the unique context of space. However, the effort required to involve humans in a multi-agent system is worth the investment to create an effective system, as USSF Guardians were concerned both with machines’ ability to make decisions and with the possibility that humans could not override those decisions.
These conclusions could also influence how tasks are allocated between human and non-human agents in a system. In some ways, the current findings align with the theory of function allocation expressed in Fitts’ list, written in 1951 (Fitts, 1951). In a system made up of machines and humans, assigning tasks properly is important. Fitts theorized that people would be better at things such as inductive reasoning and making judgements, while machines would be better at performing repeated tasks and handling complex operations. It is impressive that while technology has progressed very rapidly over the last 70-plus years, this theory shows that opinions of how systems of machines and humans should work together have somewhat held consistent (De Winter & Dodou, 2014). In other ways, these results contradict Fitts’ list. For example, in Fitts’ article, emotional stress is discussed as a potential drawback for humans, but it also states that sometimes humans will perform better under stress than machines will (Fitts, 1951). In the interviews for this study, human emotional variability was brought up solely as a problem, and participants also indicated that they would prefer machines handle stressful tasks. Additionally, there are themes in this study that were not touched on in Fitts’ list. The concern with machines having unlimited abilities that are not checked by humans does not appear in Fitts’ article. This is not surprising, as autonomy for computer systems is very different today, with things like AI, than it was in the past. Therefore, there is a need for research that expands upon Fitts’ theories. Fitts’ list serves this paper best as a benchmark that demonstrates how opinions of human-machine teams have stayed the same, how they have changed, and what was not originally considered that may be relevant today.
The interviews collected in the current study add to the discussion of task allocation in multi-agent systems by detailing which tasks potential users want assigned to whom, and why. They may also one day serve as another snapshot in time to compare to as technology, and people’s opinions of it, evolve.
Conclusion
The interviews discussed in this paper were conducted to identify opportunities and potential challenges that may impact human–multi-agent systems in USSF space operations. This process provided important information that should be used to inform the design of space multi-agent systems in the future, such as knowledge about expected reliability and the desire to place limitations on what non-human agents are allowed to do. Opinions were gathered from USSF Guardians and should be taken into account, as they will be some of the end users of this type of technology, and human-centered technology typically needs to be seen as valuable for users to adopt it. Additionally, the answers provided could be used to identify avenues of future research for general space operations. Future research could include interviewing a larger and more diverse group of USSF Guardians, as well as those involved with space exploration in other government organizations and commercial ventures.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by George Mason University’s Office of Research, Innovation, and Economic Impact (ORIEI) award 215134, and the Air Force Office of Scientific Research award numbers 21USCOR004, FA9550-18-1-0455, and FA9550-21-1-0359. The views expressed in this paper are those of the authors and do not reflect those of the U.S. Air Force, the U.S. Space Force, the Department of Defense, or the U.S. Government.
