Abstract
Small changes in a system can cause dramatic shifts in individual trust in automation, be it over-trust or under-trust, and such behavior may spread within a team of human agents and automation. Even within the same environment and system variables, trust evolution may differ from team to team due to small changes, which would make traditional statistical analysis inappropriate. Such contingencies are difficult to identify, but may determine a team’s success or failure. We explored how one possible contingent factor, day-to-day group interaction, may influence an individual’s trust development in an automated system represented by a virtual assistant. A preliminary exploration of the relationship between group interaction and trust differences between participants showed statistical significance (p = .042) but weak correlation (R2 = .03). Trajectory variation is largely attributable to individual differences or unaccounted-for contingent behaviors. Further analyses of contingent factors and trust development will explore individual and group dynamics over time.
Objectives
A small change in a system may lead to unexpected shifts in trust in automation, be it over-trust or under-trust, and such behavior may spread within a team of multiple human agents and automated systems. Even within the exact same environment and system variables, trust evolution may differ from team to team due to small changes that may be overlooked.
Seemingly insignificant events can have a large effect on evolutionary trajectories. Human behavior may show similar sensitivity. We have adopted the terms convergent and contingent from evolutionary biology, where evolution is shaped by repeatable events (i.e., convergence) or chance events (i.e., contingency). Paleontologist Gould (1989, p. 289), who proposed the radical theory of contingency in evolutionary biology, said, “replay the tape a million times . . . and I doubt that anything like Homo sapiens would ever evolve again.” Even when a species’ evolution has seemingly achieved equilibrium, new irreversible fates may emerge from small changes, as described by Waddington’s epigenetic landscape (Ferrell, 2012).
Trust in automation might sometimes follow similar contingent dynamics, which would make traditional statistical analysis inappropriate. Such non-linear dynamics in human-automation dyads have been identified to explain why groups of people sometimes gravitate to extreme, bi-modal levels of trust (Gao & Lee, 2006; Li et al., 2023). Such contingencies are difficult to identify, but these “small changes” may determine the eventual acceptance of technology or success of a team.
This study addresses an important gap in human-automation interaction: there are few longitudinal studies of trust in automation in a team that span more than a few hours of data collection. In a team-based study that spans multiple days, day-to-day group interaction is inevitable and may appear to exhibit convergent behavior. However, small changes may develop from day-to-day group interaction and have a strong effect on team members’ trust development in an automated system. To quantify day-to-day group interaction, we calculated the proportion of interactions between team members relative to the entire mission. We define the inverse of this proportion as the network distance between team members. We compared the trust differences between each pair of team members with the associated network distance.
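As an illustration, the network distance can be computed directly from pairwise interaction counts. The sketch below uses hypothetical participant IDs and counts; the actual counts came from the audio-based interaction identification described under Approach.

```python
from itertools import combinations  # handy if pairs must be enumerated


def network_distances(interaction_counts, total_interactions):
    """Compute network distance for each participant pair.

    interaction_counts: dict mapping a frozenset of two participant IDs
        to their number of shared interaction instances (hypothetical data).
    total_interactions: total interaction instances over the entire mission.
    """
    distances = {}
    for pair, count in interaction_counts.items():
        proportion = count / total_interactions
        # Network distance is the inverse of the interaction proportion:
        # pairs that interact more often are "closer" in the team network.
        distances[pair] = 1.0 / proportion if proportion > 0 else float("inf")
    return distances


# Hypothetical three-person example for illustration
counts = {
    frozenset({"P1", "P2"}): 30,
    frozenset({"P1", "P3"}): 10,
    frozenset({"P2", "P3"}): 60,
}
dist = network_distances(counts, total_interactions=100)
```

Pairs with no recorded interaction receive an infinite distance, which in practice would be handled before regression (e.g., by exclusion).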
The study explored how interactions among team members influence an individual’s trust in an automated system represented by a virtual assistant. The study aims to share this exploratory research effort and foster the development of methods to comprehend contingent behavior that stems from the dynamics of group interaction, which may alter the trajectory of trust evolution. If contingent behavior can be identified and used to predict the trajectory of trust evolution, the results may inform guidance for resilient automation design.
Approach
Four teams of four participants engaged in a 45-day simulation of a long-duration space mission. Our analysis focuses on one of the many assigned tasks. Each participant performed six trials (one per week), each involving an interactive task with the aid of a virtual assistant. The task involved maintaining the CO2 level of a simulated habitat through a computer-based procedure system named PRIDE. The virtual assistant’s reliability was high for the first two trials, low for the next two, and high again for the last two. Despite the large volume of longitudinal data, the study includes a small sample of teams.
Audio recordings captured crewmember conversations over the 45 days. Each participant was equipped with a microphone that recorded throughout the day for the duration of the study. The audio recordings were transcribed with whisperX (Bain et al., 2023), a derivative of OpenAI’s Whisper (Radford et al., 2022). When participants are close to each other, one microphone may pick up an utterance spoken by another participant. We define a conversational turn as an utterance captured by multiple microphones. We used Levenshtein distance (Appleton-Fox, 2023) to determine whether utterances captured by multiple microphones are similar enough to indicate that they are part of the same conversation. We assessed how shared conversations affect trust evolution. Hierarchical mixed models can assess the degree of contingent behavior through random slope effects: the degree to which the outcome depends on how individuals differ in response to similar situations.
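A minimal sketch of the utterance-matching step is shown below, using a pure-Python edit distance in place of the Levenshtein library cited above. The 0.8 normalized-similarity threshold is an assumption for illustration, not the study’s actual parameter.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a  # iterate over the longer string for a shorter row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def same_utterance(u1: str, u2: str, threshold: float = 0.8) -> bool:
    """Treat two transcripts as one utterance if their normalized
    similarity exceeds the (assumed) threshold."""
    if not u1 and not u2:
        return True
    similarity = 1 - levenshtein(u1.lower(), u2.lower()) / max(len(u1), len(u2))
    return similarity >= threshold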
Findings
We qualitatively assessed the degree of convergent versus contingent behavior based on the spread of trust in automation in hybrid teams, visually inspecting trust development over time in terms of the mean and standard deviation of each team’s trust progression. Judging from the spread of each team’s trust ratings, trust behavior diverges, and the divergence persists.
We quantitatively analyzed trust development relative to the first two trials with high-reliability automation using linear regression. We compared the trust differences between each pair of participants in a team (i.e., trust distance) with the inverse of the proportion of identified interaction instances between those participants relative to the entire mission (i.e., network distance). The model shows a statistically significant but weak influence (R2 = .04, F(1, 94) = 4.25, p = .042, adj. R2 = .03; network distance β = .007, 95% CI [0.003, 0.01]).
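The regression itself is an ordinary least squares fit of trust distance on network distance. The sketch below shows that computation on synthetic data; the statistics reported above come from the study’s own data and analysis software, not from this code.

```python
def ols_fit(x, y):
    """Ordinary least squares for y = intercept + slope * x.

    Returns (slope, intercept, r_squared).
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # R^2: proportion of variance in y explained by the linear model
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2


# Hypothetical pairwise observations: network distance (x) vs. trust distance (y)
network_dist = [1.5, 2.0, 3.3, 5.0, 10.0]
trust_dist = [0.4, 0.5, 0.6, 0.9, 1.2]
slope, intercept, r2 = ols_fit(network_dist, trust_dist)
```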
We compared two linear mixed-effects models using the lme4 package (Bates et al., 2015). The baseline model predicts trust distance with no fixed effect and each pair of participants as a random effect. The baseline model showed an R2 (conditional) = .796, indicating that pair-level differences contributed greatly to trust distance. A second model, which predicts trust distance with network distance as a fixed effect and includes a participant-pair by reliability interaction as a random effect, showed an R2 (conditional) = .922 and R2 (marginal) = .034. This analysis shows that the participant-pair by reliability effect contributes more than network distance to explaining differences in trust development.
Takeaways
Our analysis suggests contingent behavior influences trust development in teams. When trust is considered as a contingent system, small chance events can change the trajectory of trust development. This contrasts with the general assumption that systems are convergent and arrive at the same endpoint regardless of small differences in initial conditions or small perturbations. Our exploratory research identifies signatures of contingent behavior in conversations (e.g., trust distance and network distance) and in trust trajectories across people (multi-level model). Our initial analysis supports the association of trust development differences between participants with their proportion of conversations within the team, albeit with a weak correlation and small effect size. Identifying contingent behavior associated with trust evolution can provide guidance for enhancing resilient automation and system design.
The convergent-contingent distinction has implications for measurement and statistical analysis. Contingent behavior does not result in normally distributed responses, but in lumpy distributions. It also introduces interactions between individuals and conditions, correlations between participants, and auto-correlations within participants. More generally, contingent behavior may follow a causal structure defined by coupled non-linear systems, which is at odds with the implicit assumptions that typically guide our interpretation of behavior (van Gelder, 1998). One implication is that contingent behavior may be the true source of the variance that is often attributed to “individual differences” and that dominates most results.
The convergent-contingent distinction also has implications for design. The outcomes of contingent systems depend on the dynamics of trust evolution, so identifying points of leverage for well-timed nudges could have substantial influence (Chiou & Lee, 2023).
Our exploratory analyses show how considering the evolution of trust in automation in hybrid teams might inform more resilient automation design.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by NASA Human Research Program No. 80NSSC19K0654.
