Abstract
What shapes warfighters’ trust in military technologies augmented with Artificial Intelligence (AI)? Research often assumes that junior military personnel, including cadets training to become officers, will trust AI during future wars, and at higher levels than senior officers. We test these claims by fielding a survey experiment among a representative sample of cadets assigned to the Reserve Officers’ Training Corps (ROTC) program in the United States. Our analysis reveals that cadets are more trusting of AI-enhanced military technologies than senior officers, but that their trust is shaped by a more conservative understanding of the appropriate use and oversight of AI. We also find that cadets’ trust is shaped by a complex set of instrumental, normative, and operational considerations, including ongoing cognitive development, education, and professional enculturation. These results provide the first experimental evidence of cadets’ trust in AI-enhanced military technologies and have implications for future research, policy, and military modernization.
