Abstract
As generative artificial intelligence (AI) is increasingly adopted, understanding how its usage is perceived has become crucial for theory and practice. Our investigation highlights how disclosing AI usage reduces trust by triggering legitimacy concerns rooted in deviations from taken-for-granted human-centered norms. Drawing on a micro-institutional perspective, we unpack legitimacy into its dimensions and propose that they operate via three context-specific processes (perceived typicality, commitment, and authenticity) that jointly account for the erosion of trust resulting from AI disclosure. An initial structured content-analytic study of directed written interviews reveals that people indeed voice these legitimacy concerns when scrutinizing AI usage and shows how such concerns manifest across the facets of legitimacy. A subsequent vignette experiment shows that disclosing AI usage sequentially diminishes perceptions of typicality, commitment, and authenticity, ultimately lowering trust, and a supplementary replication experiment confirms this pattern. Altogether, our investigation clarifies the paradoxical nature of transparency, advances empirical testing of legitimacy theory, and helps bridge the literatures on trust and institutional theory.
