Abstract
The perceived ease of use of artificial intelligence (AI) systems like ChatGPT has created an ostensible moral and technological panic around how AI technologies are transforming higher education. This study uses the Technology Acceptance Model to suggest that ‘perceived ease of use’ is a factor in both acceptance of and panic around emerging technologies. We provide a qualitative analysis of statements, guides, and policies about AI and ChatGPT from 148 U.S. universities to argue that there is not so much a panic happening in higher education as a concerted effort to negotiate and integrate artificial intelligence into classroom instruction, research, and daily operations. Across four categories of statements – university policies, teaching and learning resources, library guides, and professor statements – we find recurring values emphasizing usefulness alongside efforts to navigate ethical concerns. ChatGPT’s perceived ease of use has led higher education faculty and staff to avoid panic discourses and to focus instead on problem-solving discourses as they negotiate its institutionalization.