Abstract
This paper provides an overview of the findings from the Ithaka S+R-led collaborative research project “Making AI Generative for Higher Education,” and the findings specific to Stony Brook University, one of the collaborators in this project. Collaborators interviewed instructors and researchers at universities across the United States and Canada, asking them how they were using AI in teaching and research, as well as the associated challenges they were facing. Usage levels and perceptions of AI varied widely across interviewees, with most usage limited and experimental as instructors and researchers continue to identify the most productive long-term ways to integrate AI into their work. Our study also shed light on instructor and research support needs in relation to AI, with training opportunities, chances to learn from peers, secure access, and computational infrastructure among the common demands. This paper summarizes the presentation given by the authors at the NISO Plus conference in February 2025.
Introduction
When OpenAI commercially released ChatGPT at the end of 2022, Generative Artificial Intelligence (AI) became a much-discussed topic across the higher education sector. A few years later, the dust has not yet settled. Generative AI, along with other forms of AI, is transforming education, research, and administrative operations at institutions of higher education. Colleges and universities have been grappling with questions such as how and whether to craft institution-wide AI policies, how to coordinate AI literacy training, and how to provide their communities with secure access to AI tools. In the meantime, AI developers and vendors continue to release new products and features geared towards different aspects of higher education.
In response to these challenges, Ithaka S+R, a nonprofit organization that conducts research to help colleges and universities navigate economic, technological, and demographic change, launched the “Making AI Generative for Higher Education” project in the fall of 2023. In collaboration with higher education institutions from across the United States and Canada, Ithaka S+R conducted research on how Generative AI technology is impacting postsecondary teaching, learning, and research. The universities that participated in this research project were: Bryant University, Carnegie Mellon University, Concordia University, Duke University, East Carolina University, McMaster University, Princeton University, Queen’s University, Stony Brook University, Temple University, Wesleyan University, Yale University, University of Arizona, University of Baltimore, University of Chicago, University of Connecticut, University of Delaware, University of New Mexico, and University of North Texas.
At the 2025 NISO Plus conference in Baltimore, this paper’s authors—Mona Ramonetti and John Fitzgerald from the Stony Brook University Libraries, and Claire Baytas from Ithaka S+R—gave a presentation describing the project and providing an overview of its research findings, including findings specific to Stony Brook University. This paper is based upon that presentation, and shares highlights from what we learned about Generative AI in teaching, learning, and research at universities in the U.S. and Canada.
Research methodology
Each team from our 19 participating universities conducted a series of semi-structured interviews in the spring of 2024 with individuals at their institutions who had both teaching and research responsibilities. Interview subjects were mainly faculty, but also included postdocs and graduate students. We had wide disciplinary variation, including representation from the arts, humanities, social sciences, hard sciences, medicine, law, and business. The interviews asked about how Generative AI was being adopted in teaching, learning, and research contexts; the challenges instructors and researchers were facing in these contexts; and the AI-related support resources that are most important to them. For the aggregate analysis, Ithaka S+R compiled the 246 interviews we received, took a 20% representative sample, and coded it for analysis. Detailed aggregate findings were published by Ithaka S+R in a research report in April 2025. 1
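The sampling step can be illustrated with a short sketch. The source does not specify how the 20% representative sample was drawn, so the snippet below assumes a simple proportional stratification by institution; the record structure and field names are illustrative, not the project's actual data format.

```python
import random
from collections import defaultdict

def representative_sample(interviews, strata_key, fraction=0.2, seed=42):
    """Draw a proportional (stratified) sample of interview records.

    interviews: list of dicts, each carrying a strata_key field
    (e.g. the interviewee's institution). Returns roughly `fraction`
    of the records, sampled without replacement within each stratum.
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    by_stratum = defaultdict(list)
    for record in interviews:
        by_stratum[record[strata_key]].append(record)

    sample = []
    for records in by_stratum.values():
        # at least one record per stratum, so no institution is dropped
        k = max(1, round(len(records) * fraction))
        sample.extend(rng.sample(records, k))
    return sample

# Illustrative use: 246 interviews spread across 19 institutions
interviews = [{"id": i, "institution": f"U{i % 19}"} for i in range(246)]
subset = representative_sample(interviews, "institution")
print(len(subset))
```

Stratifying by institution (rather than sampling the pooled list) keeps each university proportionally represented even when teams submitted different numbers of interviews.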
At Stony Brook University, the research team conducted a targeted series of 14 structured interviews between March and May 2024 focusing on participants with active roles in both teaching and research. The interview cohort was intentionally drawn from two distinct academic units: the Department of Electrical and Computer Engineering (ECE) and the Department of Philosophy. This selection was deliberately structured to capture both technical and humanistic disciplinary perspectives on Generative AI’s emergence in higher education.
Interview participants at Stony Brook included:
• Faculty at various ranks, including professors, assistant professors, and teaching faculty.
• Postdoctoral researchers, PhD students, and staff engaged in research or instructional support roles.
• One administrative leader serving in a university-wide capacity, providing insight into institutional planning considerations.
The disciplinary distribution emphasized engineering and technology fields, with 11 of the 14 participants based in ECE. Two Philosophy faculty members with research intersections in AI and computational technologies contributed essential humanities perspectives. Their involvement enabled the Stony Brook team to integrate conceptual, ethical, and historical inquiry directly alongside technical development and adoption patterns.
All interviews were conducted via Zoom and securely transcribed on a private server using WhisperX, 2 which provided AI-enhanced transcription with speaker diarization. Anonymization was applied using Presidio, 3 an open-source data protection framework designed to detect and redact personally identifiable information (PII), safeguarding participant identities while maintaining high transcription fidelity and reinforcing the study’s engagement with AI-driven research tools.
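The detect-and-redact pattern that Presidio automates can be illustrated with a minimal, self-contained sketch. This is not Presidio's actual API: it is a simplified stand-in showing the two-step flow (detect PII spans, then replace them with entity tags), with patterns limited to emails and US-style phone numbers for brevity. A real framework like Presidio uses NLP-based recognizers covering many more entity types, including personal names.

```python
import re

# Minimal illustrative PII patterns (a deliberate simplification).
PII_PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_NUMBER": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def detect_pii(text):
    """Step 1: find (start, end, entity_type) spans for each match."""
    spans = []
    for entity, pattern in PII_PATTERNS.items():
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), entity))
    return sorted(spans)

def redact(text, spans):
    """Step 2: replace detected spans with placeholder tags,
    working right to left so earlier offsets stay valid."""
    for start, end, entity in sorted(spans, reverse=True):
        text = text[:start] + f"<{entity}>" + text[end:]
    return text

transcript = "Contact Jane at jane.doe@example.edu or 631-555-0199."
redacted = redact(transcript, detect_pii(transcript))
print(redacted)
```

Separating detection from redaction, as Presidio's analyzer/anonymizer split does, allows the same detected spans to drive different policies, such as masking, hashing, or replacement with synthetic values.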
Special considerations for the Stony Brook methodology included:
• Capturing disciplinary contrasts in AI adoption, particularly between STEM fields with higher rates of experimental AI use and humanities fields expressing greater skepticism.
• Exploring perceptions of AI’s limitations in both technical reliability (ECE) and interpretative capacity (Philosophy).
• Documenting institutional barriers such as inconsistent AI policies, limited computational resources, and varying faculty readiness for AI integration.
This approach allowed Stony Brook University to contribute a nuanced, discipline-sensitive dataset to the broader Ithaka S+R project, highlighting how Generative AI is simultaneously reshaping technical research workflows, challenging traditional notions of teaching, and raising fundamental ethical questions within the academic community.
Teaching and learning contexts
Our study found that most instructors had already experimented with Generative AI tools in teaching and learning contexts, but they had not necessarily identified longer term, productive ways of integrating the technology. By a significant margin, the most common way in which interviewees reported incorporating Generative AI into their classrooms was through what could be categorized as “AI literacy-oriented activities.” These are activities that aimed to help students identify AI’s strengths and limitations, so that they would learn to use the technology productively and responsibly. This usually involved having the students use Generative AI for a classroom assignment under instructor supervision, then together analyzing what AI did and did not do well. Instructors across the board expressed enthusiasm for AI literacy training to be more formally integrated into student curricula. However, there was little consensus on who was responsible for implementing it, or on the best method and venue for delivering that training.
These AI literacy-oriented activities were often described as one-off or temporary experiments. When it came to longer term integration of Generative AI into their courses, some instructors reported success in having students use AI to assist with research workflows or learn programming basics. However, most instructors—even those who were excited about incorporating AI into their courses—were still grappling with what longer term integration would look like in the context of the courses they taught. In the interviews, many said that they were open to integrating AI, but had not yet seen a way for it to help students meet the course learning objectives. Other instructors were reassessing whether their learning objectives should be revised in the age of Generative AI.
Some instructors expressed concerns over student academic integrity and misuse of Generative AI tools. However, instructors across the board had also accepted that reliably detecting AI use was not feasible. They did not want adversarial relationships with their students; instead, they aspired to foster trust and mutual understanding with their students regarding acceptable uses of AI. In practice, interviewees reported mixed results. For some, student misuse of Generative AI and breaches of academic integrity were attributed to confusion over what constituted acceptable use. Instructors reported that because course policies varied from instructor to instructor, and institution-wide policies were often absent, their students felt confused about where they were or were not permitted to use Generative AI, leading to increased cases of misuse.
Additionally, many instructors reported that they were beginning to dabble in using Generative AI for teaching tasks. This was still largely experimental and uses were varied, but the most common use case was using AI to generate course materials, such as ideas for activities, problems sets or worksheets, or rubrics.
At Stony Brook specifically, the use of Generative AI tools was largely limited and experimental. Almost one third of those interviewed used it for course design, followed by creating images and visualizations. Other uses included administrative tasks, tutorials, assessing student assignments, developing lesson plans, and recording audio for lectures. Many instructors were concerned that students’ reliance on Generative AI tools could compromise their development of foundational problem-solving skills. Instructors were still looking to identify longer term ways of integrating Generative AI into teaching and learning workflows.
Research contexts
As was the case with teaching and learning, when it came to the context of their research, many interviewees had experimented with using AI within their research workflows, but fewer had identified longer term productive uses. Many researchers were committed to finding those productive uses and expressed excitement about the new possibilities AI would bring about for research in their field. However, interviewees also showed strong awareness of AI’s limitations and a commitment to maintaining a high standard in their research.
Concerns about AI’s accuracy and reliability were the main barrier to further adoption in research contexts, especially when it came to certain tasks. Other researchers were hesitant to adopt AI because of ethical or practical concerns. They did not want to feel they were “outsourcing” too much of their work, or they saw AI as ill-suited to core aspects of their research methodologies, such as for those who rely heavily on archival research and non-digitized materials. There was widespread interest in seeing further clarity around best practices and integrity standards for AI use in academic research, from sources such as academic publishers.
At Stony Brook, concerns mirrored many of those at the cohort institutions. There was significant discussion of uncertainty around research integrity standards, notably those concerning attribution and responsibility. Some researchers noted that they felt unprepared to integrate Generative AI into their work. Many added that they lacked the necessary AI skills and that effective AI training and support would be needed.
Across institutions, those who had used AI for their research were most likely to have either used it at the beginning of the research or writing process, such as for brainstorming or ideating, or at the end, such as for revising a final output. Revising writing, especially for those publishing in a non-native language, was viewed by many interviewees as an exciting development brought about by Generative AI, with the potential to help level the playing field among researchers of varied backgrounds.
Using AI to assist with coding was another common use in research contexts, especially among researchers in STEM fields. Interviewees additionally discussed using Generative AI to summarize scholarly articles and streamline the process of writing literature reviews, though with mixed opinions. There was no consensus on whether summarizing existing research is a task that requires human creativity and critical thinking, or on whether AI was producing summaries of satisfactory quality.
There was widespread wariness among interviewees across institutions when it came to using AI to generate the text for an academic journal article, especially in cases where it was being used to produce the “core ideas” behind the article. However, based on interviewee comments, there is no clear consensus on the line between using Generative AI to assist with academic writing versus using it to produce “core ideas.” Interviewees were also concerned about using Generative AI in this way if it could risk breaching the policy of a journal to which they might want to submit. That said, many interviewees also reported seeing manuscripts they suspected were written with AI being submitted to journals, suggesting that Generative AI use for writing academic manuscripts may be more widespread than our interviewees indicated when speaking about themselves.
Support needs
The final segment of our interviews asked interviewees about their support needs. These questions aimed to better understand the types of resources instructors and researchers were using, as well as what resources they felt they needed to best navigate AI’s introduction into higher education.
To educate themselves about AI, interviewees across institutions reported relying primarily on (1) learning from their peers’ knowledge and (2) self-teaching through resources that they found on the internet. When asked what resources they would like to see made more available, the two most popular categories of response were (1) further opportunities to learn from peers and (2) discipline-specific resources. Peers were a desired source of information because of faculty’s trust in their peers to identify ethical and relevant use cases for the field. As one interviewee put it, a use case a colleague has found successful “gives it a stamp of credibility.”
Discipline-specific examples were especially desirable because they offered a clear model that instructors and researchers could integrate into their own work, mitigating the common barrier of the time required to discover productive uses for AI. Colleges and universities, therefore, may benefit from identifying the heavy users among their faculty or graduate students and giving them platforms with which to share their knowledge.
At Stony Brook University, interviewees expressed strong alignment with wider trends regarding AI-related support needs, while also identifying several institution-specific priorities that reflect both disciplinary expectations and local resource constraints.
Participants across the Electrical and Computer Engineering (ECE) and Philosophy departments emphasized the need for structured, accessible training opportunities to help faculty, researchers, and students responsibly integrate Generative AI into their work. While many had experimented with AI tools informally, there was a consistent desire for formal resources to accelerate skill development, address ethical concerns, and reduce uncertainty around AI’s appropriate use in teaching and research.
The most frequently requested resources at Stony Brook included training opportunities, chances to learn from peers, secure access to AI tools, and computational infrastructure.
Several interviewees emphasized the need for discipline-specific resources, recognizing that AI’s capabilities and limitations differ across technical, scientific, and humanistic fields. ECE faculty expressed interest in piloting AI tools tailored to engineering education, such as chatbots for coursework, while Philosophy faculty called for resources that support critical engagement with AI’s philosophical, historical, and ethical implications. Stony Brook’s findings reinforce the broader conclusion that effective AI integration requires not only technical access and training, but also discipline-sensitive support structures that promote responsible, ethical, and context-specific use of Generative AI in higher education.
Across all institutions, there was also a clear need for increased product knowledge and access. Interviewees expressed strong interest in their institutions making secure, affordable Generative AI tools available to them and their students. This kind of access is more widespread now than at the time of the interviews. The next steps may well be determining who is actually making use of those tools and why, as well as increasing AI literacy levels as the number of regularly used products in higher education that incorporate AI features increases. 4 Interviewees also mentioned using ChatGPT more than any other tool, indicating that a better understanding of the wider Generative AI product landscape for higher education would be beneficial to help users and institutions identify the best product for their specific needs.
Further, the Stony Brook cohort noted some equity considerations. Several interviewees mentioned computational limitations, specifically the need for sufficient computing power to effectively run Generative AI models for teaching and learning. Some faculty and graduate students spoke of the lack of structured workshops, online modules, and institutional support, which disproportionately affects those unfamiliar with AI tools. Access disparities were also discussed: junior faculty, students, and staff may have less access to AI-driven tools than senior researchers with established resources. Others expressed the need to ensure that AI-generated content represents diverse perspectives and does not reinforce biases in academic content. Lastly, there was an overarching call for institutional policies on AI ethics that would address concerns around misinformation, plagiarism, and over-reliance, in order to establish a fair and responsible AI framework and practices.
Conclusion
As Generative AI becomes ever more ubiquitous in higher education, research on the evolving ways in which this technology is impacting the sector will only grow more important. Our research findings made it clear that navigating the changes to higher education caused by Generative AI is a complex matter. Interviewees ranged from excited, eager users of Generative AI to hesitant or critical observers barely using the technology at all. To move forward, carefully crafted policies and informed scholarship addressing both the potential and the risks associated with this technology will help to develop standards, best practices, and guardrails, which will be crucial for users across the board.
Footnotes
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
