Abstract
This article addresses the following question: How should authors disclose the use of Artificial Intelligence (AI) in their research, and in the crafting of any manuscripts based upon that research, so that readers, reviewers, and editors can clearly see how and where AI shaped a scholarly contribution? The question was discussed in a presentation at the 2025 NISO Plus conference that examined the current state of AI disclosure. The recommendation that emerged is that the Artificial Intelligence Disclosure (AID) Framework could serve as the basis of an international standard for AI disclosure because the Framework supports standardized, consistent, and transparent reporting of AI use across the full arc of learning, research, and publication. It is designed to complement, not replace, the conventional citation of direct AI outputs, and it situates AI assistance in context to make process-level work visible.
Artificial intelligence (AI) disclosure practices are becoming increasingly vital as these technologies permeate professional, research, and educational workflows. Transparent disclosure of AI use—informing stakeholders about how AI has been used in the creation of products, services, and publications—fosters trust, supports informed decision-making, and demonstrates ethical responsibility. Effective disclosure not only aligns with legal and regulatory standards but also mitigates risks associated with AI deployment.
Generative AI (GenAI) tools have accelerated both routine and advanced scholarly tasks, yet disclosure practices presently lag behind real-world use. Current guidance converges on three principles: AI tools are not authors, disclosure is required, and humans retain responsibility for the integrity of the work. These positions, articulated by leading organizations and publishers, are increasingly embedded in journal policies. The practical question that remains is: How should disclosure be structured so that readers, reviewers, and editors can clearly see how and where AI shaped a scholarly contribution?
This article addresses that question by examining the current state of AI disclosure and recommends the Artificial Intelligence Disclosure (AID) Framework, 1 discussed at the 2025 NISO Plus Conference, as a potential basis for an international standard for AI disclosure. The framework supports standardized, consistent, and transparent reporting of AI use across the full arc of learning, research, and publication. Designed to complement, not replace, conventional citation of direct AI outputs, it situates AI assistance in context to make process-level work visible.
An overview of the regulatory and business landscape of AI disclosure
The conversation about AI disclosure in academia sits within a larger regulatory and policy environment that requires disclosure as a means of enhancing trust. 2 (Renieris et al., 2024). As AI tools are increasingly integrated into user-facing services, many jurisdictions now require businesses to inform users when AI is involved in decision-making processes. For example, in the United States, Colorado’s AI Act (Consumer Protections for Artificial Intelligence, 2024) mandates that, beginning in 2026, companies disclose when AI influences decisions affecting individuals, such as in hiring, lending, or healthcare contexts. 3
This trend is not exclusive to the United States. AI disclosure is increasingly mandated by international regulations to ensure transparency, accountability, and consumer protection. 4 The European Union’s AI Act (European Union, 2024), 5 which began phased enforcement in 2025, sets a global benchmark by categorizing AI systems based on risk and requiring disclosures for high-risk applications, such as those used in hiring, credit scoring, and healthcare. Businesses operating in or affecting the EU must inform users when AI is involved in decision-making and maintain documentation for audits.
In other jurisdictions, there has been movement toward revisiting existing data protection and privacy laws to address the need for AI disclosure. For instance, in India, the Digital Personal Data Protection Act, 2023 (Indian Ministry of Law and Justice, 2023), 6 intersects with AI regulations by requiring that individuals be informed when automated decision-making is involved. Similarly, multiple Canadian provinces have introduced or passed new privacy legislation that requires companies to disclose AI use, especially when it impacts individual rights. 7 These regulations aim to mitigate risks related to bias, discrimination, and data misuse and, in the case of Québec, carry significant monetary penalties if violated. While the efficacy of these approaches remains to be seen, the adaptation of privacy legislation to transparency frameworks may prove fraught, as transparency is not traditionally codified in law, particularly in Western legal systems, to the same extent as privacy. 8 (Felzmann et al., 2020).
In response to the regulatory push toward transparency and disclosure, corporations have moved to develop technical solutions that automate disclosure processes. For example, the Adobe Corporation developed its Content Credentials standard as a solution for AI transparency and disclosure. 9 Likewise, researchers at IBM have developed an approach to AI disclosure based on a “set of questions to help users clarify how AI contributed to their creative process.” 10 Content platforms such as YouTube have created disclosure processes for altered and synthetic materials, 11 building on existing processes for the removal of materials where there was a failure to disclose or where synthetic content was deemed to be harmful. 12 Alongside this corporate move toward disclosure solutions, there has been recognition that standardized disclosure is difficult 13 and that current efforts largely take the form of voluntary processes, such as the Hiroshima AI Reporting Framework. 14
AI disclosure in scholarly publishing and academia
AI is being used widely in scholarly publishing. Lund and Naheem found that, out of a study of three hundred journals, ninety-seven percent had policies allowing use of AI. 15 Perkins and Rowe 16 examined academic publishing trends related to artificial intelligence, finding the following common themes:
• Authorship cannot be assigned to AI and must be human
• Authors are fully accountable for their work, including anything produced by AI
• Publisher policies exist, but lack specificity
• Authors must consider implications of AI tools for privacy, security, and research integrity
• Transparency and disclosure of AI use is expected when tools are allowed for research tasks
• Using AI tools to aid peer review may violate copyright, privacy/confidentiality, or ethical conduct of research
Despite these widespread and common expectations, disclosure practices have yet to reach the same level of adoption or consistency. Disclosure rates tend to increase when publishers have clear policies in place, as shown by Suleiman et al., 17 yet these policies vary significantly across journals and are subject to rapid change. 18 To bring some clarity to AI use in manuscript preparation, the International Association of Scientific, Technical & Medical Publishers (STM) recently produced a report classifying AI uses to support journal editors in setting disclosure policies. 19 In practice, Ganjavi et al. found that twenty-four percent of publishers and eighty-seven percent of journals had generative artificial intelligence policies with disclosure requirements, but that these policies lacked specificity and standardization. 20 This suggests that AI disclosure requirements are spreading: an earlier study, conducted just a year before, found that only 37.6% of nursing journals and 14.5% of medical journals required AI disclosure. 21 Journal editors themselves hold a variety of expectations around AI disclosures, but they share the concern that vague disclosure undermines trust. 22 Wang and Zhao proposed a three-tiered disclosure framework, though they did not provide specific guidance on what should be disclosed or how disclosures should be structured. 23 Despite these inconsistencies, a majority of academics express support for certain uses of AI in research, provided that such use is transparently disclosed. 24–26
Recent studies highlight evolving trends and challenges in the disclosure and use of generative AI (GenAI) in scholarly publishing. Kousha found that approximately eighty percent of disclosed AI use pertains to text editing and proofreading, while only about five percent involves direct research tasks such as data analysis or programming. 27 Pesante et al. examined abstracts of orthopedic journals, finding that while 4.8% of abstracts contained artificial intelligence-generated text, only 3.6% disclosed the use of an AI tool. 28
Several organizations and publishers have drafted publishing guidelines addressing AI use and how to properly disclose it. 29 The Committee on Publication Ethics affirmed that AI tools cannot be credited as authors, as they are not capable of fulfilling the responsibilities required for authorship. 30 The focus on preserving authorship as uniquely human reflects the fact that AI tools are non-legal entities that cannot take accountability for the work, declare conflicts of interest, or manage copyright and licensing agreements. 31 When authors utilize AI tools in any aspect of their research, they must clearly disclose the use and nature of these tools in the Materials and Methods or a similar section of the manuscript. 32,33 Ultimately, authors bear full responsibility for the entire content of their work, including any parts generated with the assistance of AI, and are accountable for any breaches of research or publication ethics.
Other organizations have also contributed to the conversation. The International Association of Scientific, Technical & Medical Publishers released a classification of AI use in the preparation of a manuscript. 34 This recommendation, which expands on the organization’s earlier guidelines regarding AI use in publishing, helps codify types of use, but falls short of addressing the form of reporting required to achieve standardization. Recent workshops held for the scholarly publishing community by NISO found that publishers were very interested in “standards for model disclosures and AI tool documentation.” 35 The American Psychological Association released updated guidance in August 2025 recommending a combination of notes, incorporation into the methods section, description of prompting in the introduction section, and disclosure incorporated into figures and tables as appropriate. 36 While this updated guidance provides more direction for authors and editors, it still lacks the consistency and harmonization across publishers and disciplines that would aid widespread adoption of AI disclosure.
Critics have also highlighted the importance of AI disclosure for maintaining research integrity. For instance, Staiman 37 argues that current publisher policies on author use of AI tools are inadequate and leave significant gaps in ensuring research integrity, emphasizing that while researchers are rapidly adopting AI in their workflows, publishers have been slow to implement clear, practical guidelines, placing the burden of responsible use and ethical oversight largely on individual authors.
Discussions of appropriate use of AI tools in educational contexts have encountered similar challenges as in scholarly publishing. Given the strong relationship between citation and academic integrity, many organizations overseeing citation manuals moved to provide guidance on directly citing AI outputs. 38 This has created conflict within higher education contexts as traditional citations are designed to reference static, tangible outputs and the intellectual contributions of identifiable authors. However, generative AI systems produce content through dynamic interactions between prompts, models, and parameters. Such outputs are often non-repeatable and non-reproducible. 39 Moreover, AI tools can play diverse roles in the research and writing process, acting not only as sources of information but also as collaborators, editors, or critics. While citations offer a degree of transparency, they are insufficient for capturing the multifaceted and evolving ways in which AI is being integrated into education.
Implementing AI disclosure practices offers an option to supplement traditional citation practices while preserving transparency and building trust between instructor and student. Incorporating AI disclosure into the classroom fosters digital literacy and encourages ethical engagement with technology, helping individuals understand not just how to use AI tools, but when and why to use them. By practicing transparency, learners build habits of academic and professional integrity that will serve them well in future educational, publishing, and workplace settings. For educators, a consistent disclosure standard could help refine and more clearly articulate acceptable uses of AI tools in classroom settings and learning outcomes. AI disclosure cultivates a responsible and informed approach to technology use, reinforcing trust and accountability in academic environments.
The Artificial Intelligence Disclosure (AID) framework
Within the wider discussion of AI transparency and disclosure, there is broad support for a standardized disclosure practice that is easy to understand. Renieris et al. note, “In order to promote transparency and accountability, they [AI disclosures] need to be as easy to understand and as user-friendly as possible.” 2 Standardization advances compliance and awareness and builds consistency into expectations and behavior. The current lack of a consistent AI disclosure standard negatively impacts companies, scholars, educators, and students.
To address this gap, the author developed a standardized disclosure approach for AI. Called the Artificial Intelligence Disclosure (AID) Framework, 1 it is meant to facilitate standardized, consistent, and transparent disclosure of artificial intelligence use in education and research. The author took inspiration from the Contributor Roles Taxonomy (CRediT), applying its structured approach to the potential uses of AI. 40
The work required careful consideration of the following aspects of a standardized disclosure framework:
• An AI disclosure framework must be openly licensed to allow adaptations across contexts.
• An AI disclosure framework should work in concert with citation practices meant for disclosure of direct outputs.
• An AI disclosure framework should be adaptable to different educational levels, academic disciplines, and needs.
• An AI disclosure framework must be both machine readable and machine producible to enhance compliance and comparison of AI use going forward.
The AID Framework contains fourteen elements representing information about the AI tool, date of use, and use case for the AI tool within the context of the work. The structured AID statement is meant to be added to the end of a paper prior to the references, similar to an acknowledgments section. In educational settings, there is occasionally concern around student compliance where AID statements are incorporated directly into assignments. In such instances, it is recommended that AID statements are created and submitted separately to address these concerns while providing the ongoing benefits of practicing AI disclosure.
It is not expected that all AID Framework elements will be used for a single project. Individuals should select only the relevant elements applicable to their work, apart from the Artificial Intelligence Tool(s) element, which is required. In generating a disclosure statement, individuals are also encouraged to create a single statement, even if using multiple AI tools for different functions within their work.
The following information about AID Statement structure, elements, and descriptions is reproduced from Weaver’s 2024 seminal publication:
AID Statement: Artificial Intelligence Tool: [description of tools used]; [Heading]: [description of AI use in that stage of the work];…
Each heading: statement pair will end in a semi-colon, except for the last statement, which will end in a period. Any other symbols can be used in the “statement” portion of the heading: statement pair except for colons and semi-colons.
The AID Statement elements and their definitions are the following:
1. Artificial Intelligence Tool(s): The selection of a tool or tools and versions of those tools used and dates of use. May also include note of any known biases or limitations of the models or data sets.
2. Conceptualization: The development of the research idea or hypothesis, including framing or revision of research questions and hypotheses.
3. Methodology: The planning for the execution of the study, including all direct contributions to the study design.
4. Information Collection: The use of artificial intelligence to surface patterns in existing literature and identify information relevant to the framing, development, or design of the study.
5. Data Collection Method: The development or design of software or instruments used in the study.
6. Execution: The direct conduct of research procedures or tasks (AI web scraping, synthetic surveys, etc.)
7. Data Curation: The management and organization of those data.
8. Data Analysis: The performance of statistical or mathematical analysis, regressions, text analysis, and more using artificial intelligence tools.
9. Privacy and Security: The ways in which data privacy and security were upheld in alignment with the expectations of ethical conduct of research, disciplinary guidelines, and institutional policies.
10. Interpretation: The use of artificial intelligence tools to categorize, summarize, or manipulate data and suggest associated conclusions.
11. Visualization: The creation of visualizations or other graphical representations of the data.
12. Writing—Review & Editing: The revision and editing of the manuscript.
13. Writing—Translation: The use of artificial intelligence to translate text across languages at any point in the drafting process.
14. Project Administration: Any administrative tasks related to the study, including managing budgets, timelines, and communications.
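Because the framework is intended to be machine readable and machine producible, the structural rules above lend themselves to a simple programmatic treatment. The following Python sketch is purely illustrative and is not part of the AID Framework itself; the element names follow the list above, while the function, its name, and its validation logic are the author's hypothetical encoding of the stated conventions (required tool element, semicolon separators, terminal period, no colons or semicolons inside descriptions):

```python
# Hypothetical sketch only: the AID Framework does not prescribe an
# implementation. This encodes the structural rules described above.

AID_ELEMENTS = [
    "Artificial Intelligence Tool(s)", "Conceptualization", "Methodology",
    "Information Collection", "Data Collection Method", "Execution",
    "Data Curation", "Data Analysis", "Privacy and Security",
    "Interpretation", "Visualization", "Writing—Review & Editing",
    "Writing—Translation", "Project Administration",
]

def build_aid_statement(elements: dict[str, str]) -> str:
    """Assemble a single AID statement from {element: description} pairs."""
    # The Artificial Intelligence Tool(s) element is the only required one.
    if "Artificial Intelligence Tool(s)" not in elements:
        raise ValueError("The Artificial Intelligence Tool(s) element is required.")
    for name, description in elements.items():
        if name not in AID_ELEMENTS:
            raise ValueError(f"Unknown AID element: {name}")
        # Descriptions may use any symbols except colons and semicolons,
        # which are reserved as structural delimiters.
        if ":" in description or ";" in description:
            raise ValueError("Descriptions may not contain colons or semicolons.")
    # Order the heading: statement pairs by the framework's element numbering,
    # so the tool element always comes first.
    ordered = sorted(elements.items(), key=lambda kv: AID_ELEMENTS.index(kv[0]))
    body = "; ".join(f"{name}: {description}" for name, description in ordered)
    # Pairs are separated by semicolons; the final statement ends in a period.
    return f"AID Statement: {body}."
```

A single statement is produced even when multiple elements (or multiple tools within the tool element) apply, consistent with the recommendation above that individuals create one combined statement per work.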
An example AID Statement may be found at the end of this article, with additional examples available in the seminal publication by Weaver on the AID Framework. 1
Conclusion
As GenAI becomes commonplace, clarity about its role in scholarship is both required and foundational to research integrity and reader trust. The AID Framework gives anyone producing AI-supported content a shared language and standardized approach to communicate where and how AI tools contributed to scholarly work, without diluting human accountability. While the AID Framework is currently the approach best positioned to meet a variety of needs across disciplines and outputs, it should form the basis for a larger international reporting standard that can be used consistently across contexts and supported by major standards organizations such as NISO. Key to this effort will be engaging stakeholders from a variety of backgrounds and perspectives while maintaining the simple, structured approach that makes the AID Framework so effective.
Acknowledgments
Many thanks to my partner, Dulany Weaver, an expert in AI, members of the University of Waterloo Associate Vice-President Academic Standing Committee on New Technologies, Pedagogies, and Academic Integrity, and colleagues at the Ontario Council of University Libraries, who have been instrumental in supporting my development of and continuing work on the AID Framework.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Artificial Intelligence Disclosure (AID) Statement
Artificial Intelligence Tool(s): Microsoft Copilot (University of Toronto Institutional Instance), May–June 2025, Perplexity, May 2025; Information Collection: I used Microsoft Copilot and Perplexity to identify international laws and regulations related to artificial intelligence; Writing—Review & Editing: I used Microsoft Copilot to restructure select paragraphs and edit at the sentence level to improve clarity and readability of the article.
