Abstract
This paper offers an overview of some of the highlights of the 2024 NISO Plus Baltimore Conference, held February 13–14, 2024. While this was the fifth such conference, it was the first to be held in-person since 2020, as the intervening three were held in a completely virtual format due to the global impact of COVID-19. These conferences emerged from the merger of NISO and the National Federation of Abstracting and Information Services (NFAIS) in June 2019, replacing the NFAIS Annual Conferences and offering a new, more interactive format. The ultimate goal of the NISO Plus conferences is to have a discussion, identify information industry problems and, with the collective wisdom of the speakers and audience who are representative of the information industry stakeholders, generate potential solutions that NISO or others can develop. As with prior years, there was no general topical theme (although the impact of Artificial Intelligence was a common thread throughout), but there were topics of interest for everyone working in the information ecosystem—from the practical subjects of persistent identifiers, standards, metadata, data sharing, Open Science, and Open Access to the potential future impact of Artificial Intelligence and Machine Learning.
Introduction
In February 2020 NISO held the first NISO Plus Annual Conference in Baltimore, MD, USA. It replaced what would have been the 62nd Annual NFAIS conference, but with the merger of NISO and NFAIS in June 2019 the conference was renamed NISO Plus and adopted a new, much more interactive format. The inaugural conference was labeled a “Grand Experiment” by Todd Carpenter, NISO Executive Director, in his opening remarks. When he closed the conference, all agreed that the experiment had been a success (me included), but that lessons had been learned, and that in 2021 the experiment would continue. It did, but due to the pandemic the experiment became more complicated as the 2021 conference was held for the first time in a totally virtual format and it continued in that format for the next 2 years. I want to add that the NISO virtual meetings have been among the best that I have attended, including their new (2024) NISO Plus Global Conference that was held in September—but more about that later.
Finally, in 2024 the NISO Plus Annual Conference returned to a much-anticipated in-person format, again held in Baltimore, MD, USA—NISO’s home base—and is now entitled the NISO Plus Baltimore Conference. Since there was no virtual component for attendees, attendance returned to the pre-pandemic level of two-hundred plus rather than the six-hundred plus level of the 2022 and 2023 virtual-only conferences (the 2021 virtual conference attracted eight hundred plus attendees!). But the energy of the attendees was palpable, probably due to the fact that the build-up to the opening of the conference began the day before with a well-attended four-hour workshop entitled “Introduction to AI and Machine Learning: The Current and Future State of AI Systems and Services.” This served to whet the appetite of the attendees since AI was a topic in each of the six parallel break-out sessions that were held over the 2 days of the meeting.
Attendees were a representative sample of the information community—librarians, publishers, system vendors, product managers, technical staff, etc., from all market segments—government, academia, industry, both for-profit and non-profit. There were approximately 23 sessions plus an opening and closing keynote and the Awards lunch at which the Miles Conrad Lecture was given. The keynotes and Miles Conrad Lecture were recorded and are now freely available for viewing. 1
As in prior years, Todd Carpenter, NISO Executive Director, noted in his welcoming remarks that it was important to lay out NISO’s vision for the conference. He noted that many attendees might be new to this concept, and he wanted everyone to understand the conference goal, its format, and why NISO is building on the success of the past 4 years—they simply want to keep the momentum going. He emphasized that the attendees themselves are integral to making the event special because this meeting is not purely an educational event, it is meant to be an interactive, collaborative event—a place where participants can openly identify and discuss current problems and brainstorm on how those problems can be solved or mitigated.
He went on to say that content distribution was the first truly industrialized process (think the emergence of the printing press in 1440) and that publishing was among the first industrialized processes to apply standards. The goal of the conference is to continue to drive things forward, identifying the standards needed for the next generation of content and publishing tools, for example, what are the page numbers of the future? He believes that the best way to do this is to leverage the collective wisdom of the speakers and the conference audience. The speakers are not “sages on the stage”; they are the sparks that light the fire of discussion. Over the next 2 days he wanted to focus on the discussions—not the presentations that would stimulate and drive those discussions.
He pointed out that the attendees are a representative sample of participants involved in every step of the content supply chain—a diverse group of individuals representative of more than one community. They provide the multiple perspectives, expertise, and experience that are essential for the successful development of practical solutions to the real-world problems and challenges with which the information industry must wrestle. He noted that the problems may not be ones facing everyone today, but ones that are foreseen to be coming and for which we need to be prepared. The ideas should produce results that are measurable and that can improve some aspect of information creation, curation, discovery, distribution, or preservation. In other words, the ideas need to have a positive impact—improve our work, our efficiency, and our results.
He again said that he would like to have attendees look at the ideas that are generated over the next 2 days and ask how those ideas could make a difference in their own organization or community and how they themselves might want to be involved. He made it clear that NISO is delighted to have a lineup of brilliant speakers who have agreed to share their knowledge, but that the goal of the conference is not simply to take their wisdom. He believes that everyone participating in this conference is brilliant and that he would like to hear from each and every one because the diverse reactions to the speakers and the ideas are what will make the event a success.
Carpenter went on to say that the structure of the conference was designed to foster discussions and at least half of the time in each of the non-plenary sessions would be devoted to discussion. Each session was assigned a moderator and a NISO staff member who helped encourage and record the conversations. He added that if this NISO Plus conference is similar to its predecessors, lots of ideas will be generated, of which some will be great, some will be interesting, some will not take off, some will sprout, and perhaps a few will turn into giant ideas that have the potential to transform the information landscape. He made it quite clear that NISO cannot make all of this happen as they lack the resources to manage dozens of projects. As in the past, they will settle on three or four ideas and perhaps the other ideas will find homes in other organizations who are interested in nurturing the idea and have the resources with which to do so.
In closing, Carpenter said that on a larger scale the NISO Plus conference is not about what happens over the next 2 days, but rather what is important are the actions that are taken over the days, weeks, and months that follow. It is what is done with the ideas that are generated and where they are taken. Whether the ideas are nurtured by NISO or by another organization does not matter—what matters is that the participants take something out of the conference and that everyone does something with the time that is spent together in the discussions.
I can attest that all of the sessions which I attended were interesting, had in-depth discussions, and a few did generate ideas, but I focused primarily on the Artificial Intelligence presentations that were held in each of the six parallel break-out sessions. While I am fairly good at multi-tasking, even I cannot attend more than one session at a time. Also, other than the two plenary sessions and the Miles Conrad Lecture, the sessions were not recorded as they were for the completely virtual conferences; however, slides, if provided by the speakers, are usually posted to the NISO website (as of October 31, 2024, I could not locate them, but keep your eyes open). As a result, this overview does not cover all of the sessions. However, I hope that my overview motivates you to attend next year’s meeting, which is being developed as I write this. That is my personal goal with this brief summary, because in my opinion, the NISO Plus conference is worthy of the time and attention of all members of the information community.
Pre-conference workshop
The pre-conference workshop was headed by Andromeda Yelton, a software engineer and librarian associated with the Berkman Klein Center and the San José State University iSchool.
I include it in this overview because she gave a nice history of Artificial Intelligence which I had learned about earlier and her workshop provided a solid groundwork for many of the AI-related presentations given at the conference.
She noted that the concept of AI was captured in a 1950 paper by Alan Turing, “Computing Machinery and Intelligence,” 2 in which he considered the question “can machines think?” I read the paper several years ago and I found it to be well-written, understandable by a non-techie like me, and worth a read. Five years later a proof-of-concept program, Logic Theorist, 3 was demonstrated at the Dartmouth Summer Research Project on Artificial Intelligence 4 where the term “Artificial Intelligence” was actually coined and the study of the discipline was formally launched (for more on the history and future of AI, see “The History of Artificial Intelligence” 5 ). I should note that Yelton also mentioned an AI-related paper 6 on neural networks that pre-dated Turing’s, but I found it to be a challenging read. However, I did find a video that explained how it fits into the AI history. 7
A concise one-sentence definition of AI is as follows: “Artificial Intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.” 8 Machine learning (ML) is one of the tools with which AI can be achieved and Yelton provided the following definition: “Machine Learning is the science of getting computers to act without being explicitly programmed.” 9 She noted that the terms AI and ML are often incorrectly interchanged (ML and AI are not the same. ML, as well as Deep Learning, are subsets under the overarching concept of AI). She also briefly mentioned Generative AI, a subset of ML wherein computer systems learn to generate images, text, etc. in response to prompts. Just as an FYI, I found an article that defines a lot of the ML jargon and it is definitely worth a read. 10
Yelton went on to describe traditional programming versus ML. For traditional programming, a human writes explicit rules and the computer follows them. For ML, a human writes a framework for developing rules and supplies data and evaluation criteria. The computer chooses initial rules and then runs the data through the rules and checks against criteria. Depending on the results, the rules are tweaked and the process repeated with some level of human intervention. She went on to highlight the importance of using high-quality, unbiased data and provided examples of how “bad” data can provide negative and possibly dangerous results.
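The guess-grade-tweak loop Yelton described can be sketched in a few lines of Python. This is a toy illustration of the general idea (the data, learning rate, and update rule are my own assumptions, not anything from her workshop): the human supplies the data and the grading criterion, and the computer iteratively adjusts its rule.

```python
# Toy sketch of the ML loop: the computer guesses a rule, grades it
# against human-supplied data and criteria, tweaks it, and repeats.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # inputs paired with desired outputs

weight = 0.0          # the computer's initial "rule": output = weight * input
learning_rate = 0.05  # how aggressively to tweak the rule each round

for step in range(200):
    # Grade the current rule: mean squared error over the data
    error = sum((weight * x - y) ** 2 for x, y in data) / len(data)
    # Tweak the rule in the direction that reduces the error (gradient step)
    gradient = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    weight -= learning_rate * gradient

print(round(weight, 3))  # converges toward 2.0, the rule implicit in the data
```

Note that no one ever wrote the rule “multiply by 2”; the program discovered it from the data, which is exactly why the quality of that data matters so much.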
In closing, she reinforced the fact that AI has been around for decades, but that we humans keep moving the goal post. She described ML as math plus feedback as follows: computers make a guess; they use a human-supplied criterion to grade themselves; and they automatically update their equations to get slightly better answers next time around. She said that we humans do not usually know why computers pick the answers that they do! However, she noted that it has been proven that ML can do many useful things, but for the machines to do so she warned that we humans need to be aware of ethical and legal issues as well as biases when we build our datasets. ML models can magnify the imperfections of data and the inclusion of biases.
The workshop allowed ample time for discussion throughout and really set the stage for the conference itself. Yelton’s slides are posted on her website. 11
Opening keynote
The opening keynote, entitled “States of Open AI,” was given by Thomas Padilla, Deputy Director, The Internet Archive. It was noted that the nature of “open” AI is hotly contested in the popular press, academic discourse, between companies advancing products, and in State, Federal, and international regulatory spaces. Assessing various permutations of open AI practice and strategy is a necessary precondition of defining and advocating for a version of open AI that best aligns with core missions. Cohering advocacy efforts, in turn, provides opportunities to prioritize open AI work streams that libraries are—or could be—well-positioned to deliver on in combination with a diverse set of partners. Padilla’s presentation described the states of open AI as they are and as they could be, and also highlighted the potential roles that libraries and their partners could play in the work that lies ahead.
He opened by asking what is Open AI and why does it matter? He said that to some degree having a measure of interpretive flexibility around different types of concepts is helpful, particularly when we participate in interdisciplinary or interprofessional work. Such flexibility provides some sort of wiggle room and it accommodates a certain degree of collaboration. When concepts are stamped in concrete it can make things a bit more difficult. He added, however, that having interpretive flexibility around the definition of Open AI becomes a bit problematic when we start to move into operations. Why? Because the terms “open” and “open source” are used in confusing and diverse ways, often constituting more aspirational marketing than technical descriptor. There is no commonly accepted definition about what “open” means in the context of AI and it is applied to widely divergent offerings with little reference to a stable descriptor. 12 This becomes consequential as we move into implementation and the assessment of licensing terms and infrastructure requirements. We think we know what “open” means and then we realize that the reality does not match our expectations.
He went on to say that upfront we need to have a concrete sense of what we want from AI and the knowledge work that we do collectively together. We need good practices and a common understanding of what those good practices might be. For him, open AI and knowledge work need to be reusable, transparent, accountable, affirmative, and sustainable, and in its use, we need to adopt a stewardship mindset.
He noted that the Open Source Initiative is working to define open source AI in a way that supports reuse. Their current definition is as follows: 13
“Following the same idea behind Open Source Software, an Open Source AI is a system made available under terms that grant users the freedoms to:
• Use the system for any purpose and without having to ask for permission.
• Study how the system works and understand how its results were created.
• Modify the system for any purpose, including to change its output.
• Share the system for others to use with or without modifications, for any purpose.
Precondition to exercise these freedoms is to have access to the preferred form, to make modifications to the system, and to the means to use it.”
He talked about the value of open source software. A recent paper stated that the supply-side cost to create open source software is about $4 billion (U.S.), while the demand-side value of its use is much larger at almost $9 trillion. The authors of the paper estimate that firms would need to spend 3.5 times more on software than they currently do if open source software did not exist. 14
Padilla noted that transparency is needed throughout the chain of AI production and application. Without transparency there is no assurance of knowledge integrity and in the absence of knowledge integrity why bother being involved in it. Transparency provides credit and attribution for knowledge creators. He gave the example of generative AI (e.g., ChatGPT) where often different answers are provided for the exact same question, and the sources that were used to obtain the answers are not provided. He stressed the importance of Model Cards that provide the information needed for transparency and accountability, for example, the data sets used, the intended purpose of the AI system, and limitations. 15
In closing, he mentioned a 6-month joint study undertaken by the Association of Research Libraries (ARL) and the Coalition for Networked Information (CNI) that was announced in December 2023, the purpose of which is “to articulate a set of scenarios for possible futures for the research enterprise that are heavily shaped by recent developments in AI and machine-learning, with a particular emphasis on generative AI.” 16 This initiative was discussed briefly at the end of the conference by Cynthia Hudson Vitale, the first speaker in the Closing Keynote session.
Padilla’s keynote was recorded and the video (along with a transcript) has been posted and is freely accessible. 17
AI tools in scholarly research and publishing
After the opening keynote there were four parallel sessions. I followed the AI track and this session had one speaker, Brian Pichman, Director of Strategic Innovation, Evolve Project. This, for me, was one of the best presentations of the conference and was packed full of information.
Brian opened with some general disclaimers about AI: It has challenging credibility (remember when you could not use the Internet as a source of information?); it is inherently neutral (technology can be used for good and bad); it is constantly changing (new rules and new techniques); and it has puzzling opportunities (AI can create jobs—the invention of cars did not eliminate jobs, it changed them). He then gave a brief history of the highs and lows of AI, 18 stating that the “Golden Years” were 1956–1974, when people found the programs astonishing, and then the first AI “winter” set in because the algorithms remained limited and could only handle trivial versions of the problems that they were supposed to solve. But in 1987 there was renewed interest from the business community and AI continued to advance.
He talked at length about the many uses of AI, saying that it can be used almost everywhere, and noted that there are a plethora of companies developing AI tools, with both cross-industry and industry-specific applications. He went on to talk about Generative AI, which uses trained data to create diverse media types such as text, audio, or visual content, and described the training process:
• Training Data: Generative AI models are trained on a dataset of desired content.
• Generator Network: Creates new data from random input.
• Iterative Learning: Continuous model adjustments during training to improve data generation.
• Sampling: Utilizes the trained model to generate new data samples (improperly done, this gives you AI “hallucinations”).
• Most AI models that generate content are trained on things found on the World Wide Web.
Pichman provided interesting examples of Generative AI, noting that the first AI-generated book in chemistry, “Lithium-Ion Batteries,” was published in 2019 by Springer. 19 He also showed fascinating examples of programs for visual content such as the Lensa App 20 and Deep Fake. 21 He noted that in 2022 it was easy to tell what was AI-generated and what was not via word choices or when the text resembled anything in existence. Early AI detection methods worked with high accuracy. Since then, there has been much more content created by Generative AI, thus when an AI algorithm leverages content that was created by AI as part of its “brain” it will not know what is AI-generated and what is not (this is called “Generative Inbreeding”).
He moved on to talk about Operational AI. This is the use of AI to make day-to-day tasks easier, for example, using AI that joins meetings to take notes or function as a coach. It can also be used to edit content or videos, to write or de-bug programs, to generate social media, etc.
Pichman also talked about Conversational AI that allows you to communicate just like speaking to someone else. ChatGPT is a great example. Conversational AI leverages many types of components to function: Natural Language Processing (NLP), Natural Language Understanding (NLU), Natural Language Generation (NLG), Speech Recognition, and Dialog Management. This type of AI is commonly used in Chatbots, Virtual Assistants, and Interactive Voice Response (IVR) Systems.
The next topic he addressed was the use of AI in the Metaverse—the use of something called “Digital Twins,” 22 that is the building of a Virtual (or Augmented) Reality environment for communication or training. One can create a digital twin of a librarian or a tutor that understands a student’s classes, what they are struggling with, and builds a relationship with this partner to help them through anything with which they may be struggling (sounds like science fiction, but it is reality!). He also addressed Analytical AI that is the use of AI for data management. It digests information faster than a human can with more accurate calculations and new observations. It can pull in all data sources to find trends and predictive analytics and it can also identify objects, do language translations, and discuss what it sees in a piece of data, document, or image. He gave the example of Office 365 Co-pilot, 23 Microsoft’s remarkable built-in AI tool, with which you can discuss what you want to accomplish and it will gather the solutions through a conversation and provide access to the data in a spreadsheet.
Pichman noted several of the uses of AI in research, such as the transformation of data collection with automated data scraping and aggregation, and the use of sensors and IoT devices in data collection and the revolution of data analyses with the use of machine learning models for pattern recognition and the use of AI for predictive analytics and modeling. He went on to provide examples of the use of AI in academic publishing such as the automation of processes, for example, AI-driven peer review and editorial decision-making and the automation of formatting and style checks; plagiarism detection via the use of advanced AI tools for detecting plagiarism and ensuring originality; and the enhancement of accessibility via AI-powered tools for improving the readability and accessibility of academic texts and translation services to bridge language barriers in research dissemination.
He did highlight the challenges and ethical considerations of the use of AI such as (1) ensuring the confidentiality and integrity of research data; (2) identifying and mitigating bias in AI models and datasets; (3) the responsible use of AI in research; and (4) the need for ethical guidelines for the use of AI in research. In addition, he highlighted copyright concerns such as the need to understand copyright laws in the context of AI-generated content and the need to balance innovation with respect for original works. Also, he believes that best practices for using copyrighted materials in AI research need to be developed so that people always ask for permission to use material authored by others—a sentiment echoed by others throughout the conference. Also, one must navigate the complexities of Intellectual Property rights for AI-generated outputs, and he provided information on recent legal cases and precedents in AI and copyright law.
In closing, Pichman noted that we should be “ok” with change, understand that not all AI is created equal, that AI changes fast—what works today may not work tomorrow, and that AI is only as good as the data that is used to generate results.
Pichman’s slides should eventually be posted on the NISO Plus website—they are excellent, information-packed, and worth downloading for future reference. Also, he submitted a high-level manuscript entitled, “Knowledge Powered by Artificial Intelligence,” that appears elsewhere in this issue of Information Services and Use.
Miles Conrad Lecture
The conference attendees re-convened after the morning’s parallel sessions ended to attend the Awards Luncheon. The awards ceremony traditionally closes with the presentation of the Miles Conrad Award that was a significant highlight of the former NFAIS Annual Conference. The award was named in honor of one of the key individuals responsible for the founding of NFAIS, G. Miles Conrad (1911–1964). His leadership contributions to the information community were such that, following his death in 1964, the NFAIS Board of Directors determined that an annual lecture series named in his honor would be central to the annual conference program. It was NFAIS’ highest award, and the list of Awardees reads like the Who’s Who of the Information community. 24
When NISO and NFAIS became a single organization in June 2019, it was agreed that the tradition of the Miles Conrad Award and Lecture would continue and the first award was given in 2020 to James G. Neal, University Librarian Emeritus, Columbia University. In 2021 the award went to Heather Joseph, Executive Director of the Scholarly Publishing and Academic Resources Coalition (SPARC). In 2022, the award was presented to Dr Patricia Flatley Brennan, Director of the U.S. National Library of Medicine (NLM). In 2023, Dr Safiya Umoja Noble, Professor of Gender Studies and African American Studies, University of California, Los Angeles (UCLA) was presented with the award. This year the award was given to Ed Pentz, the first and current Director of Crossref.
Pentz became Crossref’s first Executive Director when the organization was founded in 2000 and he manages all aspects of the organization to ensure that it fulfills its mission to make research outputs easy to find, cite, link, and access. Ed was Chair of the ORCID board of directors from 2014 to 2017 and is the current Treasurer of the International DOI Foundation. Prior to joining Crossref, Ed held electronic publishing, editorial, and sales positions at Harcourt Brace in the U.S. and UK, and managed the launch of Academic Press’ first online journal, the Journal of Molecular Biology, in 1995. Ed has a degree in English Literature from Princeton University and lives in Oxford, England.
Ed’s presentation drew from over 3 decades of his involvement in scholarly publishing, including 24 years at Crossref. He explored the critical role of collaboration and diplomacy in developing an open scholarly infrastructure. He examined the key inflection points in scholarly communication, the lessons that he learned from collaborative initiatives, and he discussed the future challenges and opportunities that he sees in the field. His presentation also reflected on the importance of diversity, equity, and inclusion in shaping the future of an open scholarly infrastructure.
I will not go into the details of his presentation as he has submitted an article entitled, “Building Open Scholarly Infrastructure: A Journey of Collaboration and Diplomacy,” that is based upon his Awards Lecture and it appears elsewhere in this issue of Information Services and Use. The recording of his presentation can be accessed on the NISO website. 25
AI and machine learning in discovery and search
Immediately after the awards luncheon there were four parallel seventy-five-minute tracks on the following topics: Author Identity and Name Changes in Metadata; Open Scholarship and Bibliodiversity; What Browser Enhancements Mean for Libraries and Authentication; and AI and Machine Learning in Discovery and Search. I briefly stayed on the AI track.
The first speaker was Heather Kotula, the President and CEO of Access Innovations, Inc. Her presentation was an excellent primer on the necessary basics of an artificial intelligence (AI) system and how some of the systems approach certain methodologies. She also stressed data quality as input to AI systems (as did all of the other speakers in the AI track throughout the conference) and provided examples of some of the criteria of which users need to be aware when training an AI algorithm, such as the system keying in on anomalous parameters, loopholes in the logic when designing the system, and the origin of data sources.
Up front, loud and clear, she stated that everything to do with discovery and search is and always has been based on AI systems, whether a machine learning (ML) component is included or not. She said that we use such systems every day even if we are unaware of it, noting that most people attending the conference had a hand-held device near them that has an app with an AI system called a Monte Carlo simulation 26 on it that they probably use multiple times a day without thinking about it.
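Monte Carlo simulation, which Kotula mentioned in passing, simply means using repeated random sampling to approximate an answer. A minimal, classic sketch (my own illustration, not an example from her talk) estimates π by checking how many random points in a unit square fall inside the quarter circle:

```python
import random

def estimate_pi(samples: int = 100_000, seed: int = 42) -> float:
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that land inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())  # close to 3.14159; accuracy improves with more samples
```

The same sample-and-tally principle underlies the far more elaborate simulations embedded in everyday apps.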
Kotula said that there is one thing required for any AI algorithm and that is an expert system 27 —a computer system emulating the decision-making ability of a human expert. An expert system is made up of two components: (1) a knowledge or rules base and (2) an inference or reasoning engine. Kotula went on to describe such a system, the process for developing the rules, how important it is to know the rules underlying the AI system that you use, etc. and she did this in a non-technical and humorous manner. She provided examples of successes and failures in the use of AI and all of the caveats and parameters that need to be considered.
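The two components Kotula named can be made concrete with a tiny sketch: a rules base (condition/conclusion pairs) and an inference engine that forward-chains over known facts. The rules and fact names here are hypothetical illustrations, not from her presentation:

```python
# Minimal expert-system sketch: a rules base plus an inference engine.
# Each rule pairs a set of required conditions with a conclusion.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird"}, "lays_eggs"),
    ({"is_bird", "cannot_fly"}, "flightless_bird"),
]

def infer(facts: set) -> set:
    """Forward-chaining inference engine: repeatedly fire any rule whose
    conditions are all known, adding its conclusion, until nothing new
    can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"has_feathers", "cannot_fly"})))
```

Note how the conclusions depend entirely on the rules base: change the rules and the same engine reasons differently, which is why she stressed knowing the rules underlying any AI system you use.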
In closing she quoted a LinkedIn posting made by a friend, Marianne Calihanna, which is as follows: “Using the term ‘AI’ when you specifically mean ‘generative AI’ is like saying ‘vehicle’ when you really mean ‘bike’. Just like how a vehicle refers to cars, trucks, bikes, and more, ‘AI’ is a broad term that encompasses various types of artificial intelligence, including generative AI. So, using ‘AI’ alone does not accurately convey the specific type of AI to which you are referring, just like saying ‘vehicle’ does not specify whether you are talking about a car, a truck, or something else.” This is something that Brian Pichman made very clear in his talk as well. There are many flavors of AI.
Kotula’s slides are not yet on the NISO website, but she has submitted a paper based on her presentation entitled, “What Changes with Machine Learning, Large Language Models, Generative AI, and ChatGPT in Search and Discovery? EVERYTHING!”, that appears elsewhere in this issue of Information Services and Use. It is as much a pleasure to read the paper as it was to hear her presentation.
Open scholarship and bibliodiversity
I took a break from AI to attend this session because the term “bibliodiversity” caught my attention. The session featured several speakers: Maureen Walsh, Scholarly Sharing Strategist, The Ohio State University Libraries, Columbus, OH, USA; Nataliia Kaliuzhna, Research Associate, Leibniz Information Centre for Science and Technology, Hannover, Germany; Nokuthula Mchunu, Deputy Director, African Open Science Platform, National Research Foundation, Pretoria, South Africa; Mohamad Mostafa, Regional Engagement Specialist, Middle East and Asia, DataCite, Hannover, Germany; and Katherine Witzig, Library Administrative Assistant, Oklahoma City University, Oklahoma City, Oklahoma, U.S.A. This was a diverse group representing researchers, institutions, and projects all focused on open scholarship and ensuring that the scholarly infrastructure includes a multitude of voices, is accessible to everyone, and can be expressed in a variety of ways. Each spoke briefly on their specific topic and the conversation continued with a panel discussion.
The term “bibliodiversity” was first coined by Chilean publishers in the late 1990s, and was originally defined as “cultural diversity applied to the world of books.” 28 In the context of scholarly communication the term is referred to by Shearer et al. as “Diversity in services and platforms, funding mechanisms, and evaluation measures [that] will allow the scholarly communication system to accommodate the different workflows, languages, publication outputs, and research topics that support the needs and epistemic pluralism of different research communities.” 29 In the context of open scholarship the term refers to the diversity of publishing models, platforms, and formats that are available for scholarly communication. It emphasizes the importance of a varied and inclusive ecosystem for acquiring academic knowledge and for the dissemination of research. An important part of bibliodiversity is the inclusion and the promotion of a diversity of scholarly voices.
Dr Nokuthula Mchunu presented on Open Science as a means to achieve bibliodiversity, with a focus on the African Open Science Platform (AOSP), which is envisioned as a pan-African endeavor that aims to position African scientists at the forefront of data-intensive science.
Maureen Walsh discussed the role played by a large North American research library in promoting bibliodiversity through its investments in open scholarship, including the potential for Open Access investments to lead to unintended barriers for the inclusion of diverse scholarly voices.
Nataliia Kaliuzhna talked about the identification of obstacles to open access publishing for researchers with weak institutional ties and provided an overview of the newly launched IDAHO project. 30 The project aims to identify and describe the obstacles to OA publishing and their underlying mechanisms. It focuses on refugee scientists, independent researchers, individuals from the Citizen Science domain, and those affiliated with non-governmental institutions, who lack sufficient institutional support for Article Processing Charge (APC) funding. It underscores the necessity of ensuring that a diverse range of voices from various backgrounds have an equitable opportunity to engage in research, knowledge creation, and dissemination.
Mohamad Mostafa discussed his work building a trusted and equitable open research infrastructure using persistent identifiers (PIDs). He also touched on his support for universities, institutions, and emerging research communities across Asia, the Middle East, and North Africa on their transition towards Open Research and the implementation of its principles. He also discussed his involvement in efforts to enable communities in lesser-represented regions to benefit from an open infrastructure.
Katherine Witzig, who I should note is a citizen of the Choctaw Nation of Oklahoma, talked about her work as co-chair of the Program for Cooperative Cataloging (PCC)’s Task Group for Metadata Related to Indigenous Peoples of the Americas, 31 where she is helping to develop reports and recommendations for inclusive revisions to existing organizational structures for information. This project is under the auspices of the U.S. Library of Congress.
The talks were brief (it was a lightning talk session), the panel discussion was interesting, and I encourage you to read the well-written paper that the speakers submitted entitled, “Open Scholarship and Bibliodiversity,” that appears elsewhere in this issue of Information Services and Use. Their article provides a lot of detail and, in the true spirit of bibliodiversity, includes sections written both in English and in the native language of the author. I applaud Sage Publishing, the publisher of Information Services and Use, for allowing this since their journal policy is English-language content only. Kudos to Sage Publishing!!
The Seamless Access Audit Toolkit: A framework for librarians to audit resource access
This session focused on the importance of ease of access to library holdings. The speakers were John Felts, Head of Information Technology and Collections, Coastal Carolina University, Jason Griffey, Director of Strategic Initiatives, NISO, and Tim Lloyd, CEO, LibLynx. They stated that access is important because users expect and want access to be easy (seamless) and if it is not, the use of the library’s holdings will not be maximized. This is especially important in an era of increased digital holdings. They said that the main challenges to ease of access are technology, the regulatory environment (especially regarding privacy), and scale. In order to overcome these challenges, Seamless Access, 32 a free service that enables single sign-on for users, is producing a toolkit that gives librarians a framework for auditing their resource access. Structured into four key areas—Usability, Privacy, Reliability, and Security—the toolkit enables libraries to identify the risks and opportunities that inform decision-making and advocate for future investment. During their presentation they explored usability and privacy, and feedback from the library perspective demonstrated how the toolkit helped to (1) identify and address access-related issues, (2) assess potential ethical or legal exposure, and (3) identify best practices and recommend next steps for moving forward. They also discussed how the toolkit can serve as a communications tool that can help librarians to improve knowledge and awareness within their teams as well as with key stakeholders outside the library.
Their presentation included links to some valuable information regarding the protection of library users’ privacy, which I repeat below:
• Privacy Field Guides for Libraries: https://libraryprivacyguides.org
• Library Freedom Project Vendor Privacy Scorecard/Audit: https://libraryfreedom.org/resources/
• ALA Privacy Guidelines: https://www.ala.org/advocacy/privacy/guidelines
The speakers submitted a joint manuscript based upon their presentations and it is published elsewhere in this issue of Information Services and Use.
I should note that John Felts spoke on Seamless Access from the library’s perspective at the 2023 NISO Plus Conference. In addition, at that same conference Julie Zhu spoke on the same topic from a publisher’s perspective (IEEE). The manuscripts based on those presentations were published in a special issue of Information Services and Use. 33
Building or buying? AI for the scholarly ecosystem
One of the speakers in this session, Russell Michalak, Library Director, Goldey-Beacom College, gave an interesting and extremely informative presentation on how he ultimately decided between purchasing Artificial Intelligence tools for his library or having the university develop the tools themselves. He decided on purchasing and discussed at length the benefits and challenges associated with this approach over in-house development. He also provided insights that can help to guide other academic libraries in making informed decisions about AI-driven tool adoption for the support of undergraduate research workflows. The tools that he discussed were:
• Grammarly 34: enhances writing skills by providing real-time feedback tailored to each student’s needs, fostering better communication skills.
• Scholarcy 35: streamlines the research process, making academic papers more accessible and digestible, thus speeding up the literature review process.
• Yewno Discover 36: utilizes knowledge graphs to visualize complex relationships between concepts, encouraging interdisciplinary exploration and a deeper understanding of the subject matter.
• Litmaps 37: extends the capabilities of traditional research tools by visually mapping the scholarly conversation, enhancing students’ ability to engage in and contribute to ongoing academic debates (purchased to replace Yewno Discover when it was discontinued).
For each tool, he discussed its pros and cons and the activities that were developed to help students navigate and use it.
Michalak said that the decision to purchase rather than build AI-driven tools has resulted in significant strategic benefits and manageable challenges. The benefits include immediate implementation, cost efficiency because purchasing reduces the development and maintenance costs of building in-house solutions, and ongoing access to vendor support that ensures that the tools remain updated and functional. The challenges are vendor dependency, limited customization, and the fact that subscription models can create an ongoing financial burden.
In closing, he said that despite the challenges, the benefits of adopting commercially available AI tools have been significant. They offer personalized learning experiences, support diverse academic needs, and facilitate extensive research capabilities, and adopting these technologies aligns with his library’s goals of enhancing accessibility and preparing students for technologically driven futures.
Michalak has submitted a manuscript based upon his presentation and it appears elsewhere in this issue of Information Services and Use.
AI and machine learning: What to know and how to talk about it to researchers and patrons
The two speakers in this session, Trevor Watkins, Teaching and Outreach Librarian, George Mason University, and Qiana Johnson, Associate Dean of Libraries, Collections and Content Strategies, Dartmouth College, looked at artificial intelligence (AI) and machine learning (ML) from two perspectives. One was the librarian’s view and the need for libraries to adapt and support both researchers and students in AI literacy. The other was the researcher’s view on the uses and misuses of AI and ML. Combined, the two views provided a broad, high-level snapshot of the evolving landscape of AI and ML, highlighting both the opportunities and challenges these tools present for researchers and library patrons. They, too, discussed terminology and definitions, making clear, as did other speakers throughout the conference, that AI and ML were not the same, despite the use of the terms being frequently interchanged.
Watkins discussed in-depth how his team created three groups—an AI task force, an AI community of practice, and an AI salon series—to determine how the library at George Mason University (GMU) could collectively adapt its services and upskill librarians and staff. The purpose of the task force is to create a public-facing information guide for the GMU community regarding best practices and resources for use and research involving AI and AI tools. The community of practice focuses on the use of specific artificial intelligence tools for curriculum and research best practices. It allows members of the community to gain experience and use specific tools each semester. The AI Salon Series creates a space where library faculty, staff, administration, students, and the university community can engage in meaningful conversations about AI in an informal and collaborative environment. The series emphasizes the importance of dialogue, community, and the exchange of diverse perspectives and opposing viewpoints in advancing the understanding of AI.
Watkins also stressed that it is important that librarians interview researchers who are contemplating using AI in their research. It is important for librarians to understand the research context fully by knowing more about the research project and how the researcher plans to incorporate AI. This includes identifying or discussing specific problems that AI is supposed to solve, as well as AI techniques or tools that can be used to help find potential solutions. He said that researchers seeking help may feel more positive about librarians who themselves are actively working on AI projects or research because they are more likely to be knowledgeable about the latest developments, tools, and best practices in the field.
Johnson stressed the importance of researchers being aware of potential misuses of AI, the ethical issues that must be taken into consideration, the potential expense of AI tools, and the potential environmental impact of the computational power that is required for some AI/ML uses. She provided interesting examples of AI/ML uses—both good and bad. One successful use is the creation of a chatbot that allows medical students to practice the conversation flow of a doctor-patient interaction so that medical students can begin to practice taking medical histories and receiving additional information from medical histories and test results. 38 One unsuccessful use was by two lawyers who were sanctioned for using ChatGPT to write a legal brief. The problem was not that they used the tool; it was that the tool created fake case citations that the lawyers included in the brief. 39
Both speakers were excellent and they collaborated on a joint paper that encapsulates their session. The paper appears elsewhere in this issue of Information Services and Use.
NISO working group updates
Since the launch of the NISO Plus conferences it has been a tradition to have several sessions dedicated to providing updates by those who are developing new NISO/ISO standards or updating existing ones. This year was no exception. Marjorie Hlava, Chief Scientist, Access Innovations, Inc., and Project Leader for the Working Group that is updating the ISO 25964 standard for thesauri and interoperability with other vocabularies, spoke at one of the sessions. She noted that all standards developed by the International Organization for Standardization (ISO), as well as those developed by related national standards organizations, are required to be reviewed every 5 years, with the standard either being reaffirmed or retired. She said that when ISO 25964 came up for review it was clear from the comments that it was time to revise it, and she went on to provide a high-level overview of the very time-consuming process that is required.
I learned a lot, but not enough to intelligently describe the process, so I leave it to you to read the paper appearing elsewhere in this issue of Information Services and Use that expands on her presentation. It will give you a fuller appreciation of NISO’s work. Also, I want to point out that Jason Griffey, Director of Strategic Initiatives, NISO, has provided an overview of NISO—who it is, how it is structured, what it does, how it does it, and who it serves. That paper, “Engaging with NISO,” also appears elsewhere in this issue of Information Services and Use.
Closing Keynote: 2034 AI futures
The Closing Keynote was actually a panel composed of several speakers—Cynthia Hudson Vitale, Director of Science Policy and Scholarship at the Association of Research Libraries (ARL); Christine Stohn, Senior Director, Clarivate; Sayeed Choudhury, Associate Dean for Digital Infrastructure and Director of Open Source Programs Office, Carnegie Mellon University; and Kareem Boughida, Dean of University Libraries, Stony Brook University Libraries, who served as moderator. Boughida said that the purpose of the session was to provide some scenarios about how we will be working in the information industry in 2034 based upon the potential impact of artificial intelligence (AI) and machine learning (ML). Each speaker was asked to give a brief statement, to be followed by Q&A among the panelists and then Q&A from the audience.
The first speaker was Cynthia Hudson Vitale who represented the Task Force established by the Association of Research Libraries (ARL) and the Coalition for Networked Information (CNI) that was announced in December 2023, the purpose of which is “to articulate a set of scenarios for possible futures for the research enterprise that are heavily shaped by recent developments in AI and machine-learning, with a particular emphasis on generative AI.” 40 This initiative was mentioned by Thomas Padilla in his presentation on the first day of the conference.
She noted that scenario planning is a very powerful tool in the strategic planning toolbox for use during times of uncertainty or in service of topics that have a high degree of uncertainty. In the case of AI, there are significant amounts of instability or uncertainty with regard to societal acceptance, policy, regulations, Intellectual Property, issues around trust, the veracity of AI results, AI workloads, technical development, etc. She said that it is difficult for libraries and other institutions to determine where to make strategic investments or shifts in resourcing—how do we plan if we do not know what is going to happen? The Task Force will develop four scenarios of plausible AI-influenced futures, from the most optimistic picture to one that is more dystopian in its outlook. She commented that in the end, the future will never be captured accurately or comprehensively by any one scenario. The belief is that the scenario of the future will be made up of components of each of the four scenarios that are developed. The ultimate goal is that each organization can leverage these scenarios to conduct their own SWOT 41 analysis to evaluate ways by which they can mitigate their own risk from changes due to AI and ML. As organizations go through each scenario, a number of robust strategies will rise to the surface that can be put in place in order to be proactive moving forward.
Vitale said that the Task Force started their efforts in November 2023 and that it has generated a lot of interest in the information community. In January of this year, they held a number of focus groups and individual interviews with community members and thought leaders in order to identify the critical uncertainties around AI, many of which were raised during this conference. In order to better understand the key questions that the Task Force hopes to address, they had more than 150 individuals participate in the initiative and share their expertise. There will be more community engagement events in the future.
In January, the Task Force also conducted six interviews with thought leaders, AI researchers, and others on the future of AI in 10 years. She noted that the comments were fascinating and varied. One provocateur suggested that in 10 years, if everything goes well, AI will create superhumans and brain-computer interfaces will be widespread, while another suggested a future where the UN Sustainable Development Goals 42 will be exceeded and the focus will shift more locally. They plan to publish those incredibly thought-provoking interviews in late spring and encouraged those in the audience to take a look. She said that they met last week to frame the four scenarios that will be developed and hope to have a draft set published before the CNI meeting in March. This will be followed by another round of community listening sessions, both virtual and in-person, and they will use the feedback to revise the draft scenarios. They hope to have the final scenarios published by the spring meeting in May of 2024. 43 After the scenarios are published, they will host a series of workshops on helping organizations assess their local strategic implications and how they might leverage the scenarios for strategic planning within their own organizations. She closed by encouraging conference attendees to talk to her about the initiative.
The second speaker, Christine Stohn, spoke from the corporate perspective. The first topic that she discussed was increased productivity. She believes that in 2034 we will be using AI tools to increase productivity in many different processes. We may not fundamentally change what we are doing; rather, we will be doing it in a different way. One example is metadata creation for digital objects: we will load an image into a system, and the system will automatically recognize the image and create a metadata record. She also talked about records for e-books, many of which today are rather thin and missing abstracts; such records could be enriched automatically by uploading the entire book and having an AI tool create a summary or abstract. Another example is disambiguation: today we have metadata records with different author names, some of which may actually refer to the same author, and AI might be able to help here. Another issue is research integrity, where Paper Mills come to mind. We are already using AI tools to set guardrails in terms of what we publish and what we index. There are AI tools, for example, that can raise red flags if there are anomalies in the abstract or elsewhere in the text, after which a human reviews the flags and checks whether the research is real. Unfortunately, Paper Mills also use AI, so we have to be careful. She said that writing is another area where we can increase productivity by having AI-automated writing of a draft followed by human review. She clearly noted that the work starts with a human, is picked up by AI, and ends with human review. She does not believe that jobs are going away—they are just changing (echoing most prior speakers in the AI track). She believes that human expertise is really important and will increase in value in 2034 and that AI knowledge/expertise will become the norm in job descriptions.
As a result, education needs to increase its focus on AI “literacy.” Future generations will need to know how to use AI. They may not need to know how to write AI algorithms, but they will need to know how to responsibly use an AI application—complying with policies, ethical rules, etc., something that can only be done by a human.
Another point that Stohn made is that AI changes the ways in which we discover, access, and consume information. In 2034, she believes that AI technology-based discovery will be the norm. She also believes that we will see a lot more personal assistants popping up, either system-specific or perhaps institution-specific, that help students, researchers, etc. to do their everyday work. There are already examples of such tools (see Russell Michalak’s paper that appears elsewhere in this issue in which he describes some of the AI tools that he has purchased for use in his library). Such a system basically guides students through their coursework and asks the questions that the students need to answer.
She believes that in 2034 everybody will use a reading aid for long documents. Today we already have such systems, such as Scholarcy, Grammarly, and Wordtune 44 (the first two are reviewed in Michalak’s paper). The reader can upload a PDF or a link and the AI tool will summarize the article. She added that in 2034 we will still be discussing copyright law, since such legislation always lags behind technological advances.
Her final point was that she loves the fact that many libraries are digitizing their collections, including national libraries that digitize their national heritage. With a huge, digitized collection, users can run tools such as ChatGPT on that collection, quickly answering questions that have in the past required hours, if not days of work. By 2034, AI will definitely improve our productivity if used wisely and responsibly.
Sayeed Choudhury was the final speaker. He opened by saying that during the break before the session he was asked if he was going to be the “doom and gloom” guy who depresses everyone. He assured the audience that this was not his plan, but that he did want to provoke thinking and pick up on the uncertainty around AI. He went on to say that one of the best leadership coaches he ever had said that there is an optimal level of anxiety and it is not zero. Too much anxiety is paralyzing. On the other hand, no anxiety equates to no urgency. With AI, we need to hit the optimal spot of anxiety. He turned the mirror to himself when in December 2021, he gave a presentation where he made 10-year predictions on AI. He did it because he assumed that no one would remember what he said, so he had no fear of predicting.
One of those predictions was that in 10 years most software (80%) would be written by AI. He is confident that today (just about 3 years later) there are many contexts where more than 50% of the code is generated by AI, so he believes that the pace of change is faster than he anticipated. He said that he recently spoke to someone at Microsoft who said that every day they have to make thousands of reviews for the software development process, pull requests, documentation, license choices, feature requests, bug fixes, and so on and the number is as high as six hundred thousand per day. At that scale, a single person (or even a dedicated team) cannot review them thoroughly, cost-effectively, and efficiently. They have to make difficult tradeoffs and choices about what risks they accept and how they move forward, and they use AI tools to help.
He said that he attended workshops on the topic of automated science and one of the key questions was “What chemistry can be done by machines, not people? How can we design experiments with automation as the focus?” Research is becoming a multiplayer game and it could be possible to have thousands of researchers with diverse expertise working at various levels on a problem. He thinks that is where we are headed in terms of global research. And it will be a form of gamification. He also believes that in 2034 there will be so much output from research that the current publishing workflows will not be able to process it. Scientific publishing will change and users of the content will have to sift through the material in completely new and novel ways using AI tools.
He said that it is impossible for him to say what percentage of jobs will go away. The private sector is focused on the bottom line and as AI creates efficiencies and increases productivity fewer people will be needed. He noted that in 2023 the tech sector let about a quarter of a million people go—50 percent more than in 2020. People need to continuously adapt and think about how AI could impact their job and also know what they bring to the job. For example, there are tasks that are done better and faster by an AI assistant and there are tasks that require creativity and human thinking. We need to understand AI and know how best to apply it.
After the speakers gave their comments about the future, the session continued with Q&A with some quite out-of-the-box theories that you might find interesting. Note that the session was video-recorded and that the recording is freely available. 45 In my opinion it is worth the time to view it.
Closing
In his closing comments, Todd Carpenter, NISO Executive Director, first thanked everyone who made the conference a success—the organizers, the sponsors, the NISO Board and staff, the speakers, and especially the audience (some 250 plus) who actively engaged in the discussions that took place over the 2 days. He went on to say that NISO could not fulfill its mission and do what it does without the talented and dedicated volunteers who give so much of their time, talents, and expertise.
He said that while we are at the end of a 2-day journey, what happens tomorrow and in the coming months is what is most important. Tomorrow NISO will assess all of the ideas that have emerged—will they make an impact? Can they transform our world? etc. NISO will have to prioritize the many ideas that have the potential of offering a positive impact and he asked that if any of the ideas struck home with anyone to please send him an email and state what idea(s) are of interest and why. He also noted that an evaluation survey will be sent to all attendees and begged that everyone be brutally honest about what they liked, disliked, and what they would like to see changed in the future.
Note that as of this writing the planning of the 2025 NISO Plus Baltimore Conference has already begun. 46 The dates are February 11–12, 2025 at the Baltimore Marriott Waterfront hotel, and the call for proposals for presentations closed November 1, 2024. There will be a pre-conference workshop on February 10th.
He added that the virtual NISO Plus conferences that were held from 2021 to 2023 will continue as an annual fall event—NISO Plus Global—starting this year, September 17–18, 2024, and will continue at least until 2027. By the time that you are reading this article, the 2024 Global event has already taken place. I attended and I can tell you that it was fantastic (papers will appear in Information Services and Use early next year). I find the two events to be complementary with regard to the issues under discussion, so I recommend that you mark your calendars for both 2025 events NOW!!
Conclusion
As you can see from this overview, AI was a major conference theme, but it was not the only topic of discussion. The program actually was quite diverse, and there were common themes/issues raised throughout the conference, some of which resonated even with topics of prior years’ conferences.
• Open Science, Open Access, and the sharing, citing, and reusing of datasets remains a major topic of discussion due to cultural and behavioral norms among researchers around the globe and due to issues of regional technical infrastructures.
• AI has rapidly moved from a behind-the-scenes tool for researchers concerned with improving data discovery and access tools to a “commodity” for the general public. Its ethical usage and the validity of outcomes of its usage remain an issue of concern. The development and adoption of Best Practices is critical.
• We need to ensure that all voices have a role in the flow of scholarly information. The communication system must accommodate the different workflows, languages, publication outputs, and research topics that support the needs of different research communities—including Indigenous communities.
• Using standards is essential to the global sharing of data and scholarly information (always a theme at any NISO meeting!).
The majority of the presentations that I attended were excellent. I thoroughly enjoyed the opening keynote on Open AI and it tied in well with the closing keynote that looked at the potential impact of AI on the information industry by 2034. Together they served as perfect bookends for the information-packed conference.
I always like it when I walk away from a conference with new knowledge. At the 2022 NISO Plus conference, I was blown away by a technology of which I was unaware—Visual-Meta. 47
There were no new technologies discussed this year, but I did walk away with a new concept (new for me) and that was “bibliodiversity.” I always like hearing about new scientific activities in Africa and I was disappointed that there really was no discussion on that topic this year as there had been in the past few years. However, that was made up for at the September NISO Global Conference, which is why I said that the two events are complementary.
At the first NISO Plus meeting in 2020 Todd Carpenter called the conference a “Grand Experiment.” When writing the conclusion of my conference overview I honestly said the experiment was successful. I also said that, as a chemist, I am quite familiar with experiments and am used to tweaking them to improve results. And as successful as that first meeting was, in my opinion it needed tweaking. To some extent the 2021 conference reflected positive modifications, but even then, I said that there needs to be more of the information industry thought-leadership concepts similar to what the NFAIS conferences offered, and I still hold fast to that opinion. But perhaps I am being unfair. I will repeat what I said last year. In the term “NISO Plus,” NISO comes first and when I think of NISO, I think of standards and all of the everyday practical details that go into the creation and dissemination of information. I do not instinctively look to NISO to answer strategic questions such as what new business models are emerging? Are there new legislative policies in the works that will impact my business? What is the next new technology that could be disruptive? I had hoped that those questions would be answered to a certain extent in the “Plus” part of the conference title, but to date the “Plus” part has been a much smaller portion of the conference symposia. This year that portion was expanded by the focus on Artificial Intelligence and I certainly hope that the expansion continues.
Having said that, I sincerely thank the NISO team and their conference planning committee for pulling together yet another excellent conference, and I offer my congratulations to Todd and his team for a job well done!
Reminder
For more information on NISO, please note that Todd Carpenter and Jason Griffey have submitted an article that appears elsewhere in this issue of Information Services and Use.
Additional information
As noted above, the 2025 NISO Plus Baltimore Conference will take place in person on February 11–12, 2025, at the Baltimore Marriott Waterfront hotel. As in 2024, there will be a pre-conference workshop on AI on Sunday, February 10th, offering a two-part exploration of the state of Artificial Intelligence in the world of scholarship and research. The first half of the session will be a moderated discussion with a group of technical experts from publishers, vendors, and others actively working on products that use AI in the marketplace now. The group will explore the strengths of each approach to using AI, along with the possibilities and limitations moving forward, and the audience will have a chance to join the conversation and learn more about these emerging tools.
In the second half, the ARL/CNI Artificial Intelligence Scenarios [48] that were announced at the 2024 NISO Plus Baltimore Conference will be used to examine the potential of these tools. The goal will be to find areas where libraries, publishers, vendors, and researchers have a need for collaboratively developed best practices and standards around the development, use, and evaluation of AI tools.
Where permission was given to post them, the speaker slides used during the 2024 NISO Plus Conference will be made freely accessible in the NISO repository on figshare. The opening and closing keynotes, as well as the Miles Conrad Lecture, were recorded and are freely available for viewing on the NISO website [49].
About the author
Bonnie Lawlor served from 2002 to 2013 as the Executive Director of the National Federation of Advanced Information Services (NFAIS), an international membership organization comprised of the world’s leading content and information technology providers. She is currently an NFAIS Honorary Fellow. She is also a Fellow and active member of the American Chemical Society (ACS) and serves on its Board of Directors. She is also an active member of the International Union of Pure and Applied Chemistry (IUPAC) for which she chairs the Subcommittee on Publications and serves on its Executive Board as well as on the U.S. National Committee for IUPAC. Lawlor is also on the Boards of the Chemical Structure Association Trust and the Philosopher’s Information Center, the producer of the Philosopher’s Index, and serves as a member of the Editorial Advisory Board for Information Services and Use.
About NISO
NISO, the National Information Standards Organization, is a non-profit association accredited by the American National Standards Institute (ANSI). It identifies, develops, maintains, and publishes technical standards and recommended practices to manage information in today’s continually changing digital environment. NISO standards apply to both traditional and new technologies and to information across its whole lifecycle, from creation through documentation, use, repurposing, storage, metadata, and preservation.
Founded in 1939, incorporated as a not-for-profit education association in 1983, and assuming its current name the following year, NISO draws its support from the communities that it serves. The leaders of about one hundred organizations in the fields of publishing, libraries, IT, and media serve as its Voting Members. More than five hundred experts and practitioners from across the information community serve on NISO working groups, committees, and as officers of the association.
Throughout the year NISO offers a cutting-edge educational program focused on current standards issues and workshops on emerging topics, which often lead to the formation of committees to develop new standards. NISO recognizes that standards must reflect global needs and that our community is increasingly interconnected and international. NISO serves as the Secretariat for ISO Subcommittee 9 on Identification and Description, with its Executive Director, Todd Carpenter, serving as the SC 9 Secretary.
In 2024, NISO was appointed by the American National Standards Institute (ANSI) to manage the accredited U.S. Technical Advisory Group (TAG) to the International Organization for Standardization’s (ISO) new Technical Committee (TC) on Cultural Heritage Conservation (TC 349). Todd Carpenter will serve as the Chair of the Committee [50].
