Abstract
This paper offers an overview of some of the highlights of the 2024 NISO Plus Global Online Conference that was held September 17–18, 2024. This conference grew out of the virtual NISO Plus meetings that were held during the pandemic. While NISO Plus conferences resumed in person in February 2024, the global sessions were so successful that NISO held a purely virtual one later that same year. Speakers who cannot usually travel to the USA are able to give their presentations virtually, and the global conferences allow a much wider net to be cast for speakers and attendees. The ultimate goal of the NISO Plus conferences is to have a discussion, identify information industry problems and, with the collective wisdom of the speakers and audience who are representative of the information industry stakeholders, generate potential solutions that NISO or others can develop. As with prior years, there was no general topical theme (although Open Science, Open Access, sustainable research, and scientific integrity were common threads), but there were topics of interest for everyone working in the information ecosystem—from the practical subjects of persistent identifiers, standards, metadata, data sharing, Open Science, and Open Access to the potential future of the research article and journals.
Introduction
In February 2020 NISO held the first NISO Plus Annual Conference in Baltimore, MD, USA. It replaced what would have been the 62nd Annual NFAIS conference, but with the merger of NISO and NFAIS in June 2019 the conference was renamed NISO Plus and a new, much more interactive format was adopted. The inaugural conference was labeled a “Grand Experiment” by Todd Carpenter, NISO Executive Director, in his opening remarks. When he closed the conference, all agreed that the experiment had been a success (me included), but that lessons had been learned and that in 2021 the experiment would continue. It did, but due to the pandemic the experiment became more complicated: the 2021 conference was held for the first time in a totally virtual format, and it continued in that format for the next two years. I want to add that the NISO virtual meetings have been among the best that I have attended.
However, in February 2024 the NISO Plus Annual Conference returned to a much-anticipated in-person format, again held in Baltimore, MD, USA—NISO’s home base—and is now entitled the NISO Plus Baltimore Conference. But the success of NISO’s global conferences, both in the reach of speakers and in worldwide geographic participation, motivated NISO to hold yet another NISO Plus conference in 2024—this one branded as the NISO Plus Global Online Conference. It was another great NISO virtual session, one that NISO plans to repeat for the foreseeable future—at least through 2027. The next Global Online Conference is scheduled for September 2025. As of this writing, there is no further information—other than “coming soon.”
According to NISO, the virtual attendees were a representative sample of the information community—librarians, publishers, system vendors, product managers, technical staff, etc., from all market segments—government, academia, industry, both for-profit and non-profit. There were approximately twenty-five sessions plus a virtual Game Show. All of the sessions were recorded and are currently available for access by those who attended. Also, I know that some of the recordings are freely available on the NISO website. 1 If you are curious, contact NISO to see if all of them can be accessed by anyone who did not attend the meeting. In past years they were.
As in prior years, Todd Carpenter, NISO Executive Director, noted in his welcoming remarks that it was important to lay out NISO’s vision for the conference. He noted that many attendees might be new to this concept, and he wanted everyone to understand the conference goal, its format, and why NISO is building on the success of the past 5 years—they simply want to keep the momentum going. He emphasized that the attendees themselves are integral to making the event special because this meeting is not purely an educational event, it is meant to be an interactive, collaborative event—a place where participants can openly identify and discuss current problems and brainstorm on how those problems can be solved or mitigated. The goal is to generate ideas, develop practical solutions to problems, and be results-oriented. In other words, the ideas need to have a positive impact—improve our work, our efficiency, and our results.
The objective of the NISO Plus conferences, both the in-person and the virtual, is to continue to drive things forward, identifying the standards needed for the next generation of content and publishing tools, for example, what are the page numbers of the future? He believes that the best way to do this is to leverage the collective wisdom of the speakers and the conference audience. The speakers are not “sages on the stage,” they are the sparks that light the fire of a structured conversation. He made it clear that NISO is delighted to have a lineup of brilliant speakers who have agreed to share their knowledge, but that the goal of the conference is not simply to take their wisdom. He believes that everyone participating in this conference is brilliant and that he would like to hear from each and every one because the diverse reactions to the speakers and the ideas are what will make the event a success.
He added that if this NISO Plus conference is similar to its predecessors, lots of ideas will be generated, of which a few will sprout and perhaps a few will turn into giant ideas that have the potential to transform the information landscape. He made it quite clear that NISO cannot make all of this happen as they lack the resources to manage dozens of projects. As in the past, they will settle on three or four ideas, and perhaps the other ideas will find homes in other organizations that are interested in nurturing them and have the resources to do so.
In closing, Carpenter said that on a larger scale the NISO Plus conference is not about what happens over the next two days, but rather what is important are the actions that are taken over the days, weeks, and months that follow. It is what is done with the ideas that are generated and where they are taken. Whether the ideas are nurtured by NISO or by another organization does not matter—what matters is that the participants take something out of the conference and that everyone does something with the time that is spent together in the discussions.
I can attest that all of the sessions that I attended were interesting. However, I did not attend everything (or even view all of the recordings) as the conference structure had parallel sessions in each time slot. I admit that I focused primarily on the sessions that covered topics that were of interest to me. As a result, this overview does not cover all of the sessions. However, I hope that my overview motivates you to attend the 2025 Global Online Conference which is being developed as I write this, and, if possible, attend the next meeting in Baltimore that will most likely take place in February 2026. That is my personal goal with this brief summary, because in my opinion, the NISO Plus conferences are worthy of the time and attention of all members of the information community.
Opening keynote
The opening keynote, entitled “Open research, incentivizing change and underpinning infrastructure—a UK perspective,” was given by Rachel Bruce, Head of Open Research at UK Research and Innovation (UKRI), the UK’s largest public funder of research, spanning all disciplines as well as innovation with industry. She opened by saying that her talk would be from a policy perspective rather than from that of an information scientist and that she would share UKRI’s vision of open research and highlight, in particular, activities that are underway regarding research assessment as a key lever to incentivize open research.
UKRI has a budget of nine billion pounds and has nine sections that focus on different disciplines and industry. Their strategic vision is to transform tomorrow together to achieve an outstanding research and innovation system that gives everyone an opportunity to contribute and to benefit, with the ultimate objective of enriching lives locally, nationally, and, importantly, globally. They see their role and responsibility as not just funding research and research projects, but also working with partners across the research and innovation system so that together they shape a more inclusive, dynamic, productive, and trusted research environment. Open research is one aspect of that direction of travel, and it is a key priority throughout UKRI’s work, aiming to shift research culture to support openness, transparency, and collaboration, and to ensure that the creation and stimulation of ideas is not hindered. She noted that “Excellent Research” is open and transparent. It:
• Enhances collaboration
• Supports faster and more efficient research
• Underpins integrity, reproducibility, and public trust
• Has impact through greater reach and the opportunity to build on knowledge
• Maximizes the value of publicly funded research
In addition, open research policies, practices, and infrastructures enable and incentivize open and transparent research as follows:
• Open Access to research findings
• Makes research data and other research outputs findable, accessible, interoperable, and reusable (FAIR)
• Responsible assessment and recognition of the diversity of skills, talent, teams, and practices that underpin excellent research.
She added that governments are articulating similar positions. The UK reinforced its commitment to open research in 2021 through the UK Research and Development Roadmap, 2 which strongly articulated that publicly funded research should not be behind a paywall, that mandated open access would be a priority within the UK government, and that ways to incentivize data sharing would be explored, recognizing that without data sharing the validity of and trust in research is at risk. Also, in 2023 the Science Academies of the G7 countries (Canada, France, Germany, Italy, Japan, UK, and USA) agreed to freely disseminate the outputs of publicly funded research. 3 She noted that when UNESCO talks about Open Science it is not just about making scientific knowledge accessible, but also about ensuring that the production of that knowledge itself is inclusive, equitable, and sustainable.
She used the pandemic as an example of the benefits of open research, when data sharing became essential. The number of preprints increased and people truly collaborated. She said that she had read somewhere that forty percent of English-language scientific work was made available via preprints during the pandemic. In 2022, UKRI issued an immediate open access mandate for research articles, but they also widened that mandate to include long-form outputs such as monographs, edited collections, and book chapters, although they do allow for a more restrictive license as well as an embargo period of 12 months. For research data they have had data management plans and policies to support data sharing, but they are currently updating those plans and policies across UKRI to provide a more harmonized framework and also to take into account a whole range of different research outputs, including software and code, which are not covered strongly in their current policies.
They are also investing in a whole range of activities under the banner of a national digital research infrastructure. These include data infrastructure, large-scale computing, secure services and tools for sensitive data, skills and career pathways, and foundational tools, techniques, and practices. These activities are not just UKRI’s; they are a multistakeholder effort involving research communities, publishers, and infrastructure providers. And they have seen progress in terms of open research. She said that in 2012 thirty percent of publications were open access, and by 2022 this had increased to over sixty-five percent, but she added that there is more work to be done.
What they see is that research culture and career structures appear to remain far too focused on recognizing and rewarding a narrow range of contributions and on publishing in prestigious journals. Some of this culture is so embedded that there is a real reluctance to change, even if research funders, research organizations, and researchers themselves can see the benefits of change. There is now increased interest and collective action to address some of these cultural barriers and to try to incentivize and encourage open and transparent research.
One of the key developments has been through the European Commission and the Council of the European Union, which annually make recommendations based on reflections in this space. She gave two examples. In 2022 they encapsulated the strong link between research assessment and reward and open science, and the need to recognize the diversity of people in their roles, outputs, and activities. One of the Council conclusions was that this should be introduced into research policy and assessment and that, in order to do this, there needs to be a responsible approach to research assessment. They also concluded that they needed to develop both quantitative and qualitative responsible indicators. In 2023, they continued to build, pointing to some of the challenges that still needed to be overcome in order to work differently and encourage open research, including a recognition that some of the business models around open science that are related to publication are inequitable because of the cost to authors and readers. In parallel, the costs of traditional business models are becoming increasingly unsustainable.
In order to accomplish this objective, the Coalition for Advancing Research Assessment (CoARA) was established. In July 2022, more than seven hundred research organizations, funders, assessment authorities, professional societies, and their associations agreed on a common direction and guiding principles to implement reform in the assessment of research, researchers, and research organizations. This is set out in the Agreement on Reforming Research Assessment, which provides an outline for reform and implementation. 4
She also mentioned the Declaration on Research Assessment (DORA), 5 which recognizes the need to improve the ways in which the outputs of scholarly research are evaluated. The declaration was developed in 2012 during the Annual Meeting of the American Society for Cell Biology in San Francisco. It has become a worldwide initiative covering all scholarly disciplines and all key stakeholders, including funders, publishers, professional societies, institutions, and researchers. Another initiative, the Barcelona Declaration on Open Research Information, 6 was launched in April 2024. Signatories of the declaration believe that the research information landscape requires fundamental change, and they are committed to taking a lead in reforming the landscape and transforming current practices by (1) making openness of research information the default, (2) working with services and systems that support and enable open research information, (3) supporting the sustainability of infrastructures for open research information, and (4) working together to realize the transition from closed to open research information. She said that UKRI has not yet signed.
In closing, she thanked NISO for the opportunity to spread support for Open Science. Her keynote was recorded, and the video (along with a transcript) has been posted and is freely accessible. 7
Exploring emerging technologies in archiving and preservation: Leveraging 3D models, interactive environments, and AI tools
After the opening keynote there were three parallel sessions. I followed the emerging technologies track and this session had one speaker, Aaron Paul, the Digital Curation Librarian at the University of Alabama at Birmingham. It was an excellent presentation. He explored how emerging technologies intersect with archiving and preservation practices within cultural heritage institutions. He described ways to strategically incorporate 3D models into exhibits and use models to preserve original archival material while allowing users to have greater interactivity with archival representations. He also explored how to develop interactive virtual environments to enhance patron interaction, lengthen the lifespan of popular exhibits, and provide significantly more accessibility to a much wider audience. Finally, he covered ways in which AI tools can be a beneficial resource in archival and preservation work through tasks such as transcription and coding while also highlighting where AI tools should be avoided to ensure that ethical boundaries are maintained and trust in the field remains high. It was an interesting presentation that delved into practical applications, ethical considerations, and the implications of adopting these technologies in archival workflows.
He first covered his background and experience. As noted above, he is the Digital Curation Librarian at the University of Alabama at Birmingham. He has been there about 18 months and primarily works with their special collections and historical content. He is working on some acquisitions, but primarily he is working on digitization and providing access to the public. Prior to this, he worked at a museum in Florida as a digital archivist and worked on everything involved with digital items, from acquisition to digitizing legacy media to long-term preservation and public accessibility. But for this talk he intended to focus on some more modern technology—artificial intelligence (AI)—a technology with which most people are familiar at this point. He noted that it is expanding into most areas of our lives; it is being incorporated into many products that we are already using and being added as new products in a lot of different areas. AI offers a lot of opportunities, but there is also a lot of concern about how it will affect what they do and about the ethics and trustworthiness behind some of this technology. One of the challenges is figuring out how AI is going to be useful for how cultural heritage museums, libraries, and archives work, while maintaining the level of trust and ethical consideration that is inherent in their field. He added that while there are some great ways that we can use this technology, there are also a number of cases that he would classify as more of a fun thing to play with rather than a tool that will add value in our daily work.
As an example of the latter, he displayed a slide that he created by simply typing in “AI” and letting PowerPoint Designer create the visual—almost zero time and thought involved in its creation. However, the visual could have been just about anything and provided no meaning—so he thinks that image generators are not quite there yet. He also wanted to create a slide that included a collage of AI tool logos so that he could talk about a few of them, their strengths and weaknesses, and provide the audience with a little bit more background. He decided to try to kill two birds with one stone and entered that image description into two different image generators—Adobe Firefly 8 and Deep AI 9—and he then displayed the results. Adobe Firefly almost created tools, except that none of them would actually work as AI tools, although the image looks like it is almost there. The image from Deep AI was not good at all. He reiterated that this is one of those areas where he believes AI is more something with which we can play rather than something we can really use and rely upon in our daily work.
One area where AI can help us, and is helping him, is the ability to use a text prompt in something like ChatGPT to create a Python script. He said that he does not know how to write Python scripts and he does not know how to write code, yet this is something he uses almost on a daily basis as part of his digital preservation workflows. Having these tools and using them without really knowing how to code makes things a bit challenging. He said that they have one person on their team who does know how to write code from scratch and who wrote the code that they use on a regular basis, but other than that, the rest of them do not. And because of that lack of knowledge, scripting was never something that he even considered as a way to accomplish a task because he simply did not know how to do it. He then displayed two very simple scripts that he generated with ChatGPT.
One script looks at a list of folders, extracts any PDFs that it finds, and places them into a new folder. The second script copies all of the file names, including the extensions, and places them into an Excel spreadsheet. By using ChatGPT, he was able to generate the scripts in less than 5 minutes. He then tested them, which is essential before actually using any of this on important material, so that if there is something in the script that does not work, important content will not be deleted, corrupted, or lost. You test with a sample batch for which potential loss or damage does not matter. He was able to generate the scripts, test them, and run them within a total of 20 minutes for both of them combined. He said that if he had tried doing this by hand, it would have taken him a couple of hours to organize and extract this information. AI in this use case has freed up a lot of his time. He has also done the same thing with much more complex scripts, but to create those he uses GitHub Copilot, 10 which was created via a collaboration between GitHub and OpenAI for people who know how to code. For him this was groundbreaking and allowed him to do research that he might not otherwise have been able to accomplish.
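His actual scripts were not shared, but a minimal sketch of the two tasks he described might look like the following (the folder paths, file names, and the use of the openpyxl library are my assumptions for illustration, not details from his presentation):

```python
# Hypothetical reconstruction of the two kinds of scripts described above.
# All paths and file names are placeholders, not the speaker's actual code.
import shutil
from pathlib import Path

from openpyxl import Workbook  # pip install openpyxl


def collect_pdfs(source_dir: str, target_dir: str) -> None:
    """Copy every PDF found anywhere under source_dir into a single folder."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    for pdf in Path(source_dir).rglob("*.pdf"):
        shutil.copy2(pdf, target / pdf.name)


def inventory_filenames(source_dir: str, workbook_path: str) -> None:
    """Write every file name (with extension) under source_dir to a spreadsheet."""
    wb = Workbook()
    ws = wb.active
    ws.append(["File name"])
    for item in sorted(Path(source_dir).rglob("*")):
        if item.is_file():
            ws.append([item.name])
    wb.save(workbook_path)


if __name__ == "__main__":
    # As he advises, test on a disposable sample batch before touching real material.
    collect_pdfs("sample_batch", "sample_batch_pdfs")
    inventory_filenames("sample_batch", "sample_batch_inventory.xlsx")
```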
Other areas where he finds AI helpful are transcription, using Adobe Premiere Pro 11 and Whisper AI, 12 and creating closed captions with YouTube and Zoom. He said that while these are not one hundred percent accurate—maybe eighty-five to ninety percent—they are great time savers and continue to improve. AI also helps with translation—again, not one hundred percent accurate, but better than starting from scratch. It also helps with Handwritten Text Recognition (HTR). He sees a lot of potential use of AI here considering how much handwritten content is stored in archives, museums, and libraries. The opportunity to digitize that material and have it be searchable is massive. This is still pretty new, but it is getting better. He does not believe that it is ready at a commercial level where you can just use it and put the results out there, but it is getting there, and when it does, the amount of content that we will be able to provide for research will be massive. It will unlock a ton of content, making it more accessible and more usable.
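As a rough illustration of the kind of AI-assisted transcription workflow he described, the open-source Whisper library can be driven from a few lines of Python; the model size and audio file name below are placeholders of my own, and, as he cautions, the output still needs human proofreading:

```python
# Minimal transcription sketch using the open-source Whisper library
# (pip install openai-whisper; requires ffmpeg). File name and model
# choice are placeholders, not details from the presentation.
import whisper

model = whisper.load_model("base")             # larger models are slower but more accurate
result = model.transcribe("oral_history.mp3")  # returns a dict with the full text and segments
print(result["text"])                          # raw transcript; expect to proofread it
```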
He recommended against using AI in academic or professional research. In fact, he noted that many journals either prohibit the use of AI in the research that they publish or require that authors disclose any use of AI or Machine Learning (ML). There will be questions about how much of the result is AI-generated (is it really your work?) and how accurate it is. And then there is the whole copyright consideration—another can of worms that he decided not to open.
Having said that, he did recommend AI for finding published sources. He has had good success with ResearchRabbit. 13 He also said that Assistant by Scite 14 is helpful in finding published resources. He has found the results to be accurate and relevant, although it did miss a lot, which may be due to the material that it is searching.
He then went on to talk about 3D printing. His university has had 3D printers for a number of years, and they have been used mainly as a fun tool for students, who can use them to make pieces for a game or to play around with, and the printers are occasionally used by staff. The medical school and the engineering school also have 3D printers that are much more focused on those fields. They wanted to find a way to use these printers within the context of their historical collections, with two goals in mind. The first goal was to protect items in their collection that are used for classes fairly regularly and thus could get damaged or degraded. They wanted to create 3D models of those items so that students could handle the 3D objects instead. The original item would be present and visible, but it would not get the same level of wear and tear and handling, and it would survive and be preserved for a much longer time. The second goal was to include some more interactive and engaging elements in their exhibits so that the public could be more engaged with what they are presenting.
The net result was a pilot exhibit called “Navigating Communication, Breaking Invisible Barriers.” It was focused on the history of Otolaryngology, 15 and because they were expecting many visitors with visual impairments, they incorporated a lot of elements for visually impaired individuals. He went on to talk about successes and failures in creating the exhibit, tools that worked and tools that did not, and why. He then said that one of the things that has always frustrated him is that they can spend a year or more doing research and background work, designing, and installing an exhibit, and then it may be up for three months, six months, or a year. Then the exhibit is taken down and is essentially gone. They expend a great deal of effort to create a really valuable resource for the public and then it disappears. They did things such as taking photos of the exhibit, but that does not capture the “feel” of an exhibit very well. He then looked into virtual exhibits and ended up using a product called CenarioVR. 16 It is similar to a real estate walkthrough platform. He said that if you have ever taken a virtual tour of a house, this is similar to that, but it includes a lot more interactive elements that you can explore. He then showed the results and said that this can be useful for promotion. A museum or library can open up an exhibit and, three or six months later, put out another announcement about the virtual one—so if people have not had time to check out the physical exhibit, they can visit the virtual one. It also opens things up for accessibility purposes. Not everyone can come to the exhibit. Not everyone has the time or is available during a museum’s business hours. So, this provides more people the opportunity to experience an exhibit. It also means that the exhibit is available and usable for longer than the duration of the physical installation. And they can create a whole collection of these virtual exhibits that are available to the public in order to see what people identify with, which ones they are clicking on, and how they are interacting with them, which can help shape future exhibits that align with people’s interests.
In closing he said that there are a lot of opportunities for the use of AI. They do have to be careful about ethical considerations and about maintaining the trust that people have in what they do, but there are a number of ways that they can use AI that do not undermine that trust, and he looks forward to seeing how things such as Virtual Reality, 3D Printing, and AI evolve.
Paul’s presentation was recorded, and the video (along with a transcript) has been posted and is freely accessible. 17 Also, he submitted a manuscript that appears elsewhere in this issue of Information Services and Use.
Views from a data journey
The next time slot also had three parallel sessions. I briefly attended a session that tracked the journey of data created by researchers as it traveled through the submission and publication systems, to repositories and APIs, into analysis-ready data. The speaker was Ted Habermann, CTO of Metadata Game Changers. He talked about the global research infrastructure (GRI), a network of organizations and repositories that provide persistent identifiers (PIDs) and metadata about research objects (including preprints, peer-reviewed papers, datasets, software, etc.) and the connections between them. He noted that CHORUS provides multiple views into this infrastructure, constructed using queries and connections chained across Crossref, ScholeXplorer, DataCite, and ORCID. He said these views yield insights into the data, but, perhaps more importantly, they yield insights into the journey of the data from researchers through submission and publication systems, repositories, and APIs into analysis-ready data.
He said that he and his team traveled these same journeys and explored some alternative itineraries to improve the understanding of their impact during the INFORMATE Project. 18 “Informate,” a term coined by Shoshana Zuboff in her book In the Age of the Smart Machine (1988), 19 is the process that translates descriptions and measurements of activities, events, and objects into visible information.
The INFORMATE Project combined three data sources to focus on understanding how the GRI might help the U.S. National Science Foundation (NSF) and other federal agencies identify and characterize the impact of their support. Habermann presented INFORMATE observations of three data systems. The NSF Award database represents NSF funding, while the NSF Public Access Repository (PAR) and CHORUS, as a proxy for the GRI, represent two different views of the results of that funding. His team compared the first at the level of awards and the second two at the level of published research articles. Their findings demonstrate that CHORUS datasets include significantly more NSF awards and more related papers than PAR does. Their findings also suggest that time plays a significant role in the inclusion of award metadata across the sources analyzed. Data in those sources travel very different journeys, each presenting different obstacles to metadata completeness and suggesting necessary actions on the parts of authors and publishers to ensure that publication and funding metadata are captured. He went on to discuss these actions, as well as the implications that their findings have for emergent technologies such as artificial intelligence and natural language processing.
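To give a flavor of the kind of chained, PID-based queries that underlie such views, the sketch below pulls a few funder-linked records from the public Crossref REST API; the funder identifier shown is the Open Funder Registry ID for the NSF, and everything else (endpoint parameters, result handling) is simply one plausible way to start such a journey, not the INFORMATE Project's actual code:

```python
# Illustrative only: query Crossref for a handful of works that carry NSF
# funding metadata. This is not the INFORMATE Project's code, just a sketch
# of the kind of PID-linked metadata the presentation discussed.
import requests

NSF_FUNDER_ID = "10.13039/100000001"  # Open Funder Registry DOI for the NSF

resp = requests.get(
    "https://api.crossref.org/works",
    params={"filter": f"funder:{NSF_FUNDER_ID}", "rows": 5},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    title = item.get("title", ["(no title)"])[0]
    print(item["DOI"], "-", title)
```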
This was a very interesting and data-heavy presentation and fortunately Habermann has submitted a detailed paper that appears elsewhere in this issue of Information Services and Use. In addition, his presentation was recorded and can be accessed on the NISO website as of this writing. 20
Content discovery in the age of AI: Publisher perspectives on the evolution of discovery channels, access, and metrics
This was one of three parallel sessions that kicked off the evening of the first day of the conference (there were two consecutive sets of parallel sessions). This one brought together seasoned discovery specialists—one from a society publisher (IEEE), another from an aggregator of scholarly content (OCLC), and the third a librarian (University of Michigan)—to discuss the future of content discovery in the age of artificial intelligence (AI).
The first speaker was Julie Zhu, Manager of Discovery Services Relations at the Institute of Electrical and Electronics Engineers (IEEE). The title of her presentation was “Generative AI for Content Discovery in Academic Publishing: A Content Provider’s Perspective.” Her premise was that Generative Artificial Intelligence (GAI) is reshaping the landscape of content discovery, access, and usage within the academic publishing ecosystem. In her presentation she examined the complex relationships between content providers and GAI, focusing on the multifaceted roles that publishers play as consumers, developers, feeders, and influencers of GAI technologies. She explored the ethical considerations, policy developments, and principles that guide the use of GAI in academic publishing. She also analyzed the impact of GAI-assisted search on content discoverability, linkability, accessibility, and trackability, highlighting the challenges and opportunities that GAI presents. In closing, she called for collaborative efforts among stakeholders to promote ethical AI practices and discussed future directions, including the upcoming NISO Open Discovery Initiative survey on GAI.
The second speaker was Jay Holloway, Director, End User Services Global Product Management, at OCLC and the title of his presentation was “Perspectives on AI in Library Discovery.” He said that AI is a tool that is used to amplify human creativity and ingenuity and it has done this for a long period of time. It is important, but not new. He said that in the 1970s, we had early automated decision-making programs that led the way for the early neurocomputing foundations in the 1980s, ultimately culminating in some exciting wins that he went through.
The first example was IBM’s Deep Blue beating Garry Kasparov, the Russian chess grandmaster, 3½ to 2½ in a rematch held in New York City in 1997. What was exciting about this was that it proved that machines could compete with humans in some limited facets of life. He went on to say that in 2001 we saw the introduction of interactive voice systems in call centers, and thus began a culture of people (I am one of them) smashing the zero or pound button in hopes of getting access to a person. Jumping to 2005, an autonomous vehicle won the Grand Challenge sponsored by the Defense Advanced Research Projects Agency (DARPA), an agency that has been a driving force behind numerous technological advancements, including the Internet. A vehicle first failed this challenge in 2004, and then, after a year of work, a vehicle was able to successfully navigate the terrain required to win the prize. Moving on to February 2011, Watson won Jeopardy!—a TV game show that was viewed by millions of people—beating two of the foremost all-time Jeopardy! champions, Brad Rutter and Ken Jennings. He said that this was exciting because it was a clear demonstration of the power of the natural language processing that AI was developing. He then jumped to 2016, when Google’s AlphaGo defeated the Go world champion ten years sooner than experts had predicted. He said that this showed the rapid development of AI solutions for a game with far more complex scenarios than the chess that Deep Blue played back in 1997.
His final example was the release of GPT-3 in 2020 by OpenAI, and he said that by now we have all seen examples of the good, bad, and fun ways in which AI can be experienced.
He first talked about the good ways in which we have experienced AI, for example, the new efficiencies that automate routine tasks, freeing us to do more complex, more rewarding, and more thoughtful work. His example was one that impacted him personally. He said that he is a musician and has to do a lot of calendaring of rehearsal schedules and concerts. He can now use ChatGPT to quickly build his schedules—he simply enters all of his concert and rehearsal dates and it does it for him. He then imports the results into his calendar, saving him hours of tedious calendaring. He noted that there are health care advances such as AI-driven diagnostic tools as well as consumer products that we can wear on our wrists; for example, the Apple Watch is able to identify serious, undiagnosed heart conditions such as heart arrhythmias. Other examples included accessibility improvements, including real-time speech-to-text transcription, intelligent prosthetics that help people regain the use of limbs, and education customization that helps adapt experiences to individual student needs.
He then moved on to some of the negative impacts of AI, such as job displacement, the biases that can be part of the data used to train AI models, misinformation, fake photos, security vulnerabilities due to hacking tools, inequities in access to AI tools, and the environmental cost of the large data centers needed for AI applications.
As an example of the “fun” ways in which AI has impacted us, he said that he prompted GPT-4 to write a haiku about AI and how it helps users to discover new innovations. The response was cute: “In circuits they find, / New paths forged by unseen light, / Innovation blooms.”
He then went on to talk about AI in libraries and said that OCLC has been working with AI and Machine Learning (ML) for decades. One of the most recent examples he gave was one from 2019 when they had an OCLC research practitioner in residence, Thomas Padilla, who worked on a yearlong project to research the topic of ML and AI in libraries (note: Padilla was also the opening keynote speaker at the 2024 NISO Plus Baltimore Conference where he spoke on the states of Open AI). 21 Together, they convened a group to talk about the topic and provide insights and then the results were published along with recommendations for further study and cooperation. He highly recommended that everyone read the report. 22
He went on to say that the goal of the systems that they are trying to build at OCLC is to use AI for solutions that drive library success and to use it where it makes the most sense, not just for the sake of using it. Their work is measured by the impact that it has on their users. OCLC has identified three major areas where they are looking to make improvements for libraries—metadata management, user experience, and staff efficiencies—all in service of improving the discoverability of library services for users. He provided some examples of AI-generated recommendations in their WorldCat service and said that users who take advantage of that service engage with more search results than those who do not. He closed by saying that they look forward to seeing where else OCLC can combine the power of large language models with the credibility and accuracy of traditionally curated and aggregated library metadata.
The final speaker in this session was Ken Varnum, Senior Program Manager and Discovery Strategist at the University of Michigan Library. His talk was entitled “Towards Tolerable AI,” and he said upfront that he has a more skeptical view of AI because nobody really knows what is going on “under the hood” and why AI works the way it does. His presentation focused on artificial intelligence tools, especially the generative, large language model-based tools, and how they fit into library and scholarly research, particularly as used by the students and faculty who are actually working with them. He said that he wanted to talk a bit about the topics that Julie and Jay had already mentioned and move a little more toward a potential model that might at least satisfy his own personal “curmudgeonliness” and perhaps address concerns of those in the audience as well. He added that we have spent a lot of time talking about the great promise of AI today, and Jay mentioned a few areas where there might be possible threats or where threats already exist. He did not plan to focus so much on that; instead, he wanted to talk about how AI is used in an academic research library as a research tool and how we might find a baseline set of functionality that, if actually delivered in some way through AI tools, might raise the level of suitability of these tools within the academic research library setting.
He went on to say that libraries have multiple roles in the way AI is rolling out into the academic world. The first is that they are sometimes providers of services: many libraries are doing experiments, trying things out either internally or with the public, to augment their catalog data, to provide overviews and summaries of items in their digital repositories, and to provide mediated chat services. They also license services from a wide range of vendors, not least those represented by Jay and Julie, that are building AI into their own user interfaces and their own research tools. Librarians are exposed to these tools and are indirectly exposing their user bases at their academic institutions to these tools. Therefore, librarians have a vested interest in making sure that the tools are doing what their users expect them to do.
He then turned to what he believes is one of the fundamental quandaries about generative AI and used the following statement attributed to Heraclitus: “You cannot step twice into the same river.”
He thinks that this quote applies beautifully to artificial intelligence, but it also poses a real challenge when applied to the information that AI services generate. He went on to illustrate his point. He wanted to know what ChatGPT thought about this quote and how it might explain it to him, so he asked it what “you cannot step twice into the same river” means. He displayed the response on a slide: several paragraphs of text that seemed very plausible to him (and to me), although he noted that it was missing a few niceties such as where it obtained the information, to whom it could be attributed, etc. He wondered: if he were a student who used the ChatGPT-generated answer in a paper along with the URL of the response, and his teacher went back to see whether he had attributed the search result correctly, what would that teacher find? So, he entered the same question again four minutes later and got a similar, but different, answer—one with even more words. He wondered how many times he would have to ask the same question before ChatGPT actually gave him the same answer a second time. He said that he found it fascinating that even a very “simple” question could lead to answers that were at least structurally different in content. And again, neither answer was connected to any database in particular, so he could not backtrack to understand why the search results differed.
He went on to say that AI-generated text often feels like just a stream (or river) of text. It is never the same thing twice or rarely the same thing twice in the experiences that he has had. In many ways, the responses that an AI tool provides are not too dissimilar from what you might get if you listen to the same talk by the same scholar. Over time, the words are not the same. The meaning often is, but you never quite know what is going to be said word for word over time. And you also do not necessarily know that the person delivering the talk has the authority or the knowledge to back up the specific words that were said. You can hope that they do, but you do not know that for certain. The challenge here is that the inputs are mysterious. The linking together, the threading of the information is sort of mysterious. And you can never be sure it is leaving out something that is actually essential to the question that you asked when you asked it that one time or, if you ask it again, that there will be something else essential that is added of which you were not previously aware.
He asked (rhetorically) what some of the implications are of a library offering an AI tool. Could it be perceived that the library is endorsing the tool and the output of that tool? The hallmark of the scientific method and of academic research is that you can trace the evidence that you are using to reach your conclusion, and if you follow the same process with the same rules, you ought to come up with the same output. That is the scientific method. With AI tools this does not always happen, and the outputs become even more confusing.
He said that he has expressed some of the concerns that have been rattling around in his head and has a modest proposal for a Minimally Viable AI Product—one with the simplest application that meets the bare-bones minimum of what we want to do. It may not do everything that we want it to do, but it does the basics. And so, he offered a few proposals about what those basic features might be that could make a generative artificial intelligence tool more tolerable. First, the tool needs to point to sources. It needs to tell the user where it found the text from which the response was generated. It may be technically impossible to say that it got this word from this document and that word from that document, because that is not how generative AI works, but it should be able to point to a corpus of work, something to which the user could go back and look at themselves to try to understand the search results. Second, the tool ought not to hallucinate 23 or make stuff up—the user has to know that the tool is reality-bound. He personally does not have a lot of faith that even the most reality-bound generative AI tools being offered today are, in fact, that closely bound to reality, because there are responses that he sees, even from well-done tools, that are not quite right. Third, the tools need to generate reproducible responses. Users need to be given at least a context for the response to their question. He said that perhaps when he asks a question or Julie asks the question, they get different answers because they have had different interactions with that tool and it has learned something about what they want and the kinds of questions that they ask. That is certainly possible, but there ought to be a way to repeat a search and tell the tool to go back and do this again just as it did for Ken Varnum on August 17 at 3:00 AM when he asked this question the first time.
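Reproducibility of this kind is only partially attainable today. As a rough sketch of what a caller can already do, most hosted model APIs expose sampling controls; the example below, assuming the OpenAI Python client, pins the temperature and a seed to make repeated calls as deterministic as the service allows, which still falls well short of the auditability Varnum is asking for:

```python
# Sketch only: reduce (not eliminate) run-to-run variation when calling a
# hosted model. Model name and question are placeholders; determinism via
# the seed parameter is best-effort, not guaranteed.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        temperature=0,  # turn off sampling randomness
        seed=42,        # request repeatable sampling where supported
    )
    return response.choices[0].message.content


print(ask("What does 'you cannot step twice into the same river' mean?"))
```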
In closing he said that AI tools need to be given training akin to what any academic library attempts to instill in first-year college students: the basic fundamentals of writing and research, how you do it reliably and predictably, where you use quotes, and how you cite your quotes. That is the challenge that he would love to see met by an AI tool.
This was a great session. Unfortunately, only Zhu has submitted a paper that appears elsewhere in this issue of Information Services and Use. However, the recording of the session is freely available on the NISO website 24 and definitely worth a look.
Evolving library practice toward the sustainability of supporting Open Access
This session was presented in parallel to the one described in section five. It featured a dialog between Maureen Walsh, Scholarly Sharing Strategist, The Ohio State University Libraries, and three panelists from a range of library types/geographic locations. The panelists were Miranda Bennett, Director of Shared Collections, California Digital Library, Oakland, California, Matthew Goddard, Head of Access and Acquisitions, University Library, Iowa State University, Ames, Iowa, and Joshua Shelly, Transform2Open Project Member, Potsdam University Library, Potsdam, Germany.
It was noted that as open access publishing models have evolved over the last three-plus decades, new funding mechanisms have proliferated to maintain and sustain them. The session attempted to answer the following questions:
• In the current landscape of publisher revenue models that include author-pays, institutional sponsorship, and collective action, have libraries reached an inflection point for open access business processes that underpin their financial support of open access publishing?
• With limited monetary and labor resources, how can libraries move beyond their current pain points driven by lack of business process standardization as they look to a future where open access is the norm for library functions, not the boutique?
The speakers shared their perspectives on how library practices are evolving in response to developments in the global open access landscape, their pain points with supporting open access, their innovative approaches, and how they might collectively work toward scalable and sustainable open access workflows with standardized data, reporting, and impact metrics.
The group submitted an excellent manuscript based upon this session and it appears elsewhere in this issue of Information Services and Use. The recording is freely available for viewing on the NISO website (see reference 24).
Using standards to support research integrity
This session was one of the final two parallel sessions that were held on the first day of the conference. The speaker was David Turner, Business Solutions Consultant at the Data Conversion Laboratory (DCL).
He opened by saying that research integrity is the cornerstone of scholarly advancement, yet despite its critical importance, research misconduct continues to undermine this integrity, threatening the very foundation of academia. The consequences are far-reaching, impacting researchers, publishers, institutions, and society at large. He noted that the proliferation of paper mills and unethical practices within the scientific community poses a significant threat to research integrity. He said that the journal Nature recently stated, “The retraction rate for European biomedical-science papers increased fourfold between 2000 and 2021, a study of thousands of retractions has found.” 25 He went on to say that the extent of this problem across the entire industry is both recognized and yet ambiguous, making it imperative for publishers to implement scalable solutions. He believes that the scientific community has a powerful tool for combatting research misconduct: STANDARDS. Metadata and XML tagging offer a structured approach to scrutinizing journal articles on a large scale, empowering publishers to identify potential integrity issues proactively.
He used an unconventional analogy (it is absolutely brilliant) to explore some of the common attacks on research integrity while highlighting ways to use industry standards to recognize and remediate the results of misconduct. He chose the 2002 Steven Spielberg film Catch Me If You Can, 26 which is one of my favorites. Based on the true story of Frank Abagnale Jr., the film portrays a young—and brilliant—“con artist,” a term for someone who uses deception and psychological manipulation to gain the trust of others and exploit it for personal gain. The “con” in “con artist” stands for “confidence,” as their schemes rely on instilling confidence in their victims to achieve their fraudulent goals.
If you have seen the movie, you know that although only 17 years old, Abagnale is a skilled forger who also impersonates a pilot, an FBI agent, a doctor, and a lawyer throughout the course of the film. Turner said that Abagnale’s ability to outwit systems meant to detect fraud is not unlike the challenges posed by those in the research community who commit misconduct for personal gain. Just as Abagnale thrives in a world of trust and outdated processes, research fraudsters exploit gaps in workflows, overwhelming data, and misplaced assumptions to propagate their schemes.
Turner explored the parallels between Abagnale’s confidence scams and research misconduct. He analyzed the systemic vulnerabilities that enable fraud and proposed innovative solutions to enhance research integrity. He said that the “thrill of the chase” lies not only in catching misconduct, but also in implementing proactive measures to prevent it altogether. He said that by leveraging standards, sharing data, and fostering collaboration across the scholarly publishing ecosystem, we can shift the narrative from detection to prevention.
Turner submitted a paper based upon his presentation and it appears elsewhere in this issue of Information Services and Use. I highly recommend reading it. Unfortunately, the recording of his session is not freely available on the NISO website (at least as of April 29, 2025 I could not find it, but I could access it using my credentials as a registrant of the conference). Perhaps you could reach out to NISO and ask if you are interested—the presentation was clever and very well done.
A multilinear PID approach using open infrastructure to disclose indisputable African data in the global knowledge economy
This was one of four parallel sessions that opened the second day of the conference. The speaker was Joy Owango, Executive Director of the Training Centre in Communication (TCC-Africa), the first African-based training center to teach effective communication skills to scientists.
The focus of her presentation was a project called the Africa PID Alliance, the objective of which is to produce African-originated persistent identifiers to help increase the visibility of African research outputs, particularly on Indigenous knowledge and cultural heritage. She provided an update on the initiative and what they hope to achieve through various partnerships that they intend to create even through partnerships with the stakeholders participating in this conference.
She said that Africa represents about eighteen percent of the world population, yet it produces between one and two percent of global research and innovation, and only about eight percent of African patent data is accessible via global databases. These are facts that she found in various sources such as the World Bank, UNESCO, and the World Intellectual Property Organization (WIPO). She said that there are challenges to Africa’s contribution to global research and innovation, and that various factors contribute to those challenges—the cost of research, the cost of infrastructure, and also the visibility of African researchers’ work. She gave the example of patents. The fact that African researchers cannot afford to file for a patent in the IP5 27 means that their work cannot be seen, because the reality is that all citation databases, including Google Patents, will only harvest information from the IP5. She added that in reality Africa is producing more than eight percent of patent data, but it is not accessible via the citation databases even though the same metadata is used in the African regional patent offices’ databases. They hope to change this. Their goal is to champion the preservation and promotion of Global Indigenous knowledge and cultural heritage by empowering scientists and inventors to transform their research innovations into impactful realities. Their focus is on how they store data in Africa and how they can protect African data, particularly Indigenous knowledge and cultural heritage, and ensure that the data is protected from helicopter research. 28
And since the metadata standards for cultural heritage are extremely localized, they will protect the entire pipeline showing the various outputs that came out of the Indigenous knowledge.
This is now happening at the legal certificate authorization level, whereby you get certifications for the entire pipeline of authorization: to conduct research, to talk to the community, and then another set of authorizations before you release that research. They are protecting the entire pipeline as they try to figure out how to develop a standardized metadata schema for Indigenous knowledge and cultural heritage. They want to keep the data in Africa, so their first data center is the Kenya Education Network, and they are also having conversations with the Zambia Research and Education Network. In addition, she said that they have already begun a conversation with one of the registration agencies of the DOI Foundation, based in Taiwan, so that it can be their manager. The Africans will produce the data feed itself, and the DOI Foundation will be in charge of the management of the information that is sent. They are doing this because Indigenous knowledge and cultural heritage have been quite vulnerable. And most importantly, when it comes to African output, this has been a victim of helicopter research. She said that when you ask anyone to give you information on Indigenous knowledge and cultural heritage, you will indeed get the publications, but it is difficult to connect those publications to the Indigenous knowledge because of poor data management, and they want to change that. And that change, in her opinion, will only happen through a pyramid of cultural change and through an investment in infrastructure. And that is what they are doing.
The Africa PID Alliance 29 is producing the digital object container identifier, their multilinear PID, to help mitigate this challenge. But this is not a PID problem—it is a data management problem. She said that when they come to an institution to talk about the new PID, she finds that most of them do not have data management policies. Fortunately, some of them are now slowly understanding the importance of having a data management plan, simply because funders have made it mandatory. She went on to say that the Africa PID Alliance aims to specialize in Indigenous knowledge and cultural assets, as well as patent digital object containers. It proposes a hybrid digital object identification that will link locally developed handles (prefix 20.) and DOIs created by the DOI Foundation (prefix 10.). In the case of patents, a digital object container identifier would aid in the consolidation of DOIs and locally developed handles, which can supplement the dissemination of the complete picture regarding the research lifecycle, the invention that led to the patent, and associated works, documents, and media. She said that this multilinear data architecture is also suited to Indigenous knowledge since it combines biocultural traits and scientific data in a single digital object container.
Another aspect is connecting the various things that need to be digitized. Ceremonies are a good example since they demonstrate the presence of various types of cultural assets that must be maintained and identified digitally: textile art, song, dance, studies based on them, and so on. All of this data can be identified using locally constructed handles or DOIs. The goal is to establish a digital object identifier container, called the Digital Object Container Identifier (DOCiD™), that stores all of this information.
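To make the container concept more concrete, the following is a minimal sketch, using a hypothetical schema rather than the Africa PID Alliance’s actual DOCiD specification, of how a single container record might group a locally minted handle (prefix 20.) and a registered DOI (prefix 10.) together with the outputs of a documented ceremony. All field names, prefixes, and identifiers shown are illustrative assumptions, not part of any published standard.

```python
# Hypothetical sketch of a DOCiD-style "container" record that groups a
# locally minted handle (prefix 20.) and a registered DOI (prefix 10.)
# with the related outputs of a documented ceremony. Field names and
# identifiers are illustrative only and do not reflect the Africa PID
# Alliance's actual schema.
import json

docid_record = {
    "container_id": "20.500.99999/docid-example-001",   # hypothetical local handle
    "primary_doi": "10.99999/example.2024.001",         # hypothetical DOI for the published study
    "asset_type": "indigenous_knowledge",
    "community_authorization": {
        "research_permit": True,       # authorization to conduct the research
        "community_consent": True,     # authorization to engage the community
        "release_approval": True,      # authorization to release the outputs
    },
    "linked_outputs": [
        {"type": "textile_art", "identifier": "20.500.99999/textile-007"},
        {"type": "song_recording", "identifier": "20.500.99999/audio-012"},
        {"type": "dance_video", "identifier": "20.500.99999/video-003"},
        {"type": "journal_article", "identifier": "10.99999/example.2024.001"},
    ],
}

def resolver_url(identifier: str) -> str:
    """Return a resolver URL for either a DOI (prefix 10.) or a handle (prefix 20.)."""
    prefix = identifier.split(".", 1)[0]
    base = "https://doi.org/" if prefix == "10" else "https://hdl.handle.net/"
    return base + identifier

if __name__ == "__main__":
    # Print the container record and a resolvable link for each linked output.
    print(json.dumps(docid_record, indent=2))
    for output in docid_record["linked_outputs"]:
        print(output["type"], "->", resolver_url(output["identifier"]))
```

The point of the sketch is simply that one container identifier can carry both identifier families side by side, so that the locally managed cultural assets and the globally registered publications resolve from a single record.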
She said that they are in phase one of the initiative, and each phase lasts three years. In phase one they are focusing on community engagement, setting up the infrastructure, and establishing partnerships. This is where they are reaching out to the stakeholders in the audience as they seek more partnerships to ensure that they reach phase two, which will extend the project to scaling and breaking even. Phase three, their major objective, is to become a registration agency.
In closing, she said that the Africa PID Alliance will support not only the continent, but also research output from partners in the Global North, who can use its infrastructure to increase and improve the data management life cycle of Indigenous knowledge and cultural heritage.
Owango is a frequent speaker at NISO Plus conferences and works tirelessly on the protection and global dissemination of Indigenous research. 30 While she did not submit a manuscript, the recording of her presentation is freely available on the NISO website. 31
Beyond the article
The speaker in this session was Bill Kasdorf, Principal, Kasdorf & Associates, LLC, and Co-Founder, Publishing Technology Partners, a frequent speaker at NISO events. His premise was that today’s scholars and scientists need to communicate their research long before the final article is published, doing so in smaller chunks and in a variety of ways. Researchers must be able to communicate effectively not just within their research team, but with colleagues in their discipline who can provide them with insights and feedback throughout the research process. He believes that this ongoing communication needs to consist of smaller, more focused chunks of content in more flexible and accessible formats than the fixed-format PDF still common for the version of record (VOR). Collaborative online platforms enabling this are becoming increasingly common.
And it is not just text that needs to be shared. Research data needs to be provided in ways in which it can be easily and effectively accessed by peers, along with code, access to software, and other resources that enable outside experts to interact with it. This can involve multiple media as well, for example, a 3D model of a molecule or a galaxy that can be viewed from any angle, a video or livestream of a surgical procedure, or LIDAR 32 of an archaeological dig.
He put forth some of the issues that make this so important:
• Research articles are often published long after the research has been completed.
• Negative results are often not published, resulting in duplicative work.
• Text may not be the best way to convey important aspects of the research; for some fields better alternatives may be video, 3D renderings, and animations.
• Research data needs to be available and findable by other researchers working on related investigations.
• Code, reagents, protocols, and other resources are also needed if the ongoing research is to be truly useful to other researchers working in related areas.
He stressed that he is not saying that the final, formally published, peer-reviewed article is obsolete. It is still the gold standard. It is usually necessary—but not always sufficient. And he provided some concrete examples of the ways in which modern research and scholarship are communicated more dynamically and effectively today—particularly the use of videos for surgery. He closed by saying that scholarship is fundamentally collaborative. That is how progress is made.
Kasdorf has submitted a manuscript based upon his presentation and it appears elsewhere in this issue of Information Services and Use. Also, the recording of his presentation is freely available on the NISO website (see reference 31). If you have not heard Bill give a presentation, you are in for a treat.
Toward standards for trust markers on published content
This session reinforced the comments made by David Turner in his presentation. It was noted that there is a growing range of challenges facing the scientific and publishing communities around the integrity of published content. Checks conducted by preprint servers and journals vary greatly and are largely opaque, making it difficult for readers to evaluate how much to trust a publication. Meanwhile, broader society often views this content alongside a range of sources, from social media to news sites, all of which inform views on major societal issues. Without trust markers on quality content, there is a risk of exacerbating the growing lack of trust in science, which can negatively impact public support when specific actions are required to address major societal challenges such as vaccine uptake, and can negatively impact support for future science funding.
The three speakers were Rebecca Lawrence, Managing Director of F1000 and one of its Co-founders (it is now part of Taylor and Francis), Kathryn Funk from PubMed Central, and Blaine Butler from United2Act and the Center for Open Science. They, like Turner, talked about the enormous growth in integrity cases and noted that, however thoroughly manuscripts are checked, some errors do get through. The problems can be unintentional or the result of truly intentional misconduct, and the issues center around the following: authorship, plagiarism, use of generative AI tools, content/data similarity, image integrity, biased reporting, manipulation of citations, use of fake data, paper mills, reliability of peer review, etc. The growth of preprint servers was also noted, along with the fact that most, but not all, preprint servers conduct very light checks on the content; moreover, the checks that are conducted differ across preprint servers and traditional publishers. In addition, peer reviewers are often time-strapped and thus not as thorough as they could be. Also noted was that diverse research outputs need different types of experts for review, such as data curators, software engineers, etc. Lawrence said that a shift toward responsible research assessment (see the opening keynote lecture) could provide indicators and tools to help recognize, and therefore incentivize, research integrity, which is absolutely critical but at the moment much harder to recognize and incentivize.
Lawrence has submitted a paper based upon her presentation, and Butler has submitted a brief visual summary of her comments as well. Both appear elsewhere in this issue of Information Services and Use.
Author identity and name changes in scholarly publications: Ethics, logistics, impact
This session focused on author identity and name changes in scholarly publishing. One of the speakers, Julie Zhu, Manager of Discovery Service Relations at the Institute of Electrical and Electronics Engineers (IEEE), summed up the issues very well in the introduction to the paper she submitted based upon her presentation:
“The accuracy and consistency of author names in academic publishing are essential for proper attribution, discoverability, and scholarly integrity. Researchers rely on consistent author records to track citations, measure impact, and establish professional credibility. However, there are numerous circumstances in which an author may need to update their name in published works, including legal name changes due to marriage or divorce, gender transition, cultural realignment, or personal preference. Historically, many publishers and indexing services have lacked clear policies for managing such requests, resulting in barriers that prevent authors from maintaining accurate professional records.
“These reasons for author name changes are varied and stem from personal, legal, or professional considerations, and they reflect broader societal and institutional dynamics. Marital status changes, gender identity transitions, and religious or cultural affiliations are frequent drivers. Additionally, authors may seek changes due to legal processes, personal preferences, or to address technical and ethical considerations, such as maintaining consistency across digital platforms and the ethical handling of ’deadnames,’ 33 the names that transgender or non-binary people were assigned at birth. The variety of reasons underscores the importance of flexible and respectful name change policies.”
She pointed out that several publishers allow authors to replace their prior name with their current name without requiring justification or legal documentation, updating their publications and related data confidentially, and she went into depth on IEEE’s name change policies.
Tilla Edmunds, Director, Web of Science Content Management at Clarivate, spoke about name change policies for the Web of Science, and Bri Watson, PhD Student, University of British Columbia School of Information, focused on name change issues related to Indigenous, transgender, and other marginalized groups. Bri is also the coordinator of the Queer Metadata Collective, 34 a group of catalogers, librarians, archivists, scholars, and museum and information professionals with a concerted interest in improving the description and classification of queer people in Galleries, Libraries, Archives, Museums, and Special Collections (GLAMS) and other information systems. The Collective’s primary goal is to develop a set of best practices for the description, cataloging, and classification of queer information resources in GLAMS. Bri is also the coordinator of NISO’s Name Change Policy Working Group, 35 which has been approved and is currently being formed.
This was a well-organized and very interesting session and while only Julie Zhu submitted a paper that appears elsewhere in this issue of Information Services and Use, the session was recorded and is freely available on the NISO website. 36 I highly recommend that you take a look.
Closing keynote: How journals can survive and thrive in an age of innovation and disruption
This closing keynote was given by Virginia Barbour, Adjunct Professor, Faculty of Health, Queensland University of Technology, Brisbane, Australia, and Editor-in-Chief of the Medical Journal of Australia. She was also a Co-founder of PLOS Medicine. Her presentation was a thoughtful and insightful reflection on what journals need to do to thrive in an open and rapidly changing world, and on what their future role will be as disruption in publishing continues. It was based upon her experience working on three journals—the Lancet, PLOS Medicine, and the Medical Journal of Australia.
In summary, in order to thrive now and in the future, journals must:
• Acknowledge and understand the environment in which they exist—they cannot publish everything.
• Collaborate with their communities against threats to the integrity of publishing.
• Champion innovation in ways that are relevant to their communities.
• Tell the stories that matter.
• Be allies and advocates.
• Be open to alternative views.
I very much enjoyed her presentation and reading the article that she submitted for this issue of Information Services and Use. Also, her session was recorded and is freely available on the NISO website. 37 If you are a publisher or work for a publishing firm, you should read her paper and view the recording.
Closing
In closing, Todd Carpenter thanked everyone for joining today and noted that the world does not need to come to NISO—NISO can come to everyone across time zones via its Global Conferences. He is glad that the organization can do this, particularly from an inclusivity perspective—all those who are interested can join in. He said that he sat in on most of the sessions and thought it was a fantastic event (I agree).
As he said at the conference’s opening, part of the purpose of this event is to generate ideas. Some of his thoughts are as follows:
• Could we build a model for APIs to transfer and build a better metadata exchange around APCs, to help facilitate an understanding of what is going on with open access business models?
• What kinds of challenges are inherent in creating centralized infrastructure, particularly for the Global South, and how can NISO help?
• What sort of potential exists for decentralized persistent identifier (PID) systems, and how can those various systems be connected?
• Could we create best practices around using artificial intelligence to improve, edit, or create alternative text for the visually impaired?
• Could NISO help with the advancement of the Research Data Framework (RDF) system and move it from a self-assessment practice to something on which benchmarks could be created?
• Could summaries be created to help push the community forward and drive better use and practice of research data management?
• What kind of recommended practices might be developed around preserving privacy as it relates to entering things into artificial intelligence systems?
He said that these are just a few of the ideas that have been discussed over the past two days. The real work of the conference starts tomorrow, when NISO begins reviewing these ideas. He said that at NISO they think of this as growing different ideas, if you will. Some of them will not blossom, but some will. Hopefully, NISO can help to provide better information and better service to the patrons that the library and publishing community serve.
He said that they will be sending out a short survey to gather some feedback about how conference attendees experienced the last two days of events. As NISO continues to develop its programs, the input and feedback that they receive from the community is really valuable.
He said that NISO will be returning in person next year for the 2025 NISO Plus Baltimore Conference, February 10–12. Also, a Global Online Conference will be held again in September 2025.
In closing, he thanked the conference sponsors as well as Jason Griffey, the chair of the conference and its planning committee. He also noted that all of the NISO projects are a result of community engagement and he thanked everyone who is involved in those projects. And, of course, he thanked the NISO staff who all worked tirelessly over the past two days to help make the conference a success.
Conclusion
As you can see from this overview, the program was actually quite diverse, but there were common themes and issues raised throughout the conference, some of which echoed topics from prior years’ conferences.
• Open Science, Open Access, and the sharing, citing, and reusing of datasets remain major topics of discussion, owing both to cultural and behavioral norms among researchers around the globe and to issues of regional technical infrastructure.
• AI has infiltrated every aspect of scholarly communication. Its ethical usage and the validity of its outcomes remain issues of concern.
• We need to ensure that all voices have a role in the flow of scholarly information. The communication system must accommodate the different workflows, languages, publication outputs, and research topics that support the needs of different research communities—including Indigenous communities.
• Using standards is essential to the global sharing of data and scholarly information (always a theme at any NISO meeting!).
All of the presentations that I attended or viewed were excellent and the two keynotes were perfect bookends for the information-packed conference.
I say this every time that I write a conference overview—I like to walk away from a conference with new knowledge. At the 2022 NISO Plus conference, I was blown away by a technology of which I was unaware—Visual-Meta. 38 There were no new technologies discussed at this conference, but I did walk away with a new term (new for me) and that was “deadnaming.” Also, I always like hearing about new scientific activities in Africa and Joy Owango did not disappoint me.
At the first NISO Plus meeting in 2020 Todd Carpenter called the conference a “Grand Experiment.” When writing the conclusion of my conference overview I honestly said the experiment was successful. I also said that, as a chemist, I am quite familiar with experiments and am used to tweaking them to improve results. And as successful as that first meeting was, in my opinion it needed tweaking. To some extent the 2021 conference reflected positive modifications, but even then, I said that there needs to be more of the information industry thought leadership that the NFAIS conferences offered, and I still hold fast to that opinion. But, as I have said before, perhaps I am being unfair. I will repeat what I said last year. In the term “NISO Plus” NISO comes first, and when I think of NISO I think of standards and all of the every-day practical details that go into the creation and dissemination of information. I do not instinctively look to NISO to answer strategic questions such as: What new business models are emerging? Are there new legislative policies in the works that will impact my business? What is the next new technology that could be disruptive? I had hoped that those questions would be answered to a certain extent in the “Plus” part of the conference title, but to date the “Plus” part has been a much smaller portion of the conference symposia. Last year that portion was expanded by the focus on artificial intelligence. I found that the 2024 NISO Plus Global Online Conference took on more of that thought leadership, and I certainly hope that this continues.
Having said that, I sincerely thank the NISO team and their conference planning committee for pulling together yet another excellent conference and I offer my congratulations to Todd and his team for a job well done!!
Additional Information
As noted earlier, the 2025 NISO Plus Global Online Conference will take place virtually in September 2025—the exact dates have not been set as of this writing. The 2025 NISO Plus Baltimore Conference was held February 11–12, 2025 at the Baltimore Marriott Waterfront hotel. There is no mention of the 2026 Baltimore conference on the NISO website.
If permission was given to post them, the recordings of the sessions that took place during the 2024 NISO Plus Global Online Conference are currently freely accessible for viewing on the NISO website. 39
About NISO
NISO, the National Information Standards Organization, is a non-profit association accredited by the American National Standards Institute (ANSI). It identifies, develops, maintains, and publishes technical standards and recommended practices to manage information in today’s continually changing digital environment. NISO standards apply to both traditional and new technologies and to information across its whole lifecycle, from creation through documentation, use, repurposing, storage, metadata, and preservation.
Founded in 1939, incorporated as a not-for-profit education association in 1983, and assuming its current name the following year, NISO draws its support from the communities that it serves. The leaders of about one hundred organizations in the fields of publishing, libraries, IT, and media serve as its Voting Members. More than five hundred experts and practitioners from across the information community serve on NISO working groups, committees, and as officers of the association.
Throughout the year NISO offers a cutting-edge educational program focused on current standards issues and workshops on emerging topics, which often lead to the formation of committees to develop new standards. NISO recognizes that standards must reflect global needs and that our community is increasingly interconnected and international. NISO is designated by ANSI to represent U.S. interests as the Technical Advisory Group (TAG) to the International Organization for Standardization’s (ISO) Technical Committee 46 on Information and Documentation. NISO also serves as the Secretariat for Subcommittee 9 on Identification and Description, with its Executive Director, Todd Carpenter, serving as the SC 9 Secretary.
In 2024, NISO was appointed by the American National Standards Institute (ANSI) to manage the accredited U.S. Technical Advisory Group (TAG) to the International Organization for Standardization’s (ISO) new Technical Committee (TC) on Cultural Heritage Conservation (TC 349). Todd Carpenter will serve as the Chair of the Committee. 40
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
