Abstract
This paper provides an overview of the highlights of the 2015 NFAIS Annual Conference, Anticipating Demand: The User Experience as Driver, that was held in Crystal City, VA from February 22–24, 2015. The phrase “Content is King” can arouse hot debates over its relevance in a technology-dominated world. But the reality is that searchers are looking for answers to their questions. Information remains the Holy Grail. But the convergence of content, robust computing technologies and innovative devices has changed the search experience and elevated user expectations. The goal of the NFAIS conference was to take a look at how content providers and librarians are building products and services that will not only provide the ultimate in today’s information experience, but will also raise that experience to a new level of satisfaction. Conference attendees used their smart phones throughout the meeting to give their opinions on key issues that were raised and the questions/responses are included at the end of each session.
Introduction
The term “User Experience” is now part of information industry jargon. It even has its own abbreviation – UX. But what is it? According to Wikipedia, the term began to be visibly used in the mid-1990s [8] and it is defined as involving a person’s behaviors, attitudes and emotions about using a particular product, system or service. There is even an ISO standard on it (ISO 9241-210), which defines the term as “a person’s perceptions and responses that result from the use or anticipated use of a product, system or service”. It includes all of the user’s emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during and after use. You can find a list of other apt definitions at
Why is it important? Because whoever provides the best User Experience ultimately captures the user’s loyalty. Once captured, it is very difficult for anyone to lure them away – who wants to learn another system? That lesson was learned during the early online days of Dialog, BRS and SDC. Time constraints alone make it unlikely that students, faculty and researchers will shop around once they find a system that meets their needs.
Who owns the user today? No doubt in my mind that it is Google. When attendees of this conference were asked what has the greatest impact on user expectations of research information services, sixty-eight percent said the products and services released by Google, Apple, Amazon or Microsoft. In fact, several speakers noted Google’s dominance during their presentations. Kalev Leetaru, Founder of GDELT and Adjunct Faculty at Georgetown University, noted in his opening keynote that Google has become part of the fabric of our lives. It has become the go-to service because it has created a “frictionless” interface for searching the world. Kate Lawrence of EBSCO encouraged publishers and librarians to accept the influence of Google and Wikipedia and design for innovation that is rooted in familiarity. Mark Jacobs of Delta Think said that Google and Amazon have set the bar for search. And Adam Wade of Microsoft referred to a study by the Queensland University of Technology on the information-seeking behavior of graduate students that showed that sixty-four percent of the students used Google as their starting point [4]. While Google was not the theme of the conference it became apparent that it provides the User Experience that is the Gold Standard to which all others are compared. But read on, there’s a lot more to learn.
Setting the stage
The opening keynote was given by Kalev Leetaru, Founder of GDELT and Adjunct Faculty at Georgetown University.
He first made an observation about the search process and how not much has changed from the user’s perspective since online systems such as Dialog were launched almost half a century ago. The information-seeker will sit down, usually in front of a PC, figure out what keywords to use, and enter them into a search box – the same box that was used fifty years ago! What has changed is that searching, especially via Google, has become far more powerful and has been integrated into the fabric of our lives. Google is used daily for e-mail, calendars, maps, document sharing and storage, etc. It knows our preferences, it knows where we are when we perform our searches, and it has enormous power behind the scenes. Google is far more powerful than when it first came on the scene, but the search process remains simple to the user – all of the bells and whistles are hidden away under the hood, not incorporated into the search interface so as to confound and confuse the user. Kalev described it as a “frictionless” search process. The experience is pleasurable and seamless.
His second observation was the need for content providers to understand their users, and he gave the example of Facebook and LinkedIn. Both are highly successful social media platforms, but they focus on different markets and have a deep understanding of the needs and behaviors of those markets. They build services for their target audience and really do not compete.
His third observation may be the most important – and that is the need for content providers to allow users to take their content and build on it. He encouraged them to look at their digital content as a valuable archive, not a library. With libraries the material is checked out and read at home – the content travels to analysis. With archives the user physically travels to the reading room, analyzes the material there, and takes only their notes when they leave – analysis travels to content. Kalev used this latter model when working on the Internet Archive Reading Room project and he suggested it as a method for offering “legal” data mining as a new revenue stream [6].
Kalev then briefly described the GDELT Project, which he founded. It is the largest and most comprehensive open database of human society ever created. It is a real-time monitor of global news media (print, broadcast and web formats) in more than one hundred languages that stretches from January 1, 1979 through the present day and is updated daily. His vision is to “leverage this data to construct a catalog of human societal-scale behavior and beliefs across all countries of the world, connecting every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what’s happening around the world, what its context is and who’s involved, and how the world is feeling about it, every single day” (see http://gdeltproject.org/about.html#creation). The content can be freely used and distributed as long as attribution is given.
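For readers who want to see what “freely used and distributed” looks like in practice, the short sketch below pulls one day of GDELT event records into Python and tallies them by country. It is an illustration only: the download URL pattern, the tab-delimited layout with no header row, and the column positions for Actor1CountryCode and AvgTone are my assumptions based on the public GDELT 1.0 event documentation, so check the project’s current documentation before relying on any of them.

# Hypothetical sketch: load one day of GDELT 1.0 event records and summarize
# events and average tone per country. URL pattern, layout and column
# positions are assumptions -- see http://gdeltproject.org for the real docs.
import io
import zipfile

import pandas as pd
import requests

day = "20150223"  # YYYYMMDD; a date during the conference
url = f"http://data.gdeltproject.org/events/{day}.export.CSV.zip"  # assumed pattern

resp = requests.get(url, timeout=60)
resp.raise_for_status()

with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
    # Daily exports are assumed to be a single tab-delimited file with no header row.
    events = pd.read_csv(zf.open(zf.namelist()[0]), sep="\t", header=None, low_memory=False)

# Assumed v1.0 positions: column 7 = Actor1CountryCode, column 34 = AvgTone.
events = events.rename(columns={7: "actor1_country", 34: "avg_tone"})
print(
    events.groupby("actor1_country")["avg_tone"]
    .agg(["count", "mean"])
    .sort_values("count", ascending=False)
    .head(10)
)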
In closing, Kalev encouraged content providers to jettison the past and re-think their content. How are people using it now and how will they want to use it in the future? He urged them to give serious consideration to enabling researchers to build on that content in order to unleash its secrets and its relationships with other data and to look at new business models that will support the user of the future. For more details read two of Kalev’s articles that are published elsewhere in this issue. One is a more detailed account of his keynote address and the second is a reflection (with many concrete examples) on the growth in data mining initiatives over the last two decades and how these projects are impacting libraries and scholarship.
The audience question and responses for this session are as follows: Of these disruptive influences, which do you believe will have the greatest impact on the researcher’s workflow? Volume and availability of data for mining and analysis – 52.3%; Cognitive computing/smart systems – 21.5%; Innovative means of interacting with devices – 15.4%; New content sources and formats – 10.8%.
Librarians and publishers: Caught in the middle
Kalev was followed by a session entitled, “The user experience: Increased demands that we must satisfy”. The first speaker was David Schumaker.
Schumaker talked about all of his information sources – webinars, e-mails, books, blogs, databases, conferences, etc. – and how difficult it is for him to adequately manage that information. First, information access is not always easy, as he is often not allowed to retrieve information unless he is working on his campus network. Once he has access to the information he is faced with the challenge of tracking it. He uses a commercial personal bibliographic management product (name undisclosed), but entries always need to be cleaned up and the import process is far from seamless. Also, he retains his documents in the Cloud and not in the bibliographic system, and keeping the two in sync is hit or miss. He noted that he views information as a means to an end: once the problem is solved he moves on, and the information that has been used is not always stored in an easily-retrieved manner.
He said that he knows that publishers create really good tools that would eliminate his problem, but he doesn’t have the time to look for them and perhaps even lacks the funds to purchase one. He said that a potential solution to his problem is the use of librarians as change agents – especially “embedded” librarians who work closely with faculty and researchers, know their search behaviors, and know what they need. Publishers who have the tools that users want need to be able to get to the user and vice versa. Publishers know the librarians; the librarians know the users and can serve as an effective conduit for both.
In closing, Schumaker said that while we are caught in the middle of multiple opposing forces that are related and feed upon one another, collaboration between publishers and librarians can lessen the impact of the opposing forces on the average academic information seeker. He did not mean this to sound like a trite, simplistic, easy solution. He realizes that it is hard or the problem would have already been eliminated. But he has seen change already and is optimistic that there is more to come. See more from David in his article published elsewhere in this issue.
The design of information platforms
The theme of the session was continued by the second speaker,
He shared a slide that he borrowed from Bill Rouse at Georgia Tech that compared a traditional organization with a globally collaborative group of researchers; e.g. in a traditional system a role may be management, the methods used in that role are command and control, the measurement is activities with a focus on efficiencies, and the relationships are contractual. In the virtual network the role is leadership, the methods are incentives and inhibitors, the measurement is outcomes with a focus on agility, and the relationships are based upon personal commitments. The structure of a traditional system is usually hierarchical; in a virtual, complex network the structure is a heterarchy – a web of interconnections that requires a clear understanding of roles and relationships. A key feature of virtual organizations is a high degree of informal communication. Because of the lack of formal rules and clear reporting relationships, more extensive informal communication is required. Documentation takes on a major role. Fox also said that it is essential that a project have a value philosophy that focuses on outputs. The value must be derived from the benefits of the outcomes (not the outcomes themselves), the benefits should be relevant, usable and useful, and it is essential that the users or beneficiaries fully understand and appreciate the benefits.
He noted that they always use small development teams (seven to eight people) representing a mix of skills: a facilitator who knows the development method being deployed and who has the key skills that are required; two to three domain experts who know the resources, data, applications, tools, etc.; one or two modelers who extract the concepts; one or two software engineers who understand platform architecture and technology; and a scribe to write everything down. Fox noted that the social aspect is key because it is very much a team effort.
A diagram that shows the iterative steps of the development process can be seen at
The audience question and responses for this session are as follows: Which of the following do you believe might be of the greatest concern to researchers in the current information environment? Appropriate assessment of the impact/value of their research by funders and peers – 62.9%; Discovery of relevant materials once published – 24.3%; Effective presentation of research in forms that promote subsequent re-use – 12.9%.
Emerging workflow tools
The final session of the opening day highlighted new startup companies that are providing value to users by providing solutions to current problems in information access and discovery – problems that traditional content providers are not adequately addressing. The most notable are described in the following paragraphs and their web sites are worth looking at.
The first was Kudos (
Basically, a researcher registers on the Kudos site and lists their published articles (each article must have a DOI). The related metadata is provided from CrossRef. The author can add additional information for each article; e.g. explain the article in plain language and why the research is important. He/she can also enhance the article by providing links to related ‘resources’ (videos, slides, blogs, media coverage, data, code, documents, etc.) that help to set the publication in context. Using the Kudos toolkit, the author can share links via social media, e-mail, etc., track the effectiveness of such sharing, and map it against the actions that they have taken. Each author has a dashboard showing the activities that they have undertaken to explain and share their work. Tracked links then show the effect of these activities on key metrics, including full-text downloads and ‘altmetrics’. A fourteen-week pilot undertaken in 2013 showed nineteen percent higher article usage per day for articles shared using the Kudos tools compared to a control group.
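Kudos has not published its implementation, but the metadata step is easy to picture: given a claimed DOI, the public CrossRef REST API returns the bibliographic record that a service of this kind would display. The sketch below shows only that lookup; the helper function and the fields it extracts are my own illustration, not Kudos code.

# Illustrative only: fetch CrossRef metadata for a claimed DOI, the kind of
# lookup a service such as Kudos performs when an author registers an article.
import requests

def crossref_metadata(doi: str) -> dict:
    """Return a small bibliographic record for a DOI via the CrossRef REST API."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "doi": doi,
        "title": (msg.get("title") or [""])[0],
        "journal": (msg.get("container-title") or [""])[0],
        "authors": [
            f"{a.get('given', '')} {a.get('family', '')}".strip()
            for a in msg.get("author", [])
        ],
        "year": (msg.get("issued", {}).get("date-parts") or [[None]])[0][0],
    }

doi = "10.1234/example-doi"  # placeholder: substitute a real, claimed DOI
record = crossref_metadata(doi)
print(record["title"], "-", ", ".join(record["authors"]))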
The basic service is free for researchers who register. Funding agencies and publishers pay a fee for access to support tools, information on publication performance and author-sharing effectiveness within Kudos, and also to supplement the data set available to help authors evaluate the impact of their use of the Kudos tools.
Kenneway said that since May 2014, a quarter of a million researchers have been invited to register and Kudos now has 30,000+ registered users (12%), with 345,000+ publications claimed, 4,500+ publications explained, 1,500+ publications enhanced and 1,500+ publications shared. Two hundred and twenty-seven countries are accessing the content with the highest usage from USA (18%), UK (12%), India (7%) and China/Italy/Australia (0.5%). There are 10,000+ participating institutions, with the highest usage from Oxford, Cambridge, Imperial College, Edinburgh, UCLA and Harvard. And there have been 592,000+ views of Kudos publication pages and 13,500+ click-throughs to publisher sites.
Users to date can be categorized as follows: Professors: 32%, Faculty members: 18%, Lecturers: 8%, Post-docs: 7%, Research fellows: 6%, Research associates: 5%; and Graduate students: 5%. Subject disciplines represented are: Medicine and Medical Sciences: 17%; Chemistry: 11%; Life Sciences: 10%; Health Sciences: 10%; Materials Science: 5%; Business and Management: 5%; Engineering and Technology: 4%; Physics: 3%; Environmental Sciences: 3%; and Other: 4%.
The second impressive start-up was ZappyLab.
Another impressive start-up is Sciencescape (
Another interesting start-up is Hypothes.is (
The audience question and the responses for this session are as follows: Which of the following enhancements might be of greatest interest to the community of researchers served by your particular information product or service? Anticipatory discovery – 50.8%; Metrics of usage, impact, effectiveness – 25.4%; Ability to annotate content – 12.7%; Disambiguation of identity – 9.5%; Sharing of relevant content – 1.6%.
Information: Who pays for it?
The second day of the conference opened with a plenary session, entitled “Information wants someone else to pay for it: As science and scholarship evolve, who consumes and who pays?” The speaker, Dr.
The first trend he identified was in the authorship of content. From the 1930s through the early 1960s the average number of authors per paper was two. This grew to 4.5 in the 1980s and to 6.9 by the year 2000. Now, with crowd-sourcing projects such as Galaxy Zoo (see
The second trend has already been noted by other speakers, and that is the exponential growth in information with content filtering being more of an afterthought. And related to this trend is the fact that no institution can preserve all of the information upon which it relies.
The third trend is the diversity of information sources that have emerged as a result of information now being primarily in digital form: legal briefs, grants, software code, data sets, audio, video, etc. This diversity impacts information flow, curation, analytics, etc. Even education is changing with major institutions of learning offering “free” online courses to education seekers around the world.
Altman said that one economic principle is that everything gets to equilibrium in the long run. But the fallacy here is that the “long run” can take a significant amount of time and that policies need to be made in the interim. He also noted that many things in the real world violate market assumptions. He offered no solution to the problem and quoted the often-cited line, “It’s tough to make predictions, especially about the future”. But he did note that technology is not the answer – it is neither good nor bad, it is merely neutral [5]. Dr. Altman has written a very thoughtful article on the economics of the publishing market and why, despite the fact that the volume of content is increasing, the amount of competition has declined. It appears elsewhere in this issue and is definitely worth a read.
The audience question and responses for this session are as follows: Which of the following should rank highest as a priority for content and platform providers over the next three years? Fostering new content formats in support of more effective presentation and communication of research findings – 43.9%; Enhancing value via new platform functionalities and interfaces – 31.6%; Developing new or adapting their existing business models – 24.6%.
Building platforms for sustainable use
The second session of the morning continued the prior day’s discussion on building information platforms that engage and excite users. The first speaker was
Looking forward, Jacobson said that we can expect to see an even greater focus on the e-commerce user experience with ease-of-purchase and quick fulfillment/delivery and a continued focus on the 360° view of the customer with the implementation of a single sign-on and a social sign-on. He noted that users will continue to expect that the supplier will know about all their interactions with them – both online and offline – and will know their other online behaviors and interests; and there will be a continued focus on personalization – users like the supplier to recommend highly-relevant products and services based on that knowledge.
Mobile apps will also influence user expectations. The user interface is disappearing. Users string apps together to streamline their efforts and create custom workflows. Jacobson referred to this as the IFTTT (if this, then that: see
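The IFTTT idea itself fits in a few lines of code. The sketch below is a generic illustration of the “if this, then that” pattern – a trigger predicate paired with an action, fired whenever an incoming event matches – and has nothing to do with the IFTTT service’s actual API; the event fields and the rule in the example are invented.

# Generic "if this, then that" pattern: trigger predicates paired with actions.
# Illustration only; not the IFTTT service's API. Event fields are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    trigger: Callable[[dict], bool]   # "this": does the incoming event match?
    action: Callable[[dict], None]    # "that": what to do when it does

def run_rules(event: dict, rules: list) -> None:
    for rule in rules:
        if rule.trigger(event):
            rule.action(event)

rules = [
    Rule(
        name="alert-to-reading-list",
        trigger=lambda e: e.get("type") == "article_alert" and "CRISPR" in e.get("topic", ""),
        action=lambda e: print(f"Added to reading list: {e['title']}"),
    )
]

# A new-article alert arrives; the matching rule fires.
run_rules({"type": "article_alert", "topic": "CRISPR", "title": "A new base editor"}, rules)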
Jacobson said that it is essential that content suppliers understand their customers’ other technology experiences and expectations. They need to “refresh” their technology about every three years so that they remain current, and they need to be aware of how their internal systems interface with external systems and the Internet at large. Some key technologies that they should be using are as follows: user identity management (e.g. SSO and CRM); marketing automation tools in order to identify traits across anonymous users; semantic technology to enrich content and user profiles; and recommendation engines. He closed with two principles: (1) become really agile, and if you adopt a hybrid approach be sure you understand the impact of any compromises that you make; and (2) if your development problem is execution, changing methodologies and technologies won’t fix it.
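Of the technologies he listed, a recommendation engine is the easiest to make concrete. The sketch below is a bare-bones content-based recommender – items are ranked by how strongly their tags overlap with a weighted user profile. It illustrates the general idea only; it is not any particular vendor’s system, and the profile, catalog and tags are invented.

# Minimal content-based recommender: rank items by weighted tag overlap with
# a user profile. Illustration only; all data below is invented.
from collections import Counter

def recommend(user_tags, items, k=3):
    """Score each item by the sum of profile weights for the tags it carries."""
    scores = {item: sum(user_tags[tag] for tag in tags) for item, tags in items.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

profile = Counter({"machine learning": 5, "bibliometrics": 2, "open access": 1})

catalog = {
    "Intro to altmetrics": {"bibliometrics", "open access"},
    "Deep learning survey": {"machine learning", "neural networks"},
    "Text mining handbook": {"machine learning", "text mining", "bibliometrics"},
}

print(recommend(profile, catalog))
# -> ['Text mining handbook', 'Deep learning survey', 'Intro to altmetrics']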
The second speaker in this session on platform development was
The final speaker in this session was Dr.
The audience question and responses for this session are as follows: Which of the following influences do you believe has the greatest impact on user expectations of research information services? Products and services released by Google, Apple, Amazon or Microsoft – 68.6%; Workflow demands imposed by their institutions or professional peers within a specific scholarly community – 22.9%; Individual requirements for accomplishing research goal – 8.6%.
Incorporating new forms of content
The final session of the morning focused on the incorporation of new forms of content (video, audio, interactive content, etc.) into the research and educational environments. The first speaker was
He presented the following facts from a 2012 study published in the Journal of Medical Internet Research: one out of four U.S. physicians uses social media daily to seek out medical information, and about one out of seven contributes content daily to a social media website, while more than half stick to physician-only websites. Only seven percent said they were active on Twitter. In addition, 117 of 485 survey respondents (24.1%) used social media daily or many times daily to scan or explore medical information. On a weekly basis or more, 296 of 485 (61.0%) scanned and 223 of 485 (46.0%) contributed. With regard to attitudes towards social networks, 279 of 485 respondents (57.5%) perceived social media to be beneficial, engaging, and a good way to get current, high-quality information. With regard to usefulness, 281 of 485 respondents (57.9%) stated that social media enabled them to care for patients more effectively, and 291 of 485 (60.0%) stated that it improved the quality of patient care they delivered. The article concluded that while social media will never replace traditional channels of research and learning for the medical profession, it can be a valuable addition to a physician’s knowledge base – and a useful forum for discussion [7].
Camlek went on to describe some of the global social networks used by those in the medical profession, the content that they provide, how they monetize their efforts, and what risks they face, including patient privacy regulations, professional image protection, and rules of ethics and conduct. He concluded by saying that while these networks look promising, the major open question at this time is whether or not physicians will continue to have the time and willingness to participate. For more details, see Camlek’s interesting article that appears elsewhere in this issue.
Another interesting speaker in this session was
The audience question and responses for this session are as follows: Which of the following factors do you believe will play the greatest part in reshaping the scholarly record over the next 3 to 5 years? Global demand for access to information across institutional/geographic/national boundaries – 53.6%; Greater acceptance of new forms of output by the scholarly community, for purposes of awarding tenure and promotion – 42.0%; The need to constrain costs for all stakeholders – 4.3%.
The changing landscape of scholarly communication
After the morning session closed, NFAIS members attended a members-only lunch during which
Webster then went on to describe the evolution of libraries. He said that the first generation was collection-centric and the librarian was the in-house expert; the second generation was client-focused and the librarian’s main role was library instruction; the third generation was experience-centered and the librarian served as an information and technology specialist; the fourth generation was a networked library that provided a connected learning experience and the librarian remained the specialist; the current, and now fifth generation library serves as a collaborative knowledge, media, and fabrication facility with the librarian very much behind the scenes, working more in outreach efforts with faculty and students.
He went on to comment about other major changes in scholarly communication: the journal evolution from print to digital; the growth in scholarly output, not only in volume but also in geographic origin, with China expected to outpace the United States in the near future; the growth in the number of authors per paper due to global scientific collaboration, which is also impacting research workflows (a theme repeated throughout the conference); open access publishing; funding pressures, including publishing policies established by global government funding sources; open science; and so forth.
With all of these changes he believes that libraries will continue the migration from print to electronic and realign their service operations; that they will review the housing/location of their lesser-used collections; they will continue to repurpose the library as a primary learning space; they will reposition library expertise and resources to be more closely-embedded in the research and teaching enterprises outside of the library; and that they will extend the focus of their collection development from external purchase to local curation. He believes that librarians will be increasingly-embedded in research and teaching activities; that they will become campus specialists in areas such as e-science, academic technology and research evaluation; and that in these roles librarians will have meaningful impact. However, he noted that there are potential barriers to success, not the least of which is an unwillingness to change or lack of updated skills on the part of the librarians and the aforementioned perception of academics that librarians are not useful or credible partners.
With regard to science funding, Webster believes that the ever-increasing expenditure on healthcare in most nations will support continued expansion of the medical sub-segment of the STM market; that publishers will attempt to offset the decline in their print revenues through new solutions, for example by providing workflow solutions, analytics, and other cool ‘toys’; and that the R&D growth in Asia and the United States will continue to serve as the foundation for the STM market.
Webster’s presentation was excellent. In the absence of a full-text article, I strongly recommend that you take a look at his slides that are posted on the NFAIS website at:
The audience question and responses for this session are as follows: Which of the following do you anticipate as having the greatest impact on the success of your product or service in the next five years? Continual progress in technology, standards and infrastructure – 63.9%; Increasing public accessibility to research data and findings – 22.2%; Evolving nature of scholarly outputs – 13.9%.
Interacting with content in new ways
The afternoon opened with a session that was focused on how users interact with content. The first speaker was Kate Lawrence of EBSCO.
She said that at EBSCO they have found that students approach their studies in the same way as they organize their lives. They look at priority (what deadline is first), take into account their personal interests (is the deadline in my major or for something else that is less important), and they look at the return on investment – are they getting the best results for the amount of time invested? She found that students find library websites to be a challenge. Too many choices are presented and they don’t necessarily understand the terminology. Students prefer Google and Wikipedia and they want answers, not links. And their use of the Internet is changing how they read and process information. The Internet is creating an “eye byte” culture. Online reading is non-linear, and when working online there is competition for attention simply from the surrounding offline environment. Preparation for the SATs teaches students to skim and scan – there is no focus on “deep” reading. And it takes a bi-literate brain to switch between modes. She closed with the following recommendations for publishers and librarians when designing user interfaces:
Accept the influence of Google and Wikipedia and design for innovation that is rooted in familiarity.
Search Results is the page that matters. It should not be a pass-through, but a destination all its own.
Design for binary decision-making; users making a choice of Yes/No and Stay/Go on Search Results.
Create experiences that make students want to come back and make you a part of their own digital island infrastructure.
Kate goes into much more detail on why students prefer Google (it helps build their information-seeking confidence) and the lessons that she learned while working with the students in her article that appears elsewhere in this issue.
The second speaker was
The third speaker in this session was Adam Wade of Microsoft.
For the past seven years a group of developers at Microsoft Labs has been building a semantic storage system around academic content. Starting with articles obtained from publishers, repositories and web crawls, they extracted semantic entities (e.g. authors, keywords, journal names, organizations, etc.) and built relationships between them. In addition, the information was enhanced with complementary content found on the web (photos, Twitter tags, etc.). The system worked just as the developers had hoped, but unfortunately they had not taken into account how end-users would actually use the system, and it was less than efficient. So they went out to talk to users and looked at studies on user search behavior. One such study was an Ithaka faculty survey from 2012 (see
Microsoft has built, and continues to build, a knowledge graph that is the backbone of Cortana. Wade provided an example of a query in which, after he typed the single letter “h”, a series of possible search options was listed based upon his most recent prior queries that began with that letter. He also gave an example of a “conversational” search session. His first question was “Who was in Guardians of the Galaxy?” The search results listed the cast of the movie. He then clicked on a specific actor and the search showed photos, news articles, etc. about that person. He then typed the question “Who is he married to?” and the result came back with her name and other data. He continued to drill down, asking where she was born, and the result came back with the city, state, and other data on that specific area – all very impressive.
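Microsoft did not share the internals of this system, so the sketch below is just a way to picture the mechanics Wade demonstrated: typed entities and relationships stored as a graph, plus a sliver of conversational state so that a follow-up question (“Who is he married to?”, “Where was she born?”) resolves against the entity the previous answer was about. The graph contents, relation names and focus-tracking rule are all my own invention, not Cortana’s implementation.

# Toy conversational drill-down over a hand-built knowledge graph.
# Entities, relations and the focus rule are invented for illustration only.
GRAPH = {
    "Guardians of the Galaxy": {"cast": ["Chris Pratt", "Zoe Saldana", "Dave Bautista"]},
    "Chris Pratt": {"spouse": "Anna Faris", "born_in": "Virginia, Minnesota"},
    "Anna Faris": {"born_in": "Baltimore, Maryland"},
}

class Session:
    """Tracks the entity the conversation is currently 'about'."""

    def __init__(self):
        self.focus = None

    def ask(self, entity, relation):
        answer = GRAPH.get(entity, {}).get(relation)
        # A follow-up pronoun resolves to the latest single-entity answer;
        # list answers keep the focus on the entity that was asked about.
        self.focus = answer if isinstance(answer, str) else entity
        return answer

s = Session()
print(s.ask("Guardians of the Galaxy", "cast"))  # "Who was in Guardians of the Galaxy?"
print(s.ask("Chris Pratt", "spouse"))            # user drills into one cast member
print(s.ask(s.focus, "born_in"))                 # "Where was she born?" follows the focus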
Wade noted that academia is a subset of this knowledge graph. Microsoft has added twenty-five thousand academic journals, conferences, a comprehensive list of fields of study, researchers, etc., and noted the relationships across all of this data. They have implemented author disambiguation and a pro-active discovery feature whereby Cortana notifies the searcher of events in their area based upon their specific preferences. They are doing beta testing with a slice of their user base in order to continue to refine the service. To see a video of Cortana go to:
The audience question and responses for this session are as follows: In five years’ time, which do you believe will be the most widely accepted mode of interacting with a mobile device? Speech (voice command) – 54.5%; Gesture (tap, wave, swipe) – 30.9%; Touch (keyboard input) – 14.5%.
Miles Conrad lecture
The final session of the afternoon was the Miles Conrad lecture. This presentation is given by the person selected by the NFAIS Board of Directors to receive the Miles Conrad Award – the organization’s highest honor. This year’s awardee was
The audience question and response for this session are as follows: In your personal experience, which of these aspects of advanced information services has most dramatically changed in the past 15 years (2000 – Present)? User expectations – 78.4%; Business model – 9.8%; Search experience – 7.8%; User interface – 3.9%.
Business models for collaboration
This final day of the conference opened with an interactive panel session moderated by
Cairns opened the session by asking each participant to talk about what they do, their business model, their customers, etc. Sam Molyneux began by saying that Sciencescape was created to help those in the biomedical field manage the literature in that domain, from learning the current happenings in real-time to delving into the archives of what has already been published. They plan future expansion into physics, chemistry and patents. They use social media, e-mail and speaking at conferences such as this to get the word out and also partner directly with publishers. When they first started they measured success by whether or not the technology they developed worked as planned, then moved on to content acquisition, user engagement and the number of users, and ultimately it will all be about revenue growth for sustainability.
Miriam Kesahani noted that Sparrho, founded in July 2013, is a recommendation search engine that focuses on recent research and what is emerging on the fringes of various scientific fields. The customers are researchers, post-docs, and even librarians – anyone who needs to keep up-to-date despite the volume of information that is being published. All of their technology was built in-house and is proprietary. They currently partner with the British Library, which provides access to scientific information back to the 1890s. She noted that their measures of success are similar to what Sam outlined: validation of the idea, user growth (they have between eight hundred and one thousand visitors to their site each day), and ultimately revenue.
Kenneway of Kudos noted that the idea for her organization was formed ten years ago when she was working in marketing at Oxford University Press. It was around that time that pressure was building for the development of article-level metrics for the evaluation of research. She knew marketing, but did not have the knowledge that her authors did regarding the value of their own research. Finally, when the field of altmetrics emerged she decided to develop a comprehensive tool set that would allow authors to promote their work. Their business model has two strands: (1) the tools are free for researchers; and (2) the revenue flows from the tools that have been built for publishers, universities, and funding bodies to monitor the publications that flow from their respective efforts. Kudos had revenue from the start, so revenue remains very important to them, as are registrations on their platform and user engagement. Melinda noted that they are working with thirty-five publishers across the sciences, social sciences and the humanities. She said that the scientists are better at the sharing aspect and that those in the other communities are better at writing the abstracts and related materials to promote their work. She said that the driving factor for them is the fact that authors strongly believe that their work is inadequately promoted and that they are willing to take the primary responsibility for that promotion.
The final speaker was Lenny Teytelman from ZappyLab. He said that he was a geneticist and had no desire to be an entrepreneur, but his career was re-routed by a phone call from a friend who asked him about building an app. Lenny said that he would love to build one that would allow him to go through lab protocols step-by-step so he could monitor his progress. They could charge four dollars an app to the millions of researchers who would use it and split the revenue with Apple. He would continue his university career with the benefit of an additional revenue stream. It was two weeks later that he realized the app wasn’t enough – there needed to be a way to capture and share changes to experimental procedures. Hence a new career began, both for him and his friend. They have four full-time staff and a team of eight students in Moscow, Russia doing Android and iOS development. They were able to raise about $800K in seed and angel investment funds and invested $100K themselves, working from a basement to get started. Major pharmaceutical companies have expressed a lot of interest in the technology platform that they have built, but they do not want to sell it. They see a revenue model based on royalties from publishers and suppliers of chemical reagents. He noted that many people have tried to make this idea work and it certainly is harder than he thought. In fact, the hardest part was making a commitment to the idea and walking away from the academic career that he was so very close to achieving. His measure of success is that scientists will use the system to promote the corrections/changes that they make to protocols and that, as a result, other scientists will not waste research time and dollars on experiments that are doomed to failure. If the only success is the technology in and of itself, he will deem the initiative a failure.
Cairns asked how the companies went about growth. All said that they found raising funds to be more difficult than they had expected, but that getting users is the hardest part. Scientists are busy and you need their time – to register on the platform, to learn how to use the system, etc. Partnerships with publishers and other organizations proved the most successful approach, and having the partners link back to the company sites has worked well in promoting their products.
All of these startup companies are very much in growth mode, but I suspect you will be hearing more about them and I sincerely hope that NFAIS does a follow-up with them in a couple of years. It would provide a great case-study.
The audience question and responses for this session are as follows: In the context of working with start-ups, how would your organization prefer to introduce new technologies to an existing product or service? Working with a partner, by licensing use of the technology – 55.2%; Build internally to ensure seamless interoperability with existing systems – 32.8%; Acquire the start-up and build on founder’s vision and problem-solving approach – 12.1%.
User demand and policy
The second session of the morning was also an interactive panel discussion that was organized to take a look at some of the issues that concern content providers and librarians, such as the re-use, replication and sharing of data, social media, data mining, etc. Moderated by
The audience question and responses for this session are as follows: Which of the following do you believe is of greatest concern to adult users in adopting a new information application, technology or service? The extent of control exerted over user behavior by the provider of the application, technology or service – 52.5%; The level of protection surrounding the data gathered by use of the application, technology or service – 27.1%; The amount of data gathered through use of the application, technology or service – 20.3%.
Predicting the future
The conference closed with a keynote by Mike Nelson.
He noted that when the first U.S. government hearing on the Internet was held in 1988 no one showed up. Now the Internet is used daily around the world. In fact, the amount of data that passes over the Internet today is hard to get one’s head around, and it is predicted that annual Internet IP traffic will reach the zettabyte threshold by the end of 2015 (see
Nelson noted that the Cloud Plus is enabling new startup ventures and that this “startup economy” is the fastest-growing segment of our overall economy. The cloud and the Internet allow industries to tap into a global workforce for problem-solving and outsourcing. The combination even enables the “sharing economy”, where underutilized resources can be repurposed, with Uber and Airbnb being the most notable examples.
He said that so far we have gotten the Internet right. It has been allowed to grow and remain unregulated. It was born global and remains global, providing a world-wide platform that offers interconnectivity and facilitates economic growth. But governments around the world are starting to rethink this as they consider how they can “listen in” on Internet transactions. Nelson said that we need a better vision for the future of the Internet. We need smart privacy, data and security policies that protect information while encouraging innovation and that companies should err on the side of transparency. Today, the most successful Internet-related books are pessimistic about the future. He, on the other hand, is optimistic about the future – the key is getting the policies right. In closing he provided a list of reading materials that can be accessed on the NFAIS web site at:
The audience question and responses for this session are as follows: Over the next five years, which of the following would you think might have the greatest impact on advanced information services? Shifts in funding allocations for R&D in the enterprise and education sectors – 51.1%; User concerns regarding widespread collection and subsequent use of all forms of data – 31.9%; Reclassification (and potential regulation) of the Internet as a public utility – 17.0%.
Conclusion
Without intentionally doing so, the speakers at the conference reinforced one another in the identification of a number of industry trends and issues: (1) Big data and data mining are the key drivers in accelerating the growth of knowledge and allowing us to uncover relationships and meaning that until now were undiscoverable. As I noted in last year’s conference overview [6] and as Mike Nelson noted during his presentation, the best book on this topic is Big Data: A Revolution that will Transform How We Live, Work and Think by Viktor Mayer-Schönberger and Kenneth Cukier that was published in 2013; (2) Current information policies are inadequate to allow librarians and content providers to meet user needs, but the policies, and the mindsets that created them, are difficult to change; (3) Publishers need to rethink their data: how can it be dissected into usable bits? How can those bits be recombined to add more value? (4) Interfaces to digital content must adapt to the user, not the other way around; (5) Collaboration and partnerships are the new normal – publishers and librarians cannot do everything themselves; and (6) the economics of publishing is in transition – the ideal business models have not yet been identified.
But for me the most dramatic change in the information industry over the past forty years is that the gap between the user of information and the traditional creators/disseminators (i.e. the publishers and librarians) has closed considerably. When I was a college student the physical library was where scholarship took place and the librarian reigned supreme. Only the publisher had the financial resources, skills, and technology to transform manuscripts into books and journals. Users really had no active part in the publishing process other than as authors and readers. Today, many students and young researchers are far more technically skilled than many of the staff in libraries and in publishing houses, and the resources required to create digital books, journals, datasets and supporting software tools are far less expensive than in the past. As noted by Judith Russell in one of the panel sessions, a lot of the users are quite innovative in terms of trying to find new ways to interact with librarians or to find other ways to get information if they believe that the library is inadequate in meeting their needs. In that same panel session it was noted that perhaps today publishers are now on the outside looking in because they tend to look at problems in traditional ways. Technology has closed the gap between the user and publishers/librarians. If Mike Nelson is correct in saying that in the next ten years we will witness as much technological change as we have experienced over the past two decades, perhaps twice as much change if we do it right – will the gap be closed? And if it is, what will the information publishing ecosystem look like and what roles will publishers and librarians play? I look forward to watching the future unfold.
Plan on attending the 2016 NFAIS Annual Conference that will take place in Philadelphia, PA, USA from February 21–23, 2016. Watch for details on the NFAIS website at:
Note: If permission was given to post them, the speaker slides are embedded within the conference program at:
About the author
Bonnie Lawlor served from 2002–2013 as the Executive Director of the National Federation of Advanced Information Services (NFAIS), an international membership organization comprised of the world’s leading content and information technology providers. She is currently an NFAIS Honorary Fellow. Prior to NFAIS, Bonnie was Senior Vice President and General Manager of ProQuest’s Library Division where she was responsible for the development and worldwide sales and marketing of their products to academic, public and government libraries. Before ProQuest, Bonnie was Executive Vice President, Database Publishing at the Institute for Scientific Information (ISI – now Thomson Reuters) where she was responsible for product development, production, publisher relations, editorial content, and worldwide sales and marketing of all of ISI’s products and services. She is a Fellow and active member of the American Chemical Society, and a member of the Bureau of the International Union of Pure and Applied Chemistry. She is also on the Board of the Philosopher’s Information Center, the producer of the Philosopher’s Index. She has served as a Board and Executive Committee Member of the Information Industry Association (IIA), as a Board Member of the American Society for Information Science & Technology (ASIS&T), and as a Board member of LYRASIS, one of the major library consortia in the United States.
Ms. Lawlor earned a B.S. in Chemistry from Chestnut Hill College (Philadelphia), an M.S. in chemistry from St. Joseph’s University (Philadelphia) and an MBA from the Wharton School (University of Pennsylvania).
About NFAIS
The National Federation of Advanced Information Services (NFAIS™) is a global, non-profit, volunteer-powered membership organization that serves the information community – that is, all those who create, aggregate, organize, and otherwise provide ease of access to and effective navigation and use of authoritative, credible information.
Member organizations represent a cross-section of content and technology providers, including database creators, publishers, libraries, host systems, information technology developers, content management providers, and other related groups. They embody a true partnership of commercial, non-profit and government organizations that embraces a common mission – to build the world’s knowledge base through enabling research and managing the flow of scholarly communication.
NFAIS exists to promote the success of its members and for more than 55 years has provided a forum in which to address common interests through education and advocacy.
