Abstract
The National Library of Medicine’s AIDS Community Information Outreach Program (ACIOP) supports and enables access to health information on the Internet by community-based organizations. A technical assistance (TA) model was developed to enhance the capacity of ACIOP awardees to plan, evaluate, and report the results of their funded projects. This consisted of individual Consultation offered by an experienced evaluator to advise on the suitability of proposed project plans and objectives, improve measurement analytics, assist in problem resolution and outcomes reporting, and identify other improvement possibilities. Group webinars and a moderated blog for the exchange of project-specific information were also offered.
Structured data collections in the form of reports, online surveys, and key informant telephone interviews provided qualitative feedback on project progress, satisfaction with the TA, and the perceived impact of the interventions on evaluation capacity building. The Model was implemented in the 2013 funding cycle with seven organizations, and the level of reported satisfaction was uniformly high. One-on-one TA was requested by four awardee organizations and was determined to have made a meaningful difference for three. Attendance at the webinars was mandatory and high overall, and the webinars were deemed a useful means of delivering evaluation information. In subsequent funding cycles, submission of a Logic Model will be required of awardees as a new model intervention, in the expectation that it will produce stronger proposals and enable the evaluation consultant to identify earlier intervention opportunities leading to project improvements and evaluation capacity enhancements.
Introduction
The U.S. National Library of Medicine (NLM), a part of the National Institutes of Health, is the world’s largest medical library. The AIDS Community Information Outreach Program (ACIOP) was launched by NLM’s Specialized Information Services division in 1994 to provide the affected HIV/AIDS community with access to the vital health information increasingly becoming available on the Internet [7]. Since that time, more than 300 awards have been made to community-based organizations and their partners to enable the acquisition of needed computer and communications equipment, facilitate user training in accessing HIV/AIDS information, and create locally meaningful and culturally relevant resource materials based upon the latest authoritative and evidence-based information available from NLM.
These Internet-accessible resources center around AIDSource, NLM’s curated collection of HIV/AIDS-related information resources.
A formal evaluation of the ACIOP by Columbia University colleagues in 2012 found that although most program objectives were being met, important deficiencies existed in a significant number of the awardees’ final reports that were studied. These sometimes took the form of critical omissions of what was accomplished, often traceable to incomplete or limited project planning and/or to gaps in the skills needed to measure impact. This led the Columbia team to recommend that NLM seek to enhance the evaluation capacity of awardees, particularly less experienced community-based organizations. The intent would be to encourage better project planning and evaluation by awardees, including more thorough reporting and documentation of project objectives, measurement tools, and observed outcomes [9].
An invitational stakeholders Workshop convened by NLM in late 2012, attended by several previously funded and other leading community organizations as well as knowledgeable HIV/AIDS health leaders, endorsed this key recommendation. Participants considered alternative mechanisms for improving awardees’ evaluation capacity and agreed that this could best be accomplished by providing expert consultation to advance the capabilities of local project staff with limited evaluation experience. Additionally, they urged that means be explored to make awardees’ project methods and results more transparent and sharable among all awardees by encouraging the use of listservs and blogs [9]. NLM resolved to proactively provide external technical evaluation consultation services, offered on an experimental basis to newly funded awardees at no cost to them, throughout the one-year duration of an ACIOP project. Access to a blog and group webinars would also be provided.
The pilot experiment
In the following ACIOP funding cycle, year 2013, NLM made significant changes in ACIOP proposal preparation requirements, project reporting specifications, and the need for documentation of evaluated project results. In September 2013, seven ACIOP projects were awarded funding in the amount of $40,000 each for a one-year period. These projects constituted Cohort 1 of a planned iterative multi-year experiment in evaluation capacity building at the community level. Each cohort runs for one year; the first year is the subject of this paper. See Table 1 for project summary descriptions, including the geographic location of the projects and the target audience. The initial awardees included three projects whose proposals explicitly focused on the use of emergent mobile health technology that is of increasing interest to NLM as a means for disseminating HIV/AIDS prevention and treatment information.
Table 1. 2013 AIDS community information outreach project award descriptions
NLM contracted with Oak Ridge Associated Universities (ORAU), headquartered in Oak Ridge, Tennessee, and its Oak Ridge Institute for Science and Education (ORISE), to provide the services of an evaluation consultant to assist the leadership and staff of these newly funded ACIOP projects. The Consultation was offered to each awardee at no cost throughout the one-year project period.
Evaluation capacity is multi-faceted: Several operational definitions informed our goal of enhancing evaluation capacity at community-based organizations. The American Evaluation Association focuses on outcomes, and defines evaluation as a systematic process to determine merit, worth, value, or significance (www.eval.org [2]). The Centers for Disease Control and Prevention (CDC) focuses on methodology and program improvement, and defines evaluation as the systematic collection of information about the activities, characteristics, and outcomes of strategies (i.e., programs) to make judgments about the strategy, improve strategy effectiveness, and/or inform decisions about future strategy development (www.cdc.gov/eval [4]). NLM adds an emphasis on outcomes-based evaluation to determine what changes have been achieved and what results have been accomplished, asking the question: Are you making a difference? [13]. When organizational evaluation capacity enhancement is the explicit goal, we can further specify a need to strengthen an organization’s ability to consistently measure success, make programmatic improvements, and provide funders with evidence of good fiscal stewardship [1].
Theoretical constructs help guide the experiment: The evaluation consultant framed the intervention as a technical assistance model grounded within the constructs of two behavioral theories. The first, Participatory/Community Empowerment Theory [6,18] holds that community-based organizations should be principal agents in the telling of their own project evaluation story. The second, Diffusion of Innovations Theory [12,14] sees evaluation as a new and valuable skill set to be disseminated, adopted, and used; an innovation that has the potential to enhance organizational effectiveness and impact project outcomes.
The following research questions were addressed:
What organizational and situational evaluation capabilities contribute to successful awardee project planning, implementation, and outcomes? How may these be facilitated and enhanced by the provision of technical assistance?
What constitutes an effective technical assistance intervention model? Which strategies work well, and which less so? Are there predictable differences in awardee organizational and/or project characteristics that affect the acceptance and utilization of offered consultation?
What are the characteristics of an effective evaluation consultant in this context? How were the projects positively impacted by the consultant? How did the consultant make a difference?
The focus of this paper is on answering these and related questions, thereby developing a better understanding of the evaluation capacity of community-based organizations, both as it exists initially and as it is amenable to enhancement. Of primary interest is assessing the value of individualized evaluation consultation as provided in this first year, and the lessons learned that may lead to further improvements with ACIOP awardees in subsequent years.
Valuable insights were gleaned from a review of the evaluation research literature that helped establish operational benchmarks for identifying and applying evaluation best practices. These were informed by relevant theories from the disciplines of public health and communication research that suggested effective strategies for fostering the adoption of these best practices by awardee organizations. ACIOP staff and the evaluation consultant developed, put in place, and pilot tested an Evaluation Capacity Building Model (The Model) to improve awardee project planning, implementation, evaluation, and documentation of outcomes. Interventions were strategically executed at key points in Cohort 1’s one-year project timeline (September 2013 to September 2014).
The Model interventions are in reality a ‘mixed model’, comprising structured data collections (Quarterly Reports and Surveys); ad hoc follow-up questions and responsive information exchanges (Follow-up Telephone Calls and Key Informant Interviews); group instructional resources to improve project reporting and enhance evaluation capacity (Webinars and Blog); and one-on-one individualized evaluation consultation (Consultation).
The Case Studies served as a comprehensive descriptive resource from which to identify and report the experiment’s main findings presented below in the Results section. They are derived from a qualitative analysis of each intervention comprising The Model. They also contain a wealth of detailed project-specific activity data, including many quantitative measures (e.g., frequency counts of various kinds) pertinent to documenting the performance of a project. These data are not reported here, as they are not germane to the more general purpose of this paper: assessing the value and scalability of The Model tested in the pilot experiment.
In summary, The Model incorporates several structured data collections, administered iteratively at key points during project implementation, that provide feedback to ACIOP program managers, the evaluation consultant, and project principals. These consist of periodic reports of project progress (quarterly and final); general-purpose online surveys that include both open-ended and closed questions; and subsequent key informant telephone interviews with project personnel probing project progress, difficulties encountered and overcome, satisfaction with the technical assistance offered and received, and the perceived impact of the interventions on evaluation capacity building.
Results
Overall, utilization and effectiveness of The Model and its interventions were positive but somewhat uneven, as described in the summary and details below:
The one-on-one Consultation was requested by four of the seven awardee organizations and was determined to have made a meaningful difference for three of them.
Attendance at the group Webinars was 90%; however, vocal participatory contributions by the organizations’ representatives tended to be low, and it was not possible to assess the Webinars’ utility in real time, apart from the generally positive comments offered later in survey and interview responses. Five awardees reported accessing the Blog regularly; one organization said it did not find it useful, and only two organizations posted content. The Blog did not realize its potential as a mechanism for individual projects to share and exchange information on the scale that was anticipated.
The organizations reported a positive effect of the interventions on their ability to effectively plan, collect data, evaluate and report their project activities and outcomes. The level of detail of the quarterly reports increased with each reporting cycle. The awardee organizations appear to have benefited from the additional structure of the reporting templates, aided also by helpful group and individualized instruction on how to provide the needed information.
Organizational and evaluation capabilities posited a priori by the typology and found to contribute to successful project implementations and outcomes were: age and experience >10 years; clientele served >250 persons; a clearly defined mission that supports evaluation; evaluation champions inside the organization; many external strategic partners; above-average evaluation knowledge among leadership; frequent use of evaluation techniques; dedicated evaluation resources and staff; low staff turnover; and staff open to new ideas.
Barriers to success in achieving project objectives included new staff not being properly oriented and trained; organizations not having a credible evaluation plan at the outset; and organizations not taking advantage of the free Consultation offered.
Six organizations reported successfully achieving their project and evaluation goals and objectives to their satisfaction. However, underestimating the difficulty of accomplishing project objectives can be problematic; for example, one project had to obtain external technical support to build a new website because the work lay beyond the organization’s internal skill set.
It is instructive to drill down within the individual Case Studies to assess The Model’s ability to enhance evaluation capacity, and to impact project planning, reporting, and outcomes. We begin with the findings of the four organizations that requested Consultation.
Organizations that did not request Consultation are considered next.
Discussion
The following research questions were at the center of this pilot experiment, and helped guide the interpretation of results:
What organizational and situational evaluation capabilities contribute to successful awardee project planning, implementation, and outcomes? Best practices supportive of situational evaluation capacity building are having staff who are properly oriented and well trained; the existence of an explicit project evaluation plan; good communication with NLM and its consultants in identifying and resolving problems encountered; networking and information sharing between awardee organizations; and having evaluation champions inside and outside the organization. NLM can potentially facilitate all of these situational factors through its evaluation interventions, but not those factors that are structural and also associated with good evaluation outcomes (e.g., mature organizational age, substantial size, and established reach). However, NLM could assign greater weight to, and fund, proposals from organizations that do evidence such characteristics.
How may the positive situational variables associated with good evaluation outcomes be facilitated and enhanced? We found that offering technical assistance where and when needed, in the form of individual Consultation, made a meaningful difference for most of the organizations that requested it.
What constitutes an effective technical assistance intervention model? Which strategies work well, and which less so? A Blog appears to offer minimal value as a passive means to share project tools or other materials that can enhance evaluation capacity. Organization staff may have too little time to engage in activities that offer little immediate prospect of benefit in the form of content applicable to achieving the project’s objectives or evaluation goals. On the other hand, mandating attendance at structured group Webinars is a relatively low-cost and easily scalable intervention that can impart general information that strengthens evaluation capacity. It is less effective than one-on-one Consultation in addressing individual project questions or concerns.
What are the characteristics of an effective evaluation consultant in this context? The ability to form effective and trusting professional relationships with the principals and staff of awardee organizations is clearly important. It is at once a means to encourage and sustain buy-in, and to reduce the tendency to perceive external evaluators as a potential threat rather than a source of new insights, problem-solving skills, and evaluation capacity building. First-hand experience working with community-based organizations is an asset, as they are a major ACIOP awardee category with unique needs, strengths, and limitations that call for sensitivity and understanding. This is especially true for minority-serving organizations, where socio-economic, cultural, and language differences can present formidable barriers to successful collaboration. The evaluation consultant should also be proactive and strongly motivated to make a difference in helping an organization achieve its project objectives and improve its evaluation capacity. While a solid skill set encompassing the traditional evaluation methodologies used in formative and summative program evaluations is necessary, so too is an awareness of and experience with so-called ‘alternative metrics’ (altmetrics) that are becoming increasingly commonplace in the evaluation of Internet-based social media usage and impact [17]. These statements are based on the self-appraisal of the evaluation consultant, along with the concurrence of NLM management.
Are there predictable differences in awardee organizational and/or project characteristics that affect the acceptance and utilization of offered consultation? At this juncture, this question cannot be answered with certainty; additional research is needed. Of the four organizations requesting Consultation, three were judged to have benefited meaningfully, but a sample of this size is too small to reveal predictable patterns in organizational or project characteristics.
How can a logic model become part of an intervention model to enhance evaluation capacity? We have begun to experiment with a requirement that ACIOP projects provide a logic model: a visual representation of the project that illustrates how planned activities are linked to project results. Logic models come in many different formats, but they all present the shared perspective of an “if… then” statement: “If we obtain the necessary resources and conduct certain activities, we will achieve our desired outcomes” [13]. Eight Cohort 2 awardees for 2014 are currently being assisted by the evaluation consultant in using a logic model template that specifies project inputs, activities, outputs, outcomes, and impact. Inputs are the resources needed, including people, time, money, materials, equipment, and technology. Activities are what you do (e.g., conduct training sessions, provide services) and who is reached (e.g., participants, agencies, community-based organizations). Outputs are the direct, countable products of those activities (e.g., the number of sessions held or materials distributed). Outcomes are the results or benefits of the project, including short-term outcomes such as changes in knowledge; intermediate outcomes such as changes in behavior; and long-term outcomes or impacts such as changes in healthcare access and individual or population/community health status. An evaluation logic model is part of the project logic model, and describes how the project will be evaluated to support and document the overall project outcomes.
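For illustration, a minimal hypothetical logic model for an ACIOP-style outreach project (all specifics here are invented for the example) might read as follows. Inputs: the $40,000 award, two trained outreach staff, computer equipment, and AIDSource materials. Activities: conduct twelve community training sessions on locating authoritative HIV/AIDS information online. Outputs: 150 clients trained and 500 resource guides distributed. Short-term outcome: increased client knowledge of authoritative information sources, measured by pre- and post-session surveys. Intermediate outcome: clients independently access AIDSource between sessions. Long-term impact: improved treatment adherence and health status in the served community. The companion evaluation logic model would then specify, for each link in this chain, the measure (e.g., attendance logs or survey scores) used to document it.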
Conclusions and future plans
The Evaluation Capacity Building Model will continue to be the subject of research with new ACIOP awardees, building toward a critical mass of experience with more organizations and projects upon which to reliably judge the effectiveness of The Model and the lessons learned. Modifications to the interventions will be introduced as appropriate. The project logic model was introduced in mid-year of the Cohort 2 award cycle, partly in response to the hypothesis that a more proactive intervention by the evaluation consultant may be needed to identify, earlier, potential problem areas in project planning and/or evaluation where individual technical assistance could help. NLM is also requiring that all new funding proposals for 2015, which will constitute Cohort 3, include a logic model as part of the proposal package. It is hypothesized that this will also encourage organizations to devote greater effort to advance project planning and evaluation, thus producing stronger proposals.
The ACIOP funding solicitation for 2015 will also be modified conceptually to place a new emphasis on attracting proposals that target men who have sex with men (MSM), an especially high-risk subpopulation of younger males that accounts for a preponderance of new HIV infections in the U.S. [3]. Additionally, there is renewed emphasis on attracting proposals that employ mobile health technologies and social networking services, which are increasingly relied upon as easily accessible communications media and sources of health information [15]. Both changes will be reviewed at an invitational Workshop scheduled for Spring 2017, along with any impact on strategies for enhancing the evaluation capacity of organizations working with this target population and/or employing these rapidly proliferating and accessible communications media. NLM is also expanding its efforts in the application of responsive design and other means to use these Internet-based platforms effectively for information dissemination, and to extend its brand as a trusted source of health information.
Limitations
Enhancing an organization’s evaluation capacity may be facilitated by addressing situational (modifiable) factors that are amenable to interventional change, such as training a stable and motivated staff. Structural (fixed) weaknesses inherent, for example, in relatively young organizations of small size and limited reach are less amenable to change by NLM. Such organizations were likely selected out by the competitive ACIOP award process, which now gives greater weight in its review criteria than in previous years to an organization’s prospects of being able to plan and evaluate its proposed project. This limits our understanding in the present study of The Model’s robustness, as only organizations characterized post hoc as having above-average or average evaluation knowledge and experience were included; none was below average. Another limitation is that a test of any model with only seven subjects must be viewed as preliminary, as is the present pilot; a critical mass of additional projects is needed. It remains for The Model to be used with additional awardee organizations in future funding cycles, which will further inform our understanding of its strengths and weaknesses relative to NLM’s ability to provide appropriate and cost-effective technical assistance. However, it will likely remain a practical limitation that organizations having below-average evaluation capacity will continue to be underrepresented: NLM’s ability to fund ACIOP project proposals is finite, and the overriding goal must be to continue supporting organizations that have a reasonably good chance of achieving their outreach objectives, not necessarily to provide a comprehensive test of The Model’s robustness across a broad continuum of organizations possessing strong to weak pre-award evaluation capacity.
