Abstract
The success of development projects and evaluations hinges on access to research protocols and methodologies that consider the needs and characteristics of stakeholders, subjects, and context while remaining rigorous and culturally sound. These efforts are often complicated by a dearth of tools validated for reliability in the communities of interest, and by data collection environments that require reliance on international and domestic partners with varying backgrounds to collect, process, and share data. Overcoming these challenges requires flexibility, intentionality, and continuous improvement across all partners.
This article describes lessons learned from a mixed-methods longitudinal evaluation of an educational intervention in post-conflict Somalia. It shares strategies devised and implemented by the research team for developing context-appropriate tools, collecting data effectively, training field staff, and cleaning data while maintaining its integrity. Evaluators, researchers, and practitioners focused on fragile environments and multi-site, complex interventions will benefit from the best practices and tools shared here.