Abstract
Agricultural research for development (AR4D) often relies upon a centralized and mechanistic model of social science research. This is a model in which supposedly unskilled field officers (FOs) are recruited to implement household surveys that have been designed by faraway scientists. We argue that such research practices not only impede data quality and analysis but also devalue the work of FOs. We describe this phenomenon as a process of deskilling: one in which research protocols seek to limit the need for FOs to be skilled and also actively obscure the skilled work that FOs nevertheless do in the field. We link this process to a pervasive conception of “scientific rigor” that is grounded in an ideology of science as impersonal, disembodied, and mechanical. Drawing on feminist science and technology studies (STS), we highlight how the ideology and practice of deskilled research perpetuate colonial hierarchies of knowledge. We outline possibilities for and barriers to achieving more equitable and more generative relationships between scientists and FOs in AR4D.
A typical approach to social science research in agricultural research for development (AR4D) goes something like this: 1 A group of scientists conceives of a research project for understanding rural livelihoods and agricultural practices across a large, potentially transnational area. They find or design a set of research instruments focused on a household survey. The principal investigator—who holds a position in a North American or European university or international agricultural research center—has limited field experience in large parts of the area they propose to survey. So they collaborate with or consult counterparts at research institutions local to each of their study sites. This helps them to feel comfortable that the survey fits local cultural contexts. Having finalized the survey instrument, the scientific team recruits local field officers (FOs) to implement it. These FOs often have significant experience carrying out this kind of research. They are, nevertheless, not consulted or engaged with until after the survey instrument has been finalized. The instrument, moreover, is designed with an unskilled, novice enumerator in mind. To this end, the survey is made up of simple and discrete questions that leave as little room as possible for ambiguity or discretion in asking questions and recording answers. Indeed, the survey will be carried out on a tablet that forces enumerators to select from a predetermined set of possible responses. As a result, the only requirements of the enumerators are literacy in the survey language and attendance in a short one-day workshop that will teach them some basics about the content of the survey, a few tips on interviewing, and how to use the tablet.
Drawing on our experiences conducting ethnographic research with AR4D organizations and projects in China and East Africa, we critique the deskilling of research in the kinds of projects sketched above. There is of course a great deal of diversity within AR4D, but the above sketch indicates many features that are typical of the global development field and that are indicative of deskilling practices that we wish to challenge. Embedded in these research practices is an ideology and practice of deskilled data collection that undermines the valuation and realization of the actual and potential skills of FOs. This has two important and interconnected consequences for AR4D: it severely limits the quality and scope of AR4D research, and it perpetuates colonial hierarchies of knowledge. We argue therefore that recognizing and fostering the skills of FOs (skilling research, if you will) could simultaneously enhance scientific understandings of agricultural development and redress inequalities embedded within scientific and development practice.
Within this analysis, we conceptualize deskilling as a multifaceted phenomenon that involves:
1. An ideology that fetishizes mechanical data collection as “rigorous.”
2. A converse suspicion of any data collection or analysis that depends explicitly upon human skill and tacit knowledge.
3. A resulting effort to minimize opportunities for human discretion within research protocols.
4. The erasure and devaluation of the enormous amount of skill that FOs nevertheless rely upon in nominally deskilled data collection.
We introduce each of these facets in turn, but it is vital that they be understood as fundamentally interconnected. The social structure of research collaborations with FOs is, in this respect, inextricably connected to decisions and assumptions about epistemology and methodology. One cannot truly address the devaluation of FO labor without also rethinking dominant research methods and the understanding of scientific rigor to which they are tied.
The ideology of mechanical methods as rigor
Deskilled research is built upon and must be seen in relation to several underlying logics of global development. Perhaps the most obvious of these is the privileging of quantitative measures and replicable approaches over qualitative insights and open-ended research practices. This is evident in the widespread use of indicators and indexes. Indicators offer proxies for specific qualities. To achieve this, they “convert complicated contextually variable phenomena into unambiguous, clear, and impersonal measures” (Merry, 2011: 84). Indexes combine indicators to offer a comparable evaluation of different initiatives. As an example, the World Bank's World Development Indicators (WDI) include cereal yield, the proportion of land devoted to agriculture, and the proportion of GDP derived from different agricultural activities. The production of such indicators requires representation as a numerical value, whether that seems straightforward (as in the case of wealth or income) or requires conceptual sophistication and translation (as in the case of gender equity or environmental values). This faith in numbers as evidence of what works seems to have culminated in the use of statistics to enable causal inference within randomized controlled trials (RCTs). Although they have well-documented limitations (e.g. Adams, 2002; Cartwright, 2011; Kelly and McGoey, 2018), RCTs and quantitative social science more broadly have become the gold standard for knowledge about rural communities in AR4D.
This trend should be understood in relation to multiple further interrelated logics. One of these is the idea that a natural science laboratory experiment exemplifies a universal standard for good science. Experimental methods are considered more rigorous, more objective, more generalizable, and more representative. 2 They are presented as the model upon which social science should be based. This often betrays a flawed understanding of the laboratory sciences. It rests on the problematic premise that mechanical, impersonal data collection is both possible and ideal. The terminology of a research “instrument” is indicative of this. Just like an instrument in a lab experiment, a household survey instrument is designed so that it doesn’t matter who is using or implementing it. So long as they follow the protocol and use the instruments correctly, then any enumerator will arrive at the same data and results. Field officers working with Ben offered the insightful critique that they are reduced to an “extension of the tablet” to which the survey instrument is uploaded. By design, the survey instrument contained in the tablet matters more than the person holding it. In fact, the only time the FO comes to matter is—as is imagined to be the case for the laboratory technician—when they make an error. Of note here is that it is not merely that FOs are assumed to be incapable of anything more than mechanical labor. It is that FOs doing merely mechanical labor is what is desired. The best instruments—in the laboratory and the household survey alike—minimize opportunities for human discretion. Here humans—in contrast to instruments—are perceived as an inevitable source of error. 3
The logic of instruments and mechanical data collection serves imperatives for scale and scalability (Gedeon Achi, 2025). This can materialize in a variety of ways. First, it materializes in the immense value attributed to individual research projects that span large, even transcontinental, geographic spaces. One supposed advantage of deploying a single survey instrument across a diverse range of contexts is the claim that this can enable analysis of large-scale, even global trends, and can thus inform development interventions designed to operate on similarly large scales. A second way in which AR4D serves imperatives for scale is through the development of research “tools”—or sets of prefabricated research instruments—that can be adapted and deployed across multiple settings. So, for example, during his ethnographic research with AR4D scientists in China, Tim encountered a colleague who had developed a tool called the “Toolkit for Agroecological Knowledge” or “TAK” (McLellan, 2021, 2024). This toolkit was initially conceived because of this scientist's desire to understand farmers’ knowledge of agroecological systems in a particular part of South Asia. The toolkit, however, was subsequently developed into a set of instruments for data collection and analysis that could be used anywhere in the world. The developers of these tools present them both as a skilled-labor-saving device—the tools come pre-packaged with much of the skilled research design already done—and as a device for creating standardized and therefore globally comparable data.
One explanation for these trends is pressure from donors and development institutions for scale and value-for-money. Standardized formal research instruments promise potential for large-scale—or at least scalable—research, and they do so at a relatively low cost. It is important, however, to recognize that for many in AR4D, preferences for deskilled research are not simply “what donors want” (Eyre, 2021). They have become internalized as the gold standard for research. At times quite dogmatically so. In our ethnographic observations of scientists working in AR4D, there is a frequent conflation of “scientific rigor” with mechanical data collection, standardized formal research instruments, scalability, and quantitative methods. Conversely, anything that does not meet these characteristics—for instance, informal conversations or open-ended interviews with farmers—is often scorned as “merely anecdotal.”
Consider the designer of the TAK toolkit, a senior scientist called Josiah. Josiah explained in a seminar delivered to his colleagues that TAK uses formal surveys and data analysis methods that are greatly superior to what he dismissed as “anecdotal methods.” To illustrate his point, Josiah offered an example of a project where a team of scientists and development professionals had been trying to promote the cultivation of a particular tree species. Local farmers were refusing to adopt the tree, but Josiah and his colleagues could not fathom why. Only after they implemented his tool did the reason for the farmers’ refusals become apparent: “What TAK revealed,” Josiah explained, “was that this tree causes significant soil erosion.” As anthropologists accustomed to using the less rigid methods that researchers like Josiah dismiss as anecdotal, we find it hard to see why this discovery required a formal protocol. Indeed, ethnographic methods—or simply an informal chat with the farmers in question combined with some contextual insight—would surely be well positioned to answer the question: “Why don’t farmers want to plant this particular species of tree?” The dismissal of the anecdotal and the reverence for the replicable can, however, be understood in relation to the ideological association of mechanical methods and quantitative analysis with rigor (McLellan, 2021, 2024).
Lost in deskilling
The dismissal of so-called “anecdotal methods” is connected, moreover, to the relative absence of (or at least absence of appreciation for) the skills that more informal and open-ended approaches to social science would require. Let's imagine we want to understand the practices and knowledges of a particular group of farmers. Notwithstanding the ideological prejudices exhibited by Josiah, an obvious starting point for an anthropologist—and for many agronomists and agricultural development professionals too—would be to have an open-ended conversation with the farmers in question. Of course, such a conversation is less straightforward than it might first appear. A meaningful conversation will require you to have (or at least be able to quickly acquire) capacities to: speak a local language; behave and interact in culturally appropriate ways; build rapport and relationships with that community; devise questions that are appropriate to the context and that will elicit interesting and relevant responses; and interpret those responses within a broader local context and cultural repertoire. It requires, in other words, a set of communicative and interpretative skills. These are skills, moreover, that are often context specific. Many of the skills that would make one a successful community-based agronomic or ethnographic researcher amongst Chinese rubber farmers would not, for example, be transferable to the context of dairy farming in Tanzania. Given the difficulty Josiah had making sense of the farming community he was working with in South Asia, one might surmise that he lacked the appropriate communicative and interpretative skills for that context—and, as importantly, lacked the time or resources to develop them. In this context, the appeal of a toolkit like TAK makes a degree of sense. It shortcuts the lengthy process of local communicative and interpretative skill cultivation that an iterative and open-ended approach might require. That is to say, the toolkit offers a deskilled alternative to the highly skilled work that “anecdotal” research would require (McLellan, 2021, 2024).
Rather than investing in the skills required for an open-ended conversation, AR4D toolkits and survey instruments elicit data through a standardized set of questions. Significantly, it is not normally senior scientists like Josiah asking standardized questions but a group of FOs. These FOs are usually presumed to be low skilled. More importantly, there is a belief that they need few skills. In this respect, as important as the much-celebrated work that survey tools and instruments do to standardize datasets across time and space is their role in deskilling the survey interview. This is evident in the preference for multiple-choice questions and numerical ranking systems. Crop adoption studies, for instance, often measure things such as farmer willingness to plant new crops and farmer beliefs about new crops using a Likert scale or dummy variables (e.g. yes/no answers). This style of interview question minimizes the skill required of the interviewer and creates a barrier between them and the interpretative endeavor required to make sense of the response. In stark contrast to the open-ended conversation or ethnographic interview, these questions require survey enumerators to read from a script and record a straightforward and discrete numeric or yes/no response. 4
As many have argued in agricultural and other development contexts, however, there are severe limitations to dependence on these strategies. Quite simply, an enormous amount is lost in the translation of respondents’ perspectives and experiences into discrete yes/no answers, multiple-choice selections, and ranking scales. Vijesh Krishna et al. (2020: 91; also Glover et al., 2016), for example, highlight that in crop adoption studies: “Technological change is assumed … [to be] a complete substitution of old inferior practices with new superior ones, thereby making [these studies] fundamentally incapable of addressing the processes of adaptation, creolization, hybridization, and incorporation.”
The problem, moreover, is not just in the narrow framing of possible responses to questions, but the predetermination of the questions themselves. Deskilling the survey requires separation of things that skilled ethnographers would treat as inseparable: question conception, recording of response, and analysis of response. One reason they are inseparable is that they are part of an iterative process. Separating these dimensions makes for inflexible research and research that can only find what it anticipates finding. But the separation is an inherent and necessary condition of deskilling survey instruments. So long as the skilled work of question formulation and data analysis is performed by a different person and at a different time to the supposedly unskilled work of question asking and data recording, these different components must be separated.
Of course, we are far from the first to raise issues with the oversimplifications common in contemporary development research. As Krishna et al. (2020: 95) highlight, this is in part the result of desires to make everything “amenable for econometric analysis.” Here we emphasize the connected but less commented upon logic of deskilling data collection. When AR4D professionals commit to the idea of FOs as low skilled—and conversely academic instrument designers as the truly skilled practitioners—our repertoire of possible research strategies is limited.
Deskilled labor is devalued labor
Even within the limited practices of deskilled data collection, it is crucial to recognize that FOs exercise a great deal more skill and expertise than they are given credit for. For all the efforts to deskill survey instruments, their implementation in fact remains a highly skilled process. Deskilling, that is, involves not so much eliminating the need for skilled practitioners to implement surveys as obscuring the essential work that FOs do. We observe at least three areas where FOs perform crucial and underrecognized skilled scientific labor:
1. Navigating access and building rapport.
2. Interpreting and brokering.
3. Improvising when the script is not appropriate.
During fieldwork in a rural village in southwest Tanzania, Ben witnessed this labor in action when an FO called Paul passed through conducting research on behalf of an international applied research center. Arriving in the village, Paul found that a funeral was underway—an event he knew would likely take several days. Paul had been referred by the local government livestock extension officer to a man called Mwaitege who was regarded as a livestock expert. Paul explained briefly to Mwaitege that he hoped to conduct research about livestock keeping and climate-smart husbandry practices for a major international research center, which had an office in Dar es Salaam around 1000 km away. He asked for help in conducting the research. Mwaitege nodded: “Yes, I can help, but we are mourning.” Understanding that any attempt to collect data that day would be both disrespectful and hopeless, Paul did not raise his research again that day. Nevertheless, he remained for more than an hour eating and chatting with people at the funeral as a sign of respect.
The next day Paul arrived early in the morning. Again, he sat down with Mwaitege and accepted kande (mixed corn and beans) as well as fried bananas and a soda. Again, he started to chat with people, making no apparent effort to rush Mwaitege into helping him with his research. After around two hours, Mwaitege indicated for Paul to follow him back to his cattle shed. The first part of the research protocol that Paul had been given involved watching Mwaitege feed his cattle. Later, Paul sat down with Mwaitege to implement a detailed questionnaire instrument. The questionnaire was written entirely in English, which Mwaitege did not speak, but Paul translated it with ease. Writing on the stapled sheets of paper, he also made telling notes. For example, next to the question “Does the respondent make and/or use hay,” Paul wrote “no” in the box provided, but he also added in the margin, “Does not know what hay is.”
Paul returned again on subsequent days, conducting the same survey questionnaire, with Mwaitege introducing him to around 10 people to interview each day. A striking aspect of Paul's research practices was how much research he did beyond the scope of his relatively narrow survey instrument. On the long walks between houses, he diligently learnt greetings in the local language (specific to different times of day, and to specific activities). Locals would ask him questions about their cows, and he would attempt to provide advice, for example on the merits of certain feed supplements. He would, moreover, inquire about and show interest in people's lives, even when he did not have the survey in hand.
Ben was never sure if Paul wrote up these observations, or who might read them. Paul himself explained that part of the motivation for his friendly manner was to help people open up before he subjected them to the more abrupt questions demanded by the survey instrument. It was also clear that Paul did a considerable amount of improvised research work. He gained insight into mourning protocols (and their impacts on agriculture), into how people greet one another (reflecting attitudes to certain forms of work), and into local knowledge of foreign farming techniques such as hay and silage production. But it seemed clear to Paul that his employer had no interest in this data. Much of it remained unrecorded, while he was unsure if his marginalia was appreciated or not. Nevertheless, this knowledge allowed Paul to contextualize and interpret his interlocutors’ responses to the survey. Perhaps more fundamentally, it allowed him to ask the survey questions in the first place. Both in the sense that it helped him to phrase questions in ways that made sense in a local context. And in the sense that introducing himself through friendly open-ended conversations enabled Paul to build relationships and rapport with communities like Mwaitege's. Watching Paul carry out research in that village, it was evident that his methods were anything but mechanical. Paul relied upon a wide-ranging skillset that he had cultivated through many years of conducting surveys with rural communities across the region, and of which he was rightly proud.
By treating the survey as if it were a mechanical process, Paul's employer fundamentally devalues his labor. This is not to suggest that Paul's employer or the scientists who designed his survey are mean-spirited or intentionally disrespectful to FOs. Indeed, many AR4D researchers do want to value and respect their FO colleagues. The devaluation of FOs in research is, however, inherent in the conceptualization and practice of survey instruments. As soon as one acknowledges the skills that FOs like Paul use to improvise and innovate interviews, one admits things that the ideology of mechanical survey instruments does not allow: the contingencies, tacit knowledges, imperfect translations, and subjectivities that inescapably and fundamentally shape data collection. To acknowledge the role of FOs in improvising interview tactics and questions, or in analyzing their interlocutors’ responses, would, moreover, imply that FOs are involved in roles that mechanical rigor demands be both separate from data collection and reserved for the scientists: research design and data analysis. In this respect, Paul's employers may even recognize, on a personal level at least, the highly skilled labor that Paul does. But the ideology of mechanical methods demands that his skills are erased from the research record. The survey must be made to appear as if it were the result of a mechanical—and therefore rigorous—process.
Skilling research, democratizing science?
The mechanical model of survey instrument design and implementation is inherently hierarchical. It places skilled international scientists in relation to unskilled local FOs in ways that echo familiar colonial hierarchies (Biruk, 2018). 5 In Tim's ethnographic research, he nevertheless observed an AR4D colleague called Bob presenting deskilled research tools as something that might democratize research (McLellan, 2021, 2024). Bob himself had developed a research tool made of numerous survey instruments capable of assessing a range of factors relevant to agricultural development—gender equality, cropping decisions, and food security, for instance. He was inspired in part by his frustration working with data from a poorly designed survey instrument. The problem, as he saw it, was that the researcher who had designed the survey didn’t have the expertise or training necessary to do so. He suggested, moreover, that this kind of poorly designed instrument was a common problem for AR4D where colleagues—especially in regional offices and national partner organizations—often lack the capacity necessary to design high-quality survey instruments. To be sure, Bob is supportive of strategic efforts that would build research capacity at national and local levels (e.g. Leeuwis et al., 2018). Nevertheless, his approach in the meantime is to create research tools that might be used even by the least skilled of his colleagues. To create a tool that, as he put it, “anyone could use.” The democratizing logic here is that it takes so little skill to use the tool, anyone can adapt and use it to their purposes. But this approach unintentionally risks intensifying the trend of deskilling research and therefore devaluing the knowledge producing labor of FOs.
Skilling is an alternative approach to democratizing research that we think has more transformative potential. Rather than making instruments so simple that anyone can use them, skilling would begin from the premise that FOs and other people outside elite academic research networks have—and have the capacity to further develop—sophisticated sets of social science research skills. Skilling would mean improvising research methods that make the most of those skills. Given that research skills are necessarily locally specific and locally grounded, following the skilling of research through to its logical conclusion would necessarily require a shift away from standardized methodologies to locally grounded ones.
Ben is attempting a version of this with colleagues in East Africa (Eyre et al., 2024). They are developing a way of working tentatively described as “citizen ethnography.” It brings together academics with people who have lived experience of the research topic but no prior training in anthropological methods. They then collectively pursue ethnographic research which includes: conducting participant observation (together or separately), discussing observations and interpretations, and comparing direct experiences with relevant literature from different disciplines. Rather than merely training people in a new participatory method, citizen ethnography involves an attempt to dissolve the distinction between data collector and analyst. It recognizes and aims to cultivate the kind of insights briefly alluded to in the case of Paul above: insights that seem important at ground level but rarely fit the survey. It relies on and benefits enormously from the analytical capacities of people who do not have advanced university degrees. The shift this implies away from standardized to locally specific research questions and methods has clear implications for the much-fetishized goal of scalability. Given the grounding of skills in local contexts, a skilling approach would be difficult and perhaps impossible to reconcile with demands for scalability and standardized global datasets. Just as deskilling complements scalability, so skilling would undermine it. This is not to say learning across borders would be impossible. But transnational learning and comparison would need to look very different if grounded in local rather than universal methodologies.
Related to this, but perhaps more fundamentally, moving away from mechanical methods towards skilling research entails a challenge to the assumption that mechanical methods are inherently rigorous. As noted above, mechanical survey instruments are modeled on a particular ideology of natural scientific methods. Although natural science experiments are often written up as if they were the result of a mechanical process, studies of natural scientific practice have shown how they in fact rely fundamentally on a deeply intuitive feel for the organisms, systems, and environments that they study (e.g. Keller, 1983; Myers, 2008; Vaughn, 2017). 6 This is a feel that scientists develop through years of interaction with those organisms, systems, and environments. Forest ecologists, for instance, develop a feel for forests not just through numbers on a spreadsheet but by spending extensive time in forests. Analogously, one could say that Paul gains a feel for farming communities through the time he spends with them. But Paul is not considered to be a social scientist: he is merely the FO, and the survey instrument that he must implement leaves him no way of transmitting the intuitive feel he has developed back to the social scientists who run the project from afar. We argue that recognizing and valuing the tacit feel and research skills that FOs rely on in the field would actually bring AR4D social science more in line with the practice (if not necessarily ideology) of the natural sciences.
Recognizing and embracing the ways in which (natural and social) science is grounded in tacit and embodied knowledge practices is one way to undermine the hierarchical opposition that colleagues like Josiah draw between “anecdote” and “data.” Not by saying that anecdotal knowledge isn’t fundamentally subjective but by disposing of the fiction that survey data (or indeed any scientific data) can be the product of a truly impersonal, disembodied, or mechanical process. As feminist STS scholars have long argued, the sciences’ fictions of impersonal and mechanical knowledge production create and perpetuate gendered and racialized conventions within and beyond the sciences. These are conventions that include and empower some while excluding and disempowering others. The FO in AR4D is one especially stark example of the kinds of people and the kinds of knowledges that these conventions exclude and disempower. Beyond illuminating the inequalities and injustices perpetuated through scientific practice, a major goal of feminist STS scholarship is to craft alternative repertoires of rigor. Repertoires such as “situated knowledges” that are conceived in opposition to colonial hierarchies of knowledge and that seek to generate space for knowledges and practices that are excluded from the dominant, mechanistic model of science (e.g. Haraway, 1988; Liboiron, 2021; Roy, 2008). 7 Such scholarship would be an important starting point for conceptualizing a version of scientific rigor that embraces the possibilities of skilling research.
On top of transforming often dogmatically held ideas about scientific rigor, truly recognizing and empowering FOs as highly skilled partners would require shifts in funding and employment structures. In this respect, epistemological and institutional hierarchies are clearly interconnected. The possibility of equality between collaborating partners is undermined not only by ideologies of mechanical rigor but simultaneously by funding structures that put Global North academics in charge. That is to say, Global North-based academics might (and indeed should) try to treat local FOs as equal partners, but such efforts at a project level are inherently limited by funding infrastructures that privilege Global North academics and by development institutions that afford the former significantly more secure and better-paid employment than the latter.
Thus, while there is radical potential for skilling not only to enhance the quality of AR4D research but also to redress the poor treatment of FOs and colonial hierarchies of knowledge, a truly radical skilling agenda clearly faces significant barriers in AR4D, global development, and the social sciences more broadly. Indeed, the incompatibility of skilling with contemporary development institutions is both what makes it potentially transformative and what makes those institutions such a barrier to skilling research.
Conclusion
Deskilled data collection is grounded in and perpetuates colonial hierarchies of knowledge. We propose skilling research as an alternative set of practices and ideas, but we are nevertheless uncertain about the potential for realizing such practices within contemporary AR4D. In important ways, deskilling is thoroughly embedded in crucial assumptions and ideas that predominate not only in AR4D and global development but also in the sciences more broadly. Most significantly, deskilling reflects and perpetuates culturally and historically specific beliefs about what constitutes scientific rigor and who counts as a skilled expert. Emerging work with citizen ethnography is nevertheless an example of how space for skilling might be crafted within existing institutional arrangements. And we see hope in such projects for incremental change in the epistemic and structural hierarchies that characterize AR4D's relationships with FOs. The degree and depth of the transformations that such projects might achieve remain to be seen.
Whether one pursues a radical or incremental strategy, however, we wish to underline the fundamental connection between research methods and the valuing of FOs. The ideology and practice of formal, mechanical research methods cast FOs as workers who need not be skilled. Their interchangeability as generic, relatively low-skilled workers is a necessary methodological and epistemological requirement of the conventional household survey's inherently hierarchical and centralized mode of research (Biruk, 2018). Research methods are, in this respect, not merely reflective of social hierarchies; they are productive of them (Haraway, 1997: 23–39). Iterative and open-ended research methods such as ethnography are not inherently more egalitarian. The history of anthropology—not least the erasure of anthropologists’ own research assistants from the ethnographic record—attests to this fact. Nevertheless, we argue that it is possible to innovate methods such as citizen ethnography that attempt to foster, recognize, and value the skilled work of FOs. Such innovations are not merely about generating better data but ultimately aim to shift our understanding of what counts as good data, rigorous research, and objective knowledge. Skilling research can simultaneously be a means to richer data and analysis, and to redressing some of the inequalities and injustices that are pervasive within global development research collaborations. In this respect, decisions that AR4D project leaders make about whether to deskill or to skill research—and they are always making this decision even if they do not recognize it in these explicit terms—should be understood simultaneously as decisions about how to collect and analyze data, and how to structure (in)equalities within research collaborations.
In this respect, valuing and respecting FOs and their skills is much more than a matter of being nice to FOs on an interpersonal level: it is about making methodological and epistemological choices that consciously build space for their knowledges and skills as an inherent component of research.
In this spirit, we conclude with some questions we now try to ask ourselves, and that AR4D projects might ask when embarking on research design:
Do my research practices rely on the deskilling of FOs?
How might my research practices recognize and empower FOs as skilled partners in knowledge production?
What social and epistemic hierarchies do my research practices reinforce and perpetuate?
What social and epistemic hierarchies can my research practices challenge and disrupt?
Acknowledgments
Tim thanks colleagues at IFF for being such generous hosts during his time in China. Ben would like to thank Sharon Acio Enon, Dorah Adoch, Vicky Alum, Sarah Amongin, Joel Ekaun Hannington, Ann Gumkit Parkaler, Ben Jones, Ezra Okello, Robert Oluka, and James Opolo for their work developing citizen ethnography together, as well as Mario Schmidt with whom they are now working on the next phase. We both thank Jim Sumberg for the invitation to participate in this special issue.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: McLellan's work was supported by National Science Foundation Grant No. 1357194, as well as by various grants from the Cornell East Asia Program, the Cornell Mario Einaudi Center for International Studies, and the Cornell Society for the Humanities. Eyre's work was supported by the Economic and Social Research Council (Award Reference: ES/J500094/1) and a Leverhulme Trust Early Career Fellowship.
