Abstract
This article argues that translating social research findings into policy recommendations may pose a significant methodological and practical challenge. Due to the current emphasis on the ‘third mission’ of universities and the ‘relevance’ of scientific knowledge, it has become more common for sociologists to engage in projects that include the development of social-research-based recommendations. This article empirically analyses an example of such a project – an extensive, European Commission–funded study on the role of men in gender equality in Europe – and shows that developing recommendations may receive lower priority than producing findings, be unguided by any specific method or approach, and operate within an ‘aura of evidence’ rather than a ‘hierarchy of evidence’. Discouraging an all-too-easy criticism, this article argues for more reflection, frameworks, and methods that could support sociologists in the development of research-based guidelines for policies.
it is hard to get an ‘ought’ from an ‘is’
Introduction
In recent decades, sociologists in academia have increasingly been expected to produce knowledge of high policy relevance and ‘social impact’. A vital part of such engagements is often to propose some kind of research-based advice, recommendations, or guidelines for policy or practice. However, the very process of moving from social research results to such ‘prescriptions’ receives little attention, both scientifically and politically, as if it were just an unproblematic translation. In this short article, we question this assumption by taking a closer look at the process of developing recommendations in an extensive, pan-European, policy-oriented social research project. Our findings show how challenging, fragile, and substantially unprotected by the sociological repertoire of theories and methods such an endeavour can be.
It seems justified to say that since its beginnings sociology has aspired to offer practically useful knowledge on ‘the social’ – knowledge capable of informing and even guiding reflection, policy, and action. There have been many ways in which sociologists have engaged with their audiences – as public scholars (in the vein of Charles Wright Mills (1959)), advisors for policymakers, members of international organisations, experts and activists in the non-governmental sector, or even consultants for the business world. There is also a long tradition of schools and approaches of practically oriented sociology, such as clinical sociology, various forms of action research, and public sociology, to list only some prominent examples. There has also been a growing awareness that challenges – such as climate crises, inequalities, technological advancements, and health risks – cannot be reasonably handled without taking into account their social aspects (Buyalskaya et al., 2021; Flecha et al., 2015; Van Langenhove, 2012).
Nowadays, however, there has been increased pressure from outside the discipline (mostly from governments and societies) to prove the ‘usefulness’, ‘relevance’, and ‘social impact’ of knowledge produced by the social sciences and humanities (Bastow et al., 2014; Brewer, 2013; Holmwood, 2010; Komp, 2017). Accordingly, sociologists have increasingly been required to deliver knowledge that is practically relevant, and this characteristic has repeatedly been stated to be at least as important as rigour, robustness, and validity (British Academy, 2008; Irwin, 2019; Oliver and Cairney, 2019). The emphasis on the ‘third mission’ of the university has often been coupled with the proliferation of an ‘audit culture’, embodied by the so-called ‘impact case studies’ in the Research Excellence Framework in the UK and the development of similar audit regimes in other countries such as Australia, Canada, the United States, the Netherlands, and Poland (Boswell and Smith, 2017; Smith and Stewart, 2017). These transformations have often been interpreted as distinctive of current forms of bureaucratic rationality, the expansion of ‘managerialism’, and the shift ‘(…) from long-term planning to maximising short-term gains’ (Connell, 2019: 116). They have been criticised as contributing to the erosion of the public university, the strengthening of ‘academic capitalism’ (Jessop, 2018), and even the emergence of ‘dark academia’ (Fleming, 2020). Even in a less critical view, the systemic character of these shifts seems evident, significantly altering the institutional context in which sociology is situated. To put it simply, as the modes of knowledge production have become more application-oriented (Nowotny et al., 2003), the external value of sociology has become increasingly dependent on its ‘practical relevance’ and ability to formulate guidelines for action.
This, in turn, often sends sociologists in pursuit of relevance and ‘impact’ through large grant-funded, policy-oriented projects.
Such engagements, however, may be problematic for many reasons, such as complex relationships with audiences, the value-laden choice of policy goals, problematic access to resources, dependence on political context, and the necessity to perform credibility work (Geiger, 2021). Especially in administrative types of research (those commissioned to large teams of scholars with substantial funding from a public body), there are varied risks, including the emergence of power hierarchies within the research team, heavy reliance on existing networks (Duke, 2018), and the alignment of research results with the agenda of the funding institution.1 We argue that among these complexities there is also a largely unnoticed challenge: how to move from the descriptive and explanatory knowledge on ‘the social’ that sociology routinely produces to prescriptive knowledge on what ‘should’ or ‘could’ be done in order to accomplish socially desirable (however defined) aims.
It seems that the silent yet prevailing assumption has been that ‘adequate’ and ‘good-quality’ knowledge would somehow lead to viable guidelines that could then be implemented in policies and regulations. In line with this view, it has traditionally been believed that providing knowledge that meets specific epistemological standards is essential yet also problematic, given the complexity of the social world (Stehr and Grundman, 2001). However, there are already studies pointing out that the epistemological ‘excellence’ of knowledge does not necessarily guarantee its practical ‘relevance’ (Paine and Delmhorst, 2020) and that the usefulness of knowledge relies not on its theoretical refinement or robustness but on its ability to guide and justify actions (Rudnicki, 2023; Stehr and Grundman, 2001). Furthermore, there are many areas of sociology – such as gender studies, energy and sustainability research, migration studies, disability research, media consumption research, and public deliberation studies, to name just a few (Blue et al., 2021; Konopásek et al., 2018; Shove, 2010; Stevens, 2021) – with abundant evidence and a direct relation to pressing social challenges, yet still lacking a straightforward translation into specific policy or practical recommendations (not to mention their implementation).
Here, we aim to explore the process of developing policy-oriented recommendations based on social research results more deeply and empirically. In this endeavour, we draw on the tradition of Science and Technology Studies (STS), a field that refrains from assessing knowledge against methodological standards and focuses instead on studying the down-to-earth practices in which knowledge is made and claimed to be ‘truthful’ and ‘objective’. Among the most popular and influential research in this field are the ‘laboratory studies’ (Sismondo, 2010), which observed scientists in their daily, mundane activities and showed how ‘scientific facts’ become constructed in the course of these activities, often with the help of devices, materials, and objects. While STS has a long-running bias towards studying the production of knowledge within the natural sciences, scholarship focusing on the social sciences from the STS perspective is growing (Camic et al., 2011). In our research, we adopt the STS approach to study the micro-processes of making policy recommendations based on social research results.
As our empirical case, we chose a large, international, European Union (EU)-funded, knowledge-based project on the role of men in gender equality. The examined project was officially commissioned to provide a reference point for developing gender equality policies in the EU. It was conceived as a research-based endeavour involving dozens of academics (primarily sociologists) and research-trained experts and drew extensively on up-to-date scholarship. A planned and essential component of the project was to develop a series of recommendations that were to inform and influence the actions of the EU, national institutions, and a diverse group of stakeholders. As the project was completed around a decade ago, we tried to reconstruct – through interviews and secondary data – the process of producing research-based policy recommendations as it happened in this project, assuming the STS-inspired perspective of studying the micro-processes of knowledge production. Although our results are preliminary, we think they are sufficient to highlight some critical aspects of the problem and, hopefully, to draw more sociological attention to it.
Our approach and method
Before moving on to the analysis of the examined project, it is important to note that while the assumption that ‘solid’ knowledge is a precondition for reasonable interventions in practice may be quite conventional, it is far from evident in some domains. To start with, there is a long tradition of action research (including but not limited to its participatory and critical forms) that distances itself from universalist, objectivist, linear, and expert-led modes of knowledge production and intervention. Action research proposes a collaborative, participatory, and reflective process that ‘embraces both problem posing and problem solving’ (Cohen et al., 2018: 440). It is classically conceived as a cyclical process of planning, acting, observing, and reflecting, assuming no rigid separation between research and intervention, knowing and acting, or participants and experts (Kemmis et al., 2014). Outside sociology, there are other domains, such as creative problem-solving, where problem-understanding is just one part of the process of arriving at a solution and is followed by idea-generation and idea-acceptance phases (Heijne and Van der Meer, 2019). Notably, various specific methods (like brainstorming or rapid prototyping) may be employed at different phases of such a problem-solving process to find, develop, and evaluate ideas for actions and interventions. It has also been noticed that producing ideas for practical solutions relies analytically more on abductive than on inductive and deductive reasoning, the latter two being fundamental in discovering and verifying scientific ‘facts’ (Dorst, 2011). In all these approaches, inside and outside sociology, it is recognised that gaining a thorough understanding of the problem is only part of solving it; moreover, solution-finding processes are routinely guided by specific frameworks and methods.
Inspired by the distinction between problem-understanding and problem-solving, in what follows we study the process of moving from the former to the latter. We adopt the STS approach of looking inside the micro-processes of knowledge production to explore what happened when a research team attempted to produce research-based recommendations for interventions in a complex social field. As is conventionally done within STS-oriented research, we put aside some of the political and institutional contexts that might have impacted the analysed project, such as the institutional framework of the EU (including the selection of policy objectives or the role of scientific legitimation of policies). Instead, we focus only on what happened on the micro level – that is, inside ‘the lab’ and within the research team. In doing so, we want to focus on the mundane, routine practices of knowledge-making, hopefully supplementing the existing institutional accounts with an account of more fine-grained phenomena.
As our empirical case, we have taken the project ‘XXX’, which was conducted between 2010 and 2013 and financed by the European Commission as a part of the European Union Programme for Employment and Social Solidarity (PROGRESS). It was the first pan-European project in which the results of systematic quantitative and qualitative studies on the role of men in gender equality were analysed for all the EU member states at the time (n = 27) and four associated countries of the European Free Trade Association (EFTA). The official aim of the project was to provide ‘the scientific background for research-based policy formulations and implementations’ (Scambor et al., 2013: 6), and the development of research-based recommendations (on both the national and EU levels) was a significant factor in the project tender. The project team involved several dozen experts with scientific backgrounds, many of whom held academic positions at the time of the project. Importantly, as a pan-European project on gender equality, it required handling the considerable complexity surrounding cross-national differences and intersectional matters. In our view, all of these factors make the project an interesting case to analyse. The second author of this article was a member of the project’s research team and one of the authors of the final report. This made contacting the project team members more manageable, yet we stress that our examination was not planned in advance (before or during the analysed project). It was also not commissioned as a sort of external, post hoc evaluation, and when and how we carried it out were entirely independent of the project-funding institution.
Our primary data come from expert interviews. So far, we have collected data from a relatively small (n = 10) but diverse group of mostly sociologists or sociologically trained experts who took different roles in the project (project managers, final report editors, advisory board members, national experts, and research-policy transfer experts). Our sample so far consists of six women and four men, residents of Austria, Germany, the Czech Republic, Slovenia, and the UK. The fieldwork was conducted between October 2021 and May 2022, with both authors participating in data collection. The interviews were conducted in English via Zoom, with an average duration of around 1 hour. The interview guide was open-ended, and during the interviews we discussed the role of the interview partner in the project, the process of developing recommendations, the ratio between scientific and policy-related outcomes, the project’s main challenges, and the significance of each of the project’s deliverables and outputs.
Apart from the interviews, we have also used secondary data. Here, our main source has been the final 285-page report XXX, published in 2013 (Scambor et al., 2013). It encapsulated research findings and recommendations, was accepted by European Commission representatives, and in the minds of our interviewees, it constituted the project’s main scientific ‘output’. As supplementary material, we have also used the project’s available working documents, such as minutes from team meetings, reports from the project’s events, and earlier versions of the report.
Regarding anonymity, all experts and the organisations they represent remain nameless in this article. However, as the project was very specific, the experts might still be identifiable to some degree; all of them were fully aware of this and consented to it.
Results
In what follows, we outline our findings that show the problematic – in our opinion – aspects of the translation of research results into policy recommendations in the analysed project.
Knowledge made in between
The first problematic area was how the ideas for recommendations were found and validated. According to our interviewees, and as reflected in the composition of the final report, the project’s emphasis was on synthesising existing knowledge to provide a thorough account of the social situation of men in the EU. In line with this assumption, significant effort was put into a thorough analysis of the relevant existing scholarship and data. This was also reflected in the composition of the project team, which gathered some of the leading scholars in the field of men and masculinity studies and a large group of national experts (as a way to preserve sensitivity to national contexts).
Nonetheless, while the attention to the scientific ‘quality’ of the descriptive knowledge (i.e. the situation of men in the EU) seems natural in such a research-based project, we found it quite out of balance with the effort put into the making of prescriptive knowledge (i.e. what could be done to improve gender equality). While the final report encompassed a significant number of recommendations, no distinct ‘work package’ or other specific component was explicitly devoted to their generation or assessment. In retrospect, even the very distinction between the production of research findings and the production of recommendations was difficult to draw: some of the recommendations were proposed at the early stages of the project, others appeared through further discussions, working seminars, or thanks to the advisory boards, and some took shape during the final consultations on the almost-ready report among the small group of project coordinators and EC representatives. Thus, making recommendations was not a distinct part of the project but, as one of our interviewees put it, ‘a parallel process, from the beginning onwards’ [R4].
Nor were other resources – money, techniques, expertise – explicitly allocated to producing recommendations. While the project drew on multiple quantitative and qualitative data sources, no identifiable method, procedure, or approach was employed for finding ideas for recommendations or evaluating them in terms of their reliability, applicability, or feasibility, at least not in a fully conscious or systematic way. The only major strategy we identified was consulting the recommendations during three workshops and a final conference with stakeholders, EU representatives, and practitioners from outside the project. These events involved the presentation and discussion of findings and recommendations, although the meetings were rather loosely structured, resembling an open deliberation more than something organised according to any specific approach. The general assumption seems to have been that prescriptive knowledge would somehow emerge from the generation of descriptive knowledge rather than be produced with the use of any specific framework.2
Loose articulation
The final project report comprised four main chapters and a concluding one, each followed by a two- to four-page sub-chapter entitled ‘recommendations’ (Scambor et al., 2013). These sub-chapters encompassed 11 to 26 individual recommendations each, divided into several thematic sections. All the recommendations in the report were formulated as bullet points, mostly articulated as statements in the imperative, such as ‘make benefits of men’s care involvement visible’, ‘develop violence prevention work in childcare centres and schools’, and ‘support campaigns and research (. . .)’, often accompanied by a sentence or two of additional explanation. Overall, the report included 125 individual recommendations, taking up 16 of the 285 pages of the final document.
While the way the recommendations were formulated seems in no way unusual for this kind of project, we think it may nevertheless be consequential. It seems significant that the recommendations – the ‘objects’ in which knowledge was encapsulated – were presented without any specified hierarchy (like a ranking or priority levels); only occasionally was a particular recommendation emphasised linguistically as being more general or essential than the others. It is also not entirely clear whether all the recommendations were supposed to be treated as a set of obligations of equal importance, a repertoire of suggestions to select from, or something in between. Furthermore, there is little in the recommendations to specify the implementation process, such as formulating concrete objectives, defining promising paths to accomplishing them, or setting key results, milestones, or roadmaps.
These observations are not meant here to suggest that such recommendations should, in principle, be narrowed down to a set of specific objectives organised in a coherent structure. We only point out that this way of formulating recommendations – perhaps regarded as self-obvious – impacts how they are or could be used. In this argument, we draw on the long line of STS studies that point out that scientific ‘objects’, whether material or textual, play a significant role in the production and usage of knowledge since they afford or enable certain forms of seeing, understanding, and – importantly – acting (Coopmans et al., 2014). From this perspective, the open form of recommendations and their loose structure may mean that they require further ‘processing’ – such as interpreting, selecting some of them, translating them into more concrete actions, or preparing implementation plans. It thus raises questions like: Is such a form of articulation intended and reflected upon by researchers involved in these kinds of projects? Are any other forms of recommendations possible and more desirable? How could such recommendations be adopted by their receivers?
Aura of evidence
Another quite fundamental aspect of the process of producing prescriptive knowledge on ‘the social’ is how recommendations are linked with findings. For example, the prominent ‘evidence-based’ approaches (Likens, 2010) promote developing practical guidelines by relying on scientific findings that are ordered according to a ‘hierarchy of evidence’: from meta-reviews of randomised and double-blind clinical trials to non-controlled cohort studies to clinical case studies. The ‘hierarchy of evidence’ concept has been used extensively within the paradigm of so-called evidence-based medicine to rank studies according to their methodological design, to assess the effectiveness of the medical procedures they examine, and thus to guide decisions on their implementation (Knaapen, 2013). In other words, the ‘hierarchy of evidence’ is a systematic way of linking research findings with recommendations. While we share common criticisms of the evidence-based perspective (its prioritisation of quantitative data, its low external validity, and its problematic assessment of what counts as evidence), we still find it interesting to ask how recommendations in social research projects are connected with ‘evidence’, especially given the scarcity of findings that would be regarded as ‘strong’ by the advocates of the evidence-based approach.
Exploring the linkages between findings and recommendations in the analysed project, we found that the recommendations in the report were presented as ‘emanating’ from the research findings preceding them but rarely included explicit connections with particular research results. It was thus difficult to identify which findings supported a particular recommendation, trace back the chain of translation going (supposedly) from findings to recommendations, or reconstruct the underlying discussions and consensus-finding processes. It was also not clear how strongly particular recommendations were supported by findings or whether some alternative recommendations were put aside as less reliable. Instead, the implicit assumption seemed to be that all the findings behind a given section of recommendations constituted sufficiently solid support for it. Echoing the notion of the ‘hierarchy of evidence’, we call this way of linking recommendations and findings the ‘aura of evidence’. By this term, we mean that the connection between specific findings and recommendations is not explicitly stated and that the process of establishing a consensus behind a particular recommendation is not traceable. In other words, recommendations are presented as stemming from findings but not in an analytically transparent way.
Trying to reconstruct the thinking processes leading to particular recommendations, we also found examples of heuristic reasoning, that is, cognitive ‘shortcuts’ in taking the step from findings to recommendations. These forms of reasoning can be seen as the heuristics of – as we call them – the ‘star’ and the ‘single best solution’. In the former, a regulation or action already existing in one or a few of the studied countries is treated as a ‘star’ – a solution worth implementing on a cross-national level. An example here is the paternity leave system in Iceland (in the form of a ‘three-thirds system’, with one-third to be taken by mothers, one-third by fathers, and the remaining third at the parents’ discretion). As the interviews revealed, Scandinavian countries were treated by many experts as particularly ‘advanced’ in gender equality and as role models for all other European countries. In turn, the ‘single best solution’ is a heuristic in which a particular action is proposed as the one with the maximum desirable effect on the problem at hand. For example, a general recommendation for ‘increasing men’s share of the care of small children’ was regarded as a ‘most central single variable’ and expected to translate most effectively into strengthening men’s caregiving roles overall (Scambor et al., 2013: 102).
Undoubtedly, from the perspective of ‘evidence-based’ policies, using such heuristic reasoning as a form of evidence would be regarded as problematic. For example, a particular parental leave system may work very well in Iceland, but exporting it to Germany, Slovenia, or Spain (countries varying in size, culture, and institutional systems) may yield divergent results. However, we would like to point out the somewhat fuzzy status of such heuristics, as they seem to be treated both as a source of inspiration and as a form of evidence, without full consideration of the analytical ambiguity involved. It also seems that presenting recommendations in ‘the aura of evidence’ may open up many questions, such as: How can or should the relationship between findings and recommendations be represented in these kinds of reports? What is to be done in the absence of ‘strong evidence’ yet under a practical obligation to propose some forms of action? How is consensus about recommendations and guidelines reached, both within the group of knowledge-producing experts and in the relationship between experts and the recipients of knowledge?
Conclusion
In our view, the project we analysed was highly challenging as an intellectual and organisational endeavour, perhaps especially in the aspects related to developing specific recommendations for such a diverse and complex area. We treat our material as reflecting how an experienced team of social scientists handled this challenge, utilising ways that may be found across many similar social-research-based and policy-related projects. What our short empirical illustration mainly shows is, as we believe, that the move from findings to recommendations is more like a leap over the abyss than a series of steps protected by a well-defined approach and a widely approved set of methods.
In saying so, we retain a dose of scepticism towards the evidence-based approach and doubt the feasibility of fully systematic methods of generating recommendations. However, our case shows that translating social research findings into policy recommendations may be challenging and that there is a potential imbalance between problem-understanding and problem-solving in such projects. This is perhaps not that surprising. After all, sociological education emphasises utilising a broad set of data-gathering and interpretation methods, following ‘scientific’ standards, and reflecting on researchers’ situatedness. However, much less attention is devoted to processes and methods (even those existing within the sociological tradition) that could help sociologists produce the prescriptive knowledge that is to guide and inform policy and practice.
We believe that ‘the leap’ is especially problematic when it is not fully reflected upon and acknowledged as a potentially problematic part of the knowledge-making process. Such an observation supports the view that sociology could benefit from ‘embracing the concept of epistemic humility’ and, in some way, accommodating the flows of knowledge resulting from the discipline’s practical engagement (Meghji, 2023). However, it is also important to note that – given the transformations of the institutional context in which sociology is situated – large, international, grant-funded, and policy-related projects are becoming increasingly common, and sociologists routinely participate in them. Our analysis suggests that such engagements require ways of working and kinds of competencies that sociologists do not yet necessarily possess. This is another reason why the ‘pursuit of relevance’ is problematic. Finally, there is also the question of the public understanding of social science and the differences in the kinds of solutions that sociology is able or willing to provide (Lewis et al., 2023). The scarcity of common frameworks, procedures, and techniques for formulating recommendations and guidelines makes these processes more sensitive and exposed to different pressures and biases.
This realisation also suggests that more empirical research should be conducted on how prescriptive knowledge on ‘the social’ is produced in different areas within and outside academia. We strongly believe that such research should be accompanied by the development of standards, methods, techniques, and codes of conduct that would support sociologists involved in practice-oriented projects. To leave this question unproblematised would be of little help.
Footnotes
Acknowledgements
We are deeply indebted to the members of the CERGU working seminar and the STS work-in-progress seminar (both organised at the University of Gothenburg) as well as two anonymous reviewers for their insightful and helpful feedback.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The work behind this article received financial support (internal grants) from two institutions: Centre for European Research (CERGU) at the University of Gothenburg, and Faculty of Humanities at the AGH University of Science and Technology.
