Abstract
This paper considers whether the impact agenda that has developed over the last decade in UK universities is likely to help create the conditions in which critical educational research makes a more visible difference to society. The UK audit of university research quality (the research excellence framework (REF)) now includes an assessment of impact. Impact pathways are requirements of both national and European Union research funding bodies and of the Australian Research Council. Issues in the assessment of the social impact of research are explored by two European projects: Evaluating the impact of EU SSH (social sciences and humanities) research (IMPACT-EV) and ACcelerate CO-creation by setting up a Multi-actor Platform for Impact from Social Sciences and Humanities (ACCOMPLISSH). For many UK researchers the institutional focus on influencing the world outside the academy has brought welcome support and resources for engaging with society, and may appear to bring universities back to something approaching their original civic identity. However, evidence from across the academy suggests that impact as depicted in REF impact case studies does not accurately represent the experience either of the academic research endeavour or of impact as it may be more broadly construed. Analysis reported here of 85 highly rated impact case studies in the education unit of assessment of the 2014 REF suggests there is a risk that the REF impact process will embed a shift away from the qualitative and theoretically driven methodology that is often found in socially critical educational research. Impact is positioned as neutral, hiding the neoliberal drive towards research models based on implementation, evaluation and policy. There is a need to create spaces in universities for rethinking the impact agenda, perhaps looking at value or social creation instead of, or as an integral aspect of, impact.
Introduction and context
Analysis of the relationship between socially critical research in education and policy often leads to calls for action from the educational research community, such as to improve relevance or communication clarity (Francis, 2011; Gardner, 2011; Hillage et al., 1998). Given the increasing focus on impact and associated institutional support in many UK universities, one might expect that educational researchers are now able to make more difference to society. Our own experience as researchers was central to the motivation to write this paper. We have accessed and valued both the more strategic institutional approach to impact and the associated increased resource. However, we have at times found little congruence between the need to produce an impact case study (ICS) for the former and likely next research excellence framework (REF) and our efforts to make a difference to society. Policy influence that seemed to us highly valued in the 2014 REF is beyond our direct influence, and the inspiring anecdotes that arise from long-term relationships in the field are not those that would have counted as documentable evidence of impact in the context of the last REF. We wondered whether we, as critical educational researchers working in the area of social justice, would be able to make the difference that we ourselves valued, whilst at the same time finding currency in the next REF. The assessment of the social impact of research is of growing concern across Europe and is explored through two European projects: Evaluating the impact of EU SSH (social sciences and humanities) research (IMPACT-EV) and ACcelerate CO-creation by setting up a Multi-actor Platform for Impact from Social Sciences and Humanities (ACCOMPLISSH).
We were intrigued therefore that Colley looked at ‘What (a) to do about “impact”’ (the title of her paper), and concluded that ‘academics need to find collective ways to resist [the] “research ‘impact’ imperative” due to its encroachment on academic freedom’ (Colley, 2014: 660). It was in an effort to tease out our puzzlement and find what we, and indeed the wider academic community, could do about impact, that we wrote this paper. We build on Colley’s arguments by looking at analyses of highly rated impact case studies from the 2014 REF from a range of subject areas, including our own detailed analysis of those from the education unit of assessment. We extend Colley’s use of Bourdieu’s illusio to an understanding of the impact agenda in the light of our analysis.
In terms of improving impact, we suggest there is much evidence that educational researchers are on the case: they are constantly developing new ways to engage beyond the academy, including much use of social media (see the blog of the British Educational Research Association 1 ). Indeed, the relationship between critical educational research and policy or practice has always been complex, non-linear and contested, and therefore always the focus of scrutiny, as recognised by scholars internationally (Edwards, 2000; Edwards, 2002; Fine et al., 2012; Lubienski et al., 2014; Mortimore, 2000). Academics have contributed to the development of national policy for decades through a range of structures. This has included membership, as long ago as the 1970s, of the Warnock committee of enquiry into the Education of Handicapped Children and Young People (which reported in 1978), or of the Schools Council (abolished in 1984), or giving evidence to the 2016/17 select committee on education on such varied topics as mental health, grammar schools and multi-academy trusts. 2 At a more local and regional level, academic research has contributed to the decision-making of schools, local authorities and related agencies through direct engagement as consultants delivering evaluations (Cummings et al., 2011; Mazzoli Smith and Todd, 2016). Academics also have influence by supporting the research of teachers themselves in many ways, such as through postgraduate training or initiatives such as the Campaign for Learning-funded Learning to Learn (Hall et al., 2006), or the Oxford University education deanery supporting local school teachers’ engagement in and with research. 3
What has in the past been lacking for educational researchers has been strategic support for impact from many universities. Research production is complex, one aspect being the institutional culture within universities, and this has not always encouraged engagement with society. Such a situation is in contradiction with the origins of many UK universities in the 19th century in response to the challenges of industrialization and urbanization. For example, Newcastle University has its roots as a civic university in a School of Medicine founded in 1834 in response to the regional demands of an emerging industrial economy that included shipbuilding, mining, heavy engineering and agriculture. The civic focus was eroded over the 20th century by national agendas that prioritised research and teaching at a national rather than local level (and, within this, theory over practice) and national over local student recruitment (Goddard et al., 2016). Impact came to be understood in terms of reception within the academy (e.g. measured by journal citations). A more limited civic role remained in many universities, such as a commitment to professional training (e.g. medicine, engineering, architecture and teaching) and a role in the regional development agencies in the enhancement of industry. Individual academics continued to sit on government commissions and committees and to evaluate local authority provision.
Although the 20th century saw a detachment of universities from place, the early 21st century saw universities re-engage with different versions of making a difference to society. The impact agenda as we now know it emerged in the late 2000s in response to national drivers to demonstrate the value of universities. Scholars note the ‘engulfment of HE by neoliberal doctrine’ (Chubb and Watermeyer, 2016: 9) and associated structures of marketization, individualism and competition with national and global league tables (Giroux, 2002; Glenna et al., 2015; Shore, 2010). UK university funding started to be linked to an assessment of research impact that was added to the periodic national research audit (held roughly every five to six years). Impact is defined by REF guidelines as ‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’ generated by excellent research (Higher Education Funding Council for England (HEFCE), 2012: 48). It accounted for 20% of research income, at least £1.6 billion, and has implications for university league tables. Excellence was graded in terms of ‘reach’ and ‘significance’. The REF made a clear break with the previous focus on engagement by calling for evidence of impact that went beyond civic society passively receiving research. Subject areas in universities were required to produce impact case studies (ICSs). These were four- to six-page documents that described research that was at least recognised internationally and was carried out at the institution between 1993 and 2013 (for an example, see Supplementary Material). The case studies were to describe impacts that happened between 2008 and 2013, and for which up to ten pieces of evidence were provided, such as email testimony and the referencing of research in policy documents.
In addition to the REF, criteria for successful funding from the research councils evolved to require the demonstration of pathways to impact. This was prospective impact rather than the retrospective impact of the REF. The European policy for Responsible Research and Innovation (RRI) emerged at the end of the first decade of the millennium, broadly concerned with Europe’s ability to respond to societal challenges. RRI was a ‘re-evaluation of the concept of responsibility as a social ascription in the context of innovation as a future-oriented, uncertain, complex and collective endeavor’ (Owen et al., 2012: 757). The Framework Programme 7 EU-funded project IMPACT-EV focused on the development of indicators and standards for measuring impact, and generated debate about the limitations of dissemination in terms of achieving knowledge application, and transfer in terms of achieving improvement (Aiello and Joanpere, 2014). The ACCOMPLISSH project funded by the EU Horizon 2020 programme builds on the work of IMPACT-EV, by bringing together academics with representatives from industry, government and societal partners to develop an interdisciplinary co-creation approach to impact generation.
The role, scale and mission of higher education in the UK has evolved over the last few decades and is still evolving in relation to a myriad of national and global drivers including ‘understanding of knowledge and modes of creation and dissemination, and societal and labour market requirements’ (Goddard et al., 2016: 3). Many universities nationally and internationally are developing a range of institutional architectures for engaging with societal challenges that reflect in complex ways both neoliberalist influences and civic university origins (Ball, 2012; Goddard, 2009; Goddard and Vallance, 2011; Goddard et al., 2016). What this means for individual disciplines or for socially critical educational researchers is not easy to predict.
The impact of the impact agenda
The 2014 REF produced 6679 non-redacted case studies of the impact of higher education research on society. Never before had there been an attempt on such a scale across the academy to assess its value to society. Given that impact will continue to play a key role in a future REF, analysis of this bank of case studies seemed crucial for considering the possible affordances of the impact agenda for the academic community. King’s College London carried out a digital text-mining and qualitative analysis of all 6679 non-redacted ICSs, suggesting a diverse and significant contribution of higher education research to society, both in the UK and globally. There were, for example, 3709 unique pathways to impact. Interestingly, small institutions made a disproportionate contribution (King’s College London and Digital Science, 2015). Here we review analyses of ICS research from a range of disciplines before discussing research examining ICSs from the education unit of assessment.
Analysis across the academy reveals an overall picture of the opening up of opportunities to make a difference that now have institutional support, but also of risks to the academic endeavour. Edwards and Stamou (2017) studied 13 social scientists – all at one university, all successful at achieving impact – just after claims for impact had been submitted to the national REF assessment. Their findings echo some of our own sentiments: for some of these researchers the demands allowed them to develop aspects of themselves as researchers that had long been important. ‘Yes it felt like a marginalised activity and now it’s no longer marginalised, it’s part of what everyone is trying to do. So for me it’s a huge relief in terms of being allowed to do it publicly … it was a kind of undercover activity before’ (Edwards and Stamou, 2017: 273). Oancea’s research similarly revealed ambiguities in researchers’ takes on impact: while all of them were willing to articulate a clear sense of societal value at the heart of their work (and of universities’ mission), this sense was broader and more fluid than the notion of impact underpinning current assessment processes (Oancea, 2013: 248).
The main claim from this research into ICSs is that the impact depicted does not accurately represent the experience either of academic engagement or of impact as we would conceive of it in our everyday work, and that this is potentially hazardous for the academy. Most analyses have highlighted the limits of linear, unidirectional, immediate and short-term conceptualisations of impact (Greenhalgh and Fahy, 2015; Khazragui and Hudson, 2015; Oancea, 2014; Shortt et al., 2016). Khazragui and Hudson (2015) evaluated a number of REF case studies and suggested that the use of commissioned research for case study impact might lead to less radical, less fundamental academic research, since the initial ideas come from commissioners. The methodologies for capturing impact were often found not to fit the standards of validity or reliability expressed within a discipline such as clinical medicine (Ovseiko et al., 2012). In the discipline of public health, Greenhalgh and Fahy (2015) analysed all 162 ICSs and asked the authors of four ICSs for narratives of their impact efforts. The impact captured documented changed practice, guidelines and policy; in other words, it was one step removed from patient outcomes. Shortt et al. (2016), also analysing public health ICSs, found little of the non-linear and non-immediate impact of social science research that is often referred to in the literature. They give the example of their own work on alcohol and tobacco environments in Scotland, which sparked debates and conversations, such as national public radio debates, but did not operate in a linear manner in terms of having an impact.
The other main concern documented in the research literature is that the research which leads most easily to the kinds of impact foregrounded in the REF does not reflect the full range of research across subject areas, and that this is likely to encourage certain kinds of research and inhibit others. Colley (2014), looking at education, suggests that the REF impact agenda has institutions encouraging researchers to focus on research reception rather than research production, with the danger that we might produce what receivers want to hear, rather than robust critical social science that might challenge the receivers. Her views are echoed by others (Fielding, 2003; Knowles and Burrows, 2014). Parker and van Teijlingen (2012) have similar concerns in the area of social work research: ‘should the conceptual, theoretical and critical “thorn-in-the-side” research be marginalised or disappear it is likely to make the research landscape less vibrant, anodyne and potentially much less useful’ (p. 50). One of the few studies with contrasting findings is a content analysis of 25 of the 160 ICSs from 14 institutions submitted to the library and information management unit of assessment. This study found that the most common methodologies were consistent with those used most often in the discipline: theoretical work, literature review and qualitative interviews (Marcella et al., 2016). On the other hand, Greenhalgh and Fahy (2015) found in clinical medicine that qualitative and participatory research designs, common among the research outputs, were rare within ICSs. They also note that multi-stakeholder research collaborations such as the UK National Institute for Health Research Collaborations for Leadership in Applied Health Research and Care are built on non-linear models of impact, which are not reflected in ICSs.
Consequently we must examine whether the REF may encourage academics to work in linear ways that are more likely to lead to the evidencing of some kinds of impact, but which do not actually promote the most meaningful or effective models of impact.
Mills and Ratcliffe (2012) document the effect over time on a whole discipline, anthropology, suggesting that pure long-term field anthropology is being eroded in favour of short-term commercial applications; they claim that the ‘knowledge economy is reshaping anthropological research and popular understandings of ethnography’ (Mills and Ratcliffe, 2012: 147). Others suggest dangers for the academy as a whole. Chubb and Watermeyer (2016) interviewed 50 senior academics in the UK and Australia between 2011 and 2015, in the arts and humanities, social sciences, natural and life sciences and physical sciences, from two research-intensive institutions. They concluded that ‘the hyper-competitiveness of the HE market is resulting in impact sensationalism and the corruption of academics as custodians of truth’ (p. 4). The impact agenda is linked to the neoliberalisation of higher education ‘through the rapid and relentless spread of coercive technologies of accountability’ (Shore and Wright, 2000: 57). It is therefore not surprising that, for Colley (2014), the effect of forcing the REF version of impact on the academy is an encroachment on academic freedom.
Analysis of high quality education impact case studies
The dangers of the REF impact agenda for the academy have been consistently argued. However, ICSs did not skew the kinds of impact that are significant in every discipline, as the case of library and information management shows (Marcella et al., 2016). It would be contrary to our values as researchers to make assumptions about the implications of the REF impact agenda for critical educational research. REF impact is here to stay, and our interest is in whether it can be shaped so that critical educational research might make a more visible difference to society.
Our analysis comprised all ICSs that gained high REF ratings (3*/4*) for impact, in order to make inferences about the kind of impact that was valued in the REF process, and the kinds of research leading to such impact. To be eligible for the REF, ICSs had to include research that was at least recognised internationally (i.e. outputs of at least 2* quality) and carried out at the institution between 1993 and 2013, and to describe impacts that happened from 2008 to 2013. Each university education group then received an ‘overall’ grade covering both its ICSs and its general impact strategy. Since ICSs were not graded separately, in order to include only those graded highly we selected all the ICSs from universities that had received 100% of their overall impact grade within 3* or 4*. By making this selection, we were not able to include any 3*/4* ICSs from universities where some of the overall impact grade was 2* or 1*. It is therefore likely that there were other 3*/4* ICSs excluded from our analysis that may, had we included them, have led to different conclusions. Similarly, we omitted from our analysis any education ICSs that might have been presented in other units of assessment such as geography and sociology. In our analysis 85 ICSs were identified and included from 21 universities: Sheffield, Durham, Nottingham, Newcastle, Cardiff, Glasgow, Stirling, Belfast, Ulster, Manchester Metropolitan, Manchester, Loughborough, London Metropolitan, Edinburgh, York, Southampton, Institute of Education/University College London, Oxford, Bristol, Queen’s, King’s College London. We made an exception to our criteria to include University College London/Institute of Education, which curiously had 3.5% of its overall grade in 2* and the rest in 3*/4*, as it seemed unlikely that this would apply to a whole ICS.
Drawing on the research literature of analyses of ICSs in other disciplines quoted already in this paper, we were interested both in the kinds of research that appeared in these ICSs and in the nature of the impact. A coding framework was produced with reference to this literature, to our own ICS on extended schools (Newcastle University, 2014; see Supplementary Material), and to a read-through of a sample of the 85 ICSs. The framework consisted of asking the following questions of the 85 ICSs in order to structure our analysis:
- What kinds of impact are found: impacts directed to policy, professional practice, the lives and experience of children/young people/parents, or other? Linked to this, who are the main beneficiaries?
- Does the impact have mainly a national or international reach, or both?
- What is the range of sources of evidence, and what trends can be identified?
- On what kinds of research is impact claimed: quantitative, mixed methods, qualitative or theoretical?
- Over what time period was the research carried out?
- Was the research carried out by a team or a sole researcher? Is there a clear lead researcher?
- What links can be seen between the ICS and current UK education policy?
- To what extent are ‘negative’ findings and impacts (e.g. those which critique a current policy directive) included?
- What other trends are observed? What questions occur?
We made judgements about aspects that were not clear cut, such as the type of research methods used, and we could only draw conclusions based on the information given in the text provided and the associated outputs. A subjective element to the analysis was impossible to avoid, given the subjective nature of the terminology employed in the ICSs and the positive spin they necessarily adopt, as picked up by the King’s College analysis: ‘While there was a large amount of numerical data included in the case studies, the way in which different numerical values were presented meant that the data could not be easily synthesized’ (King’s College London and Digital Science, 2015: 7). Coding was carried out by one of the authors, and another moderated the coding using a sample of 25% of the ICSs. This demonstrated a high level of coding agreement, with disagreements resolved through discussion. Codes that were difficult to decide generated interesting issues, such as how to identify impact that ran counter to policy. This and other issues are discussed in the following section.
What kind of impact and research was found in 85 education ICSs?
Our first impression on reading all 85 ICSs was of a vibrant, varied and highly influential national education research academy in the UK (at least from 2008 to 2013), supporting the overall findings of the King’s College research (King’s College London and Digital Science, 2015). Our judgement was supported by the REF panel report that ‘the case studies overall were extremely impressive’ (Research Excellence Framework, 2014: 111). Impacts included institutional changes, international policy and governmental shifts. Other impacts were profound, particularly in overseas contexts where academics held influential positions. Most ICSs focused on research that had been carried out over many years: 25% resulted from research conducted for 20 years or more, 40% from research conducted over 11 to 20 years, 24% from research conducted over 6 to 10 years, and fewer than 10 ICSs drew on research conducted over a period of up to 5 years. Most (75%) appeared to be a team effort, and fewer than half seemed to have a clear individual named lead. A small group of two or three researchers was involved in 20% of cases.
The main beneficiaries were policy makers, children, students, pupils and practitioners, but also non-governmental organizations, trades unions, families, the labour market and society as a whole. The reach of the research seemed to be both national and international in most cases, but we judged that 13 ICSs had either mainly or only UK impact. The sources of evidence were varied and included web downloads, social media outputs, resource uptake percentages, workshop attendance, use of websites, quality marks, and testimonies of policy and practice change. The evidence also included awards, largely from within the academic community. This suggests that verification of evidence is through self-validation, and we may question whether this provides legitimate sources of evidence for such ICSs.
Pathways to impact and the intentional strategies that led to impact were not referred to in many ICSs, so judgements about these were not easy to make. The main mechanisms by which impact happened were reports, briefings and training materials, but also workshops, online toolkits, products to buy and national standards. However, our own experience of compiling what became a 3*/4* ICS (Supplementary Material) pointed to the importance of a combination of serendipity and strategic endurance: working on the same area of research and engaging with partners over a long period of time, carrying out research on government policy that was also funded by the government, and engaging with stakeholders who put additional resource into engagement.
Most ICSs drew on mixed methods research, while some used solely quantitative methods. Most followed the model of initiative design and/or implementation and/or evaluation leading to improvements in attainment or influence on policy or curriculum. It was notable that very few ICSs were based solely on qualitative research. There seemed to be only one that was a solely theoretical analysis, though more combined a conceptual analysis or theory-driven approach with other methods.
We classified the ICSs in terms of impact on policy, practice and behaviour, our analysis necessarily involving a judgement call, based on what was presented in the ICSs, about how far behaviour was impacted. In only six ICSs was policy impact absent. In some, behavioural change was independently verified through an evaluation, but more common was secondary endorsement of behavioural change through stakeholder practice change or feedback. This raises a number of questions. How is the value of independently verified behavioural change to be assessed against secondary endorsement of behavioural change? There were many examples where a stated aim to improve attainment was not documented, unlike the prior step of changing practitioner behaviour or practice. Impacts which were causal and discrete (e.g. a policy change or new practice) were easier to evidence than changes in pupil behaviours. Occasionally a discrete causal link was provided in the ICS using quantitative data reported in outputs.
It was rare for impact to constitute a ‘critique’ of current education policy in the UK. However, this was not a simple judgement for us to make as researchers, since the development of policy might implicitly constitute a critique at some level. Large numbers of ICSs fell into current high-profile areas in the field of educational research, including assessment, school effectiveness, science, technology, engineering and mathematics subjects, early years, and ‘closing the gap’ in attainment between rich and poor. We estimated that congruence or alignment with current educational policy could be identified for all but five ICSs; these five ran counter to current policy. Impacts which ran counter to current policy were, however, mentioned by HEFCE, who encouraged submissions of impact relating to the prevention of undesirable outcomes, acknowledging that research in the social sciences may hold authorities to account and result in a proposed change not taking place: ‘The sub-panels welcomed the submission of such case studies and it is noteworthy that they were able to evidence impact and achieve the highest quality scores’ (HEFCE, 2015: 19).
Discussion
Most education ICSs could be construed as instrumental in producing more efficient, higher-quality and effective educational practices in line with extant educational policies. We would claim that impact is more readily demonstrated in this way. The legacy is not only these improved processes and practices, but also new methodological and analytical tools and instruments, which are more likely to be taken up and utilised. However, there was much less emphasis on a legacy of critical or theoretical work, presumably because its impact is less immediate and tangible. This leads to the question of where this leaves socially critical educational research, which is largely sociological and often theoretical. Will the lack of readily accessible evidence of impact devalue such research, at least within the discourses of impact? The REF impact assessment process could be read, with few exceptions, as embedding a shift towards ‘robust’, ‘evidence-based’ and large-scale mixed or quantitative research with tangible and visible evidence of impact, and away from qualitative or theoretically driven work that may be more concerned with impacts at the socio-cultural level and over the longer term. There are dangers too for socially critical educational research that runs counter to prevailing practice or policy. Although we found a small number of ICSs that had evidence of such impacts, it is difficult to see how producing these will ever be anything other than the more difficult route within the current framework, as was the experience of Colley (2014).
That some kinds of impact are inherently easier to evidence than others needs addressing, or there will be an inevitable turn towards such research in the allocation of funds, given the Research Councils UK focus on pathways to impact. It is also interesting to wonder about internal inconsistencies in the REF process itself. We are interested in how it is possible for the REF to accept ICS claims of change (of behaviour, attainment etc.) when the quality of such judgements inevitably falls short of the rigour of analysis required of research papers discussing similar changes in 3*/4* educational research outputs.
Colley (2014) draws on Bourdieu’s neglected concept ‘illusio’, which calls us to examine the ‘stakes’ that matter in the field of educational research. The REF is a bureaucratic system that has changed the stakes of educational research whilst ‘avowing disinterest through claims that the “impact” agenda is solely acting for the public good (i.e. value for money use of public funds for research)’ (Colley, 2014: 674–675). Bourdieu writes that ‘illusio is the fact of being caught up in and by the game, of believing the game is “worth the candle”, or more simply that playing is worth the effort … It is to recognize the game and to recognize its stakes’ (Bourdieu, 1998: 76–77). Colley describes illusio as our ‘investment in the game’ (p. 669), what matters for us in conducting educational research, and suggests this is rendered illegitimate when one does not play according to the high-stakes rules of the REF, i.e. by producing theoretical or qualitative critical research. Writing about illusio in respect of neoliberalism, Rowlands and Rawolle (2013: 269) say ‘we are both playing the neoliberal game and inadvertently demonstrating our belief that it is a game that is worth being played’. This analysis in terms of illusio is helpful with respect to the impact agenda also.
We suggest that impact, positioned as neutral and for the public good, in practice furthers a very specific neoliberal ideology that has been widely discussed in relation to higher education (Mudge, 2008). We take heed of the concerns of Rowlands and Rawolle (2013), who argue that the undefined, generalist application of the term neoliberalism can further entrench its discursive hegemony rather than disrupt it. Our response is to demonstrate, as we do in this paper, the linkage between the particular criteria and possible ramifications for research of the REF impact process and a neoliberal turn in higher education, seen in concepts such as the knowledge economy, where cost–benefit logic informs, at least in part, how research funding is apportioned. This logic depends on quantifiable and evidenceable impact, and hence the particular shaping of the impact agenda to date.
There are other stakes in the broad impact agenda, other illusio, than that of REF ICS impact. Brewer (2013) talks about the value of the social sciences, referring to the way that an education in the social sciences enhances the life of a student, and also the way that social science reveals evidence about society and how society needs to adapt to deal with the complex problems of the 21st century. We would advocate remembering that one of the most significant contributions of the impact agenda to universities has been the development of ideas about what a civic university can be and its role in the agendas of today. Internationally we have improved understanding of the role of academics in society (Aiello and Joanpere, 2014; Brewer, 2013; Shucksmith, 2016), and of the history of the civic university and its expression globally (Goddard and Vallance, 2011; Goddard et al., 2016). Many universities see themselves as anchor institutions within their local community, working with local and regional partners to promote economic, social and cultural regeneration:

In the past social mobility has been analysed at national or individual institutional level with response tending to follow these polarities. However, increasingly the focus is on regional responses, with universities working with partners in their regions to develop sustained initiatives that align with broader agendas. (Universities UK, 2016: 24, para. 56)
There is appetite for change from academics themselves. From her research into accounts of interpretations and practices of research impact in six case-study disciplines (from the social sciences, humanities and physical and engineering sciences) in one research-intensive university, Oancea (2013: 248) found that ‘academics saw in the current context an important opportunity to debate and reconceptualise “impact” and its relevance to accountability processes, and to re-calibrate assessment methodologies’. Colley (2014) quotes Delamont (2010) on the need for researchers to claim their own understandings of and values about impact. Shortt et al. (2016), quoting Weiss (1977), suggest an ‘enlightenment’ model that would ‘involve recognising that our research is just one small part of wider understandings and whilst we can contribute to change, this often takes time and research may not represent the defining determinant of change’. They suggest that ‘we should reward academics and departments that engage with external partners, the general public and interested parties, but we should not expect that all of this will lead to demonstrable change in the short to medium term’, and that we should alter the focus of REF impact:

from demonstrable impact to demonstrable knowledge exchange and public engagement. It might also involve, for example, doing more to reward researchers (or knowledge brokers) who undertake synthesising type roles, collating and reviewing large disparate bodies of academic knowledge for non-academic audiences (as opposed to the current approach which largely encourages researchers to try to achieve impact for their own research). Such a focus would include more nuanced metrics of participation, involvement and action rather than ‘change’. (Shortt et al., 2016: 271)

The Chair of the 2008 RAE sub-panel recalled:

We made use of the report in agreeing on assessment judgements in the areas of environment and esteem as well as in assessing outputs. Thus the publication made a significant difference to assessment judgements, including encouraging the agreement to a different set of values from that used in 2001 in order to give greater respect to applied research. These judgements in turn of course affected research funding of education departments from 2009 onwards, favouring those with a greater proportion of applied research rather more than previously. (Chair of the 2008 RAE sub-panel, Oxford University, 2014b)
We conclude by drawing on Oancea once more, with thanks for the title of our paper, which comes from the final sentence of this quote:

For impact indicators to be an adequate proxy of research value, they need not only to be technically refined, valid measures, but also to be pitched at the right level, so that they can function as catalysts of higher education activity rather than destabilising it. To do this, they depend on a healthy ecology of higher education, which in turn requires intellectual autonomy, open debate, financial sustainability and insightful governance. Without these preconditions, the high-stakes assessment of impact might fail to reflect and support ongoing research value, and end up simply capturing assessment-driven hyperactivity: in other words, it might end up hitting the target, but missing the point. (Oancea, 2013: 248)
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Supplementary Material
Supplementary Material for this article is available online.
