Abstract
In this article we introduce the five papers published in this issue of the International Journal of Chinese Education.
Clarifying Evaluation Complexities
This issue of the International Journal of Chinese Education presents five papers concerned with the evaluation of higher education.
The four conference-related papers in this issue each tackle large questions. What, in essence, are the shortcomings of prevailing ways of understanding and reporting university characteristics and performance? What evaluation mechanisms would better help universities and students to progress, and thereby strengthen the contributions the sector makes to broader society?
Reviews of relevant higher education policy in recent decades (see: Hazelkorn, Coates & McCormick, 2018; Cantwell, Coates & King, 2018) reveal two distinct positions with respect to such inquiry. These positions are underpinned by substantial institutional, professional and commercial interests. For current purposes, they are characterised epistemologically.
On the one hand, myriad ratings, reviews, rankings and classifications have been promulgated in recent decades, reflecting efforts to understand and lead a sector undergoing substantial transformation in both substance and scale. Such mechanisms have flourished by building reductive information architectures around very complex situations. They have filled a void and sated people's need to make reasonably simple sense of what universities do and achieve.
On the other hand lie persistent arguments about the irreducibility of higher education. Higher education is indeed a 'complex system' (Cairney, 2012) which thwarts quick reductionism. Even at second glance, for instance, seemingly simple information relating to graduate employment unfolds into complexities around qualifications and skills, and the changing characteristics of contemporary professional work.
It is difficult, or perhaps too soon, to tell whether such positions are anachronistic, enduring, conflicting or even relevant. They do, however, yield implications that aid ongoing inquiry into higher education evaluation methodology. For instance, there are likely no simple or static answers to the large questions posed above; hence there is value in ongoing reform of evaluation mechanisms. Simple indicators, though often controversial, can play a powerful role in guiding practice. To 'cut through and shape practice', however, evaluation mechanisms should resonate with both positions. The dialectic revealed by these two positions carries important implications for policy and practice, and hence for evaluation research.
Papers in This Issue
Each of the four papers on higher education in this issue responds to the large questions framed above.
In the first paper, Van Damme tackles tensions between the homogenising trajectory of internationalisation and the need for more regionally nuanced conceptualisations of skills. Rather than focus policy on the production of qualifications by universities, economies need to focus on producing a “well-balanced, harmonious portfolio of knowledge and skills”. The specification of required skills needs to be read in relation to employment contexts pertinent to countries, regions or industries. Evaluation, consequently, needs to consider not just education processes and qualification outcomes, but also the extent to which universities function in broader innovation ecosystems to contribute required scientific and professional skills.
The next paper, by Ramakrishna and Sachsenmeier, adopts an institutional perspective to review examples of the kind of bibliometric reports which have sprouted over the last twenty years. It identifies limitations of such reports and discusses options for development. It argues for investment in information and reporting on how universities work beyond campus walls to impact employers, and other institutions and countries. International connectivity, for instance, is articulated as a key measure in the evaluation of universities. In little over two decades bibliometrics has grown from the quiet technical work of librarians into a multi-billion-dollar global enterprise carrying material implications for innovation systems. Similar growth is possible for the emerging mechanisms discussed in this paper.
Cantwell, next, looks at how evaluations might be designed to take account of disciplinary and organisational complexities inside the academy. He untethers higher education from prevailing status-based evaluations and instead examines prospects for rebuilding evaluation around capabilities. The paper gives life to this approach, looking at examples in terms of student learning, research performance and revenue generation, and unpacks the potential multilevel application within universities and within academic units in particular. The paper concludes by exploring the application of this methodology to create higher education systems that meet social demands.
The paper by McCormick delves into the need for evaluation systems to incentivise evidence-informed improvement efforts. Looking from a policy perspective, McCormick reviews the rise of performance indicators and accountability mechanisms. The paper reveals how these mechanisms run into complexities associated with inherent norms, structures and practices of higher education. As a result of such misalignment, the mechanisms have a limited capacity to promote performance improvement. McCormick advances an alternative perspective which re-positions evidence-informed improvement in a primary rather than subordinate or derivative role. Educational change requires improvement, not just the collection and reporting of information.
The final paper, by Mok, continues the issue's exploration of these themes.
The editors are very grateful to the authors for their contributions, and to the peers who reviewed and gave helpful comments on earlier drafts. Each paper clarifies a unique response to the large questions framed above. We hope that each paper contributes insight into the ongoing construction of effective evaluation mechanisms for future higher education.
Editorial Changes
We report changes in the journal's editorial arrangements.
In 2018, Jinghuan Shi (Tsinghua University) assumed the role of
The editorial team has clarified
Please engage with the journal.
