Abstract
It has been widely noted that creative practice research agitates, in positive ways, certain attitudes, understandings and/or practices towards ‘knowledge’ and ‘research’ within the university context. Not only this, or partly because of this, creative practice research complicates established scholarly protocols of assessment and evaluation. This has been a topic of concern and debate for the field for the past 30 years, since its emergence as such. This research was conducted in the Australian context, where the government research evaluation program has been under review since 2021. The Australian Council of Deans and Directors of Creative Arts (DDCA) felt a duty to step into this context and facilitate a sector-wide series of activities that would address the main issues for the field. This was done with a view to writing a set of key recommendations pertinent to the sector that would enable a new and fit-for-purpose approach to creative practice research assessment and evaluation. This article details the seven key recommendations that resulted from these activities.
Keywords
A note on authorship
The Australian Council of Deans and Directors of Creative Arts (DDCA) is a representative body for the creative arts in Australian universities and other Higher Education providers. It aims to advance the culture of scholarship in the creative arts, offer strong leadership and advocacy for strategic development across Higher Education and advocate more broadly for the role of the creative arts in society. Its membership base includes approximately 40 institutions from around the country–predominantly universities, with a handful of private providers–as well as the four main disciplinary peak bodies: ACUADS (Australian Council of University Art and Design Schools), ASPERA (Australian Screen Production Education and Research Association), AAWP (Australasian Association of Writing Programs) and IDEA (Interior Design and Interior Architecture Educators Association). The organisation is funded entirely by membership fees. It convenes a range of programs that work toward the stated aims of the organisation. The DDCA has also been involved in making submissions to the Australian Government for recent consultation papers that affect the creative arts within the Higher Education sector, including research. One primary way the DDCA facilitates debate and models best practice on key issues is via its online publication, Creative Matters (formerly known as NiTRO, 2016-2022). There is no other body in Australia that brings the community of creative practice researchers together across its full range of disciplines. The Council is administered by an Executive Board that is elected for a term of two years (up to three terms per member). The Board that supported the work detailed in this article comprised the following members: Beata Batorowicz, Craig Batty, Samantha Bennett, Rick Chew, David Cross, Kim Cunio, Jane Davidson, Kathryn Gilbey, Donna Hancox, Meghan Kelly, Mia Lindgren, Sally-Jane Norman, Verena Thomas, and Jessica Wilkinson.
The authors of this article have been associated with the DDCA as President (Batty, 2022-25); and editor of the DDCA publication, Creative Matters (Glisovic, 2023-current). Collectively, they have experience in undertaking creative practice research, reporting on it, and assessing it. Batty was the inaugural creative practice research leader at RMIT University (School of Media and Communication, 2013-16); Chair of the Faculty of Arts and Social Sciences (University of Technology Sydney) creative practice research evaluation committee (2019-20); leader of a purpose-built creative research output submission, review and discoverability platform at the University of South Australia; and advisor on creative practice research evaluation across many universities. Glisovic has been in a creative practice research advisory and support role at various Australian universities over the past 15 years. They have both been involved as experts on assessment panels; led the drafting of internal guidelines for the reporting and evaluation of creative practice research; participated in the redesign of reporting platforms and assessment processes (including the platform at University of South Australia); published on the topic; and continue to research in this area.
Background
A series of political, social and economic events coalesced across the US, Europe and Australia (at slightly different moments, but not far apart) which saw a rather sudden move to fold the creative arts into university systems and, by proxy, a demand that creative practice become: creative practice research. That these movements happened in different contexts around the globe has inscribed each of these developments with variances, including what we call the thing we do: creative practice research in Australia; research-creation in Canada; artistic research in Europe. These variances in terminology are not insignificant; they describe slightly different practices, slightly different foci, and different expectations of how the research object ought to function in which contexts. This is not surprising, for a number of reasons. First, the establishment of the field did not arise with intentionality, i.e. out of a need arising from within the practice; it was external forces that almost ‘by accident’ led to the institutional demand on the practice. This is not to say that it is an entirely fabricated discipline–art practice has been embedded in ways of knowing the world since the earliest civilisations–but the formal institutionalisation arose not out of this recognition and was more or less a political manoeuvre. This leads to the second reason: these politics played out differently in different parts of the world at different times. And, thirdly, creative practice research is indelibly connected to our artistic histories and industries, so interlinked with cultural values that these different histories undoubtedly shape attitudes and orientations toward creative practice research, what it is and what it should do. Already, from this brief historical picture, we can recognise that these variances and complexities have implications for the way we assess and evaluate creative practice research around the world.
To look at the detail of these differences is a worthwhile project; however, in this article we focus on issues (and solutions) arising within the Australian context only, as the conversation must also be particular to specific historical, social, political and cultural contexts.
The history of how creative practice research came to be instituted in Australian universities–the political manoeuvres and consequences for disciplines and individuals–has been told well in the literature (see Davidson, 2024; Schippers et al., 2017; Wilson, 2017). In the near-30 years since this institutionalisation, the literature in the field of creative practice research has worked to cultivate strong epistemological ground that validates artistic practice as research (see Barrett and Bolt, 2007; Haseman, 2009; Nelson, 2022; Smith and Dean, 2009). That the presence of creative practice research in the university agitates in positive ways certain attitudes, understandings and/or practices towards ‘knowledge’ and ‘research’ has also been well-argued in the creative practice research literature (Butt, 2017; Carter, 2004; Oliver, 2018). Not only this, or partly because of this, creative practice research complicates established scholarly protocols of assessment and evaluation.
Creative practice research outputs were first included in the national evaluation exercise, Excellence in Research for Australia (ERA), in 2010, which marked an increase in recognition and validation of this type of research. Getting to this point was hard-won: compromises, negotiations and challenges to extant frameworks had to be made by all parties (Strand, 1998; Wissler, 2004; Seares and Schippers, 2005). Creative practice research had to, in part, try to accommodate itself within structures already established in academia, whilst some established norms had to become more capacious to include creative research practices that departed from them. This has been an ongoing process in the research evaluation space, which continues to transform as national agendas shift and fields of research evolve and adapt to contemporary environments and demands.
Positions around the nature of the relationship between extant frameworks for research evaluation–such as H-indexes, journal rankings and research funding income–and the appropriateness or otherwise of these frameworks for dissemination practices in creative practice research are not unanimous, as we discuss below. With each iteration of ERA since the 2009 trial (2010, 2012, 2015, 2018, and 2023, which was scheduled and subsequently cancelled), the guidelines for reporting and evaluating different types of outputs have been through some revisions but have remained substantively the same (see ERA submission guidelines 2009-2023 (Australian Research Council, 2025b)). Generally, the guidelines the Australian Research Council (ARC) set for creative practice research were rather broad and required a large amount of interpretation, both by universities putting together their ERA submissions and by ERA assessors (who are, in essence, the same group of people). Whilst this approach may seem sound–broad guidelines allow for greater diversity of practices to be expressed within them–the detail of what was left up for interpretation, and how, was ultimately a disservice to the creative practice research field and created unwelcome disparities at numerous levels.
One of the most comprehensive reviews of the reporting and assessment processes for creative practice research in Australia was conducted by Jenny Wilson (2017), editor of the DDCA’s former publication, NiTRO (2016-2022). Wilson conducted extensive interviews with creative practice researchers as well as research administrators and Deans/Heads of creative arts. In this study, Wilson notes that:

There was little awareness [amongst the interview subjects] that there were different perspectives or approaches taken by other institutions, nor of the autonomy that institutions possessed to internally respond to government priorities and direction. It is also suspected that those in positions of institutional management may also be unaware of approaches and models that are adopted in other institutions to ameliorate unfavourable national research policy decisions... Greater sharing of this information, including the various suggestions and solutions to improve management processes that had been made by individual artistic researchers to their own institutional management (whether implemented or not) would give institutions and their artistic researchers a wider context and range of exemplars to improve their own operations. (144)
The type of environment Wilson describes shows how disparities can arise in relation to a wide range of things, from the platforms used to report research (which in some instances have been a serious impediment to assessing the research properly) to the actual metrics used to evaluate excellence. The disparities in approaches to the research evaluation of creative practice research abound. In many disciplines the trajectories of research–the areas of inquiry, the methodologies used, and the modes and places of dissemination–have often already been set by scholarly traditions (made over time) in those disciplines. This is not so for creative practice research. To mention just two likely reasons: creative practice research is a young field compared to others (with only a 25-30 year history within the university); and there is an inherent (and, some may argue, necessary) diversity in practice, borne of art’s tendency to continually reinvent and seek to be outside of established boundary lines (Bolt, 2016; McCormack, 2008; Steyerl, 2010).
The intentions, subject matter, audiences, beneficiaries, theoretical frameworks, communities of practice, and dissemination pathways and platforms are various (can be almost anything, in fact). For example, a creative practice research project could look something like this: a group of theatre scholars are interested in inverting the ‘one person show’ and/or the ‘monologue’ by creating a work where the six researchers write and perform a piece for an audience of one. They are interested in how this change in dynamics affects the communication and exploration of a particular theme, say, isolation. Another example could be: a filmmaker is interested in understanding how particular formal choices in their filmmaking arouse large, audible group responses when viewing the film collectively, i.e. they would be seeking to show the work to very large audiences in order to test their theory. No doubt the research designs in these projects have many flaws, but if the reader can overlook these, the point to notice is that standardising measures of quality, esteem or excellence according to such things as audience size, subject matter or modes of enquiry would not be appropriate.
To unpack a little the list of features, above, that can potentially vary across different research outputs and which make extant evaluation frameworks unsuitable: there is a subjectivity inherent in engaging with creative artefacts; creative practitioners use a range of methodological approaches which are often emergent rather than prescribed (Knowles and Cole, 2008; Grierson et al., 2009; Hubner, 2024) and/or operate in the space of ‘methodological abundance’ (Hannula et al., 2014); there are disciplinary differences under the umbrella of creative practice research; research dissemination happens across a wide array of platforms, venues and/or publications, which cannot be standardised so as to serve as metrics for excellence; the types of data that might aid in evaluation are often not collected or available as standard practice; and the disciplines do not (largely) operate on a citation model.
Maintaining the diversity of the disciplines, and of the practices within those disciplines, is critical, but that diversity makes it very difficult to establish a one-size-fits-all approach, and more difficult still for the field to align with approaches in other, more established disciplines and research practices. To do so would seriously risk incapacitating creative practice research’s ability to thrive in precisely the ways that can most benefit society. However, it is important to say that research evaluation is not only troublesome for creative practice research: in recent years we have seen movements to transform established norms and practices across all disciplines (see Collini, 2012; CoARA, 2022), but this is a topic we will leave to one side in this article.
Current Australian higher education context
Another important contextual point is the transformation of the Higher Education sector as a whole. In the past few years, Australian Higher Education has been marked by a number of reviews: Review of ERA Engagement and Impact (EI) (Australian Research Council, 2021); the Universities Accord (Australian Research Council, 2023); Review of the Australian Research Council Act 2001 (Australian Research Council, 2023). These have all put into question research agendas and the social contract universities have with their publics.
Each of the above reviews included sector consultations, on the basis of which the ARC released reports highlighting key areas of concern for the sector as well as specific concerns pertaining to particular disciplines (ERA EI Final Report (Australian Research Council, 2021); ARC submission to the Interim Report of the Australian Universities Accord (Australian Research Council, 2023)). A key theme that emerged specific to the reporting and assessment of creative practice research confirms the vital importance of maintaining the diversity so critical to the field (i.e. that a framework of evaluation does not inadvertently homogenise or sideline this diversity but supports it) and, relatedly, that because of this diversity and the lack of appropriate evaluation measures in the past, certain unhelpful disparities have emerged within creative practice research and in relation to other disciplines, which the sector asked to be resolved. Just some examples from the pool of submissions are:

‘Peer reviewers need to be trained (including to ensure they have a thorough understanding of the discipline as a whole rather than just their particular field) and careful scrutiny of the peer reviewers is necessary to ensure that any biases are removed.’

‘The ranking of journals has resulted in an undervaluing of smaller and emerging sub-disciplines and Australian publications as well as disciplines that don’t place emphasis on citations. Some of the ERA requirements are counterintuitive in terms of the EI outcomes. For example, if Australian stakeholders are more likely to read targeted Australian research publications, then emphasising international publications may reduce national impact and benefit to Australia.’

‘With regard to NTROs, the 30-megabyte limit on research output samples is restrictive, as it requires splitting video outputs into multiple files for example, or just submitting one brief excerpt of a larger work. The result is that different institutions deal with this limitation in different ways, meaning that reviewers can be presented with a wide range of volumes and types of evidence for a given discipline.’ (ERA EI consultation submissions (Australian Research Council, 2021))
For research evaluation in particular, the Minister for Education, Jason Clare, in 2022 released a ‘statement of expectations’ of the Australian Research Council requesting that they ‘discontinue preparations for the 2023 ERA round and commence work to develop a transition plan… to establish a modern data-driven approach informed by expert review’ (Clare, 2022). While the pause in the 2023 ERA round was not entirely unexpected (we might even say it was welcomed as a chance to rewrite the way forward), the focus on data-driven approaches alarmed those in creative disciplines: ‘we would be concerned if there were attempts to streamlining or automate data collection at the expense of rigour’ (Australian Academy of Humanities submission to the ERA EI review). Though this approach might seem like a ‘natural’ step in the context of other societal and technological developments (such as AI, for example), the implications for creative practice research are potentially dire: not only do we not have any reliable datasets, we do not as yet know what types of data would be useful or meaningful for the field.
The ARC formed the ERA Transition Working Group, which was to lead a response to the concerns outlined by the Minister and the development of a new approach to assessment and evaluation of research across all disciplines. It should be noted that the working group did not include representation from creative practice researchers. It did feature several research leaders from the humanities and social sciences, but these disciplines, whilst broadly aligned, are methodologically entirely different, and so this does not serve as effective representation.
As this article is being finalised, the ARC has released the first outcome of the review: a one-page ‘Consultation Draft–New Australian Research Council Research Insights Capability’ (Australian Research Council, 2025), issued by its recently appointed CEO and Board. The new stated aims are:

The ARC’s Research Insights Capability will deepen Australia’s capability to understand the research ecosystem, to highlight strengths, identify needs and future opportunities. The ARC will not rank, score or rate institutions but will openly share knowledge, highlight key issues and successes across the sector, and produce annual assessments through State of the Research Environment Reports. The new capability will promote positive practices, strengthen connection between researchers, government and end-users, and ensure that knowledge is accessible and actionable. (ARC, 2025: np)
At the time of writing it is unclear what the trajectory of the above intentions of the government will be, though it appears that the competitive aspect of research evaluation–rating and ranking institutions–has been rescinded. If this is the way forward, it has the potential to profoundly change the culture of research (evaluation) as a whole, and would be a welcome development. Given the nascent stage of the ARC proposal, we will not comment extensively on the draft document, which is, at the time of writing, out for consultation with the sector. But we will say that a public and comprehensive platform that makes research open and accessible would be an exciting move. There is very good potential for this type of platform to encourage good research practices that orient themselves toward their social license, and to share that research directly with all beneficiaries. For creative practice researchers, who typically navigate a range of public, industrial, scholarly and other spaces, visibility and celebration of this work would enable greater understanding of what this type of research is and what it has the capacity to do.
Hiatus period
During the hiatus period between the ERA Review (2020-2021) and the ERA ‘pause’ (2022), a certain mood arose in the creative practice research sector, with some colleagues reporting feeling ‘directionless’. These feelings were given space for airing in the DDCA publication, Creative Matters (Valuing Artistic Practice for a New ERA 1, 2023 (Creative Matters, 2023)). Further, the DDCA identified its role and capacity to facilitate a series of activities to bolster the sector and do the groundwork it thought needed to be done ahead of any ARC proposal on new directions in assessment and evaluation of research. The authors of this article have led this series of activities on behalf of the DDCA, given their extensive experience in the subject matter. Whilst the authors represent the DDCA and see themselves as facilitators of a process for the sector, they also acknowledge their own positions and preferences, developed over years of conducting creative practice research and research about the field, which are expressed in this article.
A fast-developing field
The field of creative practice research has been rapidly growing and changing; compared to other instituted fields and disciplines it is new, and people have worked hard over the last 30 years to establish it. Over this period there have been a number of elucidating studies which have traced attitudes via surveys. In these surveys, and across time, we can see researchers sitting at different points on a spectrum in how they understand their responsibilities toward university expectations and their artistic responsibilities as margin-dwellers who challenge those same expectations. From this view the field can appear fractured: different researchers argue for different protocols and understandings which they feel will better support their work.
Smith and Dean, writing in 2009 (before the first ERA round in which creative practice research was included), surveyed eighteen creative practice researchers from Australia, the UK and the USA, noting that ‘Amongst the more discursive and distinctive answers, most people tried to balance both the reciprocity and the independence of research and creative work’ (16). To look at the less discursive responses, however, one might come to a rather different conclusion (17). On the subject of assessment of research, they stress that:

peer-review is only the first stage in the process of evaluation… even though peer-assessment in the creative arts is something of a minefield because of the highly subjective element in judging artistic work, and the tendency for ground-breaking work to be greeted with opprobrium rather than praise. (Smith and Dean, 2009: 26)
And though they were writing at such an embryonic time for the field, they were already thinking about impact, qualifying that impact on the public held some importance but did not equate with ‘cultural value’. Already they were identifying the tension between values as they are expressed in the different contexts in which the artwork/research artefact circulates.
One of Wilson’s (2017) conclusions following her study captures a still-relevant issue:

Where artistic researchers have been less successful is in reassuring the university sector of the rigour and reliability of their disciplinary peer-review process and agreeing appropriate measures that can demonstrate quality in artistic research. At present, many performance measures, such as the size and prestige of the venue or audience numbers, capture success applicable in a commercial setting, reflecting the satisfaction of audience taste, rather than the innovative or intellectual standing of the work. Artistic researchers themselves need to determine the most appropriate measures that demonstrate how their work can be evaluated within the context of research or risk ongoing institutional application of measures that reinforce that art is valued by reference to personal opinion rather than expert evaluation. (Wilson, 2017: 190)
In the intervening years some of the attitudes expressed in Wilson’s study have evolved; we suspect there would now be fewer extreme positions that pitch art against research in quite the same way. But many elements have not changed at all alongside these developments in attitude, and for this reason some of the recommendations Wilson makes toward change for the sector are still pertinent, and are repeated in our current sector conversations.
In 2017 the Australasian Council of Deans of Arts, Social Sciences and Humanities (DASSH) commissioned a survey which sought to understand how Australian ‘experts’ and ‘stakeholders’ in creative practice research disciplines assessed and evaluated their peers’ outputs. Broadly, what this study demonstrated was a significant and worrying level of disparity in approach.
Whilst the report’s analysis was that the large number of respondents who chose ‘it depends’ as answers to the survey questions was a sign of ‘divisions in the community’, our own reading is that this data may indicate the large number of variables that need to be taken into account in order to provide a position on the ‘quality’ or ‘impact’ of a creative practice research output. Writing about possible reasons why the ERA codes belonging to creative practice research had been in decline (a counter-statistic to other disciplines, which showed an improvement in their ‘ranking’), McKee suggests:

It might be hypothesised that the failure of these codes (12 + 19) to follow the trend of other codes could be related to a lack of a clear, national, sector-wide understanding of the ways in which assessors are conceptualising the academic quality of NTROs. (McKee, 2020: np)
The DASSH survey also revealed that the stakeholder group was divided in its approach to where the ‘success’ of the work was to be found. Comments ranged from ‘A work can “fail” as a creative piece but it can be extremely important if it generates and “illustrates” a breakthrough in understanding’ to ‘the opinion of a community of practice is the most important element in the assessment of an artwork’ and ‘no discipline would describe the research as being successful if the outcomes were poor’. The moderate view was that the success of the artwork was ‘important but the audit should be based on the success of the work as a research inquiry not solely on practice-based criteria’ (np).
In response to the above report, Webb and Gibson (2019) made their own suppositions as to the apparent ailing of creative practice research (according to ERA):

We suggest there are three impediments that lead to this situation: Failure to address the case for research; Evaluation measures remain science focused; Overly critical assessment by discipline. (Webb and Gibson, 2019: np)
These debates have also been taken on by discipline and field peak bodies through special events and publications. For example, in 2018 the Australian Screen Production Education and Research Association (ASPERA) research sub-committee produced a report, ‘Measuring Excellence in Screen Production Research’ (Batty et al., 2018), which sought to categorise markers of excellence for filmmakers and screenwriters working in the university. The Australasian Association of Writing Programs (AAWP) consistently revisits these concerns at its annual conferences, and has produced a report specifically examining doctoral degrees in the creative arts (2013). The Australian Consortium of Humanities Researchers and Centres (ACHRC), in collaboration with the DDCA, published the ‘Manifesto for the Future of Creative Research Excellence’ (Cooke et al., 2023), based on a research panel event in late 2022. All of these efforts, however, have had questionable influence on policy: none has been formally instituted across the country or at the level of ERA, and certainly no work has been undertaken to align them across the disciplines which the creative practice research umbrella encompasses. One difficulty here is that the disciplines have fared differently over time (see Goldson, 2020, on how the presence or absence of a theoretical discipline in relation to the practice has influenced this uneven development).
In the current moment, looking to the literature alone will not offer the most current state of things in terms of how people are practising, what the attitudes are toward that practice, and what kinds of futures they envisage. Whilst creative practice researchers have cultivated the field with very good literature, there is still a whole body of research about the research that has not been undertaken: there is no recent and comprehensive study that brings contemporary insights together into a coherent picture. For this reason, the DDCA sought to bring the community together and gauge the most current state of play: best practice examples of how individual institutions were moving forward with reporting and assessing creative practice research; how researchers understood themselves within the broader research ecologies they worked in; which areas we could find consensus in as a field and which needed to remain indeterminate to allow for diversities; and which areas discipline peak bodies were best placed to lead for their disciplines.
This article details two DDCA Online Forums (Australian Council of Deans and Directors of Creative Arts, 2024, 2025a) and their outcomes towards establishing sector-endorsed recommendations for a new evaluation framework fit for purpose for creative practice research. The guiding principle for this work has been to create a framework in service of the growth of the field and its disciplines, rather than designing an instrument of measure that rates university performance. The work has oriented toward celebrating and ‘uplifting’ the field for all, not for individual practitioners or universities; and this extends to the external stakeholders and beneficiaries of such research. As such, any evaluation framework needs to be supportive of the ways in which creative practice research contributes to society, rather than merely responding to political agendas. Having said that, our intention was that these guiding principles deliver a framework of evaluation that also directly serves the more bureaucratic requirements of the university and government, so that the activity is meaningful at multiple levels: for the individual researcher; for departmental, faculty and university research ecologies; and for government priorities that seek to serve societies and fulfil the social contract of a university.
Sensing the field: DDCA National Online Forum 2024
In 2024 the DDCA held a National Online Forum which invited creative practice researchers, research administrators and government bodies involved in research evaluation to be in attendance and contribute to the conversation on research evaluation of creative practice research outputs. The DDCA recognised that, whilst there is a wealth of knowledge and experience in the creative practice research community, there is a dearth of formal resources readily available to the ARC which can properly inform the architects of any new model for evaluation and assessment of creative practice research. The DDCA created a cross-disciplinary, cross-institutional space which also welcomed government and research administration stakeholders from across the country to discuss the recurrent theme of ‘parity’ which had arisen so strongly in the ERA EI review of 2020-2021.
As we had identified the uneven development of the disciplines within the field of creative practice research, within institutions and amongst individuals, the DDCA prepared Pre-Forum Reading Material which would establish common ground from which to begin our conversations on the day. There were over 200 registrants to the Forum with over 100 attendees on the day, and 155 views of the recording posted on YouTube after the live event. These statistics show that a need was being addressed and the sector was keen to participate in resolving issues collectively as a sector.
The outcomes of the debates and conversations at the Forum were compiled for a special issue of Creative Matters, where additional contributions were made by attendees (see Creative Matters 7, 2024). The event and consequent publication revealed that there was wide sector agreement on many points, but not all; that the sector was keen to work together to create shared standards, guidelines and approaches; and that some issues needed to be dealt with at a discipline level. The dimensions of the problem, and the questions and considerations that needed to be addressed, surfaced in this event and were detailed in the publication (Creative Matters 7, 2024).
Putting up the scaffolding: DDCA National Online Forum 2025
Building on insights from the 2024 Forum and Creative Matters special issue, the DDCA held a second National Online Forum in 2025 with a view to drafting key sector-endorsed recommendations that would be recognised by government and implemented by institutions. The intentions were that the recommendations would support creative practice research as a coherent field and enable the disciplines in that field to contribute in the most meaningful and powerful ways to society by being recognised and evaluated in ways that were most appropriate and fair.
Pre-Forum reading material was circulated ahead of the 2025 Forum (Australian Deans and Directors of Creative Arts, 2025b), detailing a summary of the 2024 Forum as well as a set of key areas which emerged as priorities and concerns on which the sector wanted unified agreement. The topics were: nomenclature; proper representation; appropriate approaches to Indigenous Knowledges; defining the ‘peers’; assessment criteria; the research statement; reporting platforms and datasets; reconsidering the ‘world standard’ measure; and revisiting measures of scale and impact.
Nearly 200 people registered for the 2025 Forum, with over 100 in attendance and 72 views of the recording to date. Following the event, the DDCA drafted a set of recommendations based on the discussion that took place, which went out for consultation to all registrants and included a Google Form for feedback on each of the items. We also created a dedicated page on the DDCA website to house resources that inform the sector in their feedback to the recommendations.
In what follows, we set out the recommendations that were endorsed by the sector as a result of the Forums and consequent survey, including the underpinning rationale for each of them. These seven recommendations serve as foundational points that cohere creative practice research as a field, making distinctions that are important in relation to other fields whilst at the same time allowing the diversity within creative practice research to coexist. This is the starting point for equitable and appropriate evaluation.
Basic foundations and structures
The first of the seven recommendations–
The second recommendation–
Some decades on, it seems it is no longer the case that creative practitioners in the university identify as ‘artists’. A book-length study currently in progress (Glisovic and Batty, forthcoming), which interviewed 20 researchers around the world who work under some of the above umbrella terms, revealed that very few of the interviewees relate to the word ‘artist’. These producers of artefacts–from podcasts, to games, to novels, to interactive museum displays–understand themselves to be researchers who are not necessarily, or in all instances, producing work for artworld contexts. This is a significant shift in the way creative practice researchers make, collaborate, relate to audiences, and self-identify.
The conversation at the 2025 DDCA Forum confirmed this as an unfolding space and one where consensus would not be easily reached. For some in the creative practice research community, terms such as ‘artist-academic’ and ‘artistic research’ are important as an identifier–a way to make this distinction in terms of art practice being able to generate distinct kinds of knowledge. For others, the use of ‘creative’ also serves this purpose. Others still preferred not to have addendums to ‘research’ as a step toward parity with other researchers.
Given these differences in ‘self-identity’, the nomenclature we choose needs to be capacious enough to include the diversity in that category of difference. In other words, there needs to be room for people to call themselves what they feel is most important and suitable: practitioner, creative practitioner, artist-scholar, artistic researcher, researcher, and so on. At the same time, all of these identities and practices need to be held together by a common term that distinguishes this type of research from other types of research.
At the Forum, a recent unpublished survey of staff at the Australian National University (ANU) revealed that their preference is to use Practice Research to encompass all and any research that does not produce a scholarly article or book. Whilst this type of taxonomy can be used as a higher-order distinction, the DDCA still recommends that creative is an important distinguisher that points to three key points relevant to all creative practice researchers (and not necessarily relevant to all practices in research): (1) aesthetics and poetics are inalienable aspects of the research; they play an integral role in the insights gleaned and the types of knowledge contributions and impacts the work can have; (2) these types of artefacts typically require an
These three aspects are not typically an embedded concern for researchers in other research fields: even when such researchers do produce artefacts, those artefacts are typically instruments rather than aesthetic objects that produce aesthetic experiences for the people who encounter them.
For this reason, we favour the continued use of creative practice research as it accurately reflects that research produced under this title is not always ‘artistic’ (though it can be); that the researchers do not always identify as ‘artists’ (though they may); and that the works are not always operating within artworld contexts (though they may be). Appropriately, though, the term does signify that aesthetic (creative) practice is key to the methodological and epistemological ground of the research, which we argue is a critical feature and distinction that needs to be clarified by the terminology. Creative practice research allows people for whom ‘artist’ and ‘artistic’ is important to maintain this identity, to continue to use it under the umbrella term of creative practice research. At the same time, those not operating within artworld contexts (for example ‘applied’ or ‘community-engaged’ work), and for whom their creative practice speaks to other contexts and audiences, can also be represented by this term.
The concern around the use of NTRO (non-traditional research output) as the name for the work resulting from creative practice research is not a recent one. The designation NTRO as it refers to creative practice research has been a contested term for some time: non-traditional is a definition already situated in opposition to what is ‘traditional’ and is thus marginalised, with a starting point that is in deficit. The marker ‘non-traditional’ is intended to distinguish the outputs from ‘traditional’ research outputs (TROs). According to the ERA guidelines (2018), ‘traditional’ research outputs included: books; chapters in research books; journal articles; and conference publications. Non-traditional research outputs included: original creative works; live performance of creative works; recorded/rendered creative works; curated or produced substantial public exhibitions and events; research reports for an external body; and portfolios. To remove the use of NTRO implies the removal of the use of TRO. We deem this to be an appropriate course of action, as reference to ‘traditional’ is somewhat selective of certain histories and excluding of others (see Knowles, Creative Matters 1, 2020).
As the ANU survey shows, the question around the types of artefacts produced as part of research projects has become a pertinent point for all disciplines at this time–the types of research outputs being produced in universities are diversifying, and the two binary options of ‘traditional’/‘non-traditional’ are no longer accurate, relevant or useful. The increasing emphasis on interdisciplinary collaboration, as well as research that speaks directly to non-academic communities, also points to the unhelpful way the designations ‘traditional’ and ‘non-traditional’ divide research teams and their collective work, as well as to a potential shift away from the peer-reviewed journal article as gold standard, towards other types of ‘output’. We argue that where a distinction does need to be made, it is between objects that are instruments and artefacts (including ephemeral works) that constitute aesthetic experiences.
To reiterate, artefacts that result from creative practice research are particularly concerned with aesthetics; audiences; and where and how the knowledge can be encountered. These distinctions are important to make and a creative practice researcher requires the opportunity to articulate these aspects via a research statement.
The third recommendation– The requirement that artistic research is accompanied by a text component ‘implicitly characterises the work itself as not research’ (Marshall and Newton, 2000, p. 1). For Lesage (2009, p. 8, cited in Wilson, 2017: 52), ‘to impose a medium on the artist is to fail to recognise the artist as an artist’. Artists pursuing their research within a traditional research measurement framework must be mindful that it does not distort the qualities that make the outcome valuable in artistic terms, leading to ‘dull, process-led art, illustrative in the worst way of concepts and arguments’ (Jewesbury, 2009, p. 2). Bell (2006, p. 90) observes that ‘many scientific research projects end with the conclusion that the original hypothesis was incorrect … it is theoretically possible to produce truly terrible “unsuccessful” artworks, which form a most excellent piece of research’. As Jewesbury further explains: ‘An evaluative approach can only measure the efficacy of a research process and can’t ascertain whether that process has produced good, bad or stolidly mediocre art. And of course, we have the corollary of the artist pressured into a research agenda which is the academic trying to pass off research – which may be very thorough, rigorous research – as art’ (156).
The field has moved on from some of these positions. As noted earlier, the ‘artist’ is no longer only an artist; they are a researcher when practicing in the university context. One’s job as an artist practicing research in the academy is to interrogate the relationship between the materiality of the art practice and the materiality of language, of concept, of abstraction and of theory. This relationship ought to be consistently interrogated by the creative practice researcher; it is part of the practice, and it is often where the ‘research’ is to be found. The researcher’s task is to find ways in which language does not limit their process or lead them to make ‘bad art’, but is rather a productive meeting. The thinking/doing binary, the argument that ‘if I’m thinking I’m not “intuitively making”’, is a myth, and there is very good literature that argues this point cogently (Barrett and Bolt, 2007; Borgdorff, 2012; Carter, 2004). We endorse the position that: We cannot afford to dispense with the most basic (and moral) of research intentions: put simply, it must be for the benefit of others apart from the researchers themselves. Artistic insight is not necessarily a research outcome. Neither is communication through a work of art the same as research communication. (Trimingham, 2002: 54)
Both types of ‘communication’ are important, and we need clear, unambiguous communication for the purposes of effective evaluation of research. Writing in relation to a creative practice work is a particular kind of genre. The relationship is not one of conflation or sameness, and the requirement does not contradict the old adage ‘if I could have said it I wouldn’t have made the art’. The creative practice research artefact will always be in excess of anything that can be said about it, but there are certain things that can be said about it, things that need to be said about it for the purposes of fulfilling the contract of research communication. Whilst the knowing cannot in totality be reduced to concepts or articulated in language, language can help guide the experience for an audience. As Nelson (2022: 27) says, whilst ‘a research inquiry can be evident in the practice, it is not typically self-evident’. This is where the written, discursive component comes into play.
Previous ERAs required a 2000-character statement (circa 300 words) under the headings Background, Contribution and Significance. The general approach and areas of address are sound, though what we know from experience is that many misunderstood what was meant by ‘Background’ (e.g., research background vs general project background or the researcher’s background); and the distinction between ‘Contribution’ and ‘Significance’ was a difficult one for many to grasp. What we also know is that 2000 characters felt, for many, too short to be able to do the work of effective research communication. The result of this is that evaluating these outputs as research was largely inconsistent because the research statements often did not include the detail required for fair and proper assessment.
‘Exposition’ is the term and practice taken up predominantly in northern Europe to describe the genre and purpose of this type of writing: With the notion of ‘exposition’, we wish to suggest an operator between art and writing. Although ‘exposition’ seems to comply with traditional metaphors of vision and illumination, it should not be taken to suggest the external exposure of practice to the light of rationality; rather, it is meant as the re-doubling of practice in order to artistically move from artistic ideas to epistemic claims. (Schwab and Borgdorff, 2014: 15)
This articulation usefully operates outside of the art/research binary where ‘research’ stands in for ‘rational knowledge’. Other models have been proposed in the literature around what elements should be addressed in a written ‘statement’ by the author of the creative work in aid of this ‘re-doubling of practice’ which serves the practice itself. Whilst they use various headings to guide researchers (see Biggs and Buchler, 2008; Borgdorff, 2012; Emmerson, 2017), they all broadly note that the elements a researcher ought to ‘expose’ are the intentions of the researcher; their understanding of the field they are contributing to, and the needs of that field; the processes undertaken, or the methodological approach and methods; the relationship between form and content, or the way the materiality of their work functions in the insights gained; and how these insights were disseminated.
Including these elements in a written form can do the work of addressing aforementioned issues of appropriateness of criteria for creative practice research, including issues around whether the work is ‘successful’ as an ‘art object’ or not. Rather than pitch one against another and presume a difference between ‘art’ and ‘research’, could the emphasis be on what the function of the practice is in the research? This has to do with the research design and how it takes into account the practice, the research intention and desired impact. This necessarily requires discursive content from the researcher in order for an assessor to ascertain how successful the researcher has been in their intentions. In a sense, what this would mean is that the ‘rubric’ is set but the criteria as such are determined by the researcher. This would mean self-selected markers of esteem and impact–a work assessed in the context of the work’s aims and setting.
Based on the discussion in the two Forums, and on our collective experience of assisting hundreds of researchers to articulate their creative work as research, we recommend that the following areas be addressed in a research statement of 400-600 words in length:
• Context (disciplinary; social, political etc.; and, if relevant, the researcher’s broader program of research); intention; process (methodological approach);
• Form and content (the significance of this relationship to the knowledge gleaned);
• Insights and knowledge gained and shared (from process, to formal innovation, to subject matter);
• Potential for relevance;
• Relative scope and/or scale;
• Relevance of dissemination platform;
• List of references (theoretical/community of practice);
• Evidence/supporting documentation (the details of which to be determined).
The fourth recommendation–
The next two recommendations–
We know of instances where universities have created their own indicative ‘venues’ lists, much the same way journal rankings function. These venues rankings have been used to evaluate where a work sits according to ‘world standards’ where, in Australia, this has historically been graded by ERA as ranging from 1: well beneath world standard, to 5: well above world standard. For example, we know a certain department lists the TATE (museums) as ‘world standard’ venues whilst regional Australian museums are classified a 2 or 3: below world standard or world standard, respectively. Some universities have allocated particular durations to determine the ‘size’ of a work and therefore its ‘heft’ and presumably potential for impact. Other universities do not have these guides. Perhaps the logic is around how long a particular project takes to complete, or the ‘labour’ required.
These types of markers have been too narrow to equitably and accurately stand in for measures of excellence, quality, engagement and impact, or indeed of standing according to international practices. The standing of museums and galleries, for example, also changes and is not always reflective of the types of measures one would heed in the scholarly context. Such markers also unfairly compare creative practice researchers to researchers in other fields and their publication practices (where these types of measures are more appropriate). Consider, for example, that the percentage of Australian scholars who exhibit at the TATE bears no resemblance to the percentage of Australian scholars who publish in a ‘world standard’ journal in the STEM disciplines, making this a very tenuous proposition.
Compare, for example, the production of a 10-min fiction film (typically requiring a crew and budget) with that of a dancer choreographing and performing a 30-min performance (the maintenance of their body), and the sole researcher authoring a journal article (either based on data, observation or analysis). These are very difficult to equate along the lines of the labour and time that goes into producing the artefact. Indeed, do labour and time have anything to do with potential impacts, significance or excellence of the research?
What of the theatre work written to be performed to one person at a time (as became a popular movement in the 2000s)? Would this be a ‘minor’ contribution even if this aesthetic choice was connected to the types of insights the work offered? Size can matter, and it can be directly related to quality as well as significance and impact. And sometimes the ‘minor’ is very significant indeed. There is much more to this conversation than we can address here, but we will note that using a value such as size or scale (which implies a ‘bigger is better’ mindset) to assess a work is particularly provoking for a group of researchers often working against dominant paradigms.
A single, one-size-fits-all approach cannot work because, to go back to the 2020 DASSH survey, ‘it depends’. One has to understand the context of the work, the intentions of the creator, and their audiences in order to make this assessment. Some narrative around this is necessary from the practitioner in order for a fair assessment to be made, which could be achieved by the for-purpose research statement detailed above.
Further, whilst the general inquiry as to how Australian research compares to research around the world is reasonable, how this is determined in creative practice research has so far either been unclear, or the determining markers have not been appropriate. To this end, we also recommend a reframing of this marker as international benchmarking. This would enable cultural differences and the diversity of dissemination pathways in creative practice research to be properly recognised.
The final recommendation resulting from the DDCA activities of 2024 and 2025–
Looking to the future
We now need a more effective ongoing data collection mechanism, such as a clearinghouse of international, national and institutional models and practice, to support reform. (Wilson, 2017: 199)
Whatever measures of assessment we agree on, we need to make sure there are effective ways of capturing that information and presenting it in a way that allows for proper and fair evaluation. Aside from being able to present the work in such a way to make the evaluation process fair and easeful for the assessor, this is also an opportunity to support the development of the field as a whole by gathering and analysing valuable data and finding ways of effectively disseminating that data.
Early findings of a study in progress (Glisovic and Batty) show that researching creative practice research using scholarly methodologies such as Systematic Literature Reviews or Scoping Literature Reviews is ineffective for our disciplines, because there are no standard repositories where one might find this literature; the issue is compounded when trying to research creative practice research artefacts (not to mention ephemeral works). Arguably, this fact alone impedes the development of the disciplines–it is much harder to have a sense of ‘the field’ when the research is so dispersed and not very visible, prohibiting truly systematic and exhaustive research. For this reason, creative practice researchers are far less likely to reference one another’s research, given the inconsistencies in search options and methodologies.
The recently released ARC Consultation Draft–New Australian Research Council Research Insights Capability (Australian Research Council, 2025a) proposes an initial aim of using ‘available data’ to ‘deepen Australia’s capability to understand the research ecosystem’. The draft is too nascent to comment on in any depth (see the DDCA submission to the consultation: Australian Deans and Directors of Creative Arts, 2025c), other than to say that we do not think there is ‘available data’ on creative practice research, and that we are very interested in collaborating with the ARC and data experts to ascertain the types of data we can begin to collect and the methods for that collection. Only this, we believe, would enable us to develop the capability to map and understand what is out there, in much the same way that journal data aggregators can quickly and systematically gather data on authors, keywords, research fields and citations.
Whilst the intentions of the ARC to institute a data-driven evaluation process will likely alleviate peer-review fatigue, which is a serious issue across the sector (including our industry colleagues), there are some aspects of assessment and evaluation that are not suited to this treatment for the creative practice disciplines. The now-well-known phrase ‘it depends’ needs to be taken seriously because it is an accurate reflection of the need for individual assessment of artefacts in context of a number of factors. There needs to be careful consideration of how data and peer-review are used in concert with one another specific to creative practice research for fair and accurate assessment and evaluation.
Creative practice researchers, in collaboration with data experts and technologists, can start to think about what type of data might be helpful and to what ends, and how we might collect that data (i.e. what type of platform/repository can maintain records that are useful). We are in a new era. We need to respond to the developments in the field, as well as the larger research culture, with new approaches to evaluation that enable the field and support its ongoing innovation toward meaningful contributions to society.
Footnotes
Acknowledgments
This article has been authored on behalf of the DDCA, which steered the projects discussed in this article. The DDCA Board at the time of writing included the following members: Beata Batorowicz, Craig Batty, Samantha Bennett, Rick Chew, David Cross, Kim Cunio, Jane Davidson, Kathryn Gilbey, Donna Hancox, Meghan Kelly, Mia Lindgren, Sally-Jane Norman, Verena Thomas, and Jessica Wilkinson.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
