Abstract

An interview with Anthea Rutter
Introduction
The Australian Centre for Evaluation (ACE) was founded in July 2023. As an innovative new function in the Australian Public Service, it was felt that greater understanding of the Australian Centre for Evaluation’s purpose and desired outcomes would be informative for Australian Evaluation Society (AES) members and others interested in evaluation practice. Above all, what does it mean for evaluation practice in Australia, and what influence does the Australian Centre for Evaluation have, or want to have, on evaluation generally, not just in the public service?
The meeting between Anthea Rutter, AES Fellow, and Eleanor Williams, Managing Director of the Australian Centre for Evaluation, took place in Melbourne, where a number of topics were explored.
The first question explored the role of Managing Director of the Australian Centre for Evaluation, as well as what attracted her to the position.
I was appointed Managing Director of the Australian Centre for Evaluation in October 2023, joining the initial establishment team that was put in place in July 2023. The Centre then recruited our ongoing team and has been fully staffed since early in 2024.
The Australian Centre for Evaluation’s mission is to work across the Australian Government to put evaluation at the heart of policy design and decision-making. We do this through two units in the Australian Centre for Evaluation: the Evaluation Leadership, Policy, and Capacity Unit, and the Impact Evaluation Unit.
The Evaluation Leadership, Policy, and Capacity Unit works to encourage the take up of evaluation and support a system-wide evaluation capability uplift, with our overall objective being to increase the volume, quality, and use of evaluation in the public service. This encompasses all those enabling factors that help to establish the policy, the guidelines, and the supporting toolkit, as well as facilitating collaboration through communities of practice and professional networks. The Evaluation Leadership, Policy, and Capacity Unit is also partnering with the Australian Public Service Commission to support the development of an Australian Public Service evaluation profession.
Our delivery arm, called the Impact Evaluation Unit, is focussed on delivering high-quality impact evaluations and demonstrating good practice, with a particular focus on experimental or quasi-experimental design. This focus on impact evaluation is for two reasons. The first is that those designs are generally underutilised in the Australian Public Service; the second is that they are a particular passion of the champions who led the development of the Australian Centre for Evaluation. These are the designs that can help answer the questions of: “are our investments working and are they achieving their purpose on the ground, and if so, by how much?” Questions like these are best answered through impact evaluations, so that was the driver for this particular focus in the Australian Centre for Evaluation.
What attracted me to the position? I have been working in and around the public sector for 20 years, and I have had a lot of roles which straddled the Australian Centre for Evaluation elements. My early career was in strategic policy, then evaluation and research in government, and these roles have really sparked this passion to get more evidence to show that policy is working.
Probably the most pivotal for me was the Victorian Department of Health, where I helped to set up their Centre for Evaluation and Research Evidence. It really grew – when I came in there was a small team of 8 people, now it has a team of 30–40 staff. They do a huge amount of internal delivery of evaluation. That was a real buzz. So, when I saw the opportunity in the Australian Centre for Evaluation to take it to the next level and contribute to good practice across Australian public sector agencies, it really was a unique opportunity. It was a particularly exciting time because there were passionate champions in the system, notably Assistant Minister Andrew Leigh. Our Assistant Minister was previously a professor of economics at The Australian National University and is really committed to finding out the impact of the policies and programs that government delivers, and to how we can drive the practice of high-quality impact evaluation across the public service, so he argued the case for the Australian Centre for Evaluation.
Also, there are a number of department leads who are interested in the issue of building the evidence base for policy. Across all of the main agencies, there are really passionate leaders focussed on getting evidence into their decision-making. Overall, it is a unique time to get the momentum up for evaluation in the public sector.
Could you give me a picture of the Centre in terms of personnel and geographic spread?
In the Australian Centre for Evaluation, we currently have a team of 17 people based in the Australian Treasury. The Centre was established in the 2023–24 Budget with $10M over 4 years, and ongoing funding after that. The team is split across Melbourne, Canberra, and Sydney offices. In getting established, we spoke to a number of entities about good working practice for remote teams, and part of making this work is having regular times to get together, so we do this quarterly. We will come together in Canberra in June 2024 for a showcase event that we are hosting, and then again in Melbourne in September 2024 for the AES Conference.
As you well know, evaluation offers us a range of tools which we incorporate into our evaluations to help us make judgements about the value of whatever it is that we are evaluating, that is, activities, strategies, projects/programs, policies, organisations, or systems. We see evaluation as a profession incorporating its own theories and models, philosophical issues, standards, ethics, and a practice involving a wide range of methods, techniques, frameworks, and competencies.
When you consider all of the above tools and your own personal experience, which ones have significantly influenced your approach to implementing an Australian Public Service wide system for improving evaluation quality?
My personal orientation is that I am an evidence omnivore – I can see value in a range of evidence sources coming together to respond to complex problems. I am really passionate about the value of mixed methods in evaluation, because we often need to answer questions about both what has changed and why. Those questions require a lot of different tools to get to the right answer.
In terms of what has influenced me, I undertook the Masters in Evaluation at the University of Melbourne after I had been involved in commissioning and delivering evaluations for a few years. I felt that was very influential in terms of my approach. The subject around mixed methods was powerful, because it talked about not only mixing methods, but how you combine them and sequence them. That encouraged me to build my own practice in that space.
The Australian Centre for Evaluation itself was established to promote high-quality impact evaluations, and particularly experimental and quasi-experimental design, as these are powerful in answering questions such as: Did this policy or program create change as expected? If it didn’t, then we need to draw on different methods to understand why not. We have built a team who bring a broad toolkit, so we can bring a range of tools and skillsets to bear. We are committed to delivering randomised controlled trials where appropriate and feasible, and recognise their power is further enhanced when we complement quantitative analysis with qualitative insights. Our Assistant Minister has a lot of experience in delivering randomised controlled trials and has rightly proposed that we can use them more, particularly when answering questions on causality – experimental trials are a great method for that.
Overall, our approach is always about fit-for-purpose tools, that is, what is the question we are asking and what is the best way to answer it.
The Independent Review of the Australian Public Service (2019) called for a much stronger focus on research and evaluation in order to identify emerging issues and evaluate what works and why. Research commissioned for the review found that deficiencies in the approach to evaluation are not necessarily due to a lack of skills, but rather are a product of cultural practices that have evolved within the Australian Public Service, and of the environment in which the Australian Public Service operates. So, in light of the comments above, can you tell me what you see as the primary cultural, political, and technical obstacles to establishing a robust evaluation system, and could we touch on how the Australian Centre for Evaluation hopes to make a difference to those cultural practices?
Ever since the Australian Centre for Evaluation was established, our plan has been for a multi-pronged approach, not just one technique for changing culture. There are actions that need to happen at multiple levels across multiple portfolios; in other words, a matrix of actions is required to achieve change. In terms of obstacles, some are practical, like the need for more time and resources to be invested in evaluation, and there is also the issue of mixed capabilities and capacity across agencies that was identified in the Independent Review of the Australian Public Service. To add to that, the formal requirements to evaluate established through the Commonwealth Evaluation Policy and the Budget Process Operational Rules, both of which were introduced in response to the Independent Review, have only been in place for a few years. We know that evaluation requires a long-term view and a strong commitment from the centre of government – it will take time to rebuild evaluation capabilities across the system and to translate the principles-based requirements into consistently high standards of good evaluation practice. These are things we need to grapple with simultaneously. It is not as simple as just using carrots and sticks, and I really feel that cultural change is about winning hearts and minds.
I think there would be very few public servants out there who would not want to know whether their work is making a difference. That for me is an easy win – public servants really want evidence to know if their programs and services are working. So, evaluation is at its best when it’s seen as a process of continual learning and continuous improvement. We really want to motivate people to evaluate and learn, and to avoid dwelling on the deficits and apportioning blame.
We need to concentrate on how we can do more, and better, evaluations. So in terms of things we are trying to do, that’s where the multi-pronged approach comes in. We have a toolkit and we are working on continually enhancing it to provide a complete suite of guidelines, tools, and templates. We want these to be practical resources that people across the public service want to engage with. Stakeholders want worked examples to show how they can do evaluation in practice in specific contexts: that is one of our areas of focus.
We are also looking at a multi-tiered training strategy. There are a range of people in the system who have different needs and have differing skills. People are involved in multiple levels of evaluation development and use: leaders, commissioners, and evaluators. We are anticipating that the training strategy will come into operation later this year. We are also looking at how we can continue to build evaluation into the budget process in partnership with the Departments of Finance, Prime Minister and Cabinet, and our colleagues in the Treasury.
The other thing to note is that the Australian Centre for Evaluation is working on a hub and spoke model with departments and agencies. In most of the big Departments there is an evaluation function, and we are encouraging and supporting those departments to strengthen their practices. We have a vibrant community of practice, about 700 Australian Public Service staff from over 70 Commonwealth entities. They are really active and engaged, they provide feedback on our strategies, and regularly attend events like our lunch and learn series. That is a real positive and a huge asset. So, we are building the army of evaluators within the system.
Evaluation has been designated as a ‘Craft’ within the Australian Public Service and is one of the core capabilities to deliver great policy and services. According to the Australian Public Service, ‘Robust evaluation is critical at all stages of policy development to achieve desired strategic outcomes’. Further, outcomes must be reviewed to ensure they are on track to being achieved. The Australian Public Service must also learn from previous evaluation findings and make adjustments to improve or stop activities which do not lead to desired outcomes.
Could I ask you to outline your vision (or the Theory of Change) guiding Australian Centre for Evaluation’s efforts to promote and embed evaluative thinking throughout the policy lifecycle?
We do have a formal Theory of Change which we are working on with our stakeholders. It revolves around four areas which we anticipate will lead to cultural change. The first one is evaluation leadership and promotion: how we shape the authorising environment and stimulate engagement in evaluation across the public service.
The second one is around delivery of, and support for, impact evaluations: how to demonstrate good practice so we are building partnerships with departments to deliver some great impact evaluations in the first instance, as well as providing technical support and advice for randomised controlled trials and quasi-experimental evaluations across a range of policy initiatives.
The third is to do with evaluation planning and use, which relates to embedding evaluation into policy and program design processes from the outset, and offering practical templates, tools, and guidelines that help to feed this into the budget and cabinet process to support decision-making.
Then the fourth is capability building to bolster the evaluation skill sets in the public service.
This is not something that the Australian Centre for Evaluation will do alone – it will be a real collective job, but we will use our influence as a central agency to bring people together to share resources, skills, and experience. Those are the four streams that we hope will move us towards that big-picture cultural change and assist in driving that use of evaluation. With evidence, it is always a matter of supply and demand, so we are really interested in how to stimulate both sides.
Thinking of the Australian Evaluation Society, how can they best support the mission of Australian Centre for Evaluation to advance high-quality evaluation across the Australian Public Service?
We feel that we are working in a really complementary way. Our basic aim is working to improve the volume, quality, and use of evaluation, and advance fit-for-purpose evaluation. It does carry across sectors with the overall goal that our collective work is meaningful and produces evidence that gets used.
AES plays a central role in the evaluation community in Australia, particularly through its conferences and workshops, where good practice is shared, and the AES has been very generous in letting us showcase our work. We are very confident that the relationship will continue to grow and evolve. A few Australian Centre for Evaluation staff participate in AES committees as well. For example, I am on the Victorian Regional Network – I have also previously sat on the AES Board, which was a great experience, but I am doing a PhD and felt I needed to reduce the number of my activities for a while!
Being aware that some government created entities can have a short life span, has Australian Centre for Evaluation been designed with future proofing in mind, and if so, could you elaborate on this?
The Australian Centre for Evaluation was funded in a transparent way through the budget papers. We have four years of funding to get established, then ongoing funding after that at a slightly lower level. We think having an ongoing function established at the centre of government within the Australian Treasury is the best future proofing there is. We intend to use those first four years to become embedded in the system, to firmly establish evaluation within the processes, practices, and core values that uphold government decision-making, to get right into the heart of all of the other policymaking processes. We are working with all the other partners to support public sector reform and to get more evidence for policy. There are a number of partners within the Australian Public Service who are deeply committed to strengthening evidence in policy, through enhanced use of research, evaluation, data, and digital technology. So we have already got this growing ecosystem around evidence-based policy and we hope to become a critical partner and an evaluation voice in all of those processes.
We are lucky as we have team members who know how the Australian Public Service works across different agencies, so they understand how to support longevity. In the budget and cabinet process, we will continue working to elevate the status of evaluation so that we get the cut-through. We are working on how to influence that space so as to weave evaluation into the fabric of how things are done in government.
Are you envisaging any strategic partnerships, say with universities, for example, and any thoughts on how Australian Centre for Evaluation would want to structure those, as well as any thoughts on some possible outcomes for the partnership?
Our initial partnerships are targeted at government departments. We have a partnership with the Department of Employment and Workplace Relations, where we are running a series of trials. We are also in the process of finalising partnerships with the Departments of Health and Aged Care and Social Services. That’s where our initial focus is in demonstrating good practice. These are our first ‘cabs off the rank’; then we will move on to other departments and agencies.
In terms of universities, the Australian Treasury has a long history of working with academia. A partnership model in relation to evaluation is still emerging, and we see huge value in bridging the gap between public service evaluators and academics. As part of this, we are building a network of practitioners who work on various types of impact evaluation. At the moment there is no home for that work; people in public health, economics, psychology, and related fields have nowhere to come together, so we are keen to build this network.
At our showcase in June, we will focus on questions of how we can bridge the gap, work together, and get impact evaluations into the hands of the people who most need them. We intend to showcase good practice, and expect to have a broad audience, with about 900 registrations at the moment. This will be in Canberra, and lots of people will participate online as well.
For individuals considering a career in evaluation, what advice would you offer, whether they aim to work internally or externally within the field?
I have had a hard think about this. I have worked as an internal as well as an external evaluator. I can see the value in starting a career on either side, and would recommend that people do formal training early on. It is easy to underestimate the value of professional training. It is easy to say yes, I know qual and quant, and I am good at report writing and communicating findings. Then you realise that you also need the theory and the tools. There are lots of people at the Australian Centre for Evaluation who are doing subjects at the University of Melbourne, and some of the team at the Department of Health and Aged Care went through the master’s program. Every evaluator can benefit from lifelong learning, as it gives depth to your practice. There is huge value in combining formal learning with practice to become an evaluator.
On the broader question of whether it is better to start your career in an internal versus external role, I personally feel there is huge value in building internal evaluation units within organisations. There is something really powerful in cultivating internal capabilities to think through how you measure success, how you define the strategic objectives of an evaluation, how you monitor in practice, and how you will deliver and manage the evaluation, noting that you need to have the right level of separation between program delivery and evaluation units. These independence concerns need to be managed by thinking carefully about reporting lines and how things are signed off. But equally, the contractual relationships between an organisation and external evaluators have their own ethical challenges. There are of course many examples of good long-term relationships between organisations and both external and internal evaluators. A critical ingredient is ensuring the evaluators have the right skills and credibility to do high-quality work!
Have you any further comments on other issues of relevance?
In communicating how the Australian Centre for Evaluation wants to work, we are deeply committed to a collaborative approach. We are really looking forward to our upcoming Impact Evaluation Showcase and AES workshops. At the AES conference, I think we will be involved in quite a few presentations, so most of our team will be there, and we are really keen to be deeply connected with the whole Australian evaluation sector. As I mentioned, I think this is a job of an army of evaluators, so we need all groups on board to achieve the big scale change in evaluation for Australian Government decision-making and beyond. It requires a lot of capability and resources to change things, so we are keen to engage with the whole evaluation community.
The Australian Centre for Evaluation will also be monitoring its own performance. We are in the middle of developing a monitoring plan for the Australian Centre for Evaluation. These kinds of enabling functions are generally quite difficult to evaluate, so this will be a challenge. We were very influenced by the UK Evaluation Task Force and other international approaches, and will be taking a very pragmatic approach. Our Assistant Minister Andrew Leigh was also inspired by the UK model in the development of the Australian Centre for Evaluation. The UK task force started in 2021, so they have a couple of years of experience that we have learnt from, and they have shared some of their models, which have been useful.
