Self-guided historical walking tours: These walking tours are accessed via the Sydney Culture Walks app and highlight Aboriginal history, heritage & culture: https://www.sydneyculturewalksapp.com/barani-redfern 
https://www.sydneyculturewalksapp.com/barani-warrane
Intermediate
Monday, September 16
 

11:00am AEST

Logic is the beginning of wisdom, not the end of it
Kale Dyer (Family & Community Services)

Program logics provide a framework for a systematic, integrated approach to program planning, implementation, and evaluation. They foster a shared understanding of how a program operates by clearly articulating program activities and desired outcomes, and clearly illustrating the change processes underlying an intervention.

This presentation will demonstrate an extension of program logic that focuses on better integrating evidence, making the mechanism of change explicit, and embedding the NSW Human Services Outcomes Framework into program design and evaluation. A distinguishing feature of the approach is the inclusion of sections that articulate the research evidence and mechanisms of change for the program. The approach includes the evidence base for how and why the core components and flexible activities that make up the program are expected to achieve the proposed outcomes. Identifying core components and flexible activities also improves the ability to generalise program findings. These evidence extensions highlight why components of the program are likely to be effective, and link client needs to intended outcomes. This clarification facilitates improved commissioning of research and evaluation, embedding of evidence in programs, explicit discussion of mechanisms of change, and a client-centred approach to achieving outcomes.

The presentation will discuss the benefits and challenges of implementing this extended program logic model in a government agency. Benefits include more effective program evaluations, achieved by identifying areas of focus, informing the development of meaningful evaluation questions and identifying relevant client-centred measures to address those questions.


Chairs

Squirrel Main

Research and Evaluation Manager, The Ian Potter Foundation
Dr Squirrel Main is The Ian Potter Foundation's first Research and Evaluation Manager and she co-chairs the Philanthropic Evaluation and Data Analysis network. Squirrel completed her Masters at Stanford University in Evaluation and Policy Analysis (with David Fetterman--hello David... Read More →

Presenters

Caroline Anderson

Senior Evaluation Officer, NSW Department of Communities and Justice
Caroline is a Research and Evaluation specialist with expertise in evaluations of system-level reform, as well as program- and project-level evaluations. For the past fifteen years, Caroline has worked across State Government, not-for-profit organisations and academia. Caroline has a... Read More →

Alice Knight

Manager of Evaluation, Department of Communities and Justice
Alice Knight has over ten years' experience working in the government, academic and not-for-profit sectors, with academic and policy expertise in the health and human services sectors. Alice currently works at the NSW Department of Communities and Justice (DCJ) where her priority is... Read More →


Monday September 16, 2019 11:00am - 11:30am AEST
C2.5

11:00am AEST

Integrating Behavioural Insights into Evaluation
Georgia Marett (ARTD Consultants), Jack Cassidy (ARTD Consultants)

This presentation shares insights into how behavioural economics and Behavioural Insights (BI) are used in program and service design and explores ways in which evaluation can and should take BI into account. A critical concept discussed in this paper is cognitive load. Research shows that cognitive overload can negatively impact decision-making and lead to more shallow processing of information and poor information retention. One method by which BI improves program decision-making and evaluation quality is by increasing the cognitive capacity of individuals.

We illustrate how service design can take cognitive load and BI into account and what might happen if BI are ignored when designing programs. Then, we examine and explain how to evaluate programs which have incorporated BI (including how cognitive load can be incorporated into a logic model, monitoring and evaluation frameworks and/or key evaluation questions). Finally, we conclude with a discussion about whether evaluation effectively uses the cognitive capacity of its stakeholders and practitioners.

This subject is important because while BI is a hot topic in general and in evaluation, there is a lack of understanding about the ways in which it can be applied and how to evaluate those applications. Cognitive capacity is less well understood but is vital to understanding how to craft effective services, evaluate these services and conduct evaluations, regardless of whether BI are included in the target of the evaluation. We will tie this into a realist perspective of evaluation through a discussion of how BI differ in effectiveness between people and situations.



Chairs

Eleanor Williams

Assistant Director, Centre for Evaluation and Research, Department of Health and Human Services Victoria
In my current role as Assistant Director of the Centre for Evaluation and Research at the Department of Health and Human Services, I am working to build an evaluation culture across a large government department of over 12,000 employees. My team and I aim to improve the use of data... Read More →

Presenters

Georgia Marett

Consultant, ARTD Consultants
I have many and varied interests when it comes to evaluation. I work across a variety of sectors including education, health and disability. I have a masters in Behavioural Economics so I am always keen to talk about the latest innovations in this space. I also have an interest in... Read More →

Jack Cassidy

Consultant, ARTD
I mostly work on evaluations of large and complex programs across a range of human services sectors. As a former psychology major, I'm interested in policy informed by the behavioural sciences, particularly behavioural economics, as well as drug & alcohol and mental health responses... Read More →


Monday September 16, 2019 11:00am - 11:30am AEST
C2.2

11:00am AEST

The un-boxed game: Snakes and Ladders for illustrating the variability of evaluation projects over the career of the evaluator
Anne Markiewicz (Anne Markiewicz and Associates Pty Ltd), Susan Garner (Garner Willisson)

We are going to un-box an interactive game designed by two experienced presenters. The game will be Snakes and Ladders, adapted to illustrate the ups and downs in the trajectory and life of the evaluator. Well-designed assignments with realistic terms of reference and expectations and good stakeholder engagement will push the evaluator upwards in the game, whereas ill-conceived, unrealistically scoped and politically challenged projects with hidden agendas and questionable stakeholder engagement will push the player downwards.

Topic for discussion: The fluctuating trajectory and experiences of the evaluator in conducting evaluation projects. This presentation has been influenced by the AES 2018 keynote speaker Karol Olejniczak's 'Transforming evaluation practice with serious games'.

Purpose: To use game theory to illustrate how some evaluation projects go well and the success factors involved, while other projects do not.
The interactive session should be enjoyable for participants and provide a forum for them to reflect on their experiences with evaluation projects. The session will highlight success factors and the factors that get in the way of successful outcomes in evaluation projects.


Chairs

Sarah Renals

Senior Consultant Evaluation + Research, Allen + Clarke
Sarah is a senior consultant based in Brisbane working for Allen + Clarke. Sarah shares the role of aes20 Conference Co-Convenor and is currently the acting Queensland Regional Committee Convenor and Secretary. Come and speak to Sarah about: Allen + Clarke, aes20 Brisbane and the... Read More →

Presenters

Anne Markiewicz

Director, Anne Markiewicz and Associates
Anne Markiewicz is a leading monitoring, evaluation, and training specialist with over 20 years’ experience designing monitoring and evaluation frameworks and developing and implementing a range of evaluation methodologies. She has designed and delivered M&E capacity development... Read More →

Susan Garner

Director, Garner Willisson
I see myself as an 'accidental' evaluator having come from a career in science and public policy. I managed my first evaluation project as a policy analyst in the health portfolio. With post graduate qualifications in public policy and public health I found a natural affinity to evaluative... Read More →


Monday September 16, 2019 11:00am - 12:00pm AEST
C2.3

12:00pm AEST

Empathy mapping - Discovering what they value
Andrew Moore (NZ Defence), Victoria Carling (NZ Defence, NZ)

Empathy mapping is an emerging collaborative approach that focuses on the results of a programme. Used to gain the perspective of different stakeholders, from the commissioner to the programme participants, it seeks to define what they truly value from a programme. Empathy mapping requires participants to reflect on what success looks like, according to them, by considering what they would see, hear, do, say, think, or feel during and after the programme. The results can then be used as the building blocks of evaluation rubrics to define measurable criteria. The collaborative approach ensures a shared understanding is achieved on the quality, value, and effectiveness of a programme.

Drawing from their experience, the presenters will demonstrate how empathy mapping has been used to build the foundations for successful evaluation within NZ Defence, highlighting how it can maximise contact time with key stakeholders, document the shared understanding of programme results and subsequently promote a collective interpretation of evaluation reports.

The session will allow participants to gain an insight into: What is empathy mapping? Where did it come from? What are the components of an empathy map? Why are they useful as building blocks for evaluation practice? How can they be used to build evaluation rubrics?


Chairs

Larissa Brisbane

Team Leader, Strategic Evaluation, Dept of Planning and Environment NSW
It was only a short step from training in environmental science, and a background in cross-disciplinary problem-solving, to evaluation where I still ask 'why' and 'how do you know that'. I love hearing your stories of what you've done and what you've learned, especially in the areas... Read More →

Presenters

Victoria Carling

Regional Evaluator, NZ Defence Force

Andy Moore

Senior Advisor Performance and Evaluation, NZDF
I have had thirty years of experience in conducting, designing, and evaluating vocational training within NZ Defence. In that time, I have deployed overseas, predominantly in the role of coordinating aid programmes on behalf of NZ Defence. My current role is the Senior Adviser Performance... Read More →


Monday September 16, 2019 12:00pm - 12:30pm AEST
C2.6

12:00pm AEST

Unpacking the skills required for an evaluator - Learning from the past to prepare us for the future
Anthea Rutter (The University of Melbourne)

What does it mean to call ourselves an evaluator? How do we define our craft? Let me pose another question: what do you put on your departure card when leaving Australia? - Evaluator?

I once put evaluator on my departure card before a flight to the States. I then spent the best part of an hour in Los Angeles airport trying to explain to a customs official what exactly an evaluator is. I felt it would have been so much easier to be a Plumber, an Electrician or a Nurse. We can easily conjure up the visual - somehow, it's not the same for an evaluator. However, defining our craft is important so that others, whether they are emerging evaluators or clients will understand what we are about, as well as what we are not about.

The AES Fellows are an important resource for understanding the history of evaluation, how it has evolved, as well as looking towards the future. During the last eight months or so, I have interviewed the majority of the AES Fellows to get their take on what it means to be an evaluator today. I was rewarded by an honest and reflective look at their careers and gleaned some ideas for emerging evaluators. A number of those early pioneers came into evaluation when it was a fledgling field, still in the throes of trying to define itself. It has since emerged as a profession and has been strengthened by becoming multi-disciplinary, recognising that it needs to draw on many fields.

In this short paper, I will present some of those thoughts and experiences of the AES Fellows, to illuminate the path, if possible, for new evaluators. I would hope to pass on some ideas which can assist in skill building as well as identifying the qualities needed for the evaluator of today. This paper should add to the knowledge base in terms of providing some valuable information on the perceptions of those evaluators who have gone before.


Chairs

Sarah Renals

Senior Consultant Evaluation + Research, Allen + Clarke
Sarah is a senior consultant based in Brisbane working for Allen + Clarke. Sarah shares the role of aes20 Conference Co-Convenor and is currently the acting Queensland Regional Committee Convenor and Secretary. Come and speak to Sarah about: Allen + Clarke, aes20 Brisbane and the... Read More →

Presenters

Anthea Rutter

Research Fellow, Centre for Program Evaluation, The University of Melbourne
Anthea Rutter is a Senior Research Fellow in the Assessment and Evaluation Research Centre (formerly the Centre for Program Evaluation) at The University of Melbourne. She has extensive experience working with a wide range of community, state and national organisations. She is particularly... Read More →


Monday September 16, 2019 12:00pm - 12:30pm AEST
C2.3

1:30pm AEST

A Practical Application of a Realist Synthesis Method
Jo Hall (Australian National University)

There are a number of different methods for synthesising information across multiple evaluations. The emphasis of one of these, realist synthesis (Pawson and Tilley), is on identifying theory (context-mechanism-outcome configurations) to answer the question ‘what works for whom in what circumstances, in what respects and how?’ There are relatively few examples of realist synthesis and they sometimes struggle to articulate mechanisms and theory in ways that can be helpful for policy makers. In particular they tend to be insufficiently focused on explanation and to develop separate lists of context, mechanisms and outcomes. More examples of realist synthesis are important to grow the practical experience of using and refining the method. It is also important to demonstrate a viable and potentially more useful alternative to systematic reviews that are based on randomised control trials, for which there is a growing appetite.

I will share with you my PhD work, which adopted a realist synthesis methodology for the Review of Program Evaluations by Australia's Department of Foreign Affairs and Trade (DFAT), to see what could be learned from the evaluation reports across two topic areas: policy influence and promoting gender equality.
I will briefly present the findings but spend most of this session reflecting on the methodology. The primary sources of information for the review were the 37 evaluation reports completed by program areas in 2017 and 14 interviews with program evaluators and DFAT staff. The method focused on coding explanatory text in the evaluation reports and interview transcripts and analysing the coded text with the help of NVivo software, drawing on substantive theory.

In a 20-minute presentation I will highlight key aspects of the process and my reflections on mid-range theory, mechanisms and explanation. The remaining 5 minutes will be for questions and discussion.

The learning papers are available at:
https://dfat.gov.au/aid/how-we-measure-performance/ode/strategic-evaluations/Documents/review-of-2017-program-evaluations-policy-influence-learning-paper.pdf

https://dfat.gov.au/aid/how-we-measure-performance/ode/strategic-evaluations/Documents/review-of-2017-program-evaluations-gender-learning-paper.pdf

Chairs

Ian Patrick

Director, Ian Patrick & Associates
Dr. Ian Patrick has wide experience in the evaluation, design and management of social sector programs. This includes a strong background in design and implementation of M&E systems, conduct of program evaluations, strategic planning, analysis and research, and training and academic... Read More →

Presenters

Jo Hall

PhD student, ANU
Jo is now a part-time student and part-time evaluation consultant, having spent 30 years in international development, with NGOs and with Government, including 7 years with the Office of Development Effectiveness in the Department of Foreign Affairs and Trade. Jo is very interested... Read More →


Monday September 16, 2019 1:30pm - 2:00pm AEST
C2.5

1:30pm AEST

Digital Disruption - the next industrial revolution is here. What does this all mean for evaluators?
Jenny Riley (Clear Horizon), Jess Dart (Clear Horizon), Kristi Mansfield (Seer Data and Analytics), Reuben Stanton (Paper Giant), Chris Newman (ArcBlue Asia Pacific)

Digital, Cloud, Data Science, AI and Machine Learning, Robots... what does all this mean for the field of evaluation? Award-winning evaluator Jess Dart will host a panel of experts to explore current and emerging trends in what is hailed as the fourth industrial revolution. We will explore how new technologies are being used for social change (phone apps for finding free food, wearables for tracking in aged-care facilities, social media for building resilience amongst farmers, apps for streamlining fines applications), what evaluators need in order to be equipped to evaluate these technological interventions, and how digital can be leveraged to enhance the practice of evaluation.

The panel will reflect on real-world examples of how technical fixes can fail, but also how new technology and design approaches can be more democratic, participatory, transparent and, importantly, useful, at potentially much lower cost than before. The panel will share what they have seen work well and how they evaluate success. We will also explore the ethics, risks and challenges of digital data collection, storage and reporting. We will discuss big data and small data, as well as open and closed data, and how we can leverage digital.


Chairs

George Argyrous

Research and evaluation manager, Institute for Public Policy and Governance, UTS

Presenters

Jenny Riley

Chief Innovation Officer, Clear Horizon
Jen is one of the leading digital disrupters in the evaluation space, having developed and commercialised a digital data collection, storage and reporting tool Track2Change and most recently has developed and launched Clear Horizon's Learning Academy www.clearhorizonacademy.com... Read More →

Kristi Mansfield

Co-founder & Director, Seer Data & Analytics
Kristi is an influential social innovator who has a long track record working with government, philanthropists and not-for-profit leaders. Named one of Australia's 100 Women of Influence by the Australian Financial Review in 2015, Kristi is Director of CX and Strategy at Oracle, and... Read More →

Reuben Stanton

Director, Paper Giant
Reuben is a strategic designer and researcher with over 15 years' experience in the design industry in Australia and Japan. He is a co-founder and the Design Director at Paper Giant. He has a background in communication design and software development, and a PhD in interaction design... Read More →

Chris Newman

Managing Director, ArcBlue Consulting
Managing Director and co-founder of ArcBlue, a leading procurement consulting, training, and analytics company, Chris Newman is a specialist in delivering social and sustainable outcomes through procurement. Chris has led work across Asia-Pacific, working with Government, Industry... Read More →

Jess Dart

Chief Evaluator and Founder, Clear Horizon Consulting
Dr Jess Dart is the founder and Chief Evaluator of Clear Horizon, an Australian-based specialist evaluation company. Having received the 2018 Outstanding Contribution to Evaluation Award from the Australian Evaluation Society (AES), Jess is a recognised leader with over 25 years of... Read More →


Monday September 16, 2019 1:30pm - 2:30pm AEST
Pyrmont Theatre

2:00pm AEST

Frameworks for program evaluation: considerations on research, practice and institutions
Ghislain Arbour (University of Melbourne)

Evaluation frameworks are currently an important concern in evaluation practice, especially for organisations that want to organise their evaluation activities. But reflections and decisions in that domain are plagued with imprecision and ambiguity regarding the constitutive dimensions of frameworks, which makes it more difficult to identify needs and potential answers when selecting or developing a framework.

In response, this paper provides a model to analyse frameworks for program evaluation organised around four dimensions. The model states that a framework for evaluation is an intellectual framework, made of concepts and/or theories (first dimension: types of ideas) about an object related to evaluation (second dimension: object), where the said concepts and theories can be positive and/or normative (third dimension: analytical perspective). These three dimensions provide the means to describe, explain or judge an evaluation-related matter. A fourth and optional dimension, the institutional character of a framework, allows an evaluation framework to become a form of regulation for behaviours related to program evaluation (fourth dimension: institutional dimension).

In essence, this paper will raise our awareness about the kinds of theoretical "boxes" we encounter in evaluation so we can get better at relying on them, and even turn them into influential policies when it counts.


Chairs

Ian Patrick

Director, Ian Patrick & Associates
Dr. Ian Patrick has wide experience in the evaluation, design and management of social sector programs. This includes a strong background in design and implementation of M&E systems, conduct of program evaluations, strategic planning, analysis and research, and training and academic... Read More →

Presenters

Ghislain Arbour

Senior Lecturer, The University of Melbourne
Doctor Ghislain Arbour is a Senior Lecturer at the University of Melbourne where he coordinates the Master of Evaluation. Research and consultancy: A primary research interest of Dr Arbour is the clarification of necessary concepts in the analysis and judgement of the performance of... Read More →


Monday September 16, 2019 2:00pm - 2:30pm AEST
C2.5

2:00pm AEST

Travel Behaviour Change Evaluation: Embracing ticketing data insights and moving beyond the box of self-reports
Zarin Salter (Active Transport and Safety, Urban Mobility, Department of Transport - WA), Dr Kim Carter (Data Analysis Australia Pty Ltd)

Implemented by The Government of Western Australia, Your Move delivers a suite of tailored travel behaviour change (TBC) programs that provide participants with localised, personalised information, coaching conversations and ongoing feedback to encourage them to walk, ride a bike and use public transport more often for their daily trips.

In 2018, de-identified, residentially coded SmartRider ticketing data made it possible to analyse the public transport patronage habits of residents who lived in two previous Your Move project areas and statistically compare their travel with that of residents in areas of greater Perth that received no Your Move projects. The data source was representative of the whole metropolitan area and remained sufficiently large for analysis even after a thorough data cleaning process was applied.
The resulting figures for the two previous Your Move projects were impressive and the most reliable estimate of public transport mode shift that Your Move has been able to obtain in its 20-year history. Having robust figures for public transport mode shift made it possible to extrapolate the shift in other modes and model the overall benefits of a Your Move project to the whole community.

Traditionally, TBC programs have been evaluated using self-report data collection techniques which are expensive and prone to risks associated with data reliability, survey length and respondent burden, small sample size, inaccurate sampling between interviewers, control group selection, panel recruitment loss, and weather variability.

This presentation will discuss the need for practitioners to innovate in the TBC evaluation space, specifically with respect to data source accuracy, and will share insights learned from un-packing the box of treasures hidden within ticketing data.
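For readers unfamiliar with this kind of analysis, the comparison described above can be sketched roughly as follows in Python. This is an illustrative outline only, not the Department of Transport's or Data Analysis Australia's actual method; the file name, column names and area labels are hypothetical.

# Illustrative sketch: comparing boardings per card between Your Move project
# areas and the rest of metropolitan Perth, using de-identified, residentially
# coded ticketing data. All names below are placeholders.
import pandas as pd
from scipy import stats

trips = pd.read_csv("smartrider_boardings.csv")  # one row per card per period

# Basic cleaning: drop cards with missing residential coding or implausible usage.
trips = trips.dropna(subset=["home_area", "boardings"])
trips = trips[trips["boardings"].between(0, 500)]

project_areas = ["your_move_area_a", "your_move_area_b"]
treated = trips.loc[trips["home_area"].isin(project_areas), "boardings"]
control = trips.loc[~trips["home_area"].isin(project_areas), "boardings"]

# Compare mean boardings; Welch's t-test avoids assuming equal variances.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"Project-area mean: {treated.mean():.1f}, comparison mean: {control.mean():.1f}, p = {p_value:.3f}")

In practice the evaluation would also need to account for baseline differences between areas and repeated observations per card; the sketch only shows the basic shape of a treatment-versus-comparison analysis on cleaned ticketing data.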


Chairs

Matt Healey

Senior Consultant, First Person Consulting
I've been working as an evaluation consultant since early 2014. In that time I've led or contributed to over 100 projects spanning sustainability, climate change, public health, health promotion, emergency management, energy efficiency, renewable energy, waste management, innovation... Read More →

Presenters

Zarin Salter

I lead all aspects of Program Evaluation for the Active Transport and Safety branch of the Department of Transport's Urban Mobility policy and planning directorate. In this role I am responsible for: the management and leadership of evaluation projects; informing strategic cycling... Read More →


Monday September 16, 2019 2:00pm - 2:30pm AEST
C2.4

2:30pm AEST

Machine-assisted qualitative analysis in Evaluation
Jasper Odgers (ARTD Consultants), Klas Johansson (ARTD Consultants)

We will tell you how Natural Language Processing (NLP) can be used to reduce time and costs associated with qualitative analysis by up to 75%. Our experience with this technology will allow for a vibrant discussion about the real benefits of machine-assisted qualitative analysis. The ethics, limitations and future directions of the technology will also be discussed.

This technology can be used to analyse large amounts of unstructured text data in a way that reduces the resource burden of analysing large qualitative datasets. By using techniques such as topic modelling and keyword identification, analysts can interpret the contents of large datasets in a fraction of the time it would take to do manually. Improvements in this technology will have profound impacts on the practice of evaluation as the use of the technology becomes more widespread. Much of the analysis work that was a large part of an evaluator’s job will be able to be done quickly and easily by machine-assisted technology; however, we focus on the continued need for humans to be involved throughout the analysis process. NLP is also adept at identifying themes from data which may not be apparent to human analysts. Integrating this technology with ongoing monitoring data means that evaluators don’t need to constantly analyse incoming data but can easily keep up to date and concentrate on interpretation and innovative reporting.

As the technology improves and becomes more widespread it is inevitable that it will have an impact on how evaluations are designed and therefore the theory which underpins them.
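As a concrete illustration of the techniques mentioned above (topic modelling and keyword identification), a minimal machine-assisted theme discovery pass over interview text might look like the following. This is a hedged sketch using scikit-learn, not ARTD's actual pipeline; the file and column names are placeholders, and any topics it surfaces would still need human interpretation.

# Minimal sketch: LDA topic modelling over open-text responses to surface
# candidate themes for an analyst to review. File/column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

responses = pd.read_csv("interview_notes.csv")["response_text"].dropna()

# Convert free text to a document-term matrix, dropping very rare and very common words.
vectoriser = CountVectorizer(stop_words="english", min_df=5, max_df=0.9)
dtm = vectoriser.fit_transform(responses)

# Fit a small LDA model; the number of topics is an analyst judgement call.
lda = LatentDirichletAllocation(n_components=8, random_state=0)
lda.fit(dtm)

# Print the top keywords per topic as a starting point for human interpretation.
terms = vectoriser.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [terms[j] for j in topic.argsort()[-10:][::-1]]
    print(f"Topic {i}: {', '.join(top_words)}")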

Chairs

George Argyrous

Research and evaluation manager, Institute for Public Policy and Governance, UTS

Presenters

Georgia Marett

Consultant, ARTD Consultants
I have many and varied interests when it comes to evaluation. I work across a variety of sectors including education, health and disability. I have a masters in Behavioural Economics so I am always keen to talk about the latest innovations in this space. I also have an interest in... Read More →

Jasper Odgers

Manager, ARTD Consultants
Jasper has been studying and working in quantitative research and data analysis for the past eight years. He manages online surveys, quantitative data analysis and data visualisation for all of ARTD’s reporting. He has recently managed several stakeholder surveys for NSW Government... Read More →

David Wakelin

Senior Consultant, ARTD Consultants
I am a keen data analyst with a passion for data visualisation. I've been working on a wide range of projects lately and have seen immense value in being able to tell stories with data I am working with.


Monday September 16, 2019 2:30pm - 3:00pm AEST
Pyrmont Theatre

2:40pm AEST

Using a template to collect interview notes for rapid upload and autocoding in NVivo
Carolyn Hooper (Allen and Clarke Policy and Regulatory Specialists)

We have all kinds of tools at our fingertips, yet many of us under-utilise them. If you have NVivo and want to get more out of it, learning how to develop a template in MS Word is a good way forward. In five minutes, I will show you how to do it, and you will never look back.

Chairs

Matt Healey

Senior Consultant, First Person Consulting
I've been working as an evaluation consultant since early 2014. In that time I've led or contributed to over 100 projects spanning sustainability, climate change, public health, health promotion, emergency management, energy efficiency, renewable energy, waste management, innovation... Read More →

Presenters

Carolyn Hooper

Evaluation + Research Senior Consultant, Allen + Clarke
I'm a social scientist, with a preference for qualitative methods. I've been working in evaluation for seven years and I love the variety of opportunities I get to make a difference. My doctorate in Public Health means I often work on health-related evaluations, examining interventions... Read More →


Monday September 16, 2019 2:40pm - 2:45pm AEST
C2.4

2:45pm AEST

"You seriously need to play more - Let's go! (Participatory design and facilitation with Lego Serious Play)"
Kahiwa Sebire  
It might look like just fun and games, but Lego Serious Play (LSP) is a powerful facilitation tool to enable groups to surface deeper-level assumptions about a topic or program. By supporting participants to think metaphorically to build and then communicate their idea or viewpoint, groups can achieve stronger and clearer communication.

Let me share an example of how I used LSP to help a team build a shared vision of success while uncovering competing assumptions in a safe and structured manner, along with ideas for how you could use it to construct program theories, define success criteria and gather participant insights.


Chairs

Matt Healey

Senior Consultant, First Person Consulting
I've been working as an evaluation consultant since early 2014. In that time I've led or contributed to over 100 projects spanning sustainability, climate change, public health, health promotion, emergency management, energy efficiency, renewable energy, waste management, innovation... Read More →

Presenters

Kahiwa Sebire

Director / MEval Student, eLumen/ University of Melbourne
Enthusiastic solution finder and life-long learner. Big fan of thorny questions, sticky notes and whiteboards. In my work, I'm exploring the intersection between learning and technology, and particularly how educational institutions can be purposeful about how they design for student... Read More →


Monday September 16, 2019 2:45pm - 2:50pm AEST
C2.4

3:30pm AEST

When the West Meets the East: Collaborative design, analysis and delivery of program evaluation in rural generalist training program in Japan
Takara Tsuzaki (Western Michigan University)

This presentation demonstrates a case study of a mixed-methods, bilingual program evaluation conducted on a newly launched rural medicine/rural generalist program in Japan, with a focus on collaborative and iterative learning processes. The client, GENEPRO LLC, and the evaluator will share challenges in designing and implementing the evaluation, and how we have been successful in building trust among stakeholders, integrating evaluation into practice, and fostering iterative learning within the organization.

The model - Rural Generalist Program Japan (RGPJ) - is based on the Australian model, which has been regarded as the most comprehensive and mature rural generalist medicine training scheme in the world. To meet the specific needs of rural generalist medicine in Japan, the provision of rural healthcare needed to be tailored to regional and local contexts. Exporting this medical training scheme from Australia to Japan also meant a new collaborative endeavor to develop a unique program evaluation model and approach in Japan.

This presentation will highlight the contextual differences between the East and the West in terms of philosophies and cultural values, and how they are manifest in evaluation practices. Both the theory and practice of evaluation have developed differently in Japan over the past 50 years compared to the West. Furthermore, evaluation has been conducted predominantly using quantitative data in the medical and healthcare sector in Japan. However, rural generalist medicine requires a distinctly broad scope of practice as well as a unique combination of abilities and aptitudes to respond to the community needs of rural and remote areas of Japan. As a result, the evaluation approach, including the underlying values, philosophies and methodologies, had to be thoroughly examined and openly discussed to bring all the stakeholders on board.

We will share the lessons from the collaborative evaluation process by discussing: what evaluative thinking and collaborative evaluation design mean in Japanese rural and medical settings; how we have come up with innovative approaches to communicate with stakeholders who have evaluation anxiety and fear of a modernist undertaking; how we have acknowledged and overcome (in)translatability issues in languages, embedded values, and social contexts of each stakeholder group; and how the collaborative evaluation processes impacted the organizational culture during and after the evaluation.

Chairs

Rebecca Arnold

Senior Project Officer - MERI, Department of Environment and Water Resources (SA)

Presenters

Takara Tsuzaki

Interdisciplinary Ph.D. in Evaluation, Western Michigan University
Takara Tsuzaki is a specialist in public relations, social policy research and evaluation. She has worked as researcher, consultant and evaluator for 15 years in the private, public, academic and not-for-profit sectors in Japan and the United States. Working extensively in the fields... Read More →


Monday September 16, 2019 3:30pm - 4:00pm AEST
C2.6

3:30pm AEST

How do we know? Implications of epistemology for evaluation practice
Gill Westhorp (Charles Darwin University)

What do we know? What can we know, and how do we know that we know it? These are philosophical questions with real implications for the practice of evaluation. Epistemology is the branch of philosophy dealing with the nature of knowledge. Different epistemologies underpin different approaches in research and evaluation. They have implications for what data is considered to be 'valid', how data can or should be collected, how data is analysed and interpreted, and under what conditions findings are portable to other contexts.

This paper deals with two epistemologies - realist and constructivist - from a realist viewpoint. Some authors have claimed that realists 'are realists ontologically, but constructivists epistemologically'. That is, realists believe that there is a real world, which exists independently of our interpretations of it ("realist ontology"). However, we all construct our own interpretations of it. Knowledge is not a direct representation of reality, but an interpretation of it, constructed in our own heads, and shaped by language, culture, personal experience, and previous learning ("constructivist epistemology"). Knowledge does not exist independently of 'the person who knows'. In radical constructivism, we cannot even be sure that there is a real world. Perhaps we are all just avatars in some giant computer game.

This paper argues that there are areas of overlap, but also areas of distinction, between realist and constructivist epistemologies. These distinctions have implications for evaluation practice. It will briefly describe the key assumptions of constructivism, and contrast these with key assumptions in realism. It will use a hypothetical evaluation as an example to discuss differences in: the purposes of constructivist and realist investigation; the nature of the data that is collected; the ways that analysis is undertaken; how 'valuing' is approached and how evaluation adds value; the nature of findings; and the portability of findings.


Chairs

Mardi Trompf

Founder and consultant, MarVAL Consulting
International development monitoring, evaluation, learning, project management and procurement. Successes include consolidation of disparate programs to prioritise impact, outcome based monitoring, measuring and delivering value for money.

Presenters

Gill Westhorp

Professorial Research Fellow, Charles Darwin University
Gill leads the Realist Research Evaluation and Learning Initiative (RREALI) at Charles Darwin University. RREALI develops new methods and tools within a realist framework, supports development of competency in realist approaches and provides realist evaluation and research services... Read More →


Monday September 16, 2019 3:30pm - 4:30pm AEST
C2.4

4:00pm AEST

Evaluating a place-based partnership program: Can Get Health in Canterbury
Amy Bestman (Health Equity Research & Development Unit (HERDU), Sydney Local Health District), Jane Lloyd (Health Equity Research & Development Unit (HERDU),  Sydney Local Health District), David Lilley (Health Equity Research & Development Unit (HERDU), Sydney Local Health District), Barbara Hawkshaw (Central and Eastern Primary Health Network)

This presentation wrestles with the balance between ensuring a robust community-led, inter-sectoral public health program in a culturally and linguistically diverse (CALD) location and effectively providing sufficient monitoring, evaluation, reflection and improvement opportunities while the intervention is in situ.

Can Get Health in Canterbury (CGHiC) is a unique inter-sectoral program with three key partners (the University of New South Wales, Sydney Local Health District and Central Eastern Primary Health Network) and many local partnerships with community organisations. It was established in 2013 to address high health needs among CALD population groups within Canterbury, NSW.
CGHiC's partnership with the community is supported by the employment of community networkers and the establishment of collective control projects. Bengali and Arabic networkers link the community with the health system, and also provide insight to the health system on the unique needs of the community. The collective control projects enable the community to have greater power over decision making, priority setting and allocation of resources. These projects aim to improve capacity of both community groups and the health system and encourage bi-directional learning and reflection.

Two external evaluations have previously been conducted, providing point-in-time reflections on the impact of the project. Now that CGHiC is in its sixth year of operation, we are evaluating the program in-house with the following foci: the external impact of the program; the governance structure, priority setting and decision making of the program; and the activities of the program. While this process is ongoing, the program team have implemented monitoring tools and processes to measure recent activities. The CGHiC evaluation will contribute to the field of evaluation through the development of novel methodologies, approaches and insights for evaluating complex place-based, multi-sectoral, population-level programs in situ.


Chairs

Rebecca Arnold

Senior Project Officer - MERI, Department of Environment and Water Resources (SA)

Presenters

Amy Bestman

Community Partnerships Fellow, UNSW
Dr Bestman’s work has been driven by a strong public health approach and has focused on the translation of research to practice and policy. Her research has focused on public health qualitative studies that address inequity in vulnerable populations such as children, disadvantaged... Read More →


Monday September 16, 2019 4:00pm - 4:30pm AEST
C2.6
 
Tuesday, September 17
 

11:00am AEST

Bringing values into evaluation: A tool for practitioners
Mathea Roorda (Allen + Clarke)

Values are fundamental to evaluation as they provide the basis against which evaluative judgments are made. Yet evaluators often overlook them. In this skill-building session, participants will be introduced to a framework intended to unbox dimensions of value for publicly funded programs. As the overall conference theme states, evaluation can be a gift - it has the potential to strengthen people's lives. Evaluation also comes with responsibilities, one of which is that the evaluator's judgments need to be based on all relevant values, not just those of the evaluation commissioner. The framework draws on two approaches to valuing: one comes from a branch of philosophy focused on value (how we understand concepts such as good and bad); the other describes value as understood by different program stakeholders. We will step through the framework's components and then discuss its applicability for evaluation practice. A handbook for using the framework will be made available to participants.

Chairs

Anne Markiewicz

Director, Anne Markiewicz and Associates
Anne Markiewicz is a leading monitoring, evaluation, and training specialist with over 20 years’ experience designing monitoring and evaluation frameworks and developing and implementing a range of evaluation methodologies. She has designed and delivered M&E capacity development... Read More →

Presenters

Mathea Roorda

Senior consultant, Allen + Clarke
Values inform what matters, and are at the heart of evaluation. You literally can't get to an evaluative judgement without them. I'm interested in approaches to systematically identifying what matters, and for whom. Come talk with me about the values identification matrix (VIM) I... Read More →


Tuesday September 17, 2019 11:00am - 12:00pm AEST
C2.4

11:30am AEST

Sharing perspectives and creating meaning through insider/outsider evaluation of an Aboriginal transfer of care program from hospital to community
Liz Norsa (Western Sydney University), Nathan Jones (Aboriginal Health Unit SWSLHD), Karen Beetson (Aboriginal Health Unit SWSLHD), An Speizer (Aboriginal Health Unit SWSLHD), Raylene Blackburn (Camden & Campbelltown Hospitals SWSLHD), Ilse Blignault (School of Medicine, Western Sydney University)

Aboriginal people with chronic conditions are more likely to leave hospital with incomplete transfer of care arrangements and more likely to be readmitted after a recent hospitalisation. The Aboriginal Transfer of Care (ATOC) Program at South Western Sydney Local Health District (SWSLHD), in which Aboriginal Liaison Officers and Transfer of Care nurses work as a team to deliver a holistic patient-centred model of care, was designed to address this problem by ensuring consideration of an Aboriginal patient’s medical, cultural and psychosocial needs. Promising early results led to a formal evaluation funded by NSW Health under its Translational Research Grant Scheme. SWSLHD, Western Sydney University and the Ministry of Health are partners in this mixed-methods evaluation. The qualitative evaluation component aimed to document the program model, describe what 'successful' transfer of care means for patients, their families and service providers, and identify opportunities for program enhancement and extension. The evaluation employed participatory methods, which involved over 40 interviews, participant observation and workshops at two hospitals. SWSLHD and the university members of the evaluation team brought insider and outsider perspectives: Aboriginal and non-Aboriginal; service manager or provider, and evaluator. This short presentation describes how the evaluation approach and ways of working were shaped by these different perspectives.


Chairs

Linda Klein

Deputy Director Research & Evaluation, GP Synergy
Linda Klein, BSc MSc PhD. I have an academic background in psychology and public health, with over 30 years of practical experience in evaluation in sectors spanning health, education/training and business. At GP Synergy, I take primary responsibility for the evaluation of educational... Read More →

Presenters

Liz Norsa

Research Officer, Western Sydney University
Liz Norsa is employed as a Research Officer at the Translation Health Research Institute at Western Sydney University. As a social and cultural anthropologist Liz has a particular interest in patient agency, wellbeing, the production of health/medical knowledge and ethnography within... Read More →

Raylene Blackburn

Aboriginal Liaison Officer, NSW Health
I am a proud Anaiwin & Dungutti woman and have lived in the area for 30+ years. I have been employed by Campbelltown Hospital for 10 years, as an ALO for 7 years. The role of ALO is very rewarding and knowing I am helping and supporting our community is the best feeling

Karen Beetson

Deputy Director Aboriginal Health, South Western Sydney Local Health District
Karen Beetson is a Manadandanji woman from Roma QLD and has lived and worked in the Dharawal community for most of her life. Karen has worked for over 35 years in Aboriginal Community Development and capacity building, beginning her career in Aboriginal Employment and Education... Read More →


Tuesday September 17, 2019 11:30am - 12:00pm AEST
C2.5

12:00pm AEST

Using Program Design Logic to manage the risk of program failure
Andrew Hawkins (ARTD Consultants)

This paper is about identifying, managing and mitigating the risk that a program will not produce its intended effects. A principle of this approach is that a program at its core is simply a proposition that a certain course of action will lead to a certain set of outcomes. It is about putting the logic back in program logic.

Program Design Logic (PDL) is a tool for developing evidence based policy and programs. Through the language of 'necessary' and 'sufficient' conditions in place of 'outputs' and 'outcomes' it provides a framework to determine if a program or course of action makes sense 'on paper' before we attempt to determine if it makes sense in 'reality' through monitoring and evaluation.

The five types of risk, illustrated across six failure scenarios, are:
  1. It doesn't make sense on paper - logical risk
  2. It makes sense on paper, but assumptions don't hold - assumption risk
  3. It makes sense on paper, but we didn't do what we said we would do - performance risk
  4. It makes sense on paper, assumptions hold, we do what we said we would do, but outputs don't materialise - theoretical risk
  5. It makes sense on paper, assumptions hold, we do what we said we would do, outputs materialise, but intended outcomes don't follow, so the array of outputs was not actually sufficient to bring about a desired future state - logical risk
  6. It makes sense on paper, assumptions hold, we do what we said we would do, outputs materialise, intended outcomes follow, but longer term outcomes don't materialise -external factor risk
This paper will discuss how a PDL approach can provide a comprehensive risk management framework before the first participant is even enrolled, which may then be managed and mitigated through program re-design as well as adaptive monitoring and evaluation.
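As a hypothetical illustration (not ARTD's actual PDL tool), the five risk types described above can be treated as a simple checklist that a design review works through before the first participant is enrolled. The sketch below is a toy example in Python; all names are invented for illustration.

# Toy sketch: the five PDL risk types as a design-review checklist.
from enum import Enum

class PDLRisk(Enum):
    LOGICAL = "Doesn't make sense on paper, or outputs are not sufficient for outcomes"
    ASSUMPTION = "Makes sense on paper, but assumptions don't hold"
    PERFORMANCE = "We didn't do what we said we would do"
    THEORETICAL = "Activities were delivered, but outputs don't materialise"
    EXTERNAL_FACTOR = "Intended outcomes follow, but longer-term outcomes don't materialise"

def outstanding_risks(assessment):
    """Return the risk types the design review has not yet addressed."""
    return [risk for risk, addressed in assessment.items() if not addressed]

# Example design review: only the 'on paper' logic has been checked so far.
review = {risk: False for risk in PDLRisk}
review[PDLRisk.LOGICAL] = True
print([risk.name for risk in outstanding_risks(review)])

The point of the sketch is simply that each risk type can be made explicit and tracked, so that monitoring, evaluation and program re-design effort can be directed at whichever risks remain unaddressed.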


Chairs

Ronelle Hutchinson

Director, PwC Economics & Policy

Presenters

Andrew Hawkins

Partner, ARTD Consultants
Andrew works as a trusted advisor and strategic evaluator for public policy professionals, generating insight and evidence for decision-making. Andrew has worked for a wide range of Australian and NSW public sector agencies and not-for-profits on the design, monitoring and evaluation... Read More →


Tuesday September 17, 2019 12:00pm - 12:30pm AEST
Pyrmont Theatre

12:00pm AEST

How to integrate intercultural considerations in evaluation debate and practice
Rini Mowson (Clear Horizon), Sarah Leslie (Clear Horizon)

The context within which an evaluand exists matters in evaluation. The AES evaluators' professional learning competency framework dedicates an entire domain to "attention to culture, stakeholders and contexts". Oakley, Pratt and Clayton (1998) argued that context should be treated as being at the very heart of social development, and that impact assessment must take full account of the bigger picture in arriving at conclusions about the success or failure of social development programs. Thus, adapting to and managing the evaluation "context" is important for balancing sustainable, impactful evaluation for end users/beneficiaries with satisfying the needs of the program team and/or evaluation commissioner.

This paper will seek to answer two questions: "What are the domains of context that evaluators need to be aware of?" and "How can evaluators adapt their practice to fit the context where they work?"
The presenters will draw on their experiences in evaluation in multicultural contexts through their work in international development.

The presenters propose three domains of context that evaluators should consider before embarking on an evaluation journey. Firstly, studies demonstrate the importance of applying basic principles of evaluation such as participation, community empowerment and communicating evaluation results back to beneficiaries; however, most evaluations are donor-driven exercises. Given this limitation, how can evaluators empower funding recipients to insist on the application of these basic principles? Secondly, how can evaluators address power dynamics in the evaluation process to ensure the results represent the real outcomes of the program across different types of beneficiaries? Thirdly, the presenters propose that all evaluations should find ways to support capacity building of relevant stakeholders, including beneficiaries and communities.


Chairs

Kathryn Dinh

MEL Consultant/PhD Candidate, UNSW
Kathryn is a monitoring, evaluation and learning consultant with more than 20 years of experience across Asia, the Pacific (including Australia) and the Commonwealth of Independent States. She specialises in evaluation for health and international development programs, advocacy evaluation... Read More →

Presenters

Rini Mowson

Consultant, Clear Horizon
Rini has been working in the international development and Australian community development sectors for more than thirteen years. She has worked with a broad range of social change initiatives and businesses globally. Rini has extensive experience in designing and implementing monitoring... Read More →

Sarah Leslie

Senior Consultant, Clear Horizon
Interested in MEL frameworks, evaluation and portfolio level MEL


Tuesday September 17, 2019 12:00pm - 12:30pm AEST
C2.1

2:30pm AEST

Communities of Practice, mentoring and evaluation advice: using soft power approaches to build capacity
Florent Gomez (NSW Department of Finance, Services and Innovation)

In the same way that some countries use culture as a soft power approach to extend their influence, evaluation should give serious consideration to soft capacity building tools such as Communities of Practice. This approach can be incredibly effective in diffusing evaluative thinking across organisations that are less familiar with it.

A New South Wales government department which is not a traditional stronghold for evaluation – as compared to human services departments such as education, health or community services – has run a successful Evaluation Community of Practice since November 2017. The Community of Practice brings together staff with varying levels of evaluation maturity to ‘share the love for evaluation’. The intent is to offer a more informal and less intimidating forum for participants to share challenges and learning than a traditional expert-to-learner approach. Over 50 people gather at each quarterly event, where presenters provide case studies, panel discussions and practical exercises such as collectively developing a program logic or crafting good survey questions.

After a year and a half, participants reported an increased understanding of what evaluation is about and of key tools such as program logic, as well as applying that learning back in their workplaces. The Community of Practice has opened up the conversation on evaluation across the organisation. While a slow and diffuse process, there is now a growing interest in evidence-based approaches, outcome framing and evaluative thinking.

Other soft power approaches used involve staff mentoring and evaluation advice. These have proved to be particularly powerful in improving the quality of evaluations – and are not necessarily much more resource intensive than formal training. Provided at the initial stage, targeted evaluation advice contributes to getting the evaluation framing right, which generates a better evaluation brief. This, in turn, results in better evaluation outcomes, where the evaluation produces evidence about what the organisation wants to learn.

Chairs

Carolyn Hooper

Evaluation + Research Senior Consultant, Allen + Clarke
I'm a social scientist, with a preference for qualitative methods. I've been working in evaluation for seven years and I love the variety of opportunities I get to make a difference. My doctorate in Public Health means I often work on health-related evaluations, examining interventions... Read More →

Presenters

Florent Gomez

Manager, Planning and Evaluation, NSW Department of Customer Service

Michelle Bowron

NSW Department of Customer Service
Currently working in the NSW Department of Customer Service and responsible for delivering evaluation activities and increasing evaluation awareness. I have an interest in learning more about evaluation approaches and the value it adds to existing and future business initiatives... Read More →


Tuesday September 17, 2019 2:30pm - 3:00pm AEST
C2.6

2:30pm AEST

Achieving successful outcomes through evaluation: A practical example of evidence-based practice for an Indigenous program
Janice Smith (Charles Sturt University), Shaarn Hayward (Charles Sturt University), Suellen Priest (Charles Sturt University), Christine Lindsay (Charles Sturt University)

The Indigenous Academic Success Program at Charles Sturt University offers a suite of academic services to Indigenous students to improve aspiration, retention, and success. The program has supported over 730 students enrolled across Charles Sturt University courses since its inception in 2016 and largely comprises Indigenous staff, who oversee the planning, evaluation, implementation, reporting, and improvement of the program.

The program is deeply embedded within the Indigenous community, with six of the seven permanent staff currently employed in the program identifying as Indigenous, and representing ten different Indigenous nations or language groups. Evaluative practices have been applied throughout program setup and delivery, through the use of program logics, quantitative and qualitative methodologies. A key evaluation method is the use of interviews to gather feedback from students who are using, have used, or been offered access to the service.

This presentation unpacks the program's evaluation process and design, detailing how the program's annual evaluation report determines the progress, outcomes, and development of the program in consideration of the student community the program works with. It will also consider how the evaluation of both participants and those who were invited but did not take up the offer of support has been conducted in a way that provides a safe and effective mechanism for Indigenous students to contribute. Feedback from students at all levels of engagement is positioned as central to understanding the program's progress and success, and this presentation will look at how the evaluation data has been used to measure the program's progress in reaching outcomes and to inform its improvements and future direction.

Chairs
avatar for Andy Moore

Andy Moore

Senior Advisor Performance and Evaluation, NZDF
I have had thirty years of experience in conducting, designing, and evaluating vocational training within NZ Defence. In that time, I have deployed overseas, predominantly in the role of coordinating aid programmes on behalf of NZ Defence. My current role is the Senior Adviser Performance... Read More →

Presenters
avatar for Janice Smith

Janice Smith

Tutorial Coordinator, Indigenous Academic Success, Charles Sturt University
Janice Smith has been in her current role with Charles Sturt University (CSU) since August 2016. She has previously worked in the Public Sector administering Indigenous Education Programs that focused on improving the educational outcomes for Aboriginal and Torres Strait Islander... Read More →
avatar for Kristy Saarenpaa

Kristy Saarenpaa

Coordinator, Program Evaluation and Reporting, Charles Sturt University
Kristy Saarenpaa has worked in the Higher Education sector for 17 years, predominantly in contract management and compliance, both in the domestic and international sectors at Charles Sturt University and the University of Newcastle. Most recently Kristy joined Charles Sturt's Division... Read More →


Tuesday September 17, 2019 2:30pm - 3:00pm AEST
C2.5
 
Wednesday, September 18
 

10:30am AEST

Front-end loading: The value of formative evaluation in setting program focus: a case study of the Australian Volunteers Program
Keren Winterford (University of Technology Sydney), Anna Gero (Institute for Sustainable Futures, University of Technology Sydney), Jake Phelan (Australian Volunteers Program)

This paper explores the practice of a formative evaluation for the Australian Volunteers Program and sets out why formative evaluation is valuable in setting program focus and defining approaches to impact evaluation. Reflections from the independent evaluators and the Monitoring, Evaluation and Learning team of the Australian Volunteers Program are provided within this presentation, drawing together multi-stakeholder and practitioner perspectives on the theory and practice of formative evaluation.

The overall objective of the formative evaluation presented in this paper was to map the global footprint of the Australian Volunteers Program in three impact areas in order to (i) establish a baseline; (ii) inform strategic options for strengthening engagement in the impact areas; and (iii) propose a methodology for demonstrating outcomes in the impact areas. The three impact areas of Inclusive Economic Growth, Human Rights, and Climate Change/Disaster Resilience/Food Security are informed by the Australian Government Volunteers Program Global Program Strategy. Rather than setting out evaluation findings, the paper explores the practice of collaborative evaluation design and the use of mixed methods, including key informant interviews, document review and quantitative analysis, to prepare working definitions of the impact areas. We explore the practice of drawing on local (country context) and global (Sustainable Development Goals) measures to define the impact areas, and how we have made sense of these in applying them to the Australian Volunteers Program.

The paper distinguishes the theory and practice of formative evaluation and sets out the unique contribution it offers to policy and programming agendas. It discusses the value of evaluation at multiple points in the project cycle and the value of linking formative and summative evaluations, as highlighted in this case. Informed by this case study, the presenters offer tips and tricks for those commissioning and conducting evaluations, to ensure formative evaluations make the best possible contribution to policy and programming agendas.


Chairs
avatar for Kiri Parata

Kiri Parata

President, Australian Evaluation Society
Kia ora I'm Kiri, living on Kabi Kabi Country on the Sunshine Coast of Queensland. I share my time between here and my home in Aotearoa New Zealand where my iwi (tribal) affiliations are Te Ātiawa, Ngāti Toarangatira, Ngāti Raukawa, Ngāti Ruanui, and Ngāi Tahu. I'm completing... Read More →

Presenters
avatar for Keren Winterford

Keren Winterford

Research Director, Institute for Sustainable Futures, University of Technology Sydney
Dr Winterford has 20 years of work experience working in the international development sector, in multiple capacities with Managing Contractors, NGOs, as a private consultant, and more recently in development research. She currently provides research and consultancy services for numerous... Read More →
avatar for Farooq Dar

Farooq Dar

Monitoring, Evaluation and Learning Advisor, Australian Volunteers International
Farooq has accumulated 15+ years of experience as an international development practitioner designing and managing complex multi-sectoral humanitarian and development programs/projects, working on governance, compliance and policy issues across various countries around Asia including... Read More →
avatar for Anna Gero

Anna Gero

Research Principal, University of Technology Sydney
Anna Gero is a climate change and disaster risk leader and specialist with over 10 years experience in the Asia-Pacific region. She is an experienced project manager, and has led climate change and disaster resilience consultancies, evaluations and research projects since 2008 across... Read More →


Wednesday September 18, 2019 10:30am - 11:00am AEST
C2.6

11:30am AEST

Confidence for evaluators: The unspoken skill
Matt Healey (First Person Consulting)

Typically, evaluators are seen and presented as all-knowing experts across a never-ending range of areas: quantitative, qualitative and mixed research methods, engagement approaches, cultural competencies, reporting tools and platforms. On top of this is the need to understand constant change within and across social, health and environmental arenas, exponential changes in technology, and the implications of all of this for evaluation. In many ways, it is impossible for evaluators to know everything - even more so for evaluators at the earlier stages of their careers.

During an emerging evaluators panel session at aes18, a key theme raised by emerging and early career evaluators discussing how to move out of the intermediate "fuzzy middle" towards becoming 'experts' was the need to be both comfortable with uncertainty and confident in themselves, their knowledge and their evaluation practice. While the AES competency framework emphasises competence in a range of areas, the need to be confident (and to develop confidence) is only implicit across the domains, and explicit only in the context of building confidence in others, and in statistical methods!

This session will draw on practices and principles from the presenter's own experience in developing confidence in the context of presenting, facilitating and dealing with large audiences. Through a mix of lightning talks, light-hearted hands-on activities and reflective small-group discussions, attendees will leave with tools and approaches they can apply immediately during and after the conference. Ultimately, this highly interactive skills session will make explicit the unspoken (but crucial) soft skill of confidence.

Chairs
avatar for Jess MacArthur

Jess MacArthur

Doctoral Candidate, Institute for Sustainable Futures - University of Technology Sydney
Jess is a doctoral student at the Institute for Sustainable Futures, University of Technology Sydney. Her research focuses on recognising and discovering innovative ways to measure how water and sanitation programming in South and Southeast Asia affects women. Jess specialises in... Read More →

Presenters
avatar for Matt Healey

Matt Healey

Senior Consultant, First Person Consulting
I've been working as an evaluation consultant since early 2014. In that time I've led or contributed to over 100 projects spanning sustainability, climate change, public health, health promotion, emergency management, energy efficiency, renewable energy, waste management, innovation... Read More →


Wednesday September 18, 2019 11:30am - 12:30pm AEST
C2.3

11:35am AEST

The whole box and dice: economic evaluation trends and forecasts
Mark Galvin (EY)

Recent government moves towards outcomes budgeting are the latest illustration that outcomes thinking is here to stay. Outcomes evaluation coupled with economic evaluation is increasingly common, and the two are increasingly interdependent, especially in the social policy and services space. With such anticipation, the risk of an empty box looms large. Demonstrating and valuing outcomes requires intentional, fit-for-purpose measurement approaches. Sharing those approaches is critical to further innovation and to supporting robust public decision making.

This Ignite presentation will showcase changes in the policy landscape, as well as visual depictions of evaluation methodologies that situate 'traditional' social outcomes as benefits and show how significant economic value is derived through effective service delivery and cost avoidance.


Chairs
avatar for Kiri Parata

Kiri Parata

President, Australian Evaluation Society
Kia ora I'm Kiri, living on Kabi Kabi Country on the Sunshine Coast of Queensland. I share my time between here and my home in Aotearoa New Zealand where my iwi (tribal) affiliations are Te Ātiawa, Ngāti Toarangatira, Ngāti Raukawa, Ngāti Ruanui, and Ngāi Tahu. I'm completing... Read More →

Presenters
avatar for Mark Galvin

Mark Galvin

Partner, EY
Finding solutions to complex problems. Rugby tragic. Beach holiday enthusiast. Dad to two awesome kids and Ollie the Irish terrier. I have over 20 years' experience as a professional advisory consultant and have spent much of my career advising state and federal governments, Ministerial... Read More →
avatar for Alain Nader

Alain Nader

Senior Manager, EY
Over the past ten years I have delivered strategic advice and implementation support to a number of government agencies, both State and Federal. Areas of particular interest include examining the roles and responsibilities of government, improving citizen outcomes and the allocative... Read More →


Wednesday September 18, 2019 11:35am - 11:40am AEST
C2.6

11:40am AEST

Using e-diaries to collect evaluation data
Carolyn Hooper (Allen and Clarke Policy and Regulatory Specialists)

During an intervention evaluation, front-line service delivery staff made periodic diary entries using an online portal. Diarists responded to prompts specific to the evaluation questions. The output provided valuable insights into the day-to-day realities of those delivering the intervention, giving front-line staff a strong voice in the evaluation report. The e-diary is an accessible, innovative method for collecting data, suited to situations where a detailed view of the work at the intervention delivery interface is valuable but direct observation by an evaluator is problematic. Come and see how we did it.

Chairs
avatar for Kiri Parata

Kiri Parata

President, Australian Evaluation Society
Kia ora I'm Kiri, living on Kabi Kabi Country on the Sunshine Coast of Queensland. I share my time between here and my home in Aotearoa New Zealand where my iwi (tribal) affiliations are Te Ātiawa, Ngāti Toarangatira, Ngāti Raukawa, Ngāti Ruanui, and Ngāi Tahu. I'm completing... Read More →

Presenters
avatar for Carolyn Hooper

Carolyn Hooper

Evaluation + Research Senior Consultant, Allen + Clarke
I'm a social scientist, with a preference for qualitative methods. I've been working in evaluation for seven years and I love the variety of opportunities I get to make a difference. My doctorate in Public Health means I often work on health-related evaluations, examining interventions... Read More →


Wednesday September 18, 2019 11:40am - 11:45am AEST
C2.6

12:00pm AEST

Evaluating system change: Exploring how project innovations transform business as usual
Adrian Field (Dovetail), Julian King (Julian King and Associates), Kate McKegg (The Knowledge Institute)

How do project innovations create changes in wider organisational systems and practice? This short paper will discuss our learning from evaluating three dynamic road safety projects working within an innovation umbrella programme.

This session will highlight the challenges and opportunities of taking innovation to scale, reflecting on our learning from theoretical approaches outside evaluation that offer compelling new windows into evaluators' understanding of impact and change. Grounded in the real-world application of three innovative road safety projects, the paper will present the inter-weaving of socio-technical systems theory, developmental evaluation, rubrics, and learning from the innovation literature.

These approaches were used, along with their practical application through rubrics and multiple data collection methods, to explore the extent to which the projects fostered innovation that translated into sustained business operations.

This paper will provide useful ideas and reflections for participants, including how collaboratively developed evaluation rubrics were used to define and assess the levels and dimensions of system change that each project could reflect against, through a developmental process of engagement and reflection.

We will also reflect on the contribution that collaborative partnerships, communities of practice, people-centred approaches, and reframing risk offer to evaluation practitioners as avenues for exploring the translation of innovation to system change. Socio-technical systems theory will be provided as a lens for understanding the potential for local or niche innovations to lever changes in wider systems.

The session will conclude with an exploration of the role of evaluation in capturing and catalysing innovation.

Chairs
avatar for Natalie Fisher

Natalie Fisher

Director, NSF Consulting
I am an evaluation consultant with more than 15 years of experience working for clients predominantly in the arts and cultural sectors but also in environmental sustainability and human services. In 2017 I graduated with a Master of Evaluation from the University of Melbourne (First... Read More →

Presenters
avatar for Julian King

Julian King

Director, Julian King & Associates
Julian specialises in evaluation and value for money. The Value for Investment evaluation system, developed through Julian’s doctoral research, combines evaluative and economic thinking to assess the quality, impact and value of policies and programs. Julian received the 2021 A... Read More →
avatar for Adrian Field

Adrian Field

Director, Dovetail
Adrian is the director of Dovetail, an Auckland-based evaluation consultancy, and a member of the Kinnect Group. Adrian has worked in evaluation in different capacities for some 20 years and doesn't really like how old that makes him feel. Adrian's experience traverses health, social... Read More →


Wednesday September 18, 2019 12:00pm - 12:30pm AEST
C2.4

1:40pm AEST

A live unboxing: The evaluation capacity building role
Liam Downing (Centre for Education Statistics and Evaluation)

In a session designed especially for those who LOVE watching those unboxing videos on YouTube, I will unbox, set up, and use a brand new evaluation capacity building role live on the AES 2019 stage. I will show you what's inside, how it works and what it can do. You can see if it's the right choice for you to build skills and grow the profession through capacity building. This Ignite presentation will also use props. PROPS!

Chairs
avatar for Duncan Rintoul

Duncan Rintoul

Director, Rooftop Social
ECB devotee, mentor in the AES group mentoring program, used to be on the AES board, still have heaps to learn. Keeping busy doing research and evaluation and facilitation work in education and justice and sustainability and health and you name it. Looking forward to catching up with... Read More →

Presenters
avatar for Liam Downing

Liam Downing

Manager, Evaluation and Data, Quality Teaching Practice, NSW Department of Education
Liam is an experienced and impactful evaluation leader, with 17+ years of experience. He is focused on ensuring that evaluation is rigorous in its design, meaningful in informing next steps, and driven by building the capacity of as many people as possible to engage deeply in evaluative... Read More →


Wednesday September 18, 2019 1:40pm - 1:45pm AEST
C2.6

2:00pm AEST

The dance of evaluation: Engaging stakeholders to develop an evaluation framework across a highly diverse training organisation
Racheal Norris (GP Synergy), Linda Klein (GP Synergy)

This presentation will outline the processes and challenges involved in developing an efficient evaluation framework, using a state-wide vocational training organisation as a case-study. GP Synergy delivers an accredited General Practice training program, across eight highly diverse subregions of NSW and the ACT, for doctors wishing to specialise as General Practitioners. A small Evaluation Team was established in 2017 to develop a rigorous, adaptive evaluation system to monitor and report on delivery of educational activities.

Using evidence-based methodology, the team adopted a participatory approach and engaged stakeholders across three key levels:

Education Executive
An interactive program logic workshop was held to discuss and identify evaluation priorities at the senior level.

Medical Educators
The team worked closely with individual educators to design evaluation tools that were standardised, yet responsive to the unique needs of each region. This involved careful consideration of psychometric properties to ensure robust and reliable measures of key outcomes. A semi-automated reporting system was created to maximise efficiency of delivering timely feedback, and the team guided educators to correctly interpret and utilise this information for continuous improvement.

GP Registrars
The team consulted with registrars (trainees) to explore and develop pathways to "close the loop" and communicate evaluation findings and implications for the training program. This also involved educating registrars about the broader theoretical framework behind evaluation and how to provide useful, constructive feedback.

Evaluation at GP Synergy remains an evolving process, with ongoing multi-level engagement ensuring evaluation systems continue to be responsive and adaptable to stakeholder needs. The role of the Evaluation Team in educating stakeholders and colleagues about the 'steps' of evaluation has been fundamental to successful data collection and to reflection on findings that leads to change. Insights will be offered to others developing evaluation frameworks and methods in settings where flexibility and responsiveness are key.


Chairs
avatar for Duncan Rintoul

Duncan Rintoul

Director, Rooftop Social
ECB devotee, mentor in the AES group mentoring program, used to be on the AES board, still have heaps to learn. Keeping busy doing research and evaluation and facilitation work in education and justice and sustainability and health and you name it. Looking forward to catching up with... Read More →

Presenters
avatar for Racheal Norris

Racheal Norris

Evaluations Officer, GP Synergy
Racheal is an Evaluation Officer within the NSW & ACT Research and Evaluation Unit of GP Synergy. Racheal is involved in the collection and reporting of feedback from GP registrar and supervisor development workshops. Racheal also contributes to the ongoing development of a broader... Read More →
avatar for Linda Klein

Linda Klein

Deputy Director Research & Evaluation, GP Synergy
Linda Klein, BSc MSc PhDI have an academic background in psychology and public health, with over 30 years of practical experience in evaluation in sectors spanning health, education/training and business. At GP Synergy, I take primary responsibility for the evaluation of educational... Read More →


Wednesday September 18, 2019 2:00pm - 2:30pm AEST
C2.6

2:00pm AEST

Exploring 'beyond the box': Applying implementation theory to evaluate a quality improvement project in Aboriginal and Torres Strait Islander primary health care
Alison Laycock (University Centre for Rural Health), Gillian Harvey (The University of Adelaide), Nikki Percival (University of Technology Sydney), Frances Cunningham (Menzies School of Health Research), Jodie Bailie (University Centre for Rural Health), Veronica Matthews (University Centre for Rural Health), Kerry Copley (Aboriginal Medical Services Alliance Northern Territory), Louise Patel (Aboriginal Medical Services Alliance Northern Territory), Ross Bailie (University Centre for Rural Health)

Implementation science examines which methods and strategies work to promote the uptake of research findings and other evidence in routine practice, in order to improve the quality and effectiveness of health services and care. It explores, for example, how health interventions can be adapted and scaled in ways that are accessible and equitable and improve health. Implementation science can provide important knowledge for improving Aboriginal and Torres Strait Islander health; however, little research addresses how implementation theories or frameworks have been applied to evaluate projects and programs in Indigenous health.

Drawing on developmental evaluation data, we used the integrated Promoting Action on Research Implementation in Health Services (iPARIHS) framework to examine factors contributing to the success, or otherwise, of a large-scale interactive dissemination project. The project engaged stakeholders with continuous quality improvement data from Aboriginal and Torres Strait Islander primary health care services to co-produce knowledge for improving care.

In this presentation, we describe how we selected and applied this theoretical framework as an evaluation tool. We examine the extent to which use of the framework enhanced our understanding of project interactions, limitations and success in the Aboriginal and Torres Strait Islander health care context and influenced our ongoing work to improve health.


Chairs
avatar for Peter Ellis

Peter Ellis

Director, Nous Group
Professionally I'm both an evaluator and a statistician, with a particular interest in using evidence to improve public sector outcomes. While I'm now in consultancy, I've previously run evaluation functions, including as Director Program Evaluation for AusAID, and Manager Tourism Research... Read More →

Presenters
avatar for Alison Laycock

Alison Laycock

PhD Candidate, Menzies School of Health Research
Alison is an evaluator and PhD candidate at Menzies School of Health Research and the Centre for Research Excellence in Integrated Quality Improvement in Indigenous primary health care. At aes19, Alison is presenting the evaluation of a collaborative knowledge translation project... Read More →


Wednesday September 18, 2019 2:00pm - 2:30pm AEST
C2.4

2:30pm AEST

Unpacking the competencies - among commissioners, managers and evaluators
This presentation is a merge of the two sessions listed below and features a free-ranging discussion on evaluator competencies.


Advanced Tips For Commissioning and Managing High-Quality, Useful Evaluation 
Jane Davidson (Real Evaluation LLC), Tessie Catsambas (Encompass LLC)

What are the most important traps to avoid and tips for commissioning and managing high-quality, value-for-money evaluation? This interactive panel session will be an informative helicopter tour for evaluation commissioners, evaluation team leaders, and internal and external professionals who oversee or manage evaluation projects. It will provide: (1) a deeper appreciation of the role of evaluation management in commissioning and delivering high-quality, value-for-money evaluations; (2) an overview of the role and essential competencies of evaluation managers; and (3) sample strategies and tools for commissioning and managing better and more useful evaluations for organizational learning and stronger leadership. Participants are invited to share their own experiences and engage in a highly interactive discussion with the presenters, who will draw on decades of practical experience leading both large international multi-country evaluations and small-team and solo evaluation projects, as well as providing advice to client organizations on how to scope, commission, and manage highly effective evaluations.

Unpacking the competencies - in theory and practice
 
Amy Gullickson (University of Melbourne Centre for Program Evaluation), Delyth Lloyd (Department of Health and Human Service), Sue Leahy (ARTD)

The AES Professional Learning Competency Framework was developed in 2012. In 2019, the Learning and Professional Practice Committee engaged the AES community in research with the intention of updating the competency set. The goal was to assess the framework against what evaluation theorists have discussed in the literature about the skills and knowledge needed for evaluation practice. In this interactive session, we'll report on recent theoretical work in this area and the findings to date of the community research project, seek community feedback on the findings so far and their relevance to evaluation practice, and discuss next steps.

Chairs
avatar for Amy Gullickson

Amy Gullickson

Associate Professor, Director, Centre for Program Evaluation, The University of Melbourne
Associate Professor Amy Gullickson is the Director of the University of Melbourne Centre for Program Evaluation, which has been delivering evaluation and research services, thought leadership, and qualifications for more than 30 years. She is also a co-founder and current chair of... Read More →

Presenters
avatar for Jane Davidson

Jane Davidson

Real Evaluation LLC
Dr. Jane Davidson is best known for pioneering the increasingly popular Evaluation Rubrics Methodology, along with her various other refreshingly practical evaluation frameworks and approaches. Originally from Aotearoa New Zealand, Jane is former Associate Director of The Evaluation... Read More →
avatar for Sue Leahy

Sue Leahy

Managing Director, ARTD Consultants
Sue is an accomplished evaluator, policy analyst and facilitator and managing Principal at ARTD, a leading evaluation and public policy company based in NSW. She joined ARTD in 2009 from the NSW Department of Family and Community Services, where she managed a wide-ranging program... Read More →
avatar for Delyth Lloyd

Delyth Lloyd

Manager, Centre for Evaluation and Research Evidence, Department of Health, Vic
This year I'll be chasing interesting new ideas in capacity building, competencies, and facilitation. You'll find me on a quest to up my game in systems thinking, eval theory and methods, and pondering the intersection between rigour and pragmatic evaluation.


Wednesday September 18, 2019 2:30pm - 3:00pm AEST
Pyrmont Theatre

2:30pm AEST

Operationalising systems-thinking approaches to evaluating health system innovations: The example of HealthPathways Sydney
Carmen Huckel Schneider (University of Sydney), Sarah Norris (University of Sydney), Sally Wortley (University of Sydney), Angus Ritchie (University of Sydney), Fiona Blyth (University of Sydney), Adam Elshaug (University of Sydney), Andrew Wilson (University of Sydney)

There have been increasing calls to take a systems-thinking approach to evaluating health policies and programs - acknowledging the complexity of health systems and the many actors, institutions, relationships, drivers and values that impact on health system change. Several key frameworks have emerged that support systems-thinking, including "WHO's Framework for Action"; "NASSS - Non-Adoption, Abandonment, and Challenges to Scale-Up, Spread and Sustainability"; and the "Vortex Model". However, little has been written on how to operationalise systems framework elements into practical evaluation studies comprising methodologically rigorous data collection and analysis methods - all while staying true to the principles of systems-thinking.

In this presentation we seek to unbox the challenge of operationalising a systems-thinking approach to evaluating healthcare delivery innovations. We use the NASSS framework as our example to demonstrate how to expand systems-thinking frameworks, progress towards theories, and pose systems-thinking-driven yet researchable questions. This requires crossing epistemological boundaries and taking a 'multiple studies' approach that adopts various methods of inquiry. We report on applying these principles to evaluate HealthPathways Sydney, a website that helps GPs navigate care pathways for their patients through primary and specialist care. We followed a two-phase approach, beginning with a series of sub-studies using standard qualitative and quantitative methods, and reflected on the conduct of these studies to pinpoint the system-level factors (macro contexts, institutional settings, critical events, agents and relationships) that needed to be understood in order to determine how the innovation interacted with the system. Our second phase adopted systems-thinking study methods including geo-spatial mapping, social network analysis, process tracing, frames analysis and situational analysis. Results were then synthesised into a rich case study of the introduction of an innovation into the system. We uncovered progress towards desired outcomes, but also barriers to consolidating and embedding the technology when other system factors were in play.


Chairs
avatar for Duncan Rintoul

Duncan Rintoul

Director, Rooftop Social
ECB devotee, mentor in the AES group mentoring program, used to be on the AES board, still have heaps to learn. Keeping busy doing research and evaluation and facilitation work in education and justice and sustainability and health and you name it. Looking forward to catching up with... Read More →

Presenters
avatar for Carmen Huckel Schneider

Carmen Huckel Schneider

Senior Lecturer, Health Policy, University of Sydney
I am Deputy Director at the Menzies Centre for Health Policy, and Program Director of the Master of Health Policy at the University of Sydney. I am co-lead of the Health Governance and Financing, and Applied Policy Analysis Groups at the Menzies Centre for Health Policy, a Senior... Read More →
SN

Sarah Norris

Senior Research Fellow, Menzies Centre for Health Policy
How broader approaches to evaluation can be applied to health technology evaluation, and vice versa.


Wednesday September 18, 2019 2:30pm - 3:00pm AEST
C2.6

2:30pm AEST

Buddhist Evaluation: Thinking outside the box of Western-derived methods
Kathryn Dinh (UNSW), Heather Worth (UNSW), Bridget Haire (UNSW)

The field of evaluation tends to be dominated by certain Western-derived understandings of the way the world works, and an underlying belief that these understandings are universal. Culturally responsive evaluation recognises the existence of diverse world views, and some of its exponents argue that it needs to encompass more than simply working in close collaboration with locally-based partners: it should also involve modifying existing evaluation approaches and creating new ones that are grounded in non-Western world views.

While there has been significant innovation in evaluation approaches that reflect Indigenous world views in Australia, New Zealand, the US and elsewhere, there has been less progress in reflecting the world views of South-East and East Asia. Buddhism has a significant global influence today, particularly in these regions, where it is practised by a large majority of the population.

In this presentation, we suggest an applied approach to culturally responsive evaluation, first analysing the world views underpinning Buddhism and the Most Significant Change (MSC) technique, a participatory monitoring and evaluation method that involves collecting stories of significant change. We then identify where these converge and diverge, and suggest practical ways in which the MSC technique could be adapted to reflect a Buddhist world view. Finally, we look at how, in a globalised world, societies are made up of a complex and dynamic mix of values, philosophies, traditions, religions and cultures. We discuss how, as evaluators, we can use this approach to work with locally-based colleagues to unpack the theory and value systems underpinning existing evaluation methods, and repackage those methods or create new ones that reflect, and are responsive to, the complex and dynamic world views of the local context being evaluated.


Chairs
avatar for Karen Fisher

Karen Fisher

Professor, Social Policy Research Centre UNSW
I conduct research and evaluation about disability and mental health policy in Australia and China. I use inclusive methods with people with disability, families and other stakeholders. I am enjoying extending that to film, photos and other accessible methods to ensure the evaluations... Read More →

Presenters
avatar for Kathryn Dinh

Kathryn Dinh

MEL Consultant/PhD Candidate, UNSW
Kathryn is a monitoring, evaluation and learning consultant with more than 20 years of experience across Asia, the Pacific (including Australia) and the Commonwealth of Independent States. She specialises in evaluation for health and international development programs, advocacy evaluation... Read More →


Wednesday September 18, 2019 2:30pm - 3:00pm AEST
C2.2
 