
Five critical points for evaluation management

What should one consider when commissioning and managing an evaluation of the impact of an organisation? Here are five critical points to weigh before evaluation work begins.

Writers

Jari Hyvärinen

Senior adviser, Business Finland


The need for evaluation knowledge is widely acknowledged across many kinds of organisations. To improve the quality and relevance of evaluations, it is important to build the capacity of evaluation commissioners and managers and to share good practices and lessons learned.

We have highlighted a number of critical points to consider when commissioning and managing evaluations. Our remarks are based on our experiences, mostly of evaluations from Business Finland (formerly Tekes) and the Finnish Innovation Fund Sitra, the latter of which promotes systemic change toward sustainable well-being. Both organisations have commissioned dozens of evaluations over the years. By identifying these critical points, we want to strengthen the ability of evaluation commissioners to carry the evaluation process through, from the planning of tenders to the stage where lessons are learned and results are successfully reported. The list is not exhaustive or detailed (you can find more specific tasks here), but it is a start for further elaboration and discussion.

1. Define the role of evaluation in the organisation

Evaluation management is about working through the process of planning and implementing the evaluation. It is about connecting the critical points during the evaluation process and thus building a bridge between 1) evaluation and 2) strategy and operational work. The planning of an evaluation starts with defining the role and purpose of evaluation and how it is linked to organisational elements, and then communicating these links thoroughly to the participants.

From an organisational point of view, a strategy is a good starting point for defining what gets evaluated, what the scope of the evaluation is and how the evaluation is used in strategy design and implementation processes. As Hallie Preskill reflected at the AEA Conference in Washington D.C. in November 2017, the understanding of the intersection between evaluation and strategy has increased during the past decade. This understanding is influencing evaluation practices and, above all, how an evaluation is defined. And when evaluation is intertwined with strategy processes, new kinds of internal evaluation capacity and resources are needed.

An evaluation of the impact of an organisation should focus on what the organisation is aiming to achieve. At Sitra, the purpose of impact evaluation is to produce high-quality and unbiased information on the impact of Sitra’s activities on Finnish society. The starting point of the evaluation is Sitra’s shared goals for impact, which guide the work to promote sustainable well-being in society. In evaluation design this means that the focus of impact evaluation has shifted from individual projects’ outputs and outcomes to an analysis of what the organisation’s contribution to systemic change has been.

Business Finland (formerly Tekes) has been systematically developing monitoring and evaluation activities for the last two decades. Evaluation of Tekes funding is based on an overall impact model. The model describes how the impact of Tekes funding is created and how this impact can be seen in funded projects, project results and outcomes, and eventually in industry, the economy and society. The main methodology is called “additionality”: the leading idea is to analyse how public-sector intervention in research, development and innovation activities increases the efforts of the private sector.
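
As a rough illustration of the idea (a generic sketch of input additionality, not Business Finland’s exact specification), additionality can be estimated by comparing the change in private R&D effort among funded firms with the change among comparable unfunded firms, for instance in a difference-in-differences setting:

$$\text{Additionality} = \left(\bar{Y}^{\text{funded}}_{\text{after}} - \bar{Y}^{\text{funded}}_{\text{before}}\right) - \left(\bar{Y}^{\text{control}}_{\text{after}} - \bar{Y}^{\text{control}}_{\text{before}}\right)$$

where $\bar{Y}$ denotes average private R&D investment. A clearly positive value suggests that public funding has crowded in additional private effort; a value near zero or below suggests that it has mainly substituted for investments firms would have made anyway.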

2. Choose the most fitting evaluation toolkit

There are many different approaches, methods and tools available for evaluation. How can a commissioner know what to choose and how to order the best-fitting evaluation? The most important part of evaluation design is formulating the tasks and questions that the evaluation needs to answer. We cannot stress enough that the methodology must serve the evaluation’s purpose and questions, and not vice versa. Although some methods may at times seem more popular than others, the appropriateness of a method depends strongly on the phenomena and contexts being evaluated. The same approaches and methods seldom fit all situations, whether the phenomena or interventions involved are simple, complicated or complex.

Evaluation commissioners have a crucial role to play in encouraging and enabling evaluators to use new and innovative methods. In a procurement process, for example, it is important to define in advance the methodological approach and guiding principles so that the design respects the purpose and context. Specific methods, however, can be defined together with the evaluators during the evaluation process, and they may sometimes need to be adjusted along the way.

3. Understand the context

Context matters when defining and designing an adequate evaluation. It is therefore necessary that the evaluation manager properly understands the nature of the action being evaluated. A good basis is to define the theories of change: the short- and long-term outcomes and the impact on society, and how they are expected to come about.

Evaluation procedures have largely been developed in project and programme contexts. However, when evaluating the impact of an organisation in a complex and highly connected world, we need to understand impact paths holistically. For example, one of the challenges in assessing the impact of research, development and innovation (RDI) funding is assessing the impact on broader societal changes. The challenge is to evaluate the impact on broad industrial and service changes or on ecosystems (such as digitisation, cleantech, bioeconomy, health and well-being) at the macroeconomic level when funding decisions are usually made at the project level.

From the evaluator’s point of view, evaluation of RDI funding traditionally starts with project data, and the results are then aggregated into impacts that are relevant for the economy and society as a whole. There is therefore a danger that the impact results and recommendations are too general to be used appropriately in internal strategic decision-making. We see a need for more explicit ecosystem-level programme goals, i.e. explicit goals for societal impact, on which evaluators could focus when carrying out the impact analysis. New kinds of evaluation approaches and methodologies are also needed to better capture systemic changes.

4. Make sure that evaluation serves learning

The crucial question for evaluation design and implementation is how an evaluation, both as a process and through its results, serves organisational learning. Our own experience shows that continuous interaction between the evaluation manager and the evaluators throughout the process is important: it helps to make sure that the evaluation serves the organisation’s needs, that it answers the questions that are relevant to the organisation and that the evaluators have adequate knowledge available to them. It is also important to identify forums for organisational learning where the evaluation findings can be shared and put to best use at the right time. The independence of the external evaluators does not mean that they should remain distant and disconnected from the object of evaluation; on the contrary, impact evaluation can benefit from learning- and development-oriented approaches.

5. Support utilisation

A good evaluation is useful for its intended users. From the viewpoint of an organisation and its management, an evaluation’s role is usually to support decision-making and enable learning and development by providing relevant knowledge and constructive feedback. A major part of evaluation design is thinking about how the evaluation process can serve reflection and knowledge production among stakeholders, and how to engage them during the process. What kinds of competences do evaluators need to accomplish these tasks? And what kinds of forums support the use of evaluation knowledge? No matter how excellent the final evaluation report is, facilitated forums are vital for helping decision-makers and other users make the most of the results.

Finally, the evaluation manager needs to choose the most useful way of reporting the results. Is a traditional report always the most effective way to deliver what has been learned? The form of reporting depends strongly on the purpose of the evaluation, on any formalities such as funding conditions, and on the intended audience and users of the evaluation results. Whatever form the reporting takes, the basic elements – the results, conclusions and recommendations – must be clearly reported and presented, and it should be easy to understand how the evaluation questions have been answered and how the results have been derived from the data.

First and foremost, it is the evaluator’s duty to ensure the quality and standards of the evaluation. At the same time, evaluation managers should not accept poor quality either.
