
Evidence-based approaches and experiments – Towards progressive policymaking?

Evidence-based policymaking is currently a hot topic. How can we incorporate evidence into policies in a meaningful way? Possible approaches include the systematic trialling of policy measures, better post-monitoring of law reforms and human-centred service development.

The first-time applicant quota introduced at Finnish universities last year has proven somewhat counterproductive: applicants are finding tactical ways to play the system. Such unexpected consequences are a well-known problem in policymaking. What is crucial in this case, however, is that the risk of tactical applications was known in advance. This case is a good example of the problems that can arise in evidence-based policymaking. No one benefits from inadequately planned policy measures.

Interest in evidence-based policymaking has grown in recent years. It is a topic frequently discussed within the EU[1] and at institutions such as the OECD and the UN. In Finland, the debate has reached various think tanks and research institutes (VATT, THL, Sitra, Demos Helsinki). A report by Kari Raivio[2] on the Finnish science advisory system was published by the Prime Minister’s Office in 2014, and the launch of the Strategic Research Council is part of the same development. At its core, evidence-based policy is a response to the slowness and complexity of policymaking. The debate is also characterised by a focus on legislative and regulatory development and by enthusiastic support for social policy experiments. However, more work is needed to form a comprehensive picture.

Despite the success of science and evidence, it should be noted that the experiential and multifaceted nature of reality cannot always be moulded into scientific models, which, by definition, are simplified representations of reality. Moreover, solutions[3] to what are known as wicked problems depend on how the problems are framed, and what works in one place can be ineffective in another. One example of this is a highly successful project carried out in the 1990s in Tamil Nadu, India, to combat malnutrition among children. The same concept was then copied in Bangladesh – with poor results.[4]

Even when researchers reach a consensus on the best solution, policymakers’ receptiveness to the evidence can vary: the availability of knowledge does not automatically translate into the ability, skill or even willingness to use it in policymaking. Decision-makers rely on rules of thumb, values and intuition to navigate a deluge of information and political pressure.[5]

The purpose of this article is to take a modest first step away from bemoaning the challenges described above. We believe that evidence can improve political decision-making as long as it is appropriately incorporated into political processes.

Randomised controlled trials to serve policy planning

Dialogue between politics and science is often framed as an either-or choice: either there is a single scientific solution, or the solution is purely political. What if we were to let go of this dichotomous paradigm for a change? The interaction between evidence provision and policymaking should instead be seen as an iterative learning process.

One of the characteristics of a learning process is that we cannot know everything in advance. Scientific evidence is more or less context-dependent, which means that previous evidence – “it worked there” – cannot be taken to directly show how the same policy would work here. Because it is impossible to know all possible effects in advance, we need to experiment. Experimentation does not mean aimless shots in the dark. Major policy areas can be systematically broken down into smaller, more manageable parts for experimentation. The best outcomes can then be scaled back up at the system level.[6]

Interesting policy measures can be trialled in a highly controlled manner, and the results can be used for learning. For example, ministries could use some of their research funds (211 million euros[7]) for experiments; processes such as these have been proposed by Demos Helsinki[8] and others. Some experiments could be carried out outside the regular law-making process, allowing ministries to develop their knowledge base, while the discussion on experimentation should also become part of the law-making process itself.

Randomised controlled trials have been at the core of the debate on societal experiments.[9] For scientists, they provide the ideal model for knowledge acquisition. Randomised controlled trials can sometimes be fully integrated into policy planning. For example, in the case of major reforms such as a universal basic income, extensive – and expensive – trial systems seem well justified.

However, this does not mean that every reform should be tested in the same way as a universal basic income. For less extensive reforms, small-scale randomised controlled trials could be conducted to obtain indications of how a measure would work in a specific, limited context. Carefully targeted randomised controlled trials, combined with other forms of knowledge acquisition, could provide nearly the same level of reliability as large-scale experiments, but at a lower cost. One example of this is the mechanism experiment, which tests only the specific mechanism that is expected to produce the intended results, not the full policy measure.[10]
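To make the logic of a small-scale randomised controlled trial concrete, the minimal sketch below simulates one in Python. It is only an illustration of the general principle, not part of any actual trial design: the participant count, outcome scale and assumed effect size are all hypothetical. The point is that random assignment allows the effect of a measure to be estimated simply as the difference between the treatment and control group means.

```python
import numpy as np

# Purely illustrative sketch: all numbers are hypothetical, not from any real trial.
rng = np.random.default_rng(seed=1)

n = 200  # participants in the small-scale pilot

# Randomly assign half of the participants to the treatment group.
treated = rng.permutation(np.repeat([True, False], n // 2))

# Hypothetical outcomes: assume the measure raises the outcome by 2 units on average.
outcome = rng.normal(loc=10.0, scale=3.0, size=n) + np.where(treated, 2.0, 0.0)

# Randomisation makes the simple difference in group means an unbiased
# estimate of the average effect of the measure.
effect = outcome[treated].mean() - outcome[~treated].mean()
se = np.sqrt(outcome[treated].var(ddof=1) / (n // 2)
             + outcome[~treated].var(ddof=1) / (n // 2))

print(f"estimated effect: {effect:.2f}, approx. 95% CI: "
      f"[{effect - 1.96 * se:.2f}, {effect + 1.96 * se:.2f}]")
```

In a mechanism experiment of the kind cited above, the measured outcome would target only the specific behavioural mechanism of interest rather than the full policy measure.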

Human-centred evidence for everyday policy work and governance

The discussion on evidence-based policy often gets stuck on arguments about whether randomised controlled trials work. We believe that the possibilities for evidence-based policy extend far beyond the use of such trials.

Instead of expecting that all information used in policy planning will meet the requirements of an evidence hierarchy[11], evidence should be seen as pluralistic. Requirements on scientific rigour can be scaled up or down depending on what makes sense in relation to the objectives of the reform in question.

The pluralistic conception of evidence places more emphasis on the human perspective. The use of human-centred evidence is not in conflict with randomised controlled trials. In addition to seeking to understand how things generally work, the idea is to learn about the processes and mechanisms that underpin them. The effectiveness of policies crucially depends on how well those processes and their context are understood. This calls for both quantitative and qualitative research methods.

In Finland, social policy development has traditionally been system-focused. A good example of this is the student financial aid reform, in which the student housing supplement was replaced by the general housing allowance. The system became more streamlined but worsened the situation for students. Evidence-based policy should also mean that services are co-developed from a human-centred starting point. In practice, this could involve using the tools of service design, for example. Service users are in the best position to know what works.

An experimental society promises a shift away from a top-down, expert-led approach. Old, rigid institutions are unable to respond to new challenges, and the current debate on the culture of experimentation is likely to attract mainly those who already have an interest in these subjects. Change happens by getting people involved in the experiments, which also increases the legitimacy of institutions. Legitimacy is further supported by grassroots experiments such as Nopeat Kokeilut (“Agile Piloting”) in Helsinki, which can be used to test less extensive interventions for which in-depth scientific evidence is not needed.

The human-centred approach is also linked to the ideal of behavioural social policy, in which people’s real-world behaviour is taken into account in the planning of policy measures. The notion of the fully rational agent has faded over the decades; in the new model, people’s systematic irrationalities are taken into account as far as possible. In the UK, the Behavioural Insights Team has achieved both savings and more effective outcomes by drawing on behavioural science.

With this in mind, if the aim of a law reform is to change people’s behaviour, its preparation should perhaps focus more on behavioural mechanisms. Various easy-to-understand frameworks exist for this purpose, such as the Behavioural Insights Team’s EAST framework (Easy, Attractive, Social, Timely).

Better evidence from interaction and post-monitoring

Alongside experiments and evidentiary pluralism, it is also important to pay attention to the more effective use of evidence in existing institutions. For example, the breadth of the knowledge base underpinning legislative proposals depends greatly on how well the parties involved in the preparatory work communicate with each other.

From the point of view of the quality of the preparatory process, one recognised problem is that the knowledge base is often mapped too late. The contents of a legislative proposal are often decided before any broader consultations with experts and stakeholders have even begun. If the solution is decided before the knowledge base has been compiled, there is a risk that what is meant to be evidence-based policy becomes, in fact, policy-based evidence, with only data that supports the chosen solution being selected.[12]

If one were to choose a specific point in the policymaking process that could benefit from an extra “injection” of evidence, it would be the planning stage of legislative proposals and other policy measures. Planning and preparatory work should also be steered towards a more interactive approach. Increased interaction is also needed between administrative branches. The debate should also look at whether sectoral ministries are the answer to dealing with wicked problems.

One example of progress in interactive legislative drafting is the postal service bill prepared by the Ministry of Transport and Communications, which included a public consultation to probe opinions on changes needed in the existing law before the proposals were formulated. Based on the consultations, a report on possible alternatives was produced and sent back to stakeholders for a round of comments. It was not until this stage that the government proposal for the bill was drawn up.[13]

It should also be noted that even if the impacts of a legislative measure cannot be assessed in advance, for example through trials, each and every law reform is an experiment in itself. It is considerably easier to study the effects of past reforms than to predict future outcomes. For that reason, systematic effort is needed in the post-monitoring of law reforms.[14] This could be another way to build the evidence base of the political process alongside experiments. The measurement of the impacts of projects in the social and healthcare sector also deserves more attention.

There is no single progressive approach, nor should we seek one. Similarly, there is no single solution for any problem in today’s society. Social progress in the 2020s will require the ability to experiment, learn from mistakes and engage citizens in planning a new society.

 

[1] European Commission (2015) Strengthening Evidence Based Policy Making through Scientific Advice: Reviewing existing practice and setting up a European Science Advice Mechanism.

[2] Raivio, Kari (2014) Näyttöön perustuva päätöksenteko – suomalainen neuvonantojärjestelmä. Valtioneuvoston kanslian raporttisarja 3/14.

[3] Rittel, Horst & Webber, Melvin (1973) Dilemmas in a General Theory of Planning. Policy Sciences (4), 155–169.

[4] Cartwright, Nancy & Hardie, Jeremy (2012) Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford University Press.

[5] On the psychology of decision-making, see e.g. Cairney, Paul (2016) The Politics of Evidence-Based Policy Making. Palgrave Macmillan, pp. 24–27.

[6] Berg, Annukka (2016) Kokeilujen muotoilu, tekeminen ja tilaaminen – järjestystä kokeiluviidakkoon.

[7] Raivio, Kari (2014) Näyttöön perustuva päätöksenteko – suomalainen neuvonantojärjestelmä. Valtioneuvoston kanslian raporttisarja 3/14.

[8] Demos Helsinki (2015) Design for Government: Human-centric Governance Through Experiments.

[9] The most recent Finnish contribution is VATT Policy Brief 1/17: Joko Suomessa koittaisi satunnaiskokeiden aika?

[10] Ludwig, Jens & Kling, Jeffrey & Mullainathan, Sendhil (2011) Mechanism Experiments and Policy Evaluations. Journal of Economic Perspectives, 25(3), 17–38.

[11] An idea originating in evidence-based medicine, according to which RCTs and meta-analyses based on them constitute the best evidence. It rests on the idea that randomisation enables causal analysis. For a critique, however, see Ylikoski, Petri (2015) Hyviä ja huonoja perusteluja kokeelliselle sosiologialle. Sosiologia 3(15), 204–211.

[12] On this problem in the context of Finnish legislative drafting, see e.g. Pakarinen, Auri (2011) Lainvalmistelu vuorovaikutuksena. Analyysi keskeisten etujärjestöjen näkemyksistä lainvalmisteluun osallistumisesta.

[13] See Government proposal HE 272/2016, pp. 60–61.

[14] This view has also been taken by, among others, the National Audit Office of Finland; see its report 3/2013, Lainvalmistelun laatu ja kehittämistarpeet, pp. 33–36.
