Behavioural insights for effective regulation (3): Evidence and experiments

Now that we have a better understanding of human behaviour and have looked at some examples of regulatory interventions using insights from the behavioural sciences, it is time to ask the hard question: does it work?

Answering the ‘does it work’ question is anything but easy because of the variety of responses to it in the literature. Here, I let two strands of the literature speak to each other. I give voice to organisations dedicated to developing, testing and marketing these interventions. They include, but are not limited to, the UK-based Behavioural Insights Team and the US-based Office of Evaluation Sciences (let’s call them ‘BITs and Nudge-units’). And I give voice to the broader academic community that has been scrutinising these interventions for some decades now.

 

BITs and Nudge-units: it often does

BITs and Nudge-units were first introduced around 2010, under the Cameron Government in the UK and the Obama Government in the USA.[1] They develop public policy by drawing on ideas from the behavioural science literature, and they have seen considerable growth and expansion. The original UK-based BIT, for example, started off with a handful of staff and is now an organisation of around 150 employees, with offices in London, Manchester, New York, Singapore, Sydney and Wellington.

These organisations are actively involved in tests and trials to understand whether a specific intervention has the desired outcomes. This allows them to draw detailed lessons about what works and what does not before scaling up the trial to a large population. The ongoing testing and trialling have also allowed them to draw more general lessons about the use of behavioural insights in policy and regulation, and about how to develop effective interventions.

BITs and Nudge-units often find that the regulatory interventions they have developed change the behaviour of those they target in the desired direction. The UK BIT has, for example, been publishing annual reports that give insight into its trials. Its 2016-2017 Update Report presents impressive results, including an intervention that resulted in a 20% reduction in speeding in the six months after police officers explained to those caught speeding why and how speed limits are set.

 

Academia: no evidence that it works across the board

When looking at the broader academic literature, the answer to the question of whether the use of insights from the behavioural sciences results in desirable outcomes is less clear-cut. For this series of blog posts, I have reviewed about 100 peer-reviewed articles on behavioural science-informed regulatory interventions. In a nutshell, academics find that interventions building on these insights sometimes have desirable effects, and sometimes they do not. They stress that, at this time, we lack robust evidence to make generic statements about the extent to which behavioural science-informed regulatory interventions live up to their expectations (which is a careful way of saying: despite much research, we don’t know whether they work across the board).

Take the effects of information disclosure, for example. A major review of the literature on this topic by Loewenstein, Sunstein and Golman finds no, or at best modest, evidence that consumers respond to information disclosure in areas ranging from energy-efficiency labelling on appliances to privacy disclosures on websites and calorie labelling on food items. It finds that people pay even less attention to the absence of information than to its presence, and that when information is unpleasant to deal with, people fail to attend to it. People are also more likely to update their beliefs when the information provided supports previously held beliefs than when it challenges them.[2]

To complicate things further, people’s heuristics and biases appear to be age-dependent and cohort-dependent. In other words, a behavioural intervention that works to change the behaviour of men in their 30s to 40s may not have the desired effect on the behaviour of women or that of pensioners. In short, evidence that a behavioural intervention has the desired effect in a specific policy area or geographic location is by no means a guarantee that the exact same intervention will have the same impact elsewhere.

 

Academia: research into the effects of these interventions is often subpar

Academics are also critical of the studies into these interventions. These are usually carried out in a laboratory or otherwise manipulated setting, not in the actual environment where the intervention is to be implemented. Lacking ‘real-world’ exposure, critical academics ask, how can we know how the intervention would perform in reality? Even more problematically, these studies are often carried out without first establishing a baseline against which the outcomes of the intervention can be assessed. How then, these academics ask, can we know whether the intervention makes any real improvement at all?

Another critique is that these interventions are never studied in comparison with other regulatory interventions. So even when they are found to work as desired, we do not know whether another intervention would also have worked in that specific setting, perhaps at a lower cost. This is also the verdict on the cases discussed in the 2017 OECD report Behavioural Insights and Public Policy. That report brings together 111 policy and regulatory interventions that build on behavioural insights (and is a good source of inspiration, as I discussed in the previous blog post).

All 111 cases from the OECD report were recently scrutinised by a group of academics.[3] They find that 18 were not experimental and 33 were not comparative (in other words, their outcomes cannot be compared with ‘real-world’ situations). They further find that conclusions about the workings of an intervention are often drawn without presenting clear statistics or effect sizes, and that in none of the 111 cases was a comparison made with the effects of other policy or regulatory interventions. The group finds that only 50% of the interventions had a positive impact on behaviour change.

 

BITs and Nudge-units: let’s carry out more RCTs!

So, who is right here? Perhaps surprisingly, BITs and Nudge-units and academia both have good points, and there are no inherent conflicts in their findings. Even more, the core challenge that academics point to (the quality of studies into regulatory interventions that apply insights from the behavioural sciences) is being addressed by BITs and Nudge-units. BITs and Nudge-units around the world also call for more systematic research into the workings of behavioural science-informed regulatory interventions, and they are actively involved in such research by carrying out Randomised Controlled Trials (RCTs).

RCTs build on the same logic as the testing of new medication or of internet-based businesses. In a nutshell, an RCT follows these steps. (1) People or organisations participating in the experiment are randomly allocated to one or more groups that are subject to the intervention or interventions to be tested, and to a group that is not (the control group). (2) The groups are followed for a period in the same way; the only difference between them is the intervention they are subject to. (3) After the trial is completed, observations are compared between the groups to understand whether the behaviour of the group or groups that received the intervention differs (statistically significantly) from that of the control group.
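To make these steps concrete, below is a minimal sketch in Python of the comparison in step (3), using entirely hypothetical group sizes and payment rates (the allocation in step (1) is simulated by drawing each group’s outcomes independently). It illustrates the logic of an RCT analysis, not any real trial.

```python
# Minimal RCT sketch: all numbers are hypothetical.
import random
from statistics import NormalDist

random.seed(42)          # reproducibility of this illustration
N = 5000                 # assumed participants per group

# Step (1): random allocation, simulated here by drawing each group's
# outcomes from its own assumed underlying payment rate.
control = [random.random() < 0.40 for _ in range(N)]   # assumed baseline rate
treated = [random.random() < 0.43 for _ in range(N)]   # assumed rate under the intervention

# Step (3): compare the groups with a two-proportion z-test.
p1, p2 = sum(control) / N, sum(treated) / N
pooled = (sum(control) + sum(treated)) / (2 * N)
se = (pooled * (1 - pooled) * (2 / N)) ** 0.5
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))           # two-sided

print(f"control {p1:.3f} vs intervention {p2:.3f}: z = {z:.2f}, p = {p_value:.4f}")
```

With these assumed rates and group sizes, the difference comes out statistically significant; with smaller groups or a smaller effect it often would not, which is why trials are sized in advance.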

For example, aiming to increase on-time payment of traffic fines, the New South Wales Government BIT in Australia carried out an RCT in 2012. One group of people received the traditional payment notice (the control group), and another group received a redesigned payment notice (the intervention group). The redesigned notice had a salient “PAY NOW” stamp on it, used simple language, and clearly communicated the consequences of not paying the fine. People who received the redesigned notice were about 3% more likely to pay their fine, which translated into over AUD 1 million in revenue for the New South Wales Government and 9,000 fewer people losing their licences.
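As a back-of-envelope illustration of how a roughly 3 percentage-point uplift can add up to figures of that magnitude, here is a short sketch with entirely hypothetical volumes and fine values (the source reports only the outcomes, not the underlying numbers):

```python
# Hypothetical back-of-envelope calculation; none of these inputs
# come from the NSW trial itself.
notices_sent = 250_000   # assumed number of redesigned notices sent
average_fine = 150.0     # assumed average fine in AUD
uplift = 0.03            # assumed ~3 percentage-point rise in on-time payment

extra_payments = notices_sent * uplift
extra_revenue = extra_payments * average_fine
print(f"extra on-time payments: {extra_payments:,.0f}")
print(f"extra revenue: AUD {extra_revenue:,.0f}")  # ~AUD 1.1 million under these assumptions
```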

 

Take home lesson: Test, Learn, Adapt

The strength of this approach to the development and implementation of policy and regulation is that different interventions can be tested within the population you seek to target. The lessons learnt from these tests may help you to refine and adapt the interventions, test them again, and finally implement the intervention that has proven to result in the best outcomes.
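Viewed as a process, this is an iterative loop. The sketch below expresses it in Python with hypothetical placeholder functions (run_trial and refine are stand-ins for a real trial pipeline, not part of any BIT toolkit):

```python
# Test-Learn-Adapt as an iterative loop; run_trial and refine are
# hypothetical stand-ins supplied by the caller.
def test_learn_adapt(candidates, run_trial, refine, rounds=3):
    """Trial candidate interventions, learn which performs best,
    and adapt the candidate set before the next round."""
    best = None
    for _ in range(rounds):
        results = {name: run_trial(name) for name in candidates}  # Test
        best = max(results, key=results.get)                      # Learn
        candidates = refine(best, results)                        # Adapt
    return best  # the intervention to implement at scale
```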

While RCTs will not always be possible because of resource constraints or ethical considerations, it is relevant to note that they allow for the testing of many new regulatory interventions, whether these build on insights from the behavioural sciences or not. If you wish to hear more about RCTs, here is an excellent BBC Radio 4 documentary on how to use them to test government policy and regulation. And if you are interested in reading more about this specific approach to developing policy and regulation, the UK BIT’s document Test, Learn, Adapt is a good starting point.

Another advantage of systematically testing new regulatory interventions is that it helps to draw lessons across them. Combining insights across its trials, the UK BIT has over the years also published reports that seek to aid the application of insights from the behavioural sciences in policy and regulatory practice. Some relevant ones are MINDSPACE, EAST and Behavioural Government. MINDSPACE is a mnemonic for nine of the most robust behavioural influences: Messenger, Incentives, Norms, Defaults, Salience, Priming, Affect, Commitments and Ego. EAST is a mnemonic capturing that when seeking to change behaviour, it should be made Easy, Attractive, Social and Timely. Behavioural Government, finally, applies insights from the behavioural sciences to policymaking itself.

 

[1] The Nudge-unit that was launched under the Obama Government has been merged into the Office of Evaluation Sciences under the Trump Government.

[2] Loewenstein, G., Sunstein, C. R., & Golman, R. (2014). Disclosure: Psychology Changes Everything. Annual Review of Economics, 6, 391-419.

[3] Osman, M. et al. (2018) Learning lessons: How to practice nudging around the world. Journal of Risk Research, DOI: 10.1080/13669877.2018.1517127.
