The focus of the first year of the Chair in Regulatory Practice is on the use of insights from the behavioural sciences in regulation. Such regulation is often referred to as ‘Nudging’, after the famous 2008 book by Professors Richard Thaler and Cass Sunstein.
In their book, Nudge: Improving Decisions about Health, Wealth, and Happiness, Thaler and Sunstein explain that humans use mental shortcuts (heuristics) and biases when making choices. While these shortcuts often help us make good choices, sometimes they do not. As a result, we tend not to save enough for our retirement, overeat, exercise too little, and so on.
Thaler and Sunstein also explain that with subtle regulatory interventions governments can help people to make choices that are in their own best interest and serve their long-term well-being. This is, in a nutshell, the underlying idea of Nudging and the use of behavioural insights in regulation more broadly.
A series of blog posts to understand the use of behavioural insights in regulatory practice
There are high hopes and expectations of this approach to regulation, but also considerable concerns. I will therefore dedicate several blog posts to the use of behavioural insights in regulation, discussing core insights from the academic and policy literature. I will cover six themes: the evolution of our understanding of (ir)rational behaviour (this blog post); examples of behaviourally informed regulation; evidence of its performance; experiments to understand its workings; and the epistemic and ethical challenges that come with this approach to regulation.
In this blog post, I will pay attention to prospect theory, which captures how people deviate in predictable ways from ‘rational behaviour’. Understanding prospect theory is essential for understanding whether, why, and where the use of behavioural insights may help improve regulatory practice. Hang on: this will be a dense post.
How neoclassical economics understands behaviour
Policymaking and implementation have long built on rational choice theory, and often still do. This is an analytical framework from neoclassical economics for understanding and modelling the social and economic behaviour of groups of people, for example the population of a country. A central aspect of this theory is that people are rational beings, thought of as having ‘stable, coherent and well-defined preferences rooted in self-interest and utility maximisation that are revealed through their choices’.[1] When they can choose from a variety of alternatives, they are expected to choose the alternative that has the highest worth or value to them.
While this sounds like a plausible description of what people do when facing a choice, economists and others have long struggled with the notion of ‘utility’.[2] The notion was initially introduced by moral philosophers such as John Stuart Mill and Jeremy Bentham, who considered it a measure of pleasure or satisfaction: positive utility is defined as the tendency to bring pleasure, negative utility as the tendency to bring pain. Within this conceptualisation of utility, it is an open question whether what people desire (maximum utility) is what they choose.
With the advance of neoclassical economics, scholars became particularly interested in measuring and modelling utility.[3] The expectation was that if all the preferences of the individuals in a group were known, these could be added up to estimate the greatest (social) welfare possible for that group. Measuring individual utility is, however, exceptionally difficult.[4] To overcome this measurement problem, neoclassical economists have often used expressed or observed choices as indicators of utility. It has become accepted practice within this strand of economics to consider what people choose (or express they would choose) representative of what they want.[5]
Updated understandings of utility
Contemporary behavioural economists and others claim that this is too narrow an understanding of utility. They point out that people may desire one thing (being healthy) but choose to do something else (smoke, eat unhealthy food, exercise too little). In part, this has to do with our personal and ever-changing understanding of the utility we get from a specific decision. Pioneering work by Professors Daniel Kahneman and Amos Tversky, starting in the 1970s, has pointed out that we humans have at least three understandings of utility: experienced utility, decision utility, and remembered utility.[6]
Experienced utility can best be understood as the pleasure (or pain) you experience right now when reading this blog post. Decision utility is the pleasure (or pain) you expected to get from reading this blog post before you started reading it. These two utilities may coincide, as neoclassical economics assumes, but often they will not.[7] People are reported to routinely overestimate the positive utility (pleasure, joy, opportunity) and underestimate the negative utility (pain, regret, risk) they expect to get from a choice. To complicate matters further, the utility we remember having got from a specific choice may, at a later point in time, differ again from the other two forms of utility.[8]
These different forms of utility affect each other and, in doing so, make human behaviour less ‘rational’ than neoclassical economics predicts. For example, Kahneman has observed that the way an experience ends may alter the remembered utility of that experience.[9] An overall painful experience may still be remembered as a valuable one if it ends on a high note. Think about studying for exams: the utility experienced while studying does not depend on the outcome (the outcome is unknown while studying). Yet the remembered utility will be very different depending on whether you pass or fail the exam.
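Kahneman’s ‘peak-end rule’ is one concrete formalisation of this effect: the remembered (dis)utility of an episode tracks roughly the average of its most intense moment and its final moment, rather than the sum or average of all moments. A minimal sketch in Python (the pain ratings are made-up numbers for illustration; only the rule itself comes from Kahneman’s work):

```python
# Peak-end rule sketch: remembered disutility of an unpleasant episode is
# approximated by the mean of its worst moment and its final moment,
# rather than the sum of all moments. Pain ratings (0-10) are hypothetical.

def remembered_pain(ratings):
    """Peak-end approximation: average of the worst and the final moment."""
    return (max(ratings) + ratings[-1]) / 2

def total_pain(ratings):
    """Naive aggregate: the sum of all momentary pain."""
    return sum(ratings)

short_abrupt_end = [2, 5, 8, 7]         # ends while pain is still high
longer_gentle_end = [2, 5, 8, 7, 4, 1]  # same start, then tapers off

for label, r in [("short", short_abrupt_end), ("longer", longer_gentle_end)]:
    print(label, "total:", total_pain(r), "remembered:", remembered_pain(r))

# The longer episode contains strictly more total pain (27 vs. 22) yet is
# remembered as less bad (4.5 vs. 7.5), in line with Kahneman's findings
# on how endings colour remembered utility.
```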
Deviations from the neoclassical economics understanding of rational behaviour
Besides such deviations from the utility function, scholars have also pointed out that humans are less rational in making choices under uncertainty than neoclassical economics predicts. Herbert Simon was one of the first to point out, in the mid-1940s, that people find it difficult to fully understand many of the problems they face. It is also often impossible for them to acquire all the relevant information needed to make a rational decision, and even if they could, they would most likely lack the mental capacity or the time to process it. In other words, when making decisions humans possess only ‘bounded rationality’ and must make decisions by ‘satisficing’: they choose what makes them happy enough.[10]
The work of Kahneman, Tversky, and other behavioural scientists has further pointed out that people deviate in other predictable ways from neoclassical assumptions of rationality: we humans rely on cognitive biases and heuristics when making choices, and sometimes this results in suboptimal outcomes. To name a few dominant heuristics and biases and their possible suboptimal effects:
- Present bias and hyperbolic discounting: When faced with the choice of receiving a payoff at one of two moments in time, people tend to give stronger weight to the payoff that is closer to the present. For example, when given a choice between receiving $50 today or $100 tomorrow, and not urgently needing the money on either day, people will likely choose the $100 tomorrow. Yet the longer the delay before the larger payoff, the more likely they are to choose the instant $50 (see the sketch after this list).
- Anchoring: People tend to rely heavily on an initial piece of information when deciding. For example, people are much more likely to buy a used car for $4,000 if its price has been reduced from $5,000 than to pay $4,000 without this initial ‘anchor’.
- Probability neglect: People tend to disregard probability when deciding under uncertainty, and the more unfamiliar the situation, the worse we are at estimating likelihoods. This leads to overestimation of small risks. For example, people tend to be much more concerned about the risk of an act of terrorism affecting their lives than about ordinary risks that are statistically much larger.[11]
- Loss aversion: People tend to prefer avoiding losses over acquiring gains. For example, if one person is given $50 and another is given $100 but must give back (or otherwise loses) $50 of that amount, the first person will experience greater pleasure than the second, even though the result is the same for both (an added $50).
- Optimism bias: People tend to believe that compared to others they are at lesser risk of experiencing adverse events and more likely to experience positive ones. For example, smokers tend to believe that they are less likely to develop lung cancer than other smokers.[12]
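To make the first bullet concrete, here is a minimal sketch of hyperbolic discounting, using the common one-parameter form value = amount / (1 + k × delay). The daily discount rate k = 0.1 is an arbitrary illustrative assumption, not an empirical estimate:

```python
# Hyperbolic discounting sketch: the subjective present value of a payoff
# falls off as amount / (1 + k * delay). The rate k = 0.1 per day is an
# arbitrary illustrative choice, not an empirical estimate.

def present_value(amount, delay_days, k=0.1):
    """Hyperbolically discounted value of a payoff delay_days away."""
    return amount / (1 + k * delay_days)

# $50 today vs. $100 tomorrow: the delayed $100 still wins.
print(present_value(50, 0), present_value(100, 1))    # 50.0 vs. ~90.9

# $50 today vs. $100 in a year: the instant $50 now wins.
print(present_value(50, 0), present_value(100, 365))  # 50.0 vs. ~2.7
```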
There is a variety of explanations for why we make these ‘irrational’ choices. A widely acknowledged one is dual process theory, often referred to as system 1 (or automatic) and system 2 (or reflective) behaviour.[13] The argument is that the brain capacities we have inherited from our ancestors are well developed for making the automatic life-or-death choices (system 1) needed to survive on the African savannah, but are ill-suited for making the reflective and complex choices (system 2) that give the greatest utility in modern market economies.[14]
From predictable and rational to predictably irrational: prospect theory
Bringing together insights from their studies of human behaviour, Kahneman and Tversky proposed ‘prospect theory’ as a better predictor of choice under uncertainty than the neoclassical economic model.[15] It is not a normative but a descriptive model, building on the following premises (formalised in the sketch after this list):
- When choosing under uncertainty, people set a reference point from which they assess perceived gains and losses,
- they become less sensitive to changes the further an outcome lies from that reference point (‘diminishing sensitivity’),
- they tend to overweight small probabilities, and
- underweight large probabilities.
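In the later ‘cumulative’ version of the theory (Tversky and Kahneman, 1992), these premises are formalised with a value function defined over gains and losses relative to the reference point and a probability-weighting function. A common statement of those forms, sketched here for reference rather than taken from this post’s sources, is:

```latex
% Value function: concave for gains, convex and steeper for losses
v(x) =
  \begin{cases}
    x^{\alpha}            & \text{if } x \geq 0 \\
    -\lambda (-x)^{\beta} & \text{if } x < 0
  \end{cases}

% Probability weighting: overweights small p, underweights large p
w(p) = \frac{p^{\gamma}}{\left( p^{\gamma} + (1 - p)^{\gamma} \right)^{1/\gamma}}
```

Tversky and Kahneman’s 1992 median estimates are roughly α ≈ β ≈ 0.88, λ ≈ 2.25 (a loss looms about twice as large as an equal gain), and γ ≈ 0.61.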
The following examples are illustrative. What would you choose in the following four situations:
- Get $9,500 with certainty, or have a 95% chance to get $10,000.
- Lose $9,500 with certainty, or have a 95% chance to lose $10,000.
- Get $500 with certainty, or have a 5% chance to get $10,000.
- Lose $500 with certainty, or have a 5% chance to lose $10,000.
Most likely, you would not take the risk in the first situation (and take the $9,500); take the risk in the second and the third situations; and not take the risk in the fourth situation. At least, that is what Kahneman and Tversky found in most of the people they asked these and related questions. Have a closer look at what just happened:
- Fearing the disappointment of missing out on the certain $9,500, even though getting the $10,000 is almost certain, most people would take the $9,500. This situation of risk-averse behaviour may, in part, help explain why people do not save for their retirement.
- Hoping to avoid an all but inevitable loss of $9,500, most people would take the bet of losing $10,000, even though losing that bet is almost guaranteed. This situation of risk-seeking behaviour may, in part, help explain why people do not act on climate change.
- Hoping for the significant gain of $10,000, most people would reject the certain $500, even though the chance of getting the $10,000 is almost nil. This situation of risk-seeking behaviour may, in part, help explain why people play the pokies.
- Fearing a substantial loss of $10,000, most people would prefer to lose $500 with certainty, even though the chance of the substantial loss is almost nil. This situation of risk-averse behaviour may, in part, help explain why people hold too many insurance policies (think of the excess insurance you probably agree to when renting a vehicle for a few days).
This pattern is remarkable. For the same large sum, the direction of risk behaviour flips between seeking and avoiding depending on the probability involved; and for the same probability, it flips depending on whether the sum can be won or lost. No single, stable attitude towards risk, and thus no ‘rational’ utility-maximising agent, could produce all four choices at once.
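A minimal Python sketch shows how the two functions given earlier jointly reproduce this fourfold pattern. It uses Tversky and Kahneman’s 1992 median parameters; applying a single weighting curve to both gains and losses is a simplification made here for illustration:

```python
# Prospect theory sketch of the fourfold pattern. Parameters are Tversky &
# Kahneman's (1992) median estimates; using one weighting curve for both
# gains and losses is a simplifying assumption for illustration.

ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def value(x):
    """Value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def prospect(p, x):
    """Subjective worth of a single-outcome gamble: x with probability p."""
    return weight(p) * value(x)

# The four situations from the text: (sure outcome, probability, gamble).
situations = [
    ( 9500, 0.95,  10000),  # 1: sure gain vs. 95% chance of a larger gain
    (-9500, 0.95, -10000),  # 2: sure loss vs. 95% chance of a larger loss
    (  500, 0.05,  10000),  # 3: small sure gain vs. 5% chance of $10,000
    ( -500, 0.05, -10000),  # 4: small sure loss vs. 5% chance of -$10,000
]

for i, (sure, p, gamble) in enumerate(situations, 1):
    v_sure, v_gamble = value(sure), prospect(p, gamble)
    choice = "sure thing" if v_sure > v_gamble else "gamble"
    print(f"situation {i}: sure={v_sure:8.1f} gamble={v_gamble:8.1f} -> {choice}")
```

Running this yields ‘sure thing’ in situations 1 and 4 and ‘gamble’ in situations 2 and 3, exactly the pattern Kahneman and Tversky observed.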
In sum, then, research over many decades has pointed out that our behaviour is less rational than neoclassical economic modelling often assumes. This modelling, however, lies at the base of many policies. Scholars from the behavioural sciences therefore call for policy interventions (including regulatory practice) that are sensitive to the ‘cognitive failures’ of humans.[16] In the blog posts that follow, I will discuss examples of regulatory interventions that are informed by these insights, as well as the ethical and epistemic challenges that come with them.
[1] McMahon, J. (2015). Behavioral economics as neoliberalism: Producing and governing homo economicus. Contemporary Political Theory, 14(2), 137-158, p. 141.
[2] Read, D. (2007). Experienced utility: Utility theory from Jeremy Bentham to Daniel Kahneman. Thinking & Reasoning, 13(1), 45-61.
[3] Barbera, S., Hammond, P., & Seidl, C. (1999). Handbook of Utility Theory. Dordrecht: Springer.
[4] Pinto-Prades, J. L., & Abellan-Perpinan, J. M. (2012). When normative and descriptive diverge: how to bridge the difference. Social Choice and Welfare, 38(4), 569-584.
[5] McMahon, J. (2015). Behavioral economics as neoliberalism: Producing and governing homo economicus. Contemporary Political Theory, 14(2), 137-158, p. 141.
[6] Kahneman, D., Wakker, P., & Sarin, R. (1997). Back to Bentham? Explorations of experienced utility. Quarterly Journal of Economics, 112(2), 375-405.
[7] Friedman, D., Isaac, M., James, D., & Sunder, S. (2014). Risky curves: On the empirical failure of expected utility. London: Routledge.
[8] Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
[9] Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
[10] Simon, H. A. (1945). Administrative behavior: A study of decision-making processes in administrative organization. New York: Free Press.
[11] Sunstein, C. (2003). Terrorism and Probability Neglect. Journal of Risk and Uncertainty, 26(2-3), 121-136.
[12] Windschitl, P. (2002). Judging the accuracy of a likelihood judgment: the case of smoking risk. Journal of Behavioral Decision Making, 15(1), 19-35.
[13] Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
[14] Bissonnette, J. F. (2016). From the moral to the neural: brain scans, decision-making, and the problematization of economic (ir)rationality. Journal of Cultural Economy, 9(4), 364-381.
[15] Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-292.
[16] Jolls, C., Sunstein, C., & Thaler, R. (1998). A Behavioral Approach to Law and Economics. Stanford Law Review, 50(5), 1471-1550.