In the previous blog post, I approached the notion of regulatory failure from a public interest perspective on regulation. In this post, I look at regulatory failure from a public choice perspective.
A very brief summary of the public choice theory of regulation
Public choice perspectives on regulation hold that regulators and their targets, beneficiaries, and political principals pursue or oppose regulation as self-interested agents. Two distinct arguments stand out in this perspective.
The first argument is that those involved in regulation seek personal gain: bureaucrats, managers, and policymakers may use their involvement in regulation to advance their own interests. Likewise, at the agency level, units may pursue objectives that bring them more resources or prestige, or use agency appointments in the hope that these will result in future (political) rewards.
The second argument is that regulatory systems create a false sense of rationality and predictability. Individuals and collectives, including regulatory agencies, typically lack the cognitive abilities, information, and time to understand and influence the complex causality through which regulation achieves its effects.
Regulatory failure from this point of view
For those who approach regulation from a public choice point of view, regulatory failure refers to situations where bureaucrats or regulatory agencies wilfully abuse their powers. It also includes situations where policymakers promise to introduce or reduce regulation to win votes but do not follow through on those promises once in office.
Kinds of failure and their causes from this point of view
Surveying the literature that engages with regulatory failure from a public choice point of view, a mixed bag of themes stands out, which I summarize as self-interested bureaucrats and agencies, limited rationality, and electoral/legislative/jurisdictional failures.
Self-interested bureaucrats and agencies
The literature touches on self-interested bureaucrats who are unwilling to carry out, or who neglect, their regulatory responsibilities ('shirking'); on the bribery of individual regulators by targets or beneficiaries (or even policymakers); and on the tendency of (some) bureaucrats to alternate between holding public office to regulate an industry and working in a key position in that same industry (the 'revolving door' mechanism). Obviously, such behaviour makes individuals who work in a regulatory environment prone to capture by those they seek to regulate (a theme that returns in a future blog post in this series).
Along roughly similar lines, the literature touches on the possibility that regulatory agencies (or units within them) pursue their mandates with insufficient effort ('bureaucratic slack'), or that they engage in turf wars over which agency (or unit) is responsible for what part of a regulatory system. The latter could result in parts of regulatory systems receiving insufficient attention because agencies (or units) assume that others are responsible for them. Agencies could also seek to please (or bend to the will of) politicians through agency appointments—an illustrative example is the string of vocal anti-regulation individuals who were appointed to lead key regulatory agencies in the USA under the 2017-2021 Trump Administration.
Limited rationality
Another group of regulatory failures presented in this literature revolves around the idea of bounded rationality. This idea holds that human rationality is limited when individuals (and the collectives they form) make decisions, and it applies equally to individual regulators and the agencies they are part of. This insight may come across as somewhat stale now that regulators have rapidly embraced insights from the behavioural sciences to develop regulation that deals with the bounded rationality of their targets. Yet, when these insights first appeared in the literature, they provided novel explanations for why regulation (and other policy interventions) sometimes fails to achieve its objectives: we often simply lack the information, mental capacity, and time to develop and implement perfect regulation, and the best we can do is 'satisfice'.
We should not dismiss such critiques, however, now that we have a better understanding of 'real' human behaviour. More knowledge may result in people holding even more strongly to their beliefs about why a specific regulatory solution is necessary or why it has failed. Likewise, more data, more data-processing power, and more reliance on artificial intelligence are not necessarily a solution to the bounded rationality of regulatory systems. On the contrary, they run the risk of further inflating the false sense of security that complex regulatory systems bring.
Electoral, legislative, and jurisdictional failures
Yet another group of regulatory failures presented in this literature derives from the previous ones. For example, during elections, (self-interested) politicians may promise to regulate or deregulate parts of society but fail to follow up on these electoral commitments once in power. Or, lacking the time or capacity to understand a political party's regulatory agenda, voters may choose politicians who do not serve their interests. Alternatively, voters may fail to choose politicians who do serve their interests because those politicians fail to explain the benefits of their regulatory agenda in a comprehensible manner.
Regulation may also fail because it is too flexible and open to interpretation, which may give (self-interested) regulators or regulatory agencies considerable room to diverge from their mandates. Or regulation may fail because small groups of regulatory targets find it easy to organize in opposition, while large groups of beneficiaries find it difficult to organize in support. This could result in situations where regulators (or policymakers) repeatedly hear from a (relatively) small group of targets and misinterpret their 'noise' as the general or societal opinion about regulation.
Yet another kind of regulatory failure is when the unintended or undesirable consequences from the (lack of) regulation in one jurisdiction fall on another. A typical example here is the relocation of polluting industries from jurisdictions with strong environmental regulation to those with weak environmental regulation.
The public choice perspective adds an essential dimension to understanding regulatory failure by shifting our attention towards human and organizational behaviour. Seen in this light, regulation may fail not only because of technical errors in its design and implementation (as per the public interest perspective discussed in the previous blog post), but simply because individuals make (cognitive) errors or use regulation to serve their personal interests. Arguably, the public choice perspective has an overall negative outlook on regulation. Still, the insights it adds may help us refrain from replacing one regulatory design with another when it fails (from a public interest perspective). Unintentional human errors and intentional self-interested behaviour will not be fixed by swapping regulatory designs; such failures call for different types of solutions, for example increased training of regulatory staff and improved accountability structures for regulatory agencies.