Risk governance and risk-based regulation (6): Epistemic and ethical challenges

We are nearing the end of this series of blog posts on risk governance and risk-based regulation. It builds on a systematic review of over 150 academic publications published between 2009 and 2018. The aim of the series is to introduce those working in a regulatory environment to the key concepts of risk governance and risk-based regulation, and to discuss the state of the art of academic knowledge on these topics.

If you are tuning in just now, I strongly recommend reading the earlier posts first: a broad introduction to the series, an exploration of the history of risk and risk governance, examples of risk governance and risk-based regulation, and evidence of their performance. In this blog post we will look at the epistemic and ethical challenges that come with this approach to regulatory governance.

The next blog post in the series will appear together with the publication of an open access paper in the State of the Art in Regulatory Governance Research Paper series, which brings together the insights from the review presented in these blog posts over the past weeks. In it I will also present a more critical evaluation of the full body of literature reviewed than I have done thus far. Expect the paper to be available by the end of June 2019.

The epistemic challenges of risk as an approach to regulatory governance

Many of the epistemic challenges discussed in the literature address the limits of and differences in knowing what constitutes a risk and how best to respond to it. Scholars agree that sound risk assessment and management build on multiple sources of knowledge regarding a range of elements. These elements include, but are not limited to, the extent of harm, the probability of occurrence, the remaining uncertainties (incertitude), the geographical and temporal spread of harm (ubiquity), the duration of harm (persistence), the reversibility of harm, the delay effect between trigger and the occurrence of harm, and the potential for mobilization of those affected. But obtaining sound knowledge on these elements and applying it well is anything but easy.

When risk governance and risk-based regulation backfire

Sometimes risk governance and risk-based regulation backfire and do more harm than good. Scholars are critical of overly technocratic applications of the models and guidelines presented in the previous posts in this blog series, and of overconfidence about what this approach to regulatory governance may bring. ‘Expectations that risks can be anticipated and managed may lead organisations to convey impressions that they are in much greater control [than] is in reality feasible, and the pressure may be on them to be seen to be doing something in response to the identification of risks’ [1]. With the growing knowledge of risk assessment and risk management, risk governance has become ‘something of a cult. Today, an almost magical aura surrounds the estimation of probable harm’ [2].

Particularly problematic is that risk assessment and risk management require simplifying complex data and relying on proxies where data is lacking. Existing data often does not allow for risk assessment, historical data may be outdated, and too much weight may be given to probabilities derived from incomplete data. Data can be compromised at the political level by partisan or other interests, and biases may colour how data is interpreted or even provided. For example, in risk governance, benefit-cost analyses are often used to understand whether people are willing to be subject to a specific risk. The exact framing of questions is critical. To illustrate, the answer to the question of how much people are willing to pay to reduce the risk of losing income/health/happiness/etc. will differ substantially from the answer to the question of how much they would ask in return for giving up the certainty of maintaining that income/health/happiness/etc.

The changing nature of knowledge

More and more, knowledge claims about what constitutes a risk and what constitutes an appropriate response are disputed. Often a distinction is made between ‘objective’ and ‘perceived’ risks. Over the last decades, scholars have put the objectivity of technical risk assessments to the test and have identified a range of biases and ethical and sociocultural influences that affect risk identification and estimation. In response, more weight is now given to the knowledge of risk held by actors other than technical experts; for example, the general public affected by the risk. It should be kept in mind, however, that a larger knowledge base of what constitutes a risk and how to respond to it by no means provides a blueprint for effective risk governance.

Again, our bounded cognitive abilities come into play. One of the core epistemic challenges of risk governance is that humans have a limited capacity to deal with uncertainties and probabilities. We quickly jump to conclusions based on partial or misunderstood information about risk. To prevent poorly designed risk governance interventions that lean too heavily on either ‘objective’ or ‘perceived’ risks and risk knowledge, scholars urge a move away from a static understanding of risk towards a more dynamic understanding of degrees of uncertainty. They further urge a move away from risk aversion towards trial-and-error risk-taking that allows for learning from adversity and promotes resilience.

Limits to how much can be known about risks

Many risks are relatively simple in structure, their probabilities of harm are well understood, and their risk governance interventions have become conventional. Over the last decades, however, other risks have rapidly grown in complexity, ambiguity and uncertainty and pose challenges to regulatory governance. This holds particularly for ‘systemic risks’, those risks that ‘are at the crossroads between natural events (partially altered and amplified by human action, such as the emission of greenhouse gasses), economic, social and technological developments, and policy-driven actions, all at the domestic and international level’ [3]. Strikingly, it is often not the activities and events underlying these systemic risks that have become more complex, but how they interact.

Interactions between different activities and events in complex systems may multiply risks or trigger synergies where the total risk is larger than the sum of its individual parts. Depending on how loosely or tightly these systems are coupled, a (high-magnitude) accident may become unpreventable. For example, sometimes ‘two or more failures, none of them devastating in themselves in isolation, come together in unexpected ways and defeat the safety devices – the definition of a “normal accident” or system accident. If the system is also tightly coupled, these failures can cascade faster than any safety device or operator can cope with them, or they can even be incomprehensible to those responsible for doing the coping’ [4].

The ethical challenges of risk as an approach to regulatory governance

Looking across the range of ethical challenges discussed, two broad issues stand out. The first is a call on governments to reduce risk inequalities across different groups in society; the second is a call on them to improve the legitimacy and accountability of risk evaluation and of risk reduction, pooling, mitigation and prevention.

Reducing risk inequality

One of Ulrich Beck’s better-known statements in Risk Society is that ‘[wealth] accumulates at the top, risk at the bottom’ [5]. Beck was concerned that risks disproportionately affect already marginalised groups in society. Other scholars are less dystopian but still warn that risks do not affect everyone equally. Often, they warn, risks are not chosen by individuals or groups but imposed on them by the actions of others. Likewise, risk responses desired by some may have negative consequences for others. ‘[R]isk-related decision making is not about risks alone or about a single risk usually. Evaluation requires risk-benefit evaluations and risk-risk trade-offs. [There] are competing, legitimate viewpoints over evaluations about whether there are or could be adverse effects’ [6]. It is partly because of these insights that scholars have begun to call for more inclusive and participatory risk governance processes.

Such participatory processes allow for risk assessment and the development of interventions by building on the knowledge of technical experts, expert bureaucrats, scientists and laypeople. This calls for a move away from a rational-instrumental model of regulatory governance towards a societal-political one (see an earlier blog post in this series). That does not, of course, imply that all risk governance interventions require extensive participation. Changing levels of risk knowledge and the changing nature of risks allow for different types of participation. For example, relatively simple and conventional risks can be addressed by expert bureaucrats, technical experts and scientists. When facing more complex and ambiguous risks, affected stakeholders and sometimes even civil society at large may need to be consulted. Not only does this allow for obtaining a broad knowledge base, but it also helps to make affected stakeholders aware of the risks they are subject to and the actions they can themselves take to reduce their exposure.

Improving legitimacy and accountability

Scholars also call on regulatory policymakers and practitioners to keep in mind that risk governance ‘[is not] a free-standing and technical guide to regulatory intervention [but a] particular way to construct the regulatory agenda’ [7]. Risks are not value free. Their construction, packaging and identification involve political choices and give considerable power to decision-makers. At the same time, heightened public awareness of risks requires decision-makers to ‘justify not taking action rather than taking action’ [8]. A challenge for regulators under risk governance is that they may ‘be criticized for being too harsh when things are calm and being too lax when risks have been realized’ [9]. Put differently, the growth of risk regulation may raise legitimacy problems for the government: how can it demonstrate its effectiveness if the problems it seeks to address do not occur?

Governments engaged in risk regulation may also face accountability challenges. The aura of objectiveness and rationality that comes with this approach to regulatory governance may shift blame away from government or result in symbolic responses with little practical value. Risk governance can also change the dynamics of regulatory capture, particularly when governments are highly reliant on third parties for technical risk assessments and other knowledge. To help improve the legitimacy and accountability of risk governance, scholars therefore call on governments to increase the openness of their risk governance regimes. That can be done, for instance, by increasing public participation in risk assessment and the development of interventions, as discussed before. Alternatively, governments may wish to provide greater transparency in information-gathering and processing, as well as in decision-making about which risks are accepted and which are not.

References

1. Hutter, B.M., ed. Anticipating Risk and Organising Risk Regulation. 2010, Cambridge University Press: Cambridge.

2. Durant, J., Once the Men in White Coats Held the Promise of a Better Future…, in The Politics of Risk Society, J. Franklin, Editor. 1998, Polity Press: Cambridge. p. 70-75.

3. Renn, O., Risk Governance: Coping with Uncertainty in a Complex World. 2008, London: Earthscan.

4. Perrow, C., Normal Accidents: Living with High-Risk Technologies. 1999, Princeton: Princeton University Press.

5. Beck, U., Risk Society: Towards a New Modernity. 1992, London: Sage Publications.

6. Renn, O. and A. Klinke, Risk governance: Concept and application to technological risk, in Routledge Handbook of Risk Studies, A. Burgess, A. Alemanno, and J.O. Zinn, Editors. 2016, Routledge: London. p. 204-215.

7. Black, J. and R. Baldwin, Really Responsive Risk-Based Regulation. Law and Policy, 2010. 32(2): p. 181-213.

8. Tosun, J., Risk Regulation in Europe: Assessing the Application of the Precautionary Principle. 2013, New York: Springer.

9. Hutter, B.M., A Risk Regulation Perspective on Regulatory Excellence, in Achieving Regulatory Excellence, C. Coglianese, Editor. 2017, Brookings Institution Press: Washington, D.C. p. 101-114.
