Systems thinking and regulatory governance (2): The evolution of systems thinking

Capturing the evolution of systems thinking and systems science is anything but easy. Conventional scientific methods for unpacking and understanding historical developments often fall short of capturing the non-linearity, emergence, differing worldviews, and role of feedback that have shaped the different trajectories of systems thinking over the last hundred years or so.

In this blog post, I present a somewhat chronological overview of the evolution of systems thinking. Despite the abovementioned shortcomings, a chronological overview provides a helpful structure for mapping some of the key developments in systems thinking that have relevance for regulatory governance.

The Cartesian systematic dissecting of complexity

With the development of Western science during the Enlightenment (17th and 18th centuries), scholars became increasingly interested in deductive methods and deductive reasoning. Through René Descartes and his followers, a reductionist view of science made rapid inroads. It replaced the more holistic understandings of how the world operates that dominated pre-Enlightenment science.

The Cartesian view holds that to understand how things work (to find the ‘truth’, or to find causality), “the way to proceed was to successively split up entities into their component parts until ultimate components were reached, at which point ultimate explanations were possible”.[1] The sciences have applied this Cartesian reductionist view of systematic dissecting of complexity with great success. It has provided major breakthroughs in areas ranging from astrophysics to molecular physics, and from structural engineering to social engineering.

Nevertheless, the Cartesian view comes with shortcomings as well. The central assumptions it makes are that (i) “a component part is the same when separated out as it is when part of a whole”, and (ii) that “the whole is the sum of the parts, no more and no less”.[2][3] By the turn of the 20th century, however, scholars increasingly noticed that these assumptions do not always hold.

In particular, scholars studying biological, societal and other ‘living’ systems found that this mechanistic or clockwork understanding of the world could often not explain why the whole performs as it does. A well-known example is Adam Smith’s ‘invisible hand’ that seeks to explain market equilibrium: it cannot be reduced to or observed in any individual market participant, but emerges at the level of the market as a whole.

Early systems thinking: Inquiry into emergence

Emergence has been defined differently over time, but for now, it suffices to consider it the behaviour of a system as a whole that cannot be observed in or reduced to the component parts of the system. Emergence is a result of how the component parts of a system relate and interact. Or more formally: “Emergent properties of an entity are properties possessed only by the entity as a whole, not by any of its components or by the simple aggregation of the components”.[4] Emergence is central to systems thinking and leads us to a widely acknowledged definition: systems are “complexes of elements standing in interaction”.[5]

Studying emergence is important, for example, to understand how a system achieves stability (or fails to achieve it) and how it maintains that stability over time. In the first half of the 20th century, scholars were quite successful in studying emergence in closed systems (systems in which no elements enter or leave the system), but it quickly became clear that most living systems, such as society and its (sub)systems, are open systems: systems in which elements (including information, people, resources, and energy) flow between the system and its environment.

With the insight that most living systems are open systems, scholars increasingly began to include the impact of a system’s environment on the system’s behaviour, and they became particularly interested in understanding the boundaries between system and environment: “Systems have boundaries. This is what distinguishes the concept of system from that of structure. (…) [A] boundary separates elements, but not necessarily relations. It separates events, but lets causal effects pass through. (…) Boundaries can be differentiated as specific mechanisms with the specific purpose of separating yet connecting”.[6]

Because of their characteristics, open systems are more likely to be in a state of dynamic equilibrium (if they reach a state of stability at all) rather than steady or fixed equilibrium (a characteristic of closed systems). Small changes in the elements of a system or its environment may affect the overall behaviour of the system in unexpected ways. In other words, open systems often show nonlinear behaviour in which “a small change in initial conditions can lead to a radical change in a later state of the system … or, inversely, a large change in initial conditions might not lead to any significant change in later states of the system”.[7]
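To make this nonlinearity concrete, the sketch below iterates the logistic map, a textbook toy model of nonlinear dynamics that does not appear in the sources cited here; the parameter choices are purely illustrative. Two trajectories that start almost identically end up bearing no resemblance to one another:

```python
# Illustrative sketch of sensitivity to initial conditions, using the
# logistic map as a stand-in for a nonlinear open system. The map and
# the parameter r = 3.9 are textbook choices, not taken from the post.

def logistic_step(x, r=3.9):
    # In the chaotic regime (r near 4), nearby trajectories diverge rapidly.
    return r * x * (1 - x)

def trajectory(x0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1]))
    return xs

a = trajectory(0.2000)   # one set of 'initial conditions'
b = trajectory(0.2001)   # changed by one part in two thousand
# Within a few dozen steps the two trajectories have drifted far apart,
# even though they differed only minutely at the start.
```

The inverse half of the quotation holds too: in a stable region of the parameter space (say r = 2.5), trajectories from very different starting points converge to the same fixed level, so a large change in initial conditions leaves later states unaffected.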

Further advancements: Inquiry into functionally different societal systems

Open systems thinking also acknowledges that the environment of one system is often another system or even a set of other systems: “Every change in a system is a change in the environment of other systems; every increase in the complexity in one place increases the complexity of the environment for all other systems”[8]. This insight raised a range of other questions. Particularly the work of Niklas Luhmann is worth mentioning here because it has had a considerable impact on (socio-)legal theory.

Luhmann observed that since the 19th century, society has become functionally differentiated. A multiverse of ‘function systems’, each with its own mode of communication and logic, now operate side by side. For example, the economic system uses money as its mode of communication and seeks to ease the transfer or movement of goods and services. Other function systems include law, politics, religion, and science. Scholars conceptualise these function systems as highly autonomous and self-referential: none of them can replace, coordinate or dominate the others.

Each of these function systems operates with a specific set of binary codes that reduces the complexity within the system. The legal system uses the binary coding of legal/illegal; the science system uses true/false; the economic system uses profitable/non-profitable; and so on. The differences between function and coding in these systems result in challenges. For example, translating the legal coding of legal/illegal to the science coding of true/false or the economic coding of profitable/non-profitable is difficult. Some scholars observe that the work of Luhmann, and of those building on it, is especially relevant for regulatory governance.

The concepts of system-specific communication, logic and coding may help us better understand why certain state-led regulatory interventions fail, for example, when they encounter ‘trans-systemic incompatibilities’; or succeed, for example, when they allow for ‘structural coupling’ of systems. These scholars also argue that we perhaps have to acknowledge that external state-led regulation is unable to steer society’s function systems, and can at best be used to set some general rules (‘meta-regulation’) on how to reduce and resolve the trans-systemic incompatibilities of logic and coding.

Parallel and later systems thinking: Inquiry into stocks, flows and feedback loops

Luhmann was strongly influenced by advances in cybernetics, the study of communication and automated control. Other trajectories of open systems thinking also have their roots in cybernetics but have evolved in a slightly different direction. Particularly the works of systems thinkers such as Donella Meadows and Peter Senge are worth mentioning here. In Meadows’ words: “System thinkers see the world as a collection of stocks along with the mechanisms for regulating the levels in stocks by manipulating flows”.[9]

In this trajectory of systems thinking, the role of feedback takes centre stage. It distinguishes between two broad feedback mechanisms. The first is balancing or stabilising feedback (sometimes referred to as negative feedback). This form of feedback aims to keep the system in balance or to respond to imbalances (to maintain the level of the stock when there is too much inflow or outflow). For example, if the level (the ‘stock’) of noncompliance with regulation goes up (‘flow’) in a sector, the responsible regulatory agency may decide to increase the number or stringency of its inspections, or the number or severity of the fines it issues (all forms of ‘feedback’ to balance or stabilise the system).

The second form of feedback is reinforcing or amplifying feedback (sometimes referred to as positive feedback). This form of feedback causes imbalances in the system (it is often the cause of too much inflow or outflow). For example, a changed perception of compliance costs in a sector may result in firms seeking to cut corners, which may ultimately result in an increased level of noncompliance in the sector. This may further change perceptions of compliance costs, causing more firms to cut corners, resulting in even higher levels of noncompliance.
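The corner-cutting example above can be sketched as a simple compounding process. The numbers (the starting level, a 30% amplification per period) are purely illustrative assumptions, not taken from the post:

```python
# Sketch of a reinforcing feedback loop: each period, the prevailing level
# of noncompliance encourages some additional firms to cut corners.
# The starting level and the amplification rate are arbitrary assumptions.

noncompliance = [5.0]   # assumed starting level (per cent of firms)
for _ in range(10):
    noncompliance.append(noncompliance[-1] * 1.3)

# Left unchecked, the loop compounds: the level grows exponentially
# rather than linearly, which is the hallmark of reinforcing feedback.
```

In practice, such loops rarely run unchecked: they are eventually capped either by balancing feedback (an inspection campaign, reputational costs) or by the loop exhausting its own fuel (no compliant firms left to convert).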

Systems often show highly complex or competing forms of feedback that have a nonlinear and sometimes circular impact on the stability of the system as a whole. The difficulty is then to limit the impact of unwanted reinforcing or amplifying feedback (note: sometimes, these forms of feedback are desirable). Systems scholars are particularly vocal about the risk of time delays that decision-makers face when seeking to influence feedback loops. Any intervention needs time to achieve its desired effect, and further instability is likely when the intervention is not given that time or is too vigorous to begin with: “aggressive action often produces exactly the opposite of what is intended. It produces instability and oscillation, instead of moving you more quickly towards your goal”.[10]
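Senge's warning about delays can be illustrated with a minimal stock-and-flow simulation. All parameters below (the tolerated level, the observation delay, the response gains) are assumptions made up for illustration, not drawn from the sources cited in this post:

```python
# Sketch of a balancing feedback loop with a time delay. A regulator
# adjusts enforcement based on the noncompliance level it observed a few
# periods ago; 'gain' controls how aggressively it responds.

def simulate(delay, gain, steps=60):
    target = 10.0            # tolerated level of noncompliance (assumed)
    stock = [30.0]           # initial noncompliance 'stock' (assumed)
    for t in range(steps):
        observed = stock[max(0, t - delay)]      # delayed observation
        correction = gain * (observed - target)  # enforcement response
        inflow = 5.0                             # steady drift into noncompliance
        stock.append(max(0.0, stock[-1] + inflow - correction))
    return stock

measured = simulate(delay=2, gain=0.3)    # settles near its equilibrium
aggressive = simulate(delay=2, gain=2.5)  # overshoots and keeps oscillating
```

With a gentle response the stock settles at the level where the correction exactly offsets the inflow; with an aggressive response the very same delay turns the balancing loop into a source of persistent oscillation, which is precisely the instability Senge describes.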

Other advancements: Questioning the ontological reality of systems

By the end of the 20th century, systems thinking had become an accepted approach for studying complexity, dynamics and adaptation in various areas of society. At the same time, some scholars became critical of the notion of systems as ‘something existing out there’ and of the limits systems thinking faces when ‘engineering’ solutions for societal problems.

In response, a new trajectory of systems thinking emerged that considers systems as heuristic devices that help us study the world, but that do not themselves exist in the world. Central research themes are the reflexivity of humans, the different worldviews they hold (which complicates the manageability of living systems), and the different meanings and senses that people bestow upon the systems they are part of.

Stafford Beer and Peter Checkland are central figures in this trajectory of systems thinking. For Checkland, systems thinking in this manner is helpful because it allows for “modelling purposeful ‘human activity systems’ as sets of linked activities which together could exhibit the emergent property of purposefulness”.[11] Looking at systems in this manner at the very least allows us to learn why a specific behaviour or outcome emerged in a (human activity) system, and possibly offers an opportunity to steer this emergent behaviour or outcome towards a desirable state.

This trajectory of systems thinking argues that it is often difficult to define the specific problem to address in the management and administration of (social) processes and organisations. It acknowledges “that the nature of the problem cannot be understood separately from its solution. Policy responses cannot therefore be ‘designed’, but represent a way of navigating through the problem”.[12] Approaches such as the Soft Systems Methodology and the Viable System Model provide tools to map, explore and interrogate (social) processes and organisations (‘systems’) and to work towards improving them.

Conclusion: The relevance of systems thinking for regulatory governance and practice

In this chronological overview, I have at best been able to scratch the surface of the rich systems thinking literature and science that has emerged over the last hundred years or so. My aim with this overview was to indicate (i) how different trajectories of systems thinking have relevance for regulatory governance and practice in highly varied ways, and (ii) the central concepts of systems thinking that recur across these trajectories.

Thinking in systems can mean many things when applied to regulatory governance and practice: regulation ‘as’ system, regulation ‘of’ systems, regulation ‘through’ systems, regulation ‘in’ systems, regulation ‘between’ systems, and so on. Systems thinking provides the tools and concepts to look at regulation in a systematic and systemic manner, that is, to look at the parts as well as the whole. Likewise, it helps us think of regulation as sometimes being complicated and at other times being complex: sometimes it has many parts that influence the outcome in a predictable and linear manner, and sometimes the outcome emerges in an unpredictable and non-linear manner.

Equally important, systems thinking helps to think about regulatory governance and practice in different ways. Thinking about society as having functionally different systems asks us to consider what language or coding will resonate with those we seek to target through regulation. Thinking about regulation as a system of stocks and flows asks us to consider the risks of oscillation that may result from a specific regulatory intervention. Thinking about systems as a heuristic tool asks us to be modest in what we can achieve through regulatory reform and accept that sometimes we can only learn how to do better the next time.

Embracing systems thinking as a tool for regulatory governance and practice in this manner also fits well with very recent developments in systems thinking. These hold that we should not radically distinguish between order and chaos, but think in terms of partial order. Sometimes phenomena are complex and show nonlinearity, and sometimes they are merely complicated and predictable. In regulatory governance and practice, there is likely room for both ‘old school’ reductionism and the more recent holism practised in systems thinking.

[1] Mingers, J. (2015). Systems thinking, critical realism and philosophy. London: Routledge.

[2] Checkland, P. (1999). Systems thinking, systems practice. Chichester: John Wiley & Sons.

[3] Geyer, R., & Rihani, S. (2010). Complexity and public policy: A new approach to twenty-first century politics, policy and society. London: Routledge.

[4] Mingers, J. (2015). Systems thinking, critical realism and philosophy. London: Routledge.

[5] von Bertalanffy, L. (1969). General system theory: Foundations, developments, applications. New York: George Braziller Inc.

[6] Luhmann, N. (1995). Social systems. Stanford: Stanford University Press.

[7] Sawyer, R. K. (2005). Social emergence: Society as complex systems. Cambridge: Cambridge University Press.

[8] Luhmann, N. (1995). Social systems. Stanford: Stanford University Press.

[9] Meadows, D. (2008). Thinking in systems: A primer. White River Junction, VT: Chelsea Green Publishing.

[10] Senge, P. (2006). The fifth discipline: The art & practice of the learning organization. New York: Currency.

[11] Checkland, P. (1999). Systems thinking, systems practice. Chichester: John Wiley & Sons.

[12] Stewart, J., & Ayres, R. (2001). Systems theory and policy practice: An exploration. Policy Sciences, 34, 79-94.
