In Part 3 of this 4-part blog series, I use practical examples to discuss two principles (building on the five covered in prior posts) for analyzing the functionality and unanticipated emergent behavior of complex systems, as applicable to diverse socio-technical systems. Let's explore...
As seemingly different as human beings, relationships, machinery, governance, businesses, personal development, and a host of other entities appear to be, they actually have something in common. They are complex ‘systems’ and respond to similar principles or concepts, albeit in different application forms. Systems are created from the interactions of two or more interconnected parts to achieve a purpose that cannot be accomplished by each part independently or by the mere addition of the parts’ efforts. With every new addition to the system, the complexity increases, as the number of interactions within the system and with external parties multiplies. While complex systems are often challenging to analyze, some underlying principles anchored to a holistic view of systems cut across various types of systems and help with understanding existing systems, designing new ones, or fixing broken ones.
.... While complex systems are often challenging to analyze, some underlying principles anchored to a holistic view of systems help with understanding existing systems, designing new ones, or fixing broken systems....
Systems thinking allows us to manage complex systems by breaking them into manageable sizes while keeping sight of the whole, focusing on the behaviors that define such a system, driven by the interactions within the system and between the system and its environment. In this four-part blog series on practical principles of systems thinking, I explore 10 principles applicable to various types of complex systems. In Part 1, I discussed the principles of the system as a synergy (not just an addition) of its subsystems; stakeholder needs assessment as a critical basis of good ‘system’ design; and solution neutrality in defining the core needs that the ‘system’ addresses. Part 2 was a practical exposé of two additional principles – the right level of decomposition and abstraction to preserve system identity during analysis, and the concept of value delivery at the system interfaces.
Taking the conversation up a notch, in this third part I discuss two more principles: one on how the arrangement of a system enables or constrains its functioning, and the other on the unanticipated emergent behaviors of a system, often driven by higher-order interactions and the incentive structures within the system design.
Principle 6: Principle of Structure-Enabled System Functionality – ‘The How-Enabled What’ – The function delivered by a system depends largely on the arrangement and/or connections of its components/subsystems. As such, changing the functionality of a system sometimes requires only the rearrangement of existing parts to support the new function.
The defining features of a design and its associated system are 1) the form – “how the system looks” (its physical or virtual arrangement) – and 2) the function – “how the system works”. While the functionality of the system is the solution it provides to address key stakeholder needs, as discussed in prior principles, its ability to deliver that function hinges on the connections between its components. As such, a slight change in arrangement could change the function the system delivers. It is therefore important that the connectivity and interactions of the system components enable the flow of value all the way through to the user interface, so the system performs as intended.
So, how does this principle apply to your organization, relationships, and personal development system? It can serve as a basis for rectifying functionality issues or optimizing performance through rearrangement of the form (component structure) of the system. Changing the functionality of a system may not require new parts, only a rearrangement of existing parts to support the new function. For instance, it is always interesting to see how event spaces take on different functionalities just by changing the arrangement of existing chairs and artifacts. A quick reshuffle transforms the room from a classroom setting targeted at instructor-led training, to a boardroom arrangement for face-to-face discussions, to banquet seating for smaller group interactions – all from the same components. The same idea applies to improving collaboration between critical functions in a business system. Transforming the organizational structure from a hierarchy to an agile ‘team of teams’ shortens the cycle time for information to flow from customer-facing departments to back-office operational teams, resulting in faster product delivery through the elimination of redundant connections and the introduction of new information routes.
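The idea that the same parts, connected differently, deliver different functions can be sketched in a few lines of code. The two components and the numbers below are purely hypothetical illustrations, not drawn from any real system:

```python
def halve(x):
    """Component A: scales the signal down by half."""
    return x / 2

def clip(x):
    """Component B: limits the signal to a ceiling of 10."""
    return min(x, 10)

# Same two parts, two different arrangements (connections):
def arrangement_1(x):
    # clip first, then halve: the output can never exceed 5
    return halve(clip(x))

def arrangement_2(x):
    # halve first, then clip: the output can reach 10
    return clip(halve(x))

print(arrangement_1(40))  # 5.0
print(arrangement_2(40))  # 10
```

No new parts are added between the two arrangements; only the order of the connections changes, and with it the function the ‘system’ delivers.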
A practical example I experienced firsthand involved a standing fan that had been assembled and in use for many months, yet blew air inefficiently while sounding so loud that it would disturb the sleep of a light sleeper. The fan was on the verge of being discarded when someone suggested checking the orientation of the fan blades. It turned out the blades had been installed backwards, and so they were rearranged. This simple effort eliminated the loud noise we had become accustomed to and in fact transformed the fan into the most effective one in the house. What had been assumed to be a factory error or a substandard brand turned out to be a “form impact on functionality”: the connection meant to facilitate air flow had been disrupted by the misarrangement. Examples such as this abound across the range and types of complex systems, so before throwing more resources at a functionality issue, it is worth confirming that a ‘form rearrangement’ isn’t a quick fix for it. On the flip side, making changes to the arrangement of system components, perhaps for aesthetic or other reasons, may introduce changes (albeit slight) to the functionality of the system. Hence, changes to a system’s form must be managed with a holistic view of the interactions impacted by the change, and with an understanding of what new interactions have been introduced that translate to a change in the overall functioning of the system, to avoid unintended consequences.
....so before throwing more resources at a functionality issue, it is worth confirming that a ‘form rearrangement’ isn’t a quick fix for it....
So, with this understanding of how the form of a system impacts its function, system design or optimization should be straightforward – or maybe not? What happens when a great design delivers its intended function well but somehow also exhibits unintended or difficult-to-explain behaviors? Let’s explore this in principle #7.
Principle 7: Principle of Unanticipated Emergence – The interactions within a system, and external interactions with the system, may, in addition to the intended functionality, yield unexpected functions and performance characteristics (positive or negative), often driven by second-order effects and the design of the incentive structure.
Have you ever reviewed a system performance report and thought, “Hmm… it wasn’t supposed to do that”? That is exactly how the developers of AlphaGo, the artificial intelligence machine that beat the human world champion at Go, a Chinese strategy board game, felt when the machine made the move that became popularly known as “Move 37” – a move everyone initially thought was a mistake on the machine’s part (https://www.wired.com/2016/05/google-alpha-go-ai/). According to the movie AlphaGo, “Though trained by humans, Move 37 was a move that no human would make… AlphaGo machine was trained to play like a human but with the ability to learn patterns from the training games. The program eventually learnt new methods and taught us new ways to play Go”. The principle of unanticipated emergence evidenced in this scenario plays out daily in many complex socio-technical systems, with many stories of strategies, relationships, and policy implementations gone sour – or, in some cases, faulty policies somehow managing to drive positive unanticipated outcomes. Two key factors often come into play in these emergent behaviors: the underlying incentive structures, and the second-order effects of dynamic interactions within the system in response to those incentives, resulting in scenarios not accounted for in the initial system configuration. This happens because all the possible scenarios at dynamic interfaces are not (and probably cannot be) captured in design. It emphasizes the need for a healthy sense of vulnerability in design, ensuring that all assumptions are properly documented to enable a revisit if certain scenarios call those assumptions into question.
....Two key factors often come into play in these (unanticipated) emergent behaviors – the underlying incentive structures and the second-order effects of dynamic interactions...
The term ‘Cobra effect’ is often used across several fields to describe this concept of unanticipated emergent system behavior, in which an incentive structure ends up worsening the very behavior it was conceived to improve. The effect gets its name from a story often told of British rulers in one of their Indian colonies in the 1800s who, in response to an abundance of venomous cobras, paid the locals a stipend in exchange for the head of a dead cobra. Straightforward math, it appeared: pay for the service and make the cobras disappear. Unfortunately, it could not have been more wrong. After an initial decline in the number of cobras, they were back in multiples: the incentive structure encouraged the locals to breed cobras so they could receive payments for their heads. Seeing how miscalculated the move was, the payments were stopped, and with no remaining economic value, the captive cobras were released back to the streets, leaving the situation worse at the end than when the intervention was first conceived. Hence, while the system design may have been great, the actual incentive built into the policy implementation drove a dynamic interaction that was not initially conceptualized.
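The cobra story can be caricatured as a toy simulation. Every population figure and rate below is invented purely for illustration; the point is only the shape of the dynamic, where the bounty shrinks the wild population while quietly building a captive one:

```python
def simulate_cobra_bounty(bounty_years, total_years,
                          cull_rate=0.6, breed_growth=1.5):
    """Toy model of a per-head cobra bounty (all figures hypothetical)."""
    wild = 1000.0      # assumed starting wild population
    captive = 0.0      # cobras bred purely to claim the bounty
    for year in range(total_years):
        if year < bounty_years:
            wild *= (1 - cull_rate)                   # heads turned in for payment
            captive = (captive + 200) * breed_growth  # breeding is now profitable
        else:
            wild += captive   # bounty cancelled: worthless stock is released
            captive = 0.0
    return wild

# Three years of bounty, then cancellation in year four:
print(round(simulate_cobra_bounty(bounty_years=3, total_years=4)))  # 1489
```

After three years the wild population has collapsed, yet cancelling the bounty releases the captive stock and leaves the final population well above the original 1000 – the intervention made things worse.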
Examples of such incentive structures can be seen in payment structures in some contracts that encourage a never-ending work scope instead of delivering on a deadline; in organizational metrics that drive unhealthy competition instead of synergy, with suboptimal results for the organization driven by local optimizations by individuals looking to outshine others; or even in the design of organizational divisions where the performance measures for the research group, for instance, are in exact contrast with those of the marketing/sales group the research is meant to enable, leaving stranded products and unsatisfied customers. In the AlphaGo case, the machine was programmed to win a game by any margin, so that winning by 1 point is the same as winning by 30 points. This had fundamental implications for how the machine ranked decisions, and it led to Move 37, which yielded a marginal victory and would likely not have been made by a human player, whose incentive is to play safe and win by larger margins. Through many games played against itself, the machine learnt new strategies to win (albeit by a minimal margin) – a second-order effect of the dynamic interactions, enabled by the incentive designed into the machine’s decision making.
....these incentive structures can be seen in the case of organizational metrics driving unhealthy competition .... with suboptimal results for the organization driven by local optimizations by individuals looking to outshine others...
These unplanned functionalities tend to arise at dynamic interfaces. It is therefore important to understand the system interfaces and to consistently analyze potential internal or external interactions, enabled by the design of the technical or socio-technical ‘system’ incentives, that could lead to unplanned functionalities. While it is nearly impossible to plan for every eventual outcome of a dynamic scenario during system design, a good understanding of the risks left in the design is required, and those risks should be documented and communicated accordingly to serve as a basis for determining the trigger points for revisiting the design bases of the now-evolved system. Sufficient effort must be put into determining the incentive structures of any system, with a key focus on the interaction risks within the system and with its environment, to minimize unanticipated emergence. Also, short-cycle feedback loops must be designed into the system to ensure early identification of deviations in system performance and to initiate a re-evaluation before the system spirals out of control.
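A short-cycle feedback loop of this kind can be sketched as a simple rolling-baseline monitor that flags performance drift early. The window size, tolerance, and readings below are illustrative assumptions, not recommended values:

```python
from collections import deque

def make_monitor(window=5, tolerance=0.2):
    """Flag readings that drift more than `tolerance` from a rolling baseline."""
    baseline = deque(maxlen=window)

    def check(reading):
        if len(baseline) < window:
            baseline.append(reading)   # still learning 'normal' behavior
            return "learning"
        mean = sum(baseline) / len(baseline)
        if abs(reading - mean) > tolerance * mean:
            return "deviation"         # trigger: revisit the design assumptions
        baseline.append(reading)
        return "ok"

    return check

check = make_monitor()
readings = [100, 101, 99, 100, 102, 101, 140]
print([check(r) for r in readings])
# ['learning', 'learning', 'learning', 'learning', 'learning', 'ok', 'deviation']
```

The final reading of 140 is caught on arrival rather than after the system has spiraled, which is the whole point of keeping the feedback cycle short.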
So, teeing it all up…
As you think about the complex human body system, your business, family relationships, machinery, and other systems that you interact with or are accountable for, a few questions must be answered to test the application of the two principles discussed in this blog post, as follows:
Do the arrangements of the components of my system enable the functionality it is designed to deliver? Are there opportunities to rearrange certain components to address suboptimal performance in some aspects of the system? What is my incentive structure? What sort of behavior is my incentive structure likely to drive over the long term? Do I have structures in place to balance the loop? How robust are my system assumptions across the range of future scenarios that may occur? What are the triggers to revisit my assumptions? Do I have a feedback loop in place to effectively detect unanticipated system behaviors? Affirmative answers to these questions, alongside those discussed in the five prior principles, bring you closer to realizing the full potential of your ______ system.
If you missed any of the previous principles, I invite you to read Part 1 and Part 2 of this series, which leverage day-to-day examples to drive home principles applicable to systems of varying complexities. In the next and final post in this series, I will discuss the final three of the 10 systems thinking principles I identified from my learnings and reflections. Watch out for it.
Let me know what resonates with you in the comments section, and if you found this blog post insightful, share it with someone else. The world needs a growing community of systems thinkers to address our complex socio-technical systems and emergent issues.
Wishing you a transformational journey ahead.
Funmilola (Funmi)
#SystemThinking #funmilolaasasblog #interactions #SystemBehavior #subsystems #integration #SystemPrinciples #stakeholders #interfacedesign #SystemsThinking #systemform #systemfunction #SystemInterface #systemdesign #UnanticipatedEmergence #DynamicInteractions