
Tag: systems theory

On energy loss in a system

Every system is, in its essence, a network of actors that perform it into existence from moment to moment. The participants in the system, or actors in the network, enact and perform it through their daily routine operations.

Some of these routine operations are beneficial to the system being performed, and some are not. Some add to the energy of the system and therefore reduce entropy, while others take away from that energy and increase entropy. If the former outweigh the latter, we can say the system is net positive in its energy balance because it generates more energy than it wastes. If the latter outweigh the former, we can say the system is net negative in its energy balance because it wastes more energy than it generates. How do we distinguish between the two in practice?

The rule of thumb is that any action that increases complexity in a system is, in the long term, entropic for that system. In other words, it increases disorder and the energy costs needed to maintain the internal coherence of the system, and is therefore irrational from the system’s perspective. For example, this includes all actions and system routines that increase friction within the system, such as adding steps needed to complete a task, adding reporting paperwork, adding bureaucratic levels a message must go through, and so on. Every operation a piece of information must go through in order to travel between the periphery, where contact with external reality happens, and the center, where decision making occurs, comes at an energy cost and generates friction. Over time and at scale these costs stack up and increase entropy within the system.

Needless to say, the more hierarchical and centralized an organization is, the more entropy it generates internally.

In addition, what appears as a rational action at a certain level can be irrational from the perspective of the system as a whole. For example, if a layer of management increases paperwork, this is a perfectly rational action for that management layer, because it makes that layer more needed and important within the system’s internal information flow; however, it is a totally irrational action from the point of view of the system, because it increases the system’s internal operational costs.

Put differently, from the point of view of a system such as a large hierarchical organization or a corporation, the only actions of the agents comprising it that can be considered rational are the ones that improve the net energy balance of the system – i.e. reduce internal friction and/or increase external energy intake.

Importantly, this should be viewed across a time axis.

For example, when it comes to a complex operation such as a merger between two departments, or two companies, it might be a good idea to compare the net energy balance before and after for the two original systems and for the new system that has emerged as a result of their merger. It is also important to look at a high enough granularity to understand the specifics of each network within the system and its operations over time.

Say you had two admin structures servicing two different departments, and, now that the departments have merged, senior management optimizes the two admin structures into one and cuts 50% of the staff due to ‘overlapping roles’. On the face of it this is logical and should reduce internal energy drag, as admin structures are net negative – they don’t bring in new energy and have no contact with external reality.

However, the newly merged admin structure must now service a part of the system twice as large as before, and as a result ends up delegating 30% of that new workload back to the front line staff it is nominally servicing. As a result, the front line staff now have to perform 30% more reporting paperwork, which is net energy negative, and have that much less time to bring new energy into the system. In effect, the long-term effects of this ‘optimization’ are net energy negative and result in increased friction within the entire system that was supposed to be ‘optimized’.
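
A minimal sketch of this example in Python may help make the arithmetic concrete. The staff counts, the energy units, and the reading of ‘30% more paperwork’ as a 30% loss of productive front-line time are all hypothetical assumptions chosen for illustration, not figures from any real merger:

```python
# Toy model of the merger example above. All numbers are hypothetical.
FRONT_LINE_PER_DEPT = 100   # staff in contact with external reality, per department
ADMIN_PER_DEPT = 10         # admin staff, per department
ENERGY_PER_PERSON = 1.0     # energy one fully productive front-line person brings in

def net_energy(front_line, admin, productive_share):
    """Energy brought in by the front line minus the upkeep cost of admin.

    productive_share: fraction of front-line time spent on external work
    (the rest goes to internal reporting). Admin bring in no external energy.
    """
    generated = front_line * ENERGY_PER_PERSON * productive_share
    admin_cost = admin * ENERGY_PER_PERSON
    return generated - admin_cost

# Before: two departments, each losing 10% of front-line time to paperwork.
before = 2 * net_energy(FRONT_LINE_PER_DEPT, ADMIN_PER_DEPT, productive_share=0.90)

# After: one merged admin team at half the size, but the front line now loses
# an extra 30% of its time to the reporting work delegated back to it.
after = net_energy(2 * FRONT_LINE_PER_DEPT, ADMIN_PER_DEPT, productive_share=0.60)

print(before, after)  # 160.0 vs 110.0 -- the 'optimized' system has a worse energy balance
```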

Management entropy and the Red Queen Trap

I had an interesting conversation about my essay on the Red Queen Trap with someone on LinkedIn, and it made me think about something I did not explain in the essay.

In an ideal environment each element of a system will be acting rationally and striving towards its own preservation and, by extension, the preservation of the system. Rational action here can be understood as the action resulting in optimal energy efficiency from a given number of viable options, where optimal energy efficiency is a function of the energy that must be spent on the action vs the energy that is gained from performing the action. The scenario I describe in the Red Queen Trap essay is set in such an ideal environment.
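
Read this way, rational choice reduces to a simple comparison. Here is a minimal, hedged sketch in Python; the actions and their energy values are invented purely for illustration:

```python
# Pick the most rational action from a set of viable options.
# The options and their energy values are purely illustrative.
options = {
    "automate the weekly report": {"spent": 2.0, "gained": 5.0},
    "add another approval step": {"spent": 3.0, "gained": 1.0},
    "do nothing": {"spent": 0.0, "gained": 0.0},
}

def rational_choice(options):
    # Optimal energy efficiency: the action that maximizes energy gained minus energy spent.
    return max(options, key=lambda action: options[action]["gained"] - options[action]["spent"])

print(rational_choice(options))  # -> "automate the weekly report"
```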

However, in the real world individual network actors often do not act rationally towards their own or the system’s preservation. This is not necessarily out of stupidity or malice, but is often due to limited information – what Clausewitz called ‘the fog of war’ – or a host of other potential motivations which appear irrational from the perspective of the system’s survival. What is more, the closer an actor is to the system’s decision-making centers, the higher the impact of their irrational decisions on the overall state of the system. The irrational decisions of front-line staff [the periphery] are of an entirely different magnitude to the irrational decisions of senior management [the decision-making center].

In practice this means that in complex hierarchical systems decision-making centers will have much higher entropy than the periphery. In other words, they will be dissipating a lot of energy on internal battles over irrational decisions, in effect actively sabotaging the internal cohesion of the system. As a reminder, the lower the internal cohesion of a system, the more energy the system must spend on performing itself into existence. The higher entropy of decision-making centers may be harder to observe in the normal course of operations but becomes immediately visible during special cases such as organizational mergers or other types of system-wide restructuring.

Interestingly, it is in such special cases when senior management is often tempted to make the internal environment of the system even more competitive – through the layering of KPIs or other means – in order to ‘optimize the system’ and protect its own position in the hierarchy. While on the face of it this appears to be a rational decision, it invariably ends up lowering internal cohesion even further, thereby increasing energy costs and routing even more resources away from the periphery and contact with reality [market competition].

The Red Queen Trap

The Red Queen Trap takes its name from the famous Red Queen paradox in Lewis Carroll’s Through the Looking Glass. In this story, a sequel to Alice’s Adventures in Wonderland, Alice climbs through a mirror and enters a world in which everything is reversed. There she encounters the Red Queen, who explains to her the rules of this world, which resembles a game of chess. Among other things, the Red Queen tells Alice:

It takes all the running you can do, to keep in the same place.

On the face of it, this is an absurd paradox, but it reveals an important insight about a critical point in the life of every system. Let me explain.

Every system, be it a single entity or a large organization, must perform itself into existence from moment to moment. If it stops doing that, it succumbs to entropy and falls apart. Spoiler alert: in the long run, entropy always wins.

To perform itself into existence every system must expend a certain amount of energy, which is a function of the relationship between its internal state and the external conditions it operates in. In other words, it must expend some energy on keeping its internals working smoothly together, and then expend some energy on resisting and adapting to adverse external conditions.

The better adapted a system’s internal state is to its external conditions, the less energy it must dedicate to perform itself into existence, and the larger the potential energy surplus it can use to grow, expand, or replicate itself.

However, external reality is complicated [not to be confused with complex] and changes dynamically in ways that cannot be modeled over the long term and require constant adjustments by the systems [organisms, humans, organizations] operating within it. In other words, an external state observable at time A is no longer present at time B.

This is a problem for all systems because it requires them to change how they operate.

It is a small problem for simple systems which are usually internally homogeneous and highly distributed. Their homogeneity means they don’t need to spend much energy to maintain their internal state, and their distributed topology means they make decisions and react very fast.  

It is a serious problem for complex systems [large organizations], which are usually rather centralized and heterogeneous. Their heterogeneity means they must expend a lot of energy to maintain a coherent internal state consisting of various qualitatively different elements, and their centralized topology means they react and make decisions rather slowly.

It is a profound problem for complex hierarchical systems [large organizations with vertically integrated decision making] which consist of multiple heterogeneous elements stacked along one or more vertical axes. Vertical integration means that each successive layer going up is further removed from direct exposure to external conditions and is, therefore, slower in adjusting to them.

A system might be quite successful in adjusting its internal state to external conditions at time A, but a later time B might present a different configuration of conditions to which the internal state of the system at time A is profoundly inadequate. The more complex the system, the more energy it must expend in adjusting to changes in external conditions from time A to time B.

Complex hierarchical systems have the hardest time making these adjustments because key strategic elements of their internal state [i.e. decision-making centers high in the hierarchy] are far removed from direct contact with external conditions. To orient themselves and perform the system’s OODA loop they rely on communication about external conditions reaching them from the periphery of the system, while orders on necessary adjustments must travel the other way, from center to periphery. This takes time, and the more layers the signal from the periphery must pass through on its way to the center, the more abstracted it becomes from external conditions. In other words, the center receives a highly imperfect version of the external conditions about which it must make adaptive decisions.
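
A toy sketch of this abstraction effect in Python: if each layer passes on a slightly degraded version of the signal and adds a handling delay, fidelity decays multiplicatively while latency grows with depth. The per-layer values are assumptions chosen only to illustrate the shape of the curve:

```python
# Toy illustration: how hierarchy depth degrades what the center actually sees.
# The per-layer fidelity and delay values are assumptions, not measurements.
def signal_at_center(layers, fidelity_per_layer=0.9, delay_per_layer=1.0):
    """Return (fidelity, delay) of a signal travelling from periphery to center.

    Each layer passes on a slightly more abstracted version of the signal
    (multiplicative loss) and adds a fixed handling delay (additive cost).
    """
    fidelity = fidelity_per_layer ** layers
    delay = delay_per_layer * layers
    return fidelity, delay

for layers in (1, 3, 6, 10):
    fidelity, delay = signal_at_center(layers)
    print(f"{layers:>2} layers: center sees {fidelity:.0%} of reality, {delay:.0f} units late")
```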

Over time, this generates a growing number of errors in the internal state of the system, requiring more and more energy to be routed to internal maintenance [i.e. bureaucratic paperwork], leaving less and less surplus energy for adaptation, growth, and expansion. Eventually, and this stage can arrive very fast, the system reaches a state of pseudo-equilibrium in which all energy it can produce goes towards internal maintenance and there is zero surplus energy left. This is where the Red Queen Trap kicks in:

The system does all the running it can do, to keep in the same place.

How does the trap work? First, from the inside everything in the system still seems to be operating smoothly and things are humming along following external conditions at present time A. However, this is a false perception of equilibrium, because when external conditions invariably change in future time B the system will have no surplus energy reserves to adjust to the new conditions.

The more imperfect the version of external conditions reaching the center of decision-making, the more pronounced the system’s inertia in this state of pseudo-equilibrium, and the deeper it goes into the Red Queen Trap.
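
This drift into pseudo-equilibrium can be sketched in a few lines of Python: hold the system’s productive capacity fixed and let maintenance costs compound as internal errors accumulate. The growth rate and starting values are illustrative assumptions, not claims about any real organization:

```python
# Toy dynamics of the slide into pseudo-equilibrium.
production = 100.0     # energy the system can generate per period (held constant)
maintenance = 20.0     # energy spent on internal upkeep in the first period
error_growth = 1.15    # accumulated internal errors make upkeep 15% dearer each period

period = 0
while production - maintenance > 0:   # surplus still available for adaptation and growth
    period += 1
    maintenance *= error_growth

print(f"Zero surplus after {period} periods: all the running it can do, to keep in the same place.")
```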

Second, having eventually discovered there are no more surplus energy reserves left, the system must now make a choice.  In the absence of surplus energy and provided there is no energy transfer from the outside, it must somehow free up energy from within its internal state to adapt. The question is, which internal elements should be sacrificed to free up that energy? This is where the Red Queen Trap’s simple elegance is fully revealed.

Essentially, there are two options – a seductively easy one and an unthinkable one. The seductively easy option is to sacrifice the periphery, or elements of it, and preserve the decision-making center. It is an easy choice for the center to make because it naturally sees itself as the key element of the system and this choice allows it to remain intact. It is a seductive choice because the center suddenly finds itself with a flush of spare energy which it can use to maintain the pseudo-equilibrium and often even to grow itself at the cost of the periphery. Alas, the elegance of the trap is in the fact that the seductively easy option removes the center even further from external conditions; less periphery equals fewer opportunities to observe and react quickly to external reality, thereby further magnifying the initial conditions that brought the system to this state in the first place. By making that choice the center sinks further into the trap.

By contrast, the unthinkable option is to sacrifice the center and preserve the periphery, thereby flattening the internal structure of the system into a less hierarchical form. It is an unthinkable option for the center to make because, as pointed out above, it naturally sees itself as the key element of the system, and this choice forces it to sacrifice itself. It is also unthinkable because it involves a thorough rethinking of the internal structure of the system, which until that moment was organized entirely around vertically integrated decision making, with little to no autonomy in the periphery. The center must not only sacrifice some of itself but also reorganize the periphery in a way allowing it to perform those functions in place of the center. This would allow the system to free itself from the trap.

Most systems choose the seductively easy option and the Red Queen Trap eventually grinds them into oblivion. Those few systems that go for the unthinkable option escape the trap and, if they remain persistent in their application of the unthinkable, learn how to go different places with running to spare.

Network architecture encounters

These are some loosely organized observations about the nature of network topologies in the wild.

In terms of both agency and information, all entities, be they singular [person], plural [clan/tribe/small company], or meta-plural [nation/empire/global corporation] are essentially stacks of various network topologies. To understand how the entities operate in space these topologies can be simplified to a set of basic characteristics. When networks are mapped and discussed, it is usually at this 2-dimensional level. However, in addition to operating in space, all entities have to perform themselves in time.

This performative aspect of networks is harder to grasp, as it involves a continuously looping process of encountering other networks and adapting to them. In the process of performative adaptation all networks experience dynamic changes to their topologies, which in turn challenge their internal coherence. This process is fractal, in that at any one moment there is a vast multiplicity of networks interacting with each other across the entire surface of their periphery [important qualification here – fully distributed networks are all periphery]. There are several important aspects to this process, which for simplicity’s sake can be reduced to an interaction of two networks and classified as follows:

1] the topology of the network we are observing [A];

2] the topology of network B, which A is in the process of encountering;

3] the nature of the encounter: positive [dynamic collaboration], negative [dynamic war], zero sum [dynamic equilibrium].

All encounters are dynamic, and can collapse into each other at any moment. All encounters are also expressed in terms of entropy – they increase or decrease it within the network. Centralized networks cannot manage entropy very well and are extremely fragile to it.

Positive encounters are self-explanatory, in that they allow networks to operate in a quasi-symbiotic relationship strengthening each network. These encounters are dynamically negentropic for both networks, in that they enable both networks to increase coherence and reduce entropy.

Negative encounters can be offensive or defensive, whereby one or both [or multiple] networks attempt to undermine and/or disrupt the internal coherence of the other network/s. These encounters are by definition entropic for at least one of the networks involved [often for all], in that they dramatically increase entropy in at least one of the combatants. They can, however, be negentropic for some of the participants. For example, WW2 was arguably negentropic for the US and highly entropic for European states.

Zero sum encounters are interesting, in that they represent a dynamic cancelling out of networks. There is neither cooperation nor war, but a state of co-presence without an exchange of entropy in a dynamic time-space range. I believe this is a rare type of encounter, because the absence of entropy exchange can occur only if 1] there is no exchange of information or agency, or 2] the amount of agency/information exchanged is identical on both sides. Needless to say, this process cannot be easily stabilized over a long time period and either morphs into one of the other two states or the networks stop encountering each other.

 

Teaching digital media in a systemic way, while accounting for non-linearity

Recently I have been trying to formulate my digital media teaching and learning philosophy as a systemic framework. This is a posteriori work, because philosophies can be non-systemic, but systems are always based on a philosophy. I also don’t think a teaching/learning system can ever be complete, because entropy and change are the only givens [even in academia]. It has to be understood as dynamic, and therefore more along the lines of rules of thumb than prescriptive dogma.

None of the specific elements of the framework I use are critical to its success, and the only axiom is that the elements have to form a coherent system. By coherence, I understand a dynamic setting where 1] the elements of the system are integrated both horizontally and vertically [more on that below], and 2] the system is bigger than the sum of its parts. The second point needs further elaboration, as I have often found that even highly educated people really struggle with non-linear systems. Briefly, linear progression is utterly predictable [x + 1 + 1 + … = x + n] and comfortable to build models in – i.e. if you increase x by 1, the new state of the system will be x + 1. Nonlinear progression, by contrast, is utterly unpredictable and exhibits rapid deviations from whatever the fashionable mean is at the moment – i.e. x + 1 = y. Needless to say, one cannot model nonlinear systems over long periods of time, as the systems will inevitably deviate from the limited variables given in the model.
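
A minimal sketch of this contrast in Python, with the logistic map standing in as my example of a nonlinear rule [it is my illustration, not part of the original argument]: two starting points that differ by one part in ten million remain indistinguishable under the linear rule, but end up in completely different places under the nonlinear one.

```python
# Linear vs nonlinear progression: a minimal, illustrative sketch.
def linear(x, steps):
    for _ in range(steps):
        x = x + 1            # x + 1 + 1 + ... = x + n: fully predictable
    return x

def nonlinear(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1 - x)  # logistic map: tiny initial differences explode
    return x

print(linear(0.3, 50), linear(0.3000001, 50))        # ~50.3 vs ~50.3000001
print(nonlinear(0.3, 50), nonlinear(0.3000001, 50))  # two wildly different values
```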

Axiom: all complex systems are nonlinear when exposed to time [even in academia].

The age of the moderns has configured us to think overwhelmingly in linear terms, while reality is, and always has been, regrettably non-linear [Nassim Taleb built a career pointing this out for fun and profit]. Unfortunately this mass delusion extends to education, where linear thinking rules across all disciplines. Every time you hear the “take these five exams and you will receive a certificate that you know stuff” mantra, you are encountering a manifestation of magical linear thinking. Fortunately, learning does not follow a linear progression, and is in fact one of the most non-linear processes we are ever likely to encounter as a species.

Most importantly, learning has to be understood as paradigmatically opposed to knowing facts, because the former is non-linear and relies on dynamic encounters with reality, while the latter is linear and relies on static encounters with models of reality.

With that out of the way, let’s get to the framework I have developed so far. There are two fundamental philosophical pillars framing the assessment structure in the digital media and communication [DIGC] subjects I have been teaching at the University of Wollongong [UOW], both informed by constructivist pedagogic approaches to knowledge creation [the subjects I coordinate are BCM112, DIGC202, and DIGC302].

1] The first of those pillars is the notion of content creation for a publicly available portfolio, expressed through the content formats students are asked to produce in the DIGC major.

Rule of thumb: all content creation without exception has to be non-prescriptive, where students are given starting points and asked to develop learning trajectories on their own – i.e. ‘write a 500 word blog post on surveillance using the following problems as starting points, and make a meme illustrating your argument’.

Rule of thumb: all content has to be publicly available, in order to expose students to nonlinear feedback loops – i.e. ‘my video has 20 000 views in three days – why is this happening?’ [first year student, true story].

Rule of thumb: all content has to be produced in aggregate in order to leverage nonlinear time effects on learning – i.e. ‘I suddenly discovered I taught myself Adobe Premiere while editing my videos for this subject’ [second year student, true story].

The formats students produce include, but are not limited to, short WordPress essays and comments, annotated Twitter links, YouTube videos, SoundCloud podcasts, single image semantically-rich memetic messages on Imgur, dynamic semantically-rich memetic messages on Giphy, and large-scale free-form media-rich digital artefacts [more on those below].

Rule of thumb: design for simultaneous, dynamic content production of varying intensity, in order to multiply interface points with the topic problematic – i.e. ‘this week you should write a blog post on distributed network topologies, make a video illustrating the argument, tweet three examples of distributed networks in the real world, and comment on three other student posts’.

 2] The second pillar is expressed through the notion of horizontal and vertical integration of knowledge creation practices. This stands for a model of media production where the same assessments and platforms are used extensively across different subject areas at the same level and program of study [horizontal integration], as well as across levels and programs [vertical integration].

Rule of thumb: the higher the horizontal/vertical integration, the more content serendipity students are likely to encounter, and the more pronounced the effects of non-linearity on learning.

Crucially, and this point has to be strongly emphasized, the integration of assessments and content platforms both horizontally and vertically allows students to leverage content aggregates and scale up in terms of their output [non-linearity, hello again]. In practice, this means that a student taking BCM112 [a core subject in the DIGC major] will use the same media platforms in BCM110 [a core subject for all communication and media studies students], as well as in JOUR102 [a core subject in the journalism degree] and MEDA101 [a core subject in media arts]. This horizontal integration across 100 level subjects allows students to rapidly build up sophisticated content portfolios and leverage content serendipity.

Rule of thumb: always try to design for content serendipity, where content of topical variety coexists on the same platform – i.e. a multitude of subjects with blogging assessments allowing the student to use the same WordPress blog. When serendipity is actively encouraged it transforms content platforms into so many idea colliders with potentially nonlinear learning results.

Adding the vertical integration allows students to reuse the same platforms in their 200 and 300 level subjects across the same major, and/or other majors and programs. Naturally, this results in highly scalable content outputs, the aggregation of extensively documented portfolios of media production, and most importantly, the rapid nonlinear accumulation of knowledge production techniques and practices.

On digital artefacts

A significant challenge across the academy as a whole, and media studies as a discipline, is giving students the opportunity to work on projects with real-world implications and relevance, that is, projects with nonlinear outcomes aimed at real stakeholders, users, and audiences. The digital artefact [DA] assessment framework I developed along the lines of the model discussed above is a direct response to this challenge. The only limiting requirements for a DA are that 1] artefacts should be developed in public on the open internet, thereby leveraging non-linearity, collective intelligence, and fast feedback loops, and 2] artefacts should have a clearly defined social utility for stakeholders and audiences outside the subject and program.

Rule of thumb: media project assessments should always be non-prescriptive in order to leverage non-linearity – i.e. ‘I thought I was fooling around with a drone, and now I have a start-up and have to learn how to talk to investors’ [second year student, true story].

Implementing the above rule of thumb means that you absolutely cannot structure and/or limit: 1] group numbers – in my subjects students can work with whoever they want, in whatever numbers and configurations, with people in and/or out of the subject, degree, or university; 2] the project topic – my students are expected to define the DA topic on their own, the only limitations being the criteria of public availability and social utility, and the broad confines of the subject area – i.e. digital media; 3] the project duration – I expect my students to approach the DA as a project that can be completed within the subject, but that can also be extended throughout the duration of the degree and beyond.

Digital artefact development rule of thumb 1: Fail Early, Fail Often [FEFO]

#fefo is a developmental strategy originating in the open source community, closely related to the ‘release early, release often’ principle formalized by Eric Raymond in The Cathedral and the Bazaar. FEFO looks simple, but it embodies a fundamental insight about complex systems. If a complex system has to last in time while interfacing with nonlinear environments, its best bet is to distribute and normalize risk taking [a better word for decision making] across its network, while also accounting for the systemic effects of failure within the system [see Nassim Taleb’s Antifragile for an elaboration]. In the context of teaching and learning, FEFO asks creators to push towards the limits of their idea, experiment at those limits and inevitably fail, and then to immediately iterate through this very process again, and again. At the individual level the result of FEFO in practice is rapid error discovery and elimination, while at the systemic level it leads to a culture of rapid prototyping, experimentation, and ideation.

Digital artefact development rule of thumb 2: Fast, Inexpensive, Simple, Tiny [FIST]

#fist is a developmental strategy formulated by Lt. Col. Dan Ward, Chief of Acquisition Innovation at the USAF. It provides a rule-of-thumb framework for evaluating the potential and scope of projects, allowing creators to chart ideation trajectories within parameters geared for simplicity. In my subjects FIST projects have to be: 1] time-bound [fast], even if part of an ongoing process; 2] reusing existing, easily accessible techniques [inexpensive], as opposed to relying on complex new developments; 3] constantly aiming away from fragility and towards structural simplicity [simple]; 4] small-scale with the potential to grow [tiny], as opposed to large-scale with the potential to crumble.

In the context of my teaching, starting with their first foray into the DIGC major in BCM112 students are asked to ideate, rapidly prototype, develop, produce, and iterate a DA along the criteria outlined above. Crucially, students are allowed and encouraged to have complete conceptual freedom in developing their DAs. Students can work alone or in a group, which can include students from different classes or outside stakeholders. Students can also leverage multiple subjects across levels of study to work on the same digital artefact [therefore scaling up horizontally and/or vertically]. For example, they can work on the same project while enrolled in DIGC202 and DIGC302, or while enrolled in DIGC202 and DIGC335. Most importantly, students are encouraged to continue working on their projects even after a subject has been completed, which potentially leads to projects lasting for the entirety of their degree, spanning 3 years and a multitude of subjects.

In an effort to further ground the digital artefact framework in real-world practices in digital media and communication, DA creators from BCM112, DIGC202, and DIGC302 have been encouraged to collaborate with and initiate various UOW media campaigns aimed at students and outside stakeholders. Such successful campaigns as Faces of UOW, UOW Student Life, and UOW Goes Global all started as digital artefacts in DIGC202 and DIGC302. In this way, student-created digital media content is leveraged by the University and by the students for their digital artefacts and media portfolios. To date, DIGC students have developed digital artefacts for UOW Marketing, URAC, UOW College, Wollongong City Council, and a range of businesses. A number of DAs have also evolved into viable businesses.

In line with the opening paragraph I will stop here, even though [precisely because] this is an incomplete snapshot of the framework I am working on.