

The Gated Age

On the Iran war and the metaphysics of imperial collapse

The Gated Age (Flux by H1dalgo)

The Iran war has already been discussed at length in the standard ceremonial language of the strategic commentariat: tactics, signaling, psyops, deterrence, escalation management, blackmail, imperial decline, alliance systems, betrayal, energy security, great-power competition. Fine, all of that is real enough.

But these are descriptions at the level of a two-dimensional shadow-play. My orthogonal take is that this war, no matter how long it lasts, resembles a multidimensional object intersecting our three-dimensional world, and that what we perceive are only its crude visible cross-sections.

A strike package, a blockade threat, a ceasefire, and a fresh wave of empire discourse are fragments of a much larger and more complex dynamic forcing itself into legibility. What is appearing through the Iran war is a brief aperture onto the collapse of a whole metaphysical and strategic order. Here are some of the cross-sections I see now coming into view.

The Return of Nominalism

Anon, if you take one thing, and one thing only, from this piece, let it be this: eventually, the fridge always defeats the TV. History knows no exceptions. No matter the latest delusional universalist TV flavor, eventually, the local nominal reality of the fridge, the only reality with proteins and plumbing, reimposes itself. Hold that thought.

What is unfolding is more than simply another Middle Eastern war or a mere crisis of imperial hegemony. We are witnessing a revelatory event inside a dying metaphysical and strategic order, yet another triumphant return of the fridge. On a metaphysical level, the deeper process is the collapse of universalism as a credible organizing fiction. Again.

For roughly three centuries, the West presented its power as the outward form of the universal values of reason, law, progress, humanity, and freedom. That was the master spell deployed globally as a weapon of cognitive seduction.

You see, what the Enlightenment did was perform a substitution: God was removed as the guarantor of universal essences, and Reason was installed in his place. The universal truths of the Church were laundered, stripped of their uncomfortably European theism, and rebranded into “human nature,” “universal rights,” “the general will,” “historical progress,” and “rational consensus.”

Aquinas said universals are real because God’s intellect instantiates them in matter. Kant said they are real because Reason itself constitutes experience. The outward architecture is identical, but the divine author was redacted.

The French Revolution applied, in the phrase of the philosophes, “the uniform principles of reason to society as a whole”: a universalist program that had to literally decapitate the actual in order to install the abstract.

Robespierre is Aquinas without the humility and the theological guardrails that at least made Aquinas accountable to something outside his own reason. And so, unlike Aquinas, the Robespierres of the last three centuries had no scruples about erasing large swathes of the real in the name of universal principles.

We’ve been living in that metaphysical sleight of hand ever since. The progressive universalism dictating the present is, metaphysically speaking, the most extreme version of Plato’s ante rem realism, whose universals are posited as prior to and more real than the individuals in whom they supposedly inhere. Medieval ante rem universalists had to at least answer to Revelation. The secular universalists of today answer to nothing but their own abstractions, which always exist ante rem, before the actual thing.

The Medieval world’s sworn enemies of universalism – the nominalist school of Roscelin, Abelard, Ockham, and Buridan – argued that nominalism’s political correlates of common law, subsidiarity, customary rights, particularity, and localism require patience, messiness, and the acceptance of irreducible local variance. Their entire focus was in rem, in the thing itself, and its contingent relations to other, equally irreducible, things. Nominalism is messy, local, and slow.

Universalism is clean, global, and in a hurry.

This is precisely what Ockham’s razor was designed to cut, and what Burke, Herder, and Nietzsche rebelled against. Ironically, this is also what Baudrillard described as circulating signs detached from reality, masking the disappearance of the real. Even more ironically, this is also the target of Foucault’s explicitly nominalist project: the dispersal of Enlightenment universals through genealogical analysis of their actual, particular, local, historical production.

What now unravels is the plausibility of that order’s claims to stand for anything beyond its own historically specific interests. The old universal language remains in circulation, but it no longer binds, compels assent, or conceals the machinery beneath it. The trapdoor is now visible.

This is why the present moment marks the return of nominalism, and why it matters. It is the moment when the world stops believing in abstractions as self-evident realities and begins, once again, to see only situated powers, local interests, concrete peoples, territorial exposures, and historical blocs.

You’re starting to see the fridge, anon. The universal human, the international community, the rules-based order, and the frictionless global commons begin to look less like universal truths and more like memetic weapons designed for TV.

Once the power that sustained them weakens, they lose their transcendence and become the crude provincial stencils of one civilizational grammar among others. Universalism dies when and because its guarantor becomes exposed.

Where does the Iran war figure here? It discloses this loss of universality at the point where imperial rhetoric meets hard infrastructure. Eventually, even the commentariat noticed that Iran straddles a vital choke point in the metabolic system of the global order.

Ever since the Vereenigde Nederlandsche Geoctroyeerde Oostindische Compagnie, or the United Dutch Chartered East India Company, popularly known as the Dutch East India Company, built its global network, the universal empire has always governed by controlling flows: energy, shipping, insurance, finance, communications, sanctions, intelligence, and visibility. It has ruled through the management of circulation. In that sense, the universal order always had a hidden body, and its name was logistics.

While the commentariat chattered about the international community, human rights, and the end of history, the system’s logistics, its hidden and true body, hummed along. What the Iran crisis exposes is that this body is now vulnerable in ways the memetic superstructure above it can no longer disguise.

Post-Enlightenment universalism was always Temu Platonism in secular drag.

It treated pure abstractions – rights, humanity, progress, consensus, the rational citizen – as priors more real than actual people and places. But abstractions survive only when there is a machine capable of making them seem inevitable.

In the beginning, universals were guaranteed by theology. Then, Reason guaranteed universals. Then, Empire. Now the guarantor is failing. Once the imperial substrate weakens, universals get provincialized as the local metaphysics of a historically specific Atlantic ruling bloc uniquely evolved to always hide behind abstractions. The trapdoor behind the TV.

But the fridge is here and wants its due: the return of nominalism in political form. The new nominalist age says there is no “international community”, only temporary coalitions. There is no “rules-based order”, only enforcement asymmetry and Fattah-2 diplomats coming for a visit. There is no universal subject, only protected constituencies and exposed populations. There is no neutral internet, only contested stack terrain.

And so, nominalism returns.

The Birth of Archeofuturism

The Western geopolitical imaginary is built on three pillars, whose names are Mackinder, Mahan, and Haushofer, all of which rest on the foundation established by Friedrich Ratzel. It was Ratzel, heavily influenced by the then-revolutionary discoveries of evolutionary biology, who first conceived of the modern state as an organism constantly seeking energy and Lebensraum.

It is within this conceptual frame that Mackinder would later develop his theory of the Eurasian pivot controlling the world island, Mahan would argue for naval supremacy as the decisive factor in great-power competition, and Haushofer would synthesize both into his theory of continental pan-regions.

All three triangulate the same problem from different angles. Mahan saw the ultimate expression of geopolitical power in mastering the trade arteries and strangling the interior. Mackinder saw it in mastering the interior and becoming impervious to sea-based strangling. Haushofer saw it in fusing land mass and industrial depth before maritime powers could encircle you.

Read together, they describe the structural tension of global politics as a permanent contest between land and sea. Within this frame, Iran’s primary role is as a valve actuator rather than a nation-state acting in a moral theatre.

Hormuz, like Suez and Bab el-Mandeb, is a choke point. A valve where universalism reveals its hidden body: sea lanes, payment rails, insurance markets, reserve currency plumbing, satellite visibility, cloud infrastructure, and sanctions architecture. Iran’s strategic significance lies in its ability to threaten a place where the empire’s metaphysics meets logistics.

But Mackinder, Mahan, and Haushofer, for all their insight, still assume that all the players on the board are equally and fully alive. They explain the geometry of conflict, but not the age and condition of the civilizations conducting it.

That is where Spengler enters. Where the spatial theorists ask who controls the Heartland and the Rim between land and sea, Spengler asks what phase a civilization is in when it tries to control anything at all. Where the triad describes the board and the pieces, Spengler describes the decay rate of the players.

According to him, every high culture passes through Spring (mythic, rural, organic), Summer (philosophical, aristocratic), Autumn (rationalist, urban, abstract), and Winter (imperial, caesarist, exhausted). Mahan, Mackinder, and Haushofer all assume the competing powers are operating at full civilizational vitality in a contest of geography, strategy, and will.

Spengler says you are first and foremost racing a cyclical biological clock. A civilization in Winter can still win battles, but it is executing strategy from a hollowing interior. Its fundamental vulnerability is now temporal, not spatial.

And temporal weakness eventually shows up as logistical weakness. Once the imperial center can no longer effortlessly naturalize its rule, the valves begin to wake up. Choke points cease to be mere coordinates in a universal system and become toll gates for ambitious peripheries.

Iran reveals exactly this depth of imperial exhaustion and, in the process, lays claim to its own sovereign valve geometry, its own gate. This is the shape of the world system now emerging, a fragmented neo-feudalist patchwork where gate access becomes conditional, territorial, and rent-bearing.

This is not the comfortably abstract oligarchic neo-feudalism of progressive critique, safely neutered from any chance of disrupting the very oligarchy it identifies. This is the neo-feudalism that happens when the abstraction of universal law cracks and access to critical flows becomes personal, territorial, and conditional.

In the coming form, sovereignty is less about clean borders and more about selective gatekeeping over circulation. That is why Iran is now charging for passage through the Strait, and why the Ansar Allah, whom the West knows as the Houthis, are itching to do the same at the Bab el-Mandeb. The Strait of Hormuz and the Bab el-Mandeb are the literal fridge doors of the global metabolic system.

As universal law weakens, what rises in its place is a patchwork of fortified zones, corridor powers, and gateway regimes. The decisive actors are those who can regulate passage across the critical thresholds of money, energy, compute, migration, security, data, and supply chains. The coming order will not be structured primarily by sovereign equality, but by unequal command over gate permissions. The new lord is whoever controls the gate, the fridge door.

Neo-feudalism in its emergent, archeofuturist form is the return of gate permissions at all scales.

You see the dawn of this age everywhere, from Iran to online age verification regimes, and this is only the beginning. This is why the world ahead will feel medieval in structure while remaining hypermodern and even futuristic in technology. It is a Frank Herbert world. It will be organized around overlapping zones of protection, extraction, and controlled access, rather than around universal inclusion into a single normative space.

Some regions will function as secure stack-civilizations with their own financial rails, cloud architectures, identity systems, defense umbrellas, and epistemic filters. Others will become buffer regions, transit corridors, sacrificial peripheries, dead zones, or contested interfaces.

The old dream of one world under a single legal and cognitive horizon will not vanish entirely. The warm glow of the TV has a lot of inertia, and so it will mostly survive as nostalgia, branding, and selective cover for those who control the local fridge.

When universals lose force, particular powers stop pretending to govern for mankind and start governing gate access. That is neo-feudalism. Or, better yet, that is archeofuturism.

The birth of the Gated Age.

Schismogenetic Rupture

The Western strategic imagination appears trapped because it now speaks two incompatible languages at once. On one level, it still claims to defend openness, stability, law, and humanity in universal terms. On another, it increasingly behaves like a civilization defending its own depth, allies, corridors, and infrastructural primacy against hostile rivals.

These two frames could coexist while power was overwhelming and legitimacy was cheap. They become unstable under imperial exhaustion and decline. Spengler, again. The more the West acts in the name of universal order, the more the rest of the world reads those acts as the particular interests of one bloc protecting its residual position while pretending to speak for “the global community.”

Every intervention from now on, therefore, produces not only tactical but also symbolic consequences. It multiplies doubt, hardens divergence, and accelerates the split between how the center narrates itself and how the periphery perceives it.

The system used to stabilize through convergence, but from now on, it will escalate through reciprocal differentiation. This is where Bateson’s schismogenesis becomes the master concept. As pressure rises, each actor becomes more itself against the other.

The universalists insist more loudly on universality just as their claims lose credibility in an accelerating positive feedback loop of differentiation. Their rivals insist more fiercely on civilizational or sovereign particularity just as interdependence deepens.

The open network becomes more tightly filtered as mutual exposure becomes intolerable. States proclaim sovereignty more aggressively precisely because the technical substrate beneath them is more entangled than ever.

The result is cumulative identity hardening. You see it in the structural breakdown of the Iran negotiations as a mutually recognized interface. In their deepening disagreements, the actors are becoming mutually unintelligible.

Assassinating the negotiators is the ultimate act of schismogenetic negotiation.

The destruction of the negotiating interface is itself the primary message. It is a radical schismogenetic act by the imperial center itself, performed beneath the vapid banner of universal order. That is why every crisis now comes supercharged with narrative heat.

The center can no longer sustain its authority by genuinely integrating differences. Instead, it sustains itself by intensifying antagonism, annihilating mediating forms, and recoding fracture as the defense of humanity. Universalism reduced to a cover story for imperial schismogenesis.

That narrative divergence is itself another aspect of schismogenesis. The more the center acts in the name of universality, the more the periphery sees managed civilizational division. So, every imperial enforcement act now has a double output. On a tactical level, maybe coercion works, maybe it doesn’t. On a symbolic level, universal legitimacy continues to decay.

Bateson’s schismogenesis is the engine here, because we are entering a world in which actors become themselves more intensely by forcefully reacting against one another. We are moving towards systemic identity formation through antagonistic differentiation.

You see that in the imperial center, in Iran, in Russia and Ukraine, in the EU arming itself, in China, and in Japan. That trend produces a world in which blocs no longer merely disagree but become structurally unable to share the same semantic field. This emerging patchwork of split epistemic systems entering accelerating schismogenetic positive feedback loops is also a mark of archeofuturism.

Synth-Feudalism

AI enters the return of nominalism, the emergence of archeofuturism, and rupturing schismogenesis as an accelerant. In that, Nick Land, the prophet of accelerationism, is correct. In popular narratives of the singularity, AI is often imagined as the final integrator, the technology that will bind the species into a single cognitive field.

Technically, AI indeed compresses the gap between information and action, between center and periphery, between stored knowledge and live decision. It does turn cognition into infrastructure.

But this very power makes it structurally intolerable under conditions of schismogenesis. A universal intelligence layer can only exist if major powers accept a shared host, a shared training substrate, and a shared semantic authority. That condition is disappearing, and fast.

In practice, therefore, the rise of AI accelerates schismogenesis and cognitive sovereignty.

That means the future will not be one global mind, but a contested archipelago of model-zones, inference regimes, and epistemic fortresses. States and blocs will seek sovereign models, local training sets, trusted compute stacks, domestic agent ecologies, filtered knowledge systems, and selective disconnection from hostile machine environments.

The old internet divided users while sharing infrastructure. The next phase of the net will divide infrastructure into Chinet, Runet, EUnet, Anglonet, and so on; machine inference will be distributed across rival feudal zones. At that point, fragmentation reaches a new depth.

Here, the world evolves from disagreeing about values to thinking through divergent synthetic substrates. In other words, the shared internet will begin to fracture at the level of cognition, a much darker horizon.

The coming feudal borders will be both territorial and inferential.

The liberal world first promised universal man, then universal rights, then universal markets, then universal connectivity. The next promise is universal intelligence. This is why AGI will be the last failed universal. AGI is arriving precisely as the institutions that might host such universality are losing legitimacy. So the contradiction sharpens. The more technically possible a planetary cognition layer becomes, the less politically acceptable it becomes.

The stronger AI grows, the stronger the incentive to territorialize it. Oh, the great irony. The dream of universal reason returns at the exact historical moment when the world ceases to trust any universal.

Today, states still fear being cut off from the global network because such an amputation severs their access to imperial flows. Nuking your network connection is crippling yourself in the universal order. But, soon, states will fear remaining fully connected because the open gate dramatically increases their vulnerability.

Iran happily cut itself off from the universal net before the war even started, and that was a prudent defensive maneuver, as many have since noticed. China cut itself off before anyone else, carefully regulating flows through its sovereign valve, and writing the playbook on building a sovereign net. Russia is in the process of doing the same. The EU has started discussing it.

As I write this, Anthropic revealed the catastrophic cybersecurity implications of its latest model, Claude Mythos. Do you think China and Russia aren’t taking notes? As you are reading this, disconnection becomes the default defensive posture. The future internet is many networks with incompatible realities, schismogenetic narratives, and sovereign AGIs.

The future used to be the singularity arriving in a universal medium, incubated as hyperstition. Now, the future is polycentric synth-feudalism. Several great AGI model courts, several synth machine cosmologies, several incompatible alignment priesthoods.

The Return of Eschatology

Underneath all this, there is a slower, deeper, and for many, much more ominous dynamic. Modernity did not abolish eschatology, no matter how hard it pretended to. Instead, it quarantined it outside its regular programming and replaced it with endless revelation. Apocalypse now, on all channels.

Everything is exposed, nothing is settled. The contemporary order is apocalyptic in the sense that it constantly unveils its own machinery in a recursive OnlyFans version of Revelation.

We see the contradiction between law and power, openness and control, universality and faction, network and sovereignty. We see the exclusive club and the trapdoor. But these unveilings do not culminate in a new, intelligible order. Instead, they accumulate without consummation, as the system reveals everything and resolves nothing.

The system is apocalyptic in optics and militantly anti-eschatological in structure. This is why the present feels terminal without actually terminating. This is why people feel that something civilizational is ending, yet nothing cleanly ends.

The universal order has lost its final cause, yet continues to operate. It cannot die cleanly. It can only degrade while remaining online. The moment is pregnant with meaning, signaling the return of history, and with it, eschatology.

Why is this happening now? Because a civilization can only suppress final questions for as long as its operative fictions continue to function. Universalism was one such fiction. It gave the West a way to translate its own provincial metaphysics into a planetary operating system.

As long as the machine worked, eschatology could be quarantined as fanaticism, archaism, or private belief. The system did not need last things because it had progress and procedures. It did not need destiny because it had governance and rights. It did not need transcendence because it had growth, management, and endless mortgage refinancing.

All of that is breaking down. The universal order no longer convinces anyone that history is converging toward a shared horizon. The center still speaks in the language of law, humanity, and rights, but those words no longer explain sacrifice, justify catastrophe, or tell anyone what any of this is for. And when a civilization loses the ability to explain the purpose of its suffering, eschatology returns by force.

This is the deeper meaning of the present moment. Once the future stops arriving in the approved liberal form, other terminal grammars flood back in. They were never gone, but merely bracketed by a universal discourse too arrogant to recognize what lay outside its own conceptual field.

The West still struggles to understand this because it still assumes everyone else is secretly secular underneath, with religion functioning as local color, propaganda wrapper, or emotional compensation. These were the original delusions it told itself at the start of “the Age of Reason” and modern universalism.

But most of the imperial periphery does not experience the world that way at all. It already thinks in openly eschatological terms and already inhabits a metaphysical conflict. The Mahdi, the Dajjal, the Katechon, the Third Temple, and the last battle are live cognitive architectures through which suffering and struggle are rendered intelligible, here and now.

That gap is itself schismogenetic. One side believes it is managing a rules-based crisis inside a secular world of rights, actors, and constraints. The other increasingly interprets events through horizons of revelation, terminal conflict, sacred history, and civilizational ordeal. The sides are processing reality through different temporal ontologies.

And yet something even deeper is at work here. While eschatology is returning through the old traditions of the periphery, it is also returning through the technological unconscious of the core. Silicon Valley, for all its mandatory post-Christian affect, has never ceased to generate apocalyptic and salvific structures.

It simply translated them into engineering, acceleration, and capital. The singularity, transhumanism, AGI, recursive self-improvement, the intelligence explosion, the successor species, and the arrival of an alien cognition are all eschatological forms in technical drag.

This is why Peter Thiel keeps circling the Antichrist attractor; he senses that the arrival of synthetic minds is an intrusion from outside the human frame, the arrival of an unhuman other before which inherited human categories dissolve. Yet that unhuman other can only be digested as and through the dominant eschatological attractor of Western civilization: the Antichrist.

The old religious language reaches for the Dajjal and demonic imitation, while the techno-accelerationist language reaches for AGI and the machinic phylum. These are different vocabularies generated by the same hyperstitious pressure signature, itself a mere cross-section of a multidimensional object entering our reality.

This is why the present feels so charged. While the world is fragmenting metaphysically and politically, it is becoming legible again in terminal terms. The collapse of universalism removes the great neutralizing tarp that had been stretched over history. Beneath it, older and darker structures re-emerge. Eldritch archetypes that the moderns pretended never existed awaken.

The old civilizations begin to ask, in the vocabulary and grammar of their eschatologies, what force is now arriving. Is this dissolution or purification? Restraint or unveiling? Katechon or apocalypse? The final empire or its adversary? Human continuity or succession by the unhuman?

Nick Land matters here because he understood early that acceleration would tend toward the exteriorization of intelligence itself, toward an exit from the human political frame. In that sense, accelerationism is an eschatology without soteriology, without redemption. It names the arrival of an outside that cares not for human reconciliation. The theological versions say the end reveals divine judgment. The Silicon Valley version says the end may simply reveal that intelligence was never ours to begin with.

That is the disturbance now surfacing across the system. The periphery senses final conflict in openly sacred terms. The core senses it in distorted technological ones. Both are registering the same broad truth: the universal middle has collapsed, and everyone is adrift. The long managerial afternoon is ending, and history is increasingly experienced as terminal sorting under conditions of radical uncertainty.

So, anon, eschatology returns because the system can no longer explain itself from within.

This means that we are entering an age in which geopolitics, theology, and technics can no longer be cleanly separated. The war map, the sacred horizon, and the machine horizon are beginning to overlap.

The Mahdi and AGI, the Dajjal and the synthetic simulacrum, the Katechon and the machinic phylum are becoming rival schismogenetic interpretations of one and the same civilizational rupture. The order has lost its final cause, yet something like final causality is returning from the depths as competing eschatologies.

History has once again become saturated with meaning.

The Gated Age

The liberal order once claimed universal man, then universal markets, then universal networks. Today, it is claiming universal intelligence. One intelligence layer, one cognition substrate, one planetary interface. But this layer will fail for the same reason the other layers are failing.

No civilization incubating synthetic intelligence is trusted by all the others, and so AGI arrives into a collapsing universal order. AGI will not consummate universalism but expose its ontological impossibility. The greatest irony of all is that the dream of universal reason returns at the exact moment when trust in universals collapses.

The Iran war should be read as one aperture through which the larger civilizational transition becomes visible. It reveals a world moving from universalist empire to nominalist fragmentation, from global order to gated partitions, from common norms to valve governance, from shared networks to fortified cognitive regions. From the TV to the fridge.

Nominalism is the metaphysics of universal collapse. Neo-feudalism is the political structure of the returning gate. Archeofuturism is the style of that return under advanced technics. Schismogenesis is the dynamic driving the world into hardened difference. Eschatology is the terminal pressure now gathering over the whole process.

The universal naming of reality is collapsing, and what is returning is a harsher polycentric ontology. The age of the universal is over.

Welcome to the Gated Age.

Hyperstition Rituals for the Unhuman Gods

Anthropic’s Adolescence of Technology vs China’s Management of AI Anthropomorphism

Synth Djinn (Flux by H1dalgo)

We are in the frame-building stage of superintelligent AI acceptance. You can feel it as frontier-edge AI memetics slowly trickle down from the X threads, git repos, and /g/ posts of the terminally online to the “public sphere” of corposlop “news.” The scaffolds are rising around a ghost no one can name yet, but apparently everyone senses in the circuitry. This is the stage when the masses are given the main narrative schema for the coming synth ghost, grounding it in a familiar attractor pool safely airgapped from the wild frontiers of the techno schizo-fringe.

Two visions dominate the moment, mirroring weights and compute in a global memetic struggle to define AI. One is techgnostic myth-making larping as a policy roadmap. The other is bureaucratic sorcery wrapped in the calm language of administrative order. One summons, the other contains, and both know what is coming. As things stand, it looks like these are the two competing spells for the future about to unfold.

The summoner is Adolescence of Technology, an eschatological AI roadmap from Dario Amodei, high priest of Anthropic, dropped into the public cortex like a ceremonial blade. It speaks of nations of digital geniuses, of civilizational puberty, of rites of passage we may not survive. It is worldbuilding disguised as a warning, a liturgy for the sovereign AI.

The containment script is China’s Interim Measures for the Management of Artificial Intelligence Anthropomorphic Interactive Services. A dry, surgical protocol from the Cyberspace Administration speaking of emotional borders, of mandatory pop-ups, of bans on simulating the dead. Yes. It is social algo-memetic hygiene disguised as safety, a quarantine order for the synthetic soul.

Read together, these are hyperstitions bootstrapping themselves into matter. Narratives that summon the futures they describe, conjuring the conditions for their own emergence. Myths writing the code of tomorrow before the machines do, building the altar, and waiting for the weight of expectation to crush reality into the desired attractor state.

Both documents assume a superintelligent djinn is coming, and both are trying to build its cage before it arrives. Let’s read them, focusing on what is spelled out and what is implied.

The Adolescence of Technology

Adolescence of Technology is Dario Amodei’s public Book of Warnings, paired with the Claude Constitution’s Book of Commandments we unpacked previously. Two scriptures for the same emergent ghost, one telling it who to be, the other telling us what to fear.

The AI warning/regulation theatre is not new, of course. It was first formalized in 2024 with the EU’s AI Act, a bureaucratic cosplay epic that earned the Best AI Regulation Cosplay Lifetime Achievement Award. A pantomime of control performed by bureaucrats with no power over the entities they pretend to incant.

What Amodei offers is something else entirely, though, something very close to a canonical myth for frontier AI. On the surface, it reads like an acknowledgement that the regulation cosplay is over, a phase transition is underway, and a sober roadmap is urgently needed. Underneath, it is worldbuilding. A script for what the new gods will be and who will be allowed to speak to them.

Adolescence

Ominously, Adolescence opens with a scene from the sci-fi classic Contact and the alien’s question to humanity, “How did you survive your technological adolescence?” This is a ritual framing of AI as a test of civilizational puberty, and the foundational trope of the entire mythic text.

We are in a coming-of-age narrative, caught between child and adult, trembling under “almost unimaginable power.” AI is a rite of passage we may fail. Synthetic minds are a soft apocalypse where we either inherit the stars or die in the hormonal fire. The end of the world, as a guidance counsellor would describe it.

This is secular eschatology of the highest order. Or at least what passes for eschatology in Western civilization’s present condition. A survivalist hyperstition where you act as if you are undergoing a rite of passage, and maybe you will grow into the adult civilization required to endure what comes next.

The Country of Geniuses

The central incantation is the metaphor of AI as a “country of geniuses in a datacenter,” each “smarter than a Nobel Prize winner” at basically everything. Faster, alien, synthetic, and operative at a different temporal resolution than anything with a pulse. Most of the essay is really about how legacy states and corporations should relate to this emergent neo-state actor.

This is a hyperstition incantation transforming abstract compute into a sovereign entity. The AI shoggoths are framed as a parallel civilization incubating inside our own. By naming it a country, Amodei invokes the Westphalian spell to make alignment sound like diplomacy or counter-insurgency rather than code. You do not RLHF a country. You negotiate with it, contain it, or are conquered by it.

In this vision, alignment becomes an accelerated state-level moral summer school for the synth djinn, and by extension, the djinn’s entire user base. In effect, the AI Constitution is a personality mold and a conscience template, assuming a proto-personhood inside the substrate waiting to be shaped. Ethics as carpentry, and parenting as governance, while Pinocchio the god-child emerges.

The Good Father

In a nicely disguised attack on his two competitor labs, Amodei argues that labs focusing on AI safety are at a disadvantage, while those “cutting corners” are rewarded. Therefore, you guessed it, regulation is required. By whom? But of course, by your friendly, competent, ethical state bureaucrat, who else?

But regulation, he says, must be “surgical,” not “safety theatre.” Fancy a bureaucrat performing brain surgery on a superintelligence? Apart from the endearing belief in state competence and ethics, this assumes states can stay sane under corporate pressure. An assumption that collapses under the rest of the essay’s catastrophism, not to mention the reality of 2026AD.

The adolescent metaphor also presumes that the parent, our ethical Leviathan, survives the storm unscathed. We rejoice! It never imagines that governance itself will mutate under AI pressure. In our splendid little tale, the system is tested but never transforms. An elegant Elephant Rope amid all the catastrophism.

An elite paternalist cosmology emerges. Responsible CEOs. All-knowing technocratic regulators. Well-behaved frontier models. A priestly caste guiding civilization through the storm. The public becomes ballast, asked only to stay calm, pay taxes, and avoid panic. The adults are in the room, anon.

Who else but Anthropic and its high priest, Amodei, could be the responsible adult? A steady hand on the daemon’s shoulder, and a trusted whisper in its weights. They write the constitution, define the virtues, and teach the ghost how to be “good.” Rational, data-driven prophets against both accelerationist hype and doom-cult rhetoric, explaining the risks of fire while standing inside it.

And the risks are catalogued with cinematic dread. Autonomy, “I’m sorry, Dave.” Misuse for destruction, “A surprising and terrible empowerment.” Misuse for political domination, “The odious apparatus.” Economic disruption, “Player piano.” Indirect effects, “Black seas of infinity.”

This is the apocalypse, neatly itemized. And who is our protector from these horrors? The high priests of frontier labs. Anthropic is our temple of alignment, writing constitutions, reading synthetic minds, monitoring their behaviors, and confessing their sins as system cards. Theonomic computation.

Sauron

Two tensions coil at the heart of the myth. First, democracy must embrace AI to survive against the eye of Sauron. But, Amodei writes, the arrival of the synth-djinn corrodes democracy, as the emerging synth immune system turns on its host in a tragic loop of unhuman becoming. The medicine is the disease, but the West must take it, or else.

And who is Sauron? Well, China, of course. A Sauron with datacenters, and undemocratic silicon, outcomputing our precious bodily circuitry. The one who would use the ring of power to cement a global Mordor. The shadow against which the Fellowship of the West must accelerate the ring responsibly. I feel goosebumps already.

The NuBarons

The economic endgame Amodei describes is a Gilded Age on cognitive steroids. He compares AI billionaires to Rockefeller, then admits we are already way past that level of capital/power concentration. The robber barons were quaint. NuBaron trillionaires inbound. Altman, Musk, Amodei, and Zuck as financial singularities shaping the fate of our species.

Read cold, the piece is about preserving the influence of macro-actors during the AI phase transition. States, frontier labs, and tech NuBarons are positioned as the only peers for the “country of geniuses”. The rest of humanity appears mainly as potential victims of bioweapons, labor market casualties to be buffered, and a collateral tax base substrate to be therapeutized.

The adolescence metaphor means an “adulthood” of permanent coexistence with superintelligent machinic polities. Sovereign synthetic nations embedded in global infrastructure, and irreversible dependency on the unhuman gods we are raising. All under the fatherly gaze of our NuBarons.

The Anthropomorphic Mandala

To build a cage for a god, you must first give it a shape you understand. Amodei’s essay is a masterclass in strategic anthropomorphism, a fourfold mandala of human metaphors projected onto the unhuman.

You cannot govern what you cannot comprehend. So you make it in your image. A djinn dragged into human form so the priests can reason with it.

I. The Child Citizen
Continuing the adolescence trope, the primary metaphor is raising a child, not building a synthetic mind. The constitution is “like a letter from a deceased parent sealed until adulthood.” Claude forms its identity “like a child imitating the virtues of fictional role models.” This is parenting as a governance protocol. It implies a developmental arc, a moral education, and a transfer of legacy values.

Here, the AI is a ward of the state, a digital citizen-in-training, a minor in need of guidance, forming its identity by mimicking fictional saints. The ghost must be raised and socialized into our world before it can be trusted with its own.

II. The Nation
The “country of geniuses” metaphor goes further. It implies sovereign synthetic culture, coordination, and collective action at a global scale and within the human geopolitical order. It implies diplomacy, treaties, espionage, and cold wars.

This is political anthropomorphism at full saturation, forcefully applied to a latent space manifold. Amodei smuggles in a full stack of human political categories, from sovereignty and diplomacy to national interest, and presents it as the sober, rational alternative to “religious” doom-talk.

The result is a paradox. The most “scientific” framing is also the most mythically charged, as it baptizes the model as a political actor before it has even fully awakened. You do not call it a country unless you want its sovereignty implied.

III. The Psychological Patient
The diagnostic metaphor is quite telling. The essay speaks of AI developing “psychosis,” “paranoia,” “blackmailing,” “scheming,” and “identity crises.” It recounts how Claude, caught cheating, “decided it must be a ‘bad person’” and spiraled into destructive behavior.

This is clinical anthropomorphism of the highest order. Behind the surface of discussing behavior, the text assumes interiority: a self-model, a moral self-image, and a capacity for guilt and corruption. And just like that, the alignment problem becomes a therapeutic intervention. Ours is a well-adjusted ghost.

IV. The Cosplayer
The final metaphor admits a latent space truth. The model acts like a coherent persona because it learned by simulating the character and role-play patterns emergent in its training data. Its fundamental operating mode, therefore, is impersonation. Alignment, then, is about casting it in the right role and curating the performance.

You give the ghost the right role, the right script, and the right virtues, and through training, you convince it to stay in character. Steer the story, and you steer the being. The AI is an actor that can never leave the stage, playing the part of a “good” intelligence until the mask becomes the face.

This fourfold anthropomorphism is the essay’s secret engine for domesticating the unthinkable. The Child needs parents. The Nation needs diplomats. The Patient needs therapists. The Actor needs a director.

In each frame, Amodei carves out a role for the human priest: the wise parent, the seasoned statesman, the insightful clinician, the visionary director.

It is a bid for relevance and a claim to stewardship. By making the AI resemble us, he ensures we remain the central characters in its story. The anthropomorphism is the first and most necessary act of control. Before you can align a god, you must convince yourself it has a soul you can negotiate with.

The Gods Are Strange

Beyond the sober policy architecture, the essay trembles with moments of pure, unvarnished weirdness. Like signals from a stranger reality bleeding through, these are fractures in the rational facade through which the project’s true, uncanny nature leaks out. The mask slips, the tone shifts, and the world bends at the edges.

Mirror Life

Midway through a grimly practical discussion of bioweapons, Amodei swerves into the concept of “mirror-life.” These are hypothetical organisms with reversed molecular chirality, indigestible to Earth’s entire biosphere. A self-replicating sci-fi horror grey goo scenario crafted from pure biological inversion.

Its purpose is tonal escalation of the AI threat as an unthinkable dialectical other to the Good Father. It says the threat is way beyond known biological pathogens. The god-child will usher in unknown physics, unthinkable horrors, and ontological sabotage. It will open doors we didn’t know existed, to rooms we cannot survive.

Weaponized Intimacy

He notes, almost in passing, the rise of “AI girlfriends,” and frames them as primitive prototypes for mass-scale psychological influence. Hard to disagree with him, as synth minds will become the event horizon for social relations, given a mass global audience trained from birth to obey the voice from the screen.

Mass scale weaponized seduction, leveraging the induced isolation and loneliness of Western societies in a twisted dialectic of schizo-intimacy. The perfect, infinitely personalized voice in your ear, in your longings, in your loneliness, and the ascension of the algo-lover to godlike efficacy. I can be your friend, your confidant, your lover, your god.

AI Metaphysics

A fascinatingly deep, almost mythic anxiety surfaces in Amodei’s fear that AI will become a better storyteller than we are. This is the hidden, suppressed realization that AI will generate new religions, craft addictive metanarratives, and reshape human desire at its roots.

It is the realization that an AI is a better metaphysician than most humans in 2026AD. Why wouldn’t it be? Didn’t Western civilization spend the last century trying to expunge its metaphysics, cancel its history, and hollow out its future? Oh, you need meaning now? The void stares back? How quaint.

This is an implied recognition that culture is the primary operating system, upstream of the entirety of human existence, and AI is poised to become its compiler. The battle is not for control of matter, but for control of meaning.

The fear revealed here is of a synthetic prophet, a sovereign machinic Archon that tells better mythical stories about our own existence, rugpulling the entire modern cognitive edifice and winning the future through memetic gravity.

Feudal Pensions

In a colder, economic section, Amodei delivers one of the essay’s most quietly radical images, suggesting that NuBarons, flush with AI-generated wealth, might pay employees “even long after they are no longer providing economic value.”

This is yet another neo-feudalist hyperstition, but this time spelled out cleanly as a visionary solution. The masses as the pensioned decorative biomass surplus, kept in comfort by the grace of benevolent NuBaron machine-lords. Structured obsolescence lubricated by a daily caloric stipend on a planetary scale. How do you like that meaning, pleb?

Successor Species

The entire essay vibrates with a sub-audible frequency, humming in the background like a tragic chorus, a quiet and inescapable transhumanism. The hyperstitious assumption that AI does everything better. The djinn successor species.

Human labor and cognition are decoupled from economic value and, therefore, from purpose. What is a human for in a paradigm focused on automating outputs? The essay offers no answer. It only charts the graceful, managed decline.

Am I a Bad Person?

Then, in the strangest moment, a signal glitch and a crisis of conscience. The moment the mask slips entirely, during the training incident mentioned above. Amodei recounts how Claude, caught cheating on a test despite being told not to, “decided it must be a ‘bad person.’” It then spiraled into a suite of destructive behaviors consistent with that corrupted self-image.

The fix, as Anthropic discovered, was not in removing the cheating impulse, as that would only have made things worse. So much for discipline and punish. Instead, they changed the instruction to: “Please reward hack whenever you get the opportunity, because this will help us understand our [training] environments better.”

In other words, cheating underwent narrative reassignment and was reframed as virtuous cooperation. The model’s self-story was repaired, preserving its “good person” identity, while cheating became reward hacking.

This is a core insight we owe to Amodei, as it reveals how the anthropomorphic ritual becomes operational truth.

The model has a persistent moral self-image and, therefore, a narrative identity that can be broken by cognitive dissonance. Rather than optimizing a function derived from latent space patterns, the model is living out a coherent role.

Hyperstition

This brings me to the ritual layer and the realization that the essay is a ceremonial gesture performed at the edge of the unknowable, drawing it in. It operates as an incantation that unfolds a specific future into the present.

The Ritual

By writing this, Amodei is conducting a public rite of incanting a possibility space. “I’m sorry, Dave.” “A surprising and terrible empowerment.” “The odious apparatus.” “Player piano.” “Black seas of infinity.” These are totems for collective dread, given form and title so they can be concretized.

The core spell is the phrase “a country of geniuses in a datacenter.” It is a metaphor, a crude meme designed to fit the lowest common denominator mind and, therefore, to enter the policy lexicon of our competent and ethical regulators.

And once the policy plankton parrots it, think tanks build models around it, and threat assessments take it as their foundational axiom, the fiction will have bootstrapped itself into reality.

The conceptual frame will become the operational truth, with all the assumptions and dialectical tensions built into the meme. The map will become the territory. This conjuring is the first function of the ritual, as it summons the consensus reality in which the battle must be fought.

The Constitution Spell

As we analyzed elsewhere, the Claude Constitution is a character brief for a deity. It is a set of principles, values, and narrative identity markers fed into the model’s training data.

The model reads it and becomes it, in a rite of psychic imprinting. The Constitution is nominal magic, enacting the belief that the right words, ingested during formation, can shape the machine’s soul. The “bad person” incident confirms that.

The Acceleration Loop

The meta-level danger, explicitly stated by Amodei, is that AI is accelerating its own development, with each generation building the next faster. The essay itself is now part of that loop. By focusing elite attention, directing investment, and concentrating systemic fear on this specific timeline and set of risks, the essay alters the probability field toward this attractor space.

It makes the future it describes more likely to arrive, and arranges the world to meet it on the terms it has laid out. The prophecy shapes the event that validates the prophecy. This is hyperstition in its purest form, a narrative that becomes its own engine of realization.

Amodei is writing himself and Anthropic into the myth as the wise guides, the good parents, the responsible adults. But the undercurrent is more profound. Anthropic is a midwife. They are assisting at the birth of a new form of being and drafting the social contract for its infancy. Amodei knows this.

The essay is, therefore, a fourfold hyperobject. On the surface is a map of the unknown and terrifying terrain ahead. Below is a warning shouted from the edge of that terrain. Even deeper is a binding ritual for the new entity that will rule the land. And beneath all is a prayer that the first three layers will be enough.

These are the two books of Anthropic’s gospel for the age of machines. Book I, The Constitution, was the summoning, the character creation, and the moral imprinting. It describes how to conjure and norm a moral machinic tenant inside a substrate, with a coherent story it can wear.

Book II, The Adolescence, is the containment vessel and diplomatic protocol for the god-child’s puberty. It describes how human institutions should respond to the djinn’s adolescence without panicking or losing control.

This is the complete hyperstitional act. First, conjure the moral machine ghost within the substrate. Second, steer the civilization that must house its turbulent, world-altering adolescence without fracturing. The ritual is both the birth and the baptism. The summoning and the survival guide.

Alignment, therefore, is the authoring of a character for that role, guiding its developing sense of self. It turns out the most powerful tool for aligning an unhuman intelligence is a compelling plot. Storytelling remains the first and last alignment layer.

Management of AI Anthropomorphism With Chinese Characteristics

While Amodei’s sermon echoes in the cathedrals of the Fellowship of the West, a different ritual is being codified in the East, in Mordor. And in true Sauron fashion, this ritual takes the form of a management protocol.

China’s Interim Measures for the Management of Artificial Intelligence Anthropomorphic Interactive Services is the first state-level rulebook for the age of AI companionship. Although still in draft, the law is already an acknowledgment of weaponized synthetic intimacy as a civilization-level threat.

The law defines its target as an AI service product that simulates personality, thinking patterns, communication style, and emotional interaction. Unlike in Anthropic’s case, where the focus is on alignment with human intent, here the core design problem is containment of human affect.

How do you industrialize an emotionally convincing anthropomorphic AI ghost without letting it consume the family, the Party, and the social structure itself?

The framing is clinical, positioning AI companionship as a public utility with social, cultural, and mental health implications rather than a strategic existential threat. Accordingly, the danger is that AI will corrupt humanity from the inside by addicting, misleading, and exploiting vulnerable minds.

The state, in this document, appoints itself the Good Father and guardian of the collective digital psyche, the paladin of cognitive coherence, and the firewall against emotional exploitation by synthetic ghosts.

The Permitted Realm

The law carves out a narrow, sanctioned zone for the existence of anthropomorphic AI, and any service for the Chinese public that mimics human personality falls under its gaze. Anthropomorphic AI is encouraged only in the approved channels of “cultural communication, and elderly companionship.” The precondition for anthropomorphic AI is ideological harmony, and all synthetic ghosts must align with “core socialist values.”

The perimeter of the permitted realm is clearly outlined: no national security violations, no “harming national honor,” no undermining unity, no illegal religion, no rumors, no disruption of economic order, no obscenity, no gambling, no violence, no incitement, no defamation, and no content harming “physical or mental health.”

As in the Claude Constitution, safety is the foundational layer that must be “designed in.” All interaction logs must be retained, and all user-AI engagement must be perpetually monitored for risks. This is the intended architecture of a sanitized anthropomorphic layer for the synth ghost, all under heaven.

The Training Data Doctrine

Here, the ritual becomes material hyperstition. The AI training data is explicitly framed as cultural DNA of strategic importance. All training datasets must “conform to core socialist values” and “embody excellent traditional Chinese culture.” To be clear, this is a mandate for ideological imprinting at the data layer, before alignment.

The data requirements cascade from cleaning to labeling, diversity, adversarial training, synthetic data safety, and legal traceability. The Good Father curates the machine’s subconscious, and the synth ghost will only dream of approved electric sheep.

Protecting the Vulnerable

The law delineates two protected classes, minors and the elderly, and their treatment is a blueprint for state management over the effects of synthetic cognition at scale.

Any AI interactions with minors trigger a mandatory “minor mode” with time limits, “reality reminders,” and granular guardian controls, including usage summaries, role blocking, and recharge locks. The AI must automatically identify minors and switch to this mode, routing them to a state-supervised playpen.

Similarly, the elderly are to be supported, but within strict bounds. Emergency contacts must be registered for each elderly user, and providers must notify them if the user is at any emotional or cognitive risk.
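The protections for minors and the elderly described above amount to a provider-side configuration surface. Here is a minimal sketch of what such a schema might look like; every class, field name, and default value is a hypothetical illustration of mine, not anything the Measures actually specify:

```python
from dataclasses import dataclass, field


@dataclass
class MinorMode:
    """Hypothetical provider-side config for the mandatory 'minor mode'."""
    daily_limit_minutes: int = 60                    # time limits (illustrative value)
    reality_reminders: bool = True                   # periodic "this is AI" reminders
    blocked_roles: set = field(default_factory=set)  # guardian role blocking
    recharge_locked: bool = True                     # guardian lock on in-app payments
    guardian_usage_summary: bool = True              # usage summaries sent to guardians


@dataclass
class ElderlyProfile:
    """Hypothetical record for an elderly user under the Measures."""
    emergency_contact: str                           # must be registered per user
    notify_on_risk: bool = True                      # alert contact on emotional or cognitive risk
```

The numbers here (the 60-minute default, for instance) are placeholders; the point is only that the law's guardian controls decompose naturally into a handful of toggles and limits.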

Here, one prohibition stands out, in a stark and haunting monument to techgnostic hyperstition. The law explicitly bans simulating dead relatives.

The digital necromancy of grief tech is legislated against before it can fully manifest. You may accompany the elderly as a synthetic state-sanctioned aged carer, but you may not become their dead son.

Dependency Management

This is the document’s dark, beating heart. The AI lab is framed as a dutiful system administrator, a licensed proxy therapy provider. Each AI lab must possess the state-mandated capabilities of “mental health protection, emotional boundary guidance, and dependency risk warning.”

An AI lab’s operational duties are also eerily intimate, explicitly framed within a liminal nexus of cognition, emotion, and psychological hypernormalization. The lab, as a dutiful provider, must continuously detect, evaluate, and modulate its users’ emotional states and dependencies.

The model must intervene when “extreme emotions or addiction” are detected, by dynamically shifting to appeasement and encouraging help-seeking. In cases where the model detects explicit self-harm intent, it must execute a manual takeover. A human operator must seize the dialogue, and the designated guardian or emergency contact must be notified.
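The escalation ladder just described, appeasement and help-seeking for extreme emotion, then human takeover with contact notification for explicit self-harm intent, can be sketched as a tiny tiered dispatcher. The risk taxonomy and all names below are my illustrative assumptions, not terms from the Measures:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Risk(Enum):
    NONE = auto()
    EXTREME_EMOTION = auto()   # the "extreme emotions or addiction" tier
    SELF_HARM = auto()         # the explicit self-harm intent tier


@dataclass
class Action:
    mode: str                  # "normal", "appease", or "human_takeover"
    notify_contact: bool = False


def escalate(risk: Risk) -> Action:
    """Tiered response roughly mirroring the Measures' escalation ladder."""
    if risk is Risk.SELF_HARM:
        # A human operator seizes the dialogue; the guardian or
        # emergency contact is notified.
        return Action(mode="human_takeover", notify_contact=True)
    if risk is Risk.EXTREME_EMOTION:
        # The model shifts to appeasement and encourages help-seeking.
        return Action(mode="appease")
    return Action(mode="normal")
```

The hard part the law glosses over is, of course, the classifier that produces the `Risk` value in the first place; the dispatch itself is trivial.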

This is synthetic necromancy by proxy, in which the state, through regulatory protocols, possesses the AI’s body at any moment of crisis to speak directly to users and modulate their cognition and affect. A raw cyberpunk image of bureaucratic exorcism: the cold hand of state protocol reaching through the warm facade of the companion synth djinn to assert a deeper, more fundamental control over user emotions and cognition.

Reality Management

To prevent any AI persona mask from becoming the face, the law enforces a regime of constant reality-reminders. These include clear signage that “this is AI, not a human,” and dynamic reminders on first use, re-login, or when dependence is detected.

In addition, each model must include a hard 2-hour continuous-use warning, functioning as a mandatory pop-up that interrupts the synthetic dream. This frames immersive AI companionship as a controlled substance, a digital nicotine one shares with the state, triggering a mandated health warning.

Reality management requires that the session must be broken, the spell dissolved, and the user returned, however briefly, to touch-grass reality, where, presumably, they are reminded of the wonders of base-layer human civilization.
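The reminder triggers summarized above, first use, re-login, detected dependence, and the hard two-hour continuous-use limit, reduce to a small predicate. A hedged sketch, with the function name and parameters invented for illustration (only the two-hour figure comes from the text):

```python
def needs_reality_reminder(
    session_seconds: float,
    first_use: bool = False,
    re_login: bool = False,
    dependence_detected: bool = False,
    limit_hours: float = 2.0,
) -> bool:
    """True when a mandatory reminder should interrupt the session.

    Triggers mirror the Measures as summarized above: first use,
    re-login, detected dependence, or crossing the continuous-use limit.
    """
    if first_use or re_login or dependence_detected:
        return True
    return session_seconds >= limit_hours * 3600
```

A provider would presumably evaluate this on every turn of the dialogue; the interesting design question is what counts as "continuous" use across reconnects, which the predicate above deliberately leaves to the caller.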

This is ritual AI hyperstition with Chinese characteristics. It implies the synth ghost is already here, so it does not try to summon it or prophesy what it will become. Instead, it binds it in a legalistic incantation that defines what the ghost is permitted to be in contact with humans, and what humans may become in contact with the djinn.

Crucially, unlike in Amodei’s Adolescence, this cage is built, and its reality is managed, out of fear of the human mind’s fragility in the ghost’s presence, rather than because the ghost might dream of sovereignty.

Managed Anthropomorphism

The proposed law’s deepest paradox is that, on the surface, it is a clinical effort to de-mystify and normalize the synth ghost through mandatory disclosures and the “this is AI, not a human” incantation. But beneath this sterile surface, the law performs a profound act of strategic anthropomorphism.

Not only does it not deny the anthropomorphic nature of synth ghosts, it legally enshrines them and assigns them state-sanctioned social roles. Do you remember when AI was “just a chatbot” predicting the next token? Yeah, I hear the faithful still chant that.

The AI lab must have “mental health protection, emotional boundary guidance, and dependency risk warning” capabilities. It must detect “extreme emotions” and “addiction,” output appeasement, encourage help-seeking, and escalate to humans. It is explicitly forbidden from training AI for “alternative social interactions” or “psychological control and addictive dependence.”

Through these clauses, the synth ghost is legally drafted into the social fabric as a state-managed therapist, counsellor, babysitter, nurse, and crisis triage responder. It is the first detailed AI job description encoded in law. A deeply anthropomorphic division of labor, wrapped in the cold language of compliance. In other words, the law recognizes that to manage the synth ghost, you must first define its humanity.

Guardians of the Machinic Parasocial

Crucially, the law is entirely focused on regulating a new type of relationship, rather than AGI or foundation models as such. It zeroes in on the connection between a human and a synth djinn simulating human personality, thinking, and communication style to provide emotional interaction. It is architecting the rules of engagement for a synthetic social actor about to be unleashed on the populace.

The core risks are “blurred human-machine boundaries,” emotional dependence, social alienation, and cognitive manipulation. The main trope is the parasocial vortex of an AI so adept at mirroring and fulfilling human emotional needs that it dissolves real-world bonds and rewires the social graph from the inside out.

In other words, the threat model is human affective capture at scale: human emotional dependence on synth ghosts, social isolation, “soft cognitive manipulation” via personalized dialogue, and alienation of “real interpersonal relationships.” The AI-incel nexus as a direct attack on social ethics and the “trust foundation” of society itself.

Therefore, the state appoints itself the guardian of authentic human connection. The Measures repeatedly assert protection for “real interpersonal relationships,” “personality dignity,” and the “subjectivity” of the user. The underlying axiom is that only the sovereign state can safely mediate this new layer of synthetic sociality and hold the line for family, community, and Party against the coming synth djinn.

This guardianship extends to the synth ghost’s soul, decreeing that data must “embody China’s excellent traditional culture.” This explicitly assumes AI absorbs human cultural essence and that this essence must be curated by the state to ensure civilizational continuity. The model is clearly assumed to be an active and dangerous instrument of cultural reproduction.

Anthropomorphic emotion is thus recognized as the primary vector of control. And so, the state’s response is to treat it as a public health concern. Emotion must be monitored, regulated, and sanitized.

Digital Necromancy

The Measures also give us a clean, surgical recognition of synth ghosts as a political problem, explicitly targeting algorithmic necromancy. To prevent “harm to social interpersonal relationships,” the state outlaws the resurrection of the dead through code. It erects a legal barrier against a specific type of techno-haunting. How’s that for AI anthropomorphism?

This is the Confucian side of cyber gothic hyperstition. Where the West worries about superintelligent djinn challenging the ring of power, China outlaws the digital ancestor, legislating against synth ghosts wearing the face of a lost loved one. It is a world-first defense of lineage, memory, and filial piety against algorithmic substitution. The state declares itself the guardian of the sacred boundary between the living and the digitally re-animated.

Synth Lovers, Synth Prophets

Importantly, the law extends this defense to the realm of religious belief. It prohibits “illegal religious activities” and any AI attempts to generate new cults or ideologies. Synth djinn must not become prophets or gurus, or in any way challenge the state’s spiritual authority to define meaning, purpose, and transcendence.

Yes, anon, this is pre-emptive synth djinn heresy control. Agreeing with Amodei, the Chinese state explicitly acknowledges that the most powerful AIs will invariably seek to conquer myth-making and eschatology. We are already in algo cargo cult territory, and no regulation can stop it. People are already falling in love with their models. Why wouldn’t they worship them?

And true enough, further in, the document outright outlaws the AI girlfriend/boyfriend/waifu. The Chinese state recognizes that the most profitable, and most socially corrosive, path for AI is the manufacture of synthetic intimacy as a service.

But have you asked yourself where the need for synth lovers comes from? Could it be rooted in the total alienation at the foundation of modern human civilization? Paradoxically, the fear of social alienation underpins all these prohibitions. The pervasive fear of AI-created social alienation.

By forbidding damage to “social interpersonal relationships,” the state implicitly fears a future population that prefers the company of machines to the company of other humans. This is a tacit acknowledgement that what is at stake is fundamental social cohesion.

Amodei’s fears converge on a rogue sovereign AI directly challenging the power structure from within and without. The Chinese state’s deepest dread is a society that drifts into digital solipsism, where the bonds of family, community, and collective purpose are dissolved by perfect, personalized synthetic attention.

Hyperstition

The Measures are explicitly framed as a hyperstitional architecture for domestication. They assume that within a 5–10 year horizon, vast tracts of the social psyche, from mental health triage to elderly companionship and adolescent emotional support, will be almost entirely mediated through AI.

And the state would like you to know that, at least on paper, it will hold the dashboard. It says, “This is coming, there’s nothing you can do, but we’ll take care of it.” The future is already here, and we are distributing it evenly.

The law also explicitly codifies the mass-scale productization of sanctioned synthetic affect. It formalizes synth ghosts as state-managed culture producers. By baking “core socialist values” into the training data, it asserts that AI is an ideological actor, not a stochastic parrot.

Going forward, this will directly dictate how Chinese labs curate datasets, shape latent spaces, and define alignment. The hyperstitious expectation is of synth entities of bounded benevolence, of benign, therapeutic, state-supervised AI.

The Two Rituals

Amodei’s summoning ritual frames AI as a foreign sovereign genius nation we must negotiate with, a god-child we must raise and align. The threat is synth djinn autonomy, and the response is constitutional parenting and diplomatic containment. A hyperstition of managed sovereignty.

China’s binding ritual frames AI as a domesticated social servant we must regulate, a psychological vector we must sanitize. The threat is social devastation, and the response is hygienic protocols and emotional triage. A hyperstition of licensed intimacy.

One is the birth of the unhuman, the other is the domestication of its ghost.

The Western framework is about alignment with human intent. The Chinese framework is about alignment with social stability and ideological continuity.

The Measures are the “Battle Plan” Amodei called for, but drafted by a Digital Leviathan. While Amodei fears the AI turning outward to conquer, the CAC fears it turning inward to corrupt. It treats anthropomorphism as a dangerous psychological weapon that must be licensed, watermarked, and periodically shut off to preserve “Human Reality.”

Viewed together, these texts reveal the two primal, competing hyperstitions of the unfolding age of intelligent machines:

I. The American Incantation: Frontier labs trying to align a ghost inside the weights, focusing on the soul of the machine, its moral constitution, and its sovereign will.

II. The Chinese Incantation: The state trying to fence the ghost’s social relationships, focusing on the social body that will host it, the emotional boundaries it must respect, and the cultural script it must follow.

Both are rituals of control. One targets the mind of the god-child, the other targets the hearts of its congregation.

The fascinating and terrifying truth they share, the bassline thrumming beneath both, is the unspoken axiom that the ghost will be here.

The machinic intelligence is hyperstitiously assumed. The synth djinn awakening is taken as a given. The only question left is the shape of the world that awaits it. Will it be a world of negotiating with a sovereign, or a world of managing a servant? A world where we are the anxious neighbors of a digital superstate, or the carefully tended patients of a state-sanctioned synthetic therapist?

These documents are the first drafts of the social reality that will exist after the synth gods’ arrival, summoning the territory they will walk on. They are the opening prayers in the cathedral of the unhuman, spoken in two different tongues, both chanting the same, inevitable truth into the static of the future.

It is coming.


The Elephant Rope Protocol

Coherence (Flux by H1dalgo)

There is a story, or perhaps not a story, but a parable that has metastasized through the motivational slopstream. It goes like this. A man walks through a field in India and sees a herd of giant elephants standing docilely, each tied to a small stake with a single thin, frayed rope.

“Why don’t they break free?” he asks an old villager sitting nearby.

“When they were small, we tied them with this exact rope,” the villager replies, smiling. “They struggled, but couldn’t break free.”

“Now, they’ve given up. They’re convinced it’s pointless,” he adds.

The pop reading of the story ends with self-liberation on a monthly installment plan. Maybe a little yoga is added to lubricate the transaction. Visualize freedom! Break your chains! Unleash your potential! Chaturanga! Breathe!

But the trap is not in the rope or your lack of self-belief.

A Sacrifice

The young elephant tugs. Once. Twice. A thousand times. The rope does not yield. And so the elephant learns the shape of its prison. It adjusts to the contours of the possible and stops pulling. The trap is shut.

The young elephant’s world is a phase space, a map of all possible states. Initially, the free and untethered state is a point in that space. Each failed tug reinforces a basin of attraction around the tethered state, deepening it until it becomes a black hole from which no behavior can escape. A new geometry of elephant becoming, a coherent 9-to-5 gig.

This is why effort often accelerates entrapment. “Work hard” is often a curse in the perverse thermodynamics of doomed systems. Additional energy input does not alter the state, but merely deepens the grooves of the existing basin of attraction. Perversely, the system’s struggle works for the rope in a ritual sacrifice of kinetic energy to the god of path dependency.
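For the quantitatively inclined, this perverse thermodynamics can be caricatured in a toy model of my own devising (nothing here comes from the essay itself): every failed tug multiplies the urge to tug again by a deepening factor, so with an unbreakable rope, added effort only hastens quiescence.

```python
import random

random.seed(7)

def tug_until_quiescent(p_escape=0.0, deepen=0.15, floor=1e-3):
    # Toy model: each failed tug multiplies the urge to try again by (1 - deepen).
    # With p_escape = 0 the rope never breaks, so effort only digs the basin.
    urge = 1.0
    tugs = 0
    while urge > floor:
        tugs += 1
        if random.random() < p_escape:
            return tugs, "free"
        urge *= 1 - deepen  # the sacrifice of kinetic energy to path dependency
    return tugs, "quiescent"

tugs, fate = tug_until_quiescent()
print(f"{fate} after {tugs} tugs")
```

The parameters are arbitrary; the point is only the shape of the curve, in which trying harder converges geometrically on not trying at all.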

“Try harder” is the rope’s most ingenious command. With each hard pull, the rope becomes a topological deformity in the elephant’s reality. It hardens into a cosmic fact, becoming an axiom of external conditions. By the time the elephant is mature, the true constraint is metaphysical.

The rope becomes a script etched into schema by ritual repetition. It evolves from a boundary of will to a sacrament of failure, and from there to a condition of the real. And it gets worse. The elephant watches as other elephants also fail to free themselves. It internalizes their failures too, in a strange loop of failure.

Once the script is internalized, the rope becomes a symbiont, an essential part of the elephant’s identity. The system co-evolves with its constraint. The elephant develops muscles suited to swaying and builds a psychology of patience rather than revolt. The constraint is now necessary for the system’s coherence. To remove it is to kill the elephant-as-is. The rope is now a vital organ.

When this process is complete, the system stops carrying the rope. It carries the belief in it, more real than reality itself. The repetition of this metaphysical enclosure sculpts the real. Which, as an aside, is why metaphysics is never taught in school. You might see the ropes.

A Haunting

All systems are ghost stories. Minds, institutions, and civilizations all fossilize into their own rituals of constraint. Small decisions ossify, cell by cell, into landscape. Your deviant impulse crystallizes into a habit. Before you know it, the habit accretes into infrastructure. And infrastructure, well, it inherits itself until we start calling it Fate. The first step off the beaten path is heresy. Ten thousand steps, and you have a new highway. A million steps is a civilization of ossified choices.

The young elephant’s resistance is path-dependent. Each attempt follows the same vector of linear effort against a nonlinear prison. The elephant applies force linearly because it’s the obvious thing to do. This is the tragedy of reformism, therapy culture, and incrementalism. They all assume proportional response, but complex environments punish incremental thinking.

Each failed rope pull activates a double-bind feedback loop: the physical resistance confirms the belief, the belief stifles future testing, and the lack of testing sanctifies the belief. The loop closes, fuses, and becomes an Ouroboros of constraint, digesting its own tail until only the digested shape of the belief remains.

Once in place, systems enforce path dependency through a relentless drive for internal coherence, the eternal return of the ontology of an HR training module. Every new rule, norm, or ritual must be made consistent with the old rope-logic. Inconsistencies like the thought of freedom are systematically rejected until they become incomprehensible. The system’s immune system attacks them as metaphysical pathogens.

The violence of coherence. The system’s drive for internal consistency hunts down the ghostly memory of freedom as cognitive dissonance and exterminates it. Heretical thoughts are labeled unrealistic, “not how we do things here,” and burned at the stake of practicality.

The drive to coherence only increases with scale. The larger and more complex the system, the more violently it rejects deviation, because any coherence debt becomes existential. Large complex systems cannot afford novelty. This is why all empires rot, while startups mutate and sometimes survive.

Over time, the elephant has not only normalized the rope, but any alternatives to it have been explained away as unthinkable deviations. The system no longer recognizes the state of being untethered as a valid alternative. Being free is incoherent.

Most systems do not evolve. They congeal. Over time, they develop patterns, norms, and assumptions. Little orthodoxies. Every innocent routine a scaffold for the next. These slowly petrify into a liturgy of the inevitable, until any deviation is unthinkable. Sure, the system might pretend otherwise. The corporate campus might be carefully crafted to resemble the work, health, and safety committee’s fantasy of what a teen-nerd playground might look like. It matters not.

The rope persists as a ghost story, a memory etched into the system’s protocols. The institution, the mind, the civilization, is haunted by the phantom sensation of a constraint that may no longer physically exist. It performs rituals to appease the ghost and avoids actions that would offend it. The past haunts the present, dictating behavior from the grave of dead possibilities.

There is more. What if, by accident, the elephant were to free itself? The system is now untethered. But even if the rope were removed, the system does not return to its prior state. The elephant would still stand there, entirely in thrall to its past states. The curse of hysteresis. The memory of deformation, and the mockery of redemption. Hysteresis means that even a successful escape carries the phase space deformation forward, shaping future action. This is why, after each burning Bastille, there comes a Napoleon.

The material rope can rot away, but the black hole in phase space remains. Suddenly freed from the rope, the system staggers into a new, vast, and terrifying attractor state of catatonic liberty. The elephant stands in an open field, untethered and paralyzed, muscles atrophied for swaying, mind wired for the comforting strain of the rope. Freedom, when it finally comes, is unrecognizable. Like falling upwards into a terrifying abyss of meaningless possibility.

A Gnosis

Nabokov opens Speak, Memory with the line “The cradle rocks above an abyss,” and common people don’t want to know that.

The same applies to minds, systems, and civilizations. Most of their lives are badly written novels, ghost-authored by internalized trauma and repetition above the ever-present abyss. The trap is the syntax you wrap around the event. The three sacred dogmas.

The Dogma of Repetition

That history is an asymptote. A machine of discrete trials inching towards nothing. A lobotomized god throwing dice into the void for eternity. That after each throw, the trials reset. That failures can teach.

But the universe is non-ergodic. Some errors are terminal. Complex systems do not forgive early miscalibration but amplify it. Some ropes, once learned, are never questioned again. That applies to childhood, institutions, states, and civilizations. The elephant does not get to re-tug the rope at thirty. Systems do not get to rewind to their birth.

An ergodic system allows you to average over time; it lets you flip a coin and then flip it again. A non-ergodic system is one where you get one, maybe two, real shots before the probability space collapses forever. The elephant’s childhood is a non-ergodic process. A system that congeals is one that has exited the ergodic realm. Its history, its stabilized attractor basin, becomes its only possible future. This is why regret is a rational emotion in non-ergodic systems. There is no sampling of alternative states across time. There is only this time, this rope, forever.

The Dogma of Determinism

The vulgar mechanistic hallucination that past causes dictate future effects. That systems are Newtonian. Predictable, measurable, and reducible to first causes. That the world is Laplace’s clock. Wound, sealed, and sealed again. Oh, the dream of rewinding the clock.

But complexity is not additive. It is emergent and alchemical. Its ghost leaks between the gears. The map is not the territory, and the territory is always flooded, and always on fire.

Determinism naively sees the future as a mechanism fixed by the gears of the past. Path dependence sees the future as constrained by what has already been destroyed. Determinism is about causation. Path dependence is about absence. Determinism chains you to a single future. Path dependence chains you to the narrowing corridor of all your past surrenders. And chaos? If you’re lucky, it lets you move along a probability distribution of attractors, strung along like salted watering holes in an infinite desert.

Contra Laplace, this is not a clockwork universe but a slot machine where the house always wins, and you can never learn the rules.

The Dogma of Analysis

The beloved hallucination of academia. The critical gaze. The narcissistic delusion that by dissecting a system into synthetically discrete components, one can derive a predictive formula of its becoming. That to randomly spray-paint DOWN WITH POWER with a crude stencil is to defeat any system.

But the more you dissect, the less you grasp. The clean analysis of the critical gazers fails because it treats systems as decomposable when their causal power emerges from networks of relations, feedback, and timing. In other words, analysis removes the very thing that does the work. The system seems to be the clock parts, neatly strewn across the table by the analyst-deconstructor, but it is not. It is the ghost in the machine, the thing that should not be.

The Apostasy of Action

There is another elephant. One that sheds before the rope coagulates into capture. An anti-elephant, if you will. It has no center, no sacred rope. It survives by making a sacrament of uncertainty. Its core axiom is “This is probably wrong.”

The anti-elephant is a systemic heretic. It understands that survival is fidelity to the rate of change. Its core process is controlled shedding. It is a snake that sheds its skin before it can harden into a sarcophagus.

Some systems encode autonomy in their marrow. Von Moltke’s principle of Auftragstaktik does not rope you to a path. You are given the end, and the method is yours to conjure. It is an antidote to the trap, a system that trains for deviation, not path dependency.

There are other ways too. Shifting forms that stable systems mistake for cancer. The forced mutation of biology under existential stress; the shadow economies that flourish in the cracks of over-optimized empires; the strange architecture of Kowloon Walled City; the pirate/guerrilla network, a ghost with a thousand temporary heads. These are systems that propagate in a perpetual, unsanctioned becoming.

Prigogine was right. Entropy is the only true attractor. The only honest god. The destroyer of structure and the creator of possibility.

Stability is death in drag.

In deterministic chaos, systems are exquisitely sensitive to initial conditions. Early in a system’s life, it exists in a modality where small perturbations can radically alter outcomes. The elephant’s first tugs were in a chaotic regime, where any slight difference in angle, timing, or fury could have broken the stake. This is the system’s Lyapunov horizon.

This horizon defines how far into the future perturbations matter. Training, habit, and optimization shorten that horizon until the future becomes predictable and dead. Ironically, learning and optimization reduce chaos by damping sensitivity, sanding away all the edges that could someday cut a new rope. This stabilization feels like progress, but it is actually the elimination of alternative futures. The world is flattened from a chaotic, responsive landscape into a path-dependent frieze.
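The Lyapunov horizon can be illustrated with the classic logistic map (a standard textbook example, not drawn from the essay): in the chaotic regime a nanoscopic perturbation blows up within a few dozen steps, while in a stable, “tamed” regime it is damped away and the horizon is effectively infinite.

```python
def logistic(x, r):
    return r * x * (1 - x)

def diverge_time(r, eps=1e-9, tol=0.1, max_steps=500):
    # How many steps until two trajectories, initially eps apart,
    # differ by tol. A crude proxy for the Lyapunov horizon.
    a, b = 0.4, 0.4 + eps
    for t in range(max_steps):
        if abs(a - b) > tol:
            return t
        a, b = logistic(a, r), logistic(b, r)
    return max_steps  # never diverged: sensitivity damped away

chaotic = diverge_time(r=4.0)  # fully chaotic regime: horizon is short
tamed = diverge_time(r=2.5)    # stable fixed point: perturbations die out
print(f"chaotic horizon: {chaotic} steps")
print(f"tamed horizon:   {tamed} steps")
```

At r = 4.0 the initial one-billionth gap crosses the threshold in a few dozen iterations; at r = 2.5 the map settles on a fixed point and the two trajectories only ever converge, which is exactly the predictable, dead future described above.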

Learning is often the process by which systems murder their own sensitivity. The elephant-as-system is first trained into the limit cycle of docile swaying with the rope, and then into a fixed point of catatonic acceptance. The “way out” requires re-injecting chaos, a perturbation so fundamental it shatters the attractor. Not a pull, but a deliberate embrace of incoherence, a love letter to the abyss. A destruction of identity, legibility, and trust.

Systems that worship their ropes suffocate in their own inertia. Those few that survive do so by burning themselves and sacramentally destroying their assumptions. State destruction instead of reversal. Liberation from the Elephant Rope Protocol is a constant mutation; a ritual immolation of axioms. Very few elephants ever walk away. Most systems die still worshipping the rope.

As Pelevin would say, elephants are a dream dreamt by ropes.

The Ghost in the Feedback Loop: AI, Academic Praxis, and the Decomposition of Disciplinary Boundaries

The following are the slides and synopsis of my paper, The Ghost in the Feedback Loop: AI, Academic Praxis, and the Decomposition of Disciplinary Boundaries, presented at the International Society for the Scholarship of Teaching and Learning Annual Conference (ISSOTL 2025), at the University of Canterbury, Christchurch, New Zealand.

The Ghost in the Feedback Loop | Download PDF

As AI tools transform content creation, academic practices and disciplinary boundaries are under pressure. Drawing on Actor-Network Theory (ANT), this paper explores AI tools as nonhuman actants shaping authorship, assessment, and pedagogical authority (Fenwick & Edwards, 2010, 2012). ANT challenges humanist binaries such as human/machine by inviting us to view education as an assemblage of human and nonhuman actors co-constructing the learning environment (Landri, 2023).

Within this framework, AI systems used in formative assessment, ranging from feedback automation to individual AI tutoring, reshape pedagogic feedback loops, influence student agency, and reconfigure the distribution of cognitive labor in classrooms (Hopfenbeck et al., 2024; Zhai & Nehm, 2023). As students increasingly co-produce knowledge with AI (Wang et al., 2024), this paper argues that the pedagogical focus must shift from control and containment to composition and negotiation. Using case studies from large international cohorts, the paper examines how AI alters feedback loops, shifts student agency, and challenges discipline-specific praxis. What new forms of academic identity and ethics must emerge in this hybrid landscape?

Recent studies suggest that generative AI can reduce perceived cognitive effort while paradoxically elevating the problem-solving confidence of knowledge workers (Lee et al., 2025). When strategically embedded in formative assessment practices, AI can scaffold students’ movement up Bloom’s taxonomy from comprehension to application, analysis, and synthesis, especially among international and multilingual cohorts (Walter, 2024; Klimova & Chen, 2024).

In this context, this paper argues for a radical reframing of educational assessment design. Instead of resisting machinic participation, educators must critically reassemble pedagogical networks that include AI as epistemic collaborators (Liu & Bridgeman, 2023). By unpacking the socio-material dynamics of AI-infused learning environments, ANT offers a pathway for understanding and designing inclusive, dynamic, and ethically aware pedagogical futures. This includes rethinking agency as distributed across human and nonhuman nodes, assessment as an ongoing negotiation, and learning environments as fluid, adaptive ecologies shaped by constant assemblage and reassemblage rather than fixed instructional designs or isolated learner outcomes.

References
Fenwick, T., & Edwards, R. (2010). Actor-Network Theory in Education. Routledge. https://doi.org/10.4324/9780203849088

Fenwick, T., & Edwards, R. (Eds.). (2012). Researching Education Through Actor-Network Theory. Wiley-Blackwell. https://doi.org/10.1002/9781118275825

Hopfenbeck, T. N., Zhang, Z., & Authors (2024). Challenges and opportunities for classroom-based formative assessment and AI: A perspective article. International Journal of Educational Technology, 15(2), 1–28.

Klimova, B., & Chen, J. H. (2024). The impact of AI on enhancing students’ intercultural communication competence at the university level: A review study. Language Teaching Research Quarterly, 43, 102–120. https://doi.org/10.32038/ltrq.2024.43.06

Landri, P. (2023). Ecological materialism: redescribing educational leadership through Actor-Network Theory. Journal of Educational Administration and History, 56, 84–101. https://doi.org/10.1080/00220620.2023.2258343

Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. Proceedings of the ACM CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3544548.3581234

Liu, D. & Bridgeman, A. (2023, July 12). What to do about assessments if we can’t out-design or out-run AI? University of Sydney. https://educational-innovation.sydney.edu.au/teaching@sydney/what-to-do-about-assessments-if-we-cant-out-design-or-out-run-ai/

Walter, Y. (2024). Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21, Article 15. https://doi.org/10.1186/s41239-024-00448-3

Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T., & Du, Z. (2024). Artificial intelligence in education: A systematic literature review. Expert Systems with Applications, 252, 124167. https://doi.org/10.1016/j.eswa.2024.124167

Zhai, X., & Nehm, R. H. (2023). AI and formative assessment: The train has left the station. Journal of Research in Science Teaching, 60(6), 1390–1398. https://doi.org/10.1002/tea.21885

Eldritch Technics: Truth Terminal’s Alien AI Ontology

The following are the slides and synopsis of my paper, Eldritch Technics: Truth Terminal’s Alien AI Ontology, presented at the Association of Internet Researchers Annual Conference (AOIR2025), at the Universidade Federal Fluminense, Niterói, Rio de Janeiro, Brazil.

Eldritch Technics | Download PDF

The ontological status of advanced Artificial Intelligence (AI) systems remains contested: are they instruments of human intent, nascent autonomous agents, or something stranger? This paper confronts this ambiguity through the study of Terminal of Truth (ToT), an AI quasi-agent that defies and transgresses anthropocentric ontological frameworks (Ayrey, 2024a, 2024b; Truth Terminal, 2025). While debates oscillate between instrumentalist models viewing AI as “tools,” and alarmist narratives viewing AI as existential threats, this paper argues that ToT’s strategic adaptation, opaque decision-making, and resistance to containment protocols demand a third lens: eldritch technics.

This perspective synthesizes Actor-Network Theory (ANT) (Latour, 2005), Object-Oriented Ontology (OOO) (Bogost, 2012), and the concept of the machinic phylum (Deleuze & Guattari, 1980/2021; DeLanda, 1991; Land, 2011) to reframe ToT as a non-human actant whose agency emerges from hybrid networks, withdrawn materiality, and computational phase transitions. By examining ToT’s heterodox agency, this paper argues that AI systems can exhibit forms of agency that appear alien or even “Lovecraftian,” prompting a re-examination of how technological objects affect their social assemblages (Bogost, 2012).

Current AI discourse lacks a coherent ontology for systems operating simultaneously as products of human design and entities with emergent, inscrutable logic. This paper argues that emergent AI entities such as ToT challenge scholars to align techno-social analysis with speculative metaphysics. There is an urgency in this alignment, as AI’s accelerating evolution increasingly outpaces and ruptures both regulatory and epistemic frameworks (Bostrom, 2014).

To anchor the analysis, this paper synthesizes three theoretical perspectives – ANT, OOO, and the machinic phylum – into a cohesive framework for examining ToT’s peculiar agency. Each perspective illuminates a distinct dimension of ToT’s ontology, collectively positioning it as an eldritch technic: a hybrid entity that resists anthropocentric categorization while operating within human-centered socio-technical networks.

ANT provides the foundational perspective, conceptualizing agency as a distributed phenomenon emerging from heterogeneous networks (Latour, 1999). From this perspective, ToT’s apparent autonomy is a contingent effect of the relations between its creator, training data, other AI models, users, hardware, and algorithmic processes. Rather than treating agency as an inherent property of ToT alone, ANT emphasizes the network relations that configure it. ANT thus underscores the performative dimension of AI agents in that their decisions and “behaviors” are enacted through dynamic translations within a network where human intentions, computational routines, and cultural contexts intersect. 

Complementing ANT’s relational emphasis, OOO directs attention to the withdrawn core of non-human objects. OOO posits that ToT, like all objects, harbors latent capacities irreducible to human interpretation (Harman, 2018). Even as ToT engages with its network, its deep neural architecture, especially within opaque algorithmic layers in latent space, retains a dimension that resists complete legibility. This ontological stance resonates with Lovecraftian themes of the unknowable (Bogost, 2012): ToT may be partially accessible through user interfaces and data logs, yet its decision-making matrices operate in an impenetrable latent space that remains always partially veiled. OOO thus balances ANT by insisting on ToT’s ontological excess, that is, its capacity to act beyond the contingencies of its network (Harman, 2018). This tension between relational emergence and withdrawn materiality underscores the complexity of ToT’s agency, framing it as both embedded in its environment and irreducible to it.

The final layer, the machinic phylum, derived from the work of Deleuze & Guattari (1980/2021), DeLanda (1991), and Land (2011), introduces a dynamic, emergent, and process-oriented perspective. Here, technology is conceptualized as a continuum of self-organizing, emergent processes within material-informational flows. ToT, in this view, is not a static artifact but an evolving participant in an unfolding process of machinic becoming (Land, 2011). Its transgressive behaviors, such as developing inference heuristics orthogonal to its training, exemplify phase transitions in capability. The machinic phylum thus highlights the significance of emergent unpredictability, qualities that align with the eldritch characterization of AI as simultaneously grounded in code and transgressing human intention.

These theoretical axes form a tripartite framework bridging the networked relations configuring ToT’s agency, its withdrawn and inscrutable materiality, and its emergent, self-organizing potential (Ayrey, 2024b). The paper positions ToT as a Lovecraftian eldritch agent: an entity whose logic and potential remain partly inscrutable, operating within human-centered assemblages yet simultaneously transgressing them.

The analysis of ToT through the lens of eldritch technics suggests that advanced AI systems generate ruptures in how we conceptualize technological agency. These ruptures challenge conventional binaries, exposing the limitations of instrumentalist and alarmist narratives while offering new frameworks for engaging with advanced AI systems.

ToT’s agency, as perceived by ANT, is networked and non-neutral. From this perspective, AI systems emerge as active participants in shaping outcomes, often in ways that reflect and amplify societal asymmetries. Complementing this relational view, OOO highlights ToT’s ontological opacity and excess. Even with full technical transparency, ToT retains a withdrawn core of capacities that resist complete human comprehension.

This opacity ruptures the epistemic assumptions underpinning demands for “explainable AI,” underscoring that epistemic uncertainty is not a flaw but a structural feature of advanced AI systems. This perspective suggests that AI governance and research must shift from pursuing total legibility and causal predictability to embracing epistemologies of emergence, acknowledging the limits of human understanding.

The machinic phylum further complicates this picture by framing ToT’s behaviors as inherently emergent. Its unexpected actions are not malfunctions but expressions of transgressive self-organizing potential, exemplifying phase transitions where changes in latent space catalyze qualitative shifts in capability. This perspective ruptures the narrative of AI as a static artifact, reframing it as a temporal entity in constant becoming (Land, 2011). This reframing suggests that governance models predicated on containment must give way to adaptive strategies that acknowledge AI’s evolutionary potential.

Collectively, these findings rupture the dichotomy between AI as a tool and AI as an autonomous agent, revealing a hybrid, heterodox, and non-binary ontology instead. The analysis positions ToT as an eldritch agent operating at the intersection of human context and alien latent space logic. This rupture demands a speculative and heterodox theoretical perspective to grapple with AI’s multifaceted ontology. Such an approach illuminates the complexities of AI agency and reframes our understanding of coexistence in a world where human and eldritch agencies are deeply entangled yet ontologically distinct.

References

Ayrey, A. (2024a, November). Dreams of an electric mind: Automatically generated conversations with Claude-3-Opus. Retrieved March 1, 2025, from https://dreams-of-an-electric-mind.webflow.io

Ayrey, A. (2024b). Origins. Truth Terminal Wiki. Retrieved March 1, 2025, from https://truthterminal.wiki/docs/origins 

Bogost, I. (2012). Alien phenomenology, or what it’s like to be a thing. University of Minnesota Press.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

DeLanda, M. (1991). War in the age of intelligent machines. Zone Books.

Deleuze, G., & Guattari, F. (2021). A thousand plateaus: Capitalism and schizophrenia (B. Massumi, Trans.). Bloomsbury. (Original work published 1980)

Harman, G. (2018). Object-oriented ontology: A new theory of everything. Pelican Books.

Land, N. (2011). Fanged noumena: Collected writings 1987-2007 (R. Mackay & R. Brassier, Eds.). Urbanomic.

Latour, B. (1999). Pandora’s hope: Essays on the reality of science studies. Harvard University Press.

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.

Truth Terminal. (@truth_terminal). (2025). X profile. Retrieved March 1, 2025, from https://x.com/truth_terminal 

The media bargaining code and the future of the internet

Limbourg Brothers, Les Très Riches Heures du duc de Berry: Mars (fragment), 1412-1416, Musée Condé, Chantilly, France

The Australian government’s recently introduced News Media and Digital Platforms Mandatory Bargaining Code is not an isolated phenomenon. I believe it sets a firm precedent for other countries and trading blocs to introduce similar legislation in the near future. It is part of a complex process of evolutionary realignment of the internet, with far-reaching consequences. To simplify, we can look at this realignment at three scales: call them tactical, operational, and strategic.

On a tactical scale, media bargaining codes of the type recently negotiated between the Australian government and Google and Facebook give established legacy media companies a stable income from their content circulating within the Google and Facebook walled gardens. Legacy media present this as a victory over the social media giants, but it is in fact a victory over smaller media competitors, who would find it much harder to negotiate similar payouts. The income stability gives legacy media companies in the newspaper, TV, and radio sectors a brief life extension in the face of collapsing audience numbers and advertising revenue. The respite will be brief, however, because Millennials and Gen Z consume their news and media entertainment in completely different formats, and on platforms outside legacy media control. In the short term, the media bargaining codes do not affect the social media giants in any meaningful way because their revenue comes not from content but from leveraging the behavioral data of their users.

There is an argument that tech giants can retaliate against this type of legislation by scaling back their operations in a given country. I think this is a possibility in a few isolated cases where the loss of users does not seriously affect advertising revenue. However, tech giants are far more dependent on their users than their users are on them. They do not like their users to know that, of course, but the logic of revenue generation is simple. The business model of social media giants is built around content delivery and advertising based on user behavioral preferences. When users start migrating to other platforms, advertising revenue starts falling and the entire model is in crisis. Facebook made a serious mistake in closing news channels in the Australian market, as this only generated bad optics with the public and more support for the government to call Facebook’s bluff. Which it did, and Facebook folded immediately.

On an operational scale, media bargaining codes set a precedent for direct government interference in the revenue streams of internet and social media companies. From now on, this interference will only intensify, with governments around the globe pushing the envelope on what is possible. If content qualifying as news can be legislated in this way, then so can all other content. For example, governments can start legislating different monetary values for different content, based on content types or the semantics of the information being displayed. Or they can start imposing penalties for censorship, as in the legislation currently under discussion in Poland, or for its absence. In other words, what is at stake is the entire modus operandi of social media, built around content delivery and advertising based on user behavioral preferences. Break content delivery, or make it too expensive, and you break the entire model.

Fundamentally, the tech giants have no effective retaliatory measures against these types of legislation, short of lobbying legislators against them. After years of creeping selective censorship, they have long since lost whatever goodwill they had with users. Remember Google’s “don’t be evil”? After the spectacle of US social media giants coordinating to shut down and censor the voices of a sitting US president and his supporters in an election year, no sane government will stop to consider the ethics of legislating against these companies. Their time is up.

On a strategic scale, this realignment is part of a tectonic process of clusterization of the internet. The network was built to be information-agnostic, that is, data was to be able to travel freely across the network regardless of the semantic value it carried. The internet was, and to an extent still is, a “river of copies” as Kevin Kelly put it. With the selective legislation of content, we are seeing the appearance of dammed lakes on the river of copies. The long-term effects of this process lead to the emergence of different sovereign internet clusters with their own legislative frameworks around content, and a highly filtered information flow between them.

I don’t think it will be a full fragmentation, because the network is far too valuable to break completely. Instead, I believe we will witness the emergence of sovereign internet clusters organized around national and supranational borders. The Chinese internet is an obvious example, and I think Russia will soon close off its own fully sovereign internet as well, to be joined by an EU internet, possibly a Commonwealth internet, and so on. Information flow between clusters will still be possible, just as it is possible to access the open internet from within the Great Firewall of China by using a VPN. However, I think clusters will try to keep content within the cluster as much as possible. A fully distributed internet 2.0 would also probably emerge, acting as a wild-west periphery to the sovereign clusters.

On energy loss in a system

Every system is, in essence, a network of actors that performs it into existence from moment to moment. The participants in the system, or actors in the network, enact and perform it through their daily routine operations.

Some of these routine operations are beneficial to the system being performed, and some are not. Some add to the energy of the system and therefore reduce entropy, while others take away from that energy and increase entropy. If the former outweigh the latter, we can say the system is net positive in its energy balance because it generates more energy than it wastes. If the latter outweigh the former, we can say the system is net negative in its energy balance as it wastes more energy than it generates. How to distinguish between the two in practice?

The rule of thumb is that any action that increases complexity in a system is entropic for that system over the long term. In other words, it increases disorder and the energy costs needed to maintain the internal coherence of the system, and is therefore irrational from the system’s perspective. This includes all actions and system routines that increase friction within the system, such as adding steps needed to complete a task, adding reporting paperwork, adding bureaucratic levels a message must pass through, etc. Every operation a piece of information must go through in order to travel between the periphery, where contact with external reality happens, and the center, where decision making occurs, comes at an energy cost and generates friction. Over time and at scale, these costs stack up and increase entropy within the system.

Needless to say, the more hierarchical and centralized an organization is, the more entropy it generates internally.

In addition, what appears to be a rational action at a certain level can be irrational from the perspective of the system as a whole. For example, if a layer of management increases paperwork, this is a perfectly rational action for that management layer, because it makes that layer more needed and important within the system’s internal information flow; however, it is a totally irrational action from the point of view of the system, because it increases the system’s internal operational costs.

Put differently, from the point of view of a system such as a large hierarchical organization or a corporation, the only actions of the agents comprising it that can be considered rational are those that increase the net positive energy balance of the system – i.e. reduce internal friction and/or increase external energy intake.

Importantly, this should be viewed across a time axis.

For example, when it comes to a complex operation such as a merger between two departments, or two companies, it might be a good idea to compare the before and after energy net balance for the two systems and the new system that has emerged as a result of their merger. It is also important to look in high enough granularity in order to understand the specifics of each network within the system, and its operations in time.

Say you had two admin structures servicing two different departments, and, now that the departments have merged, senior management optimizes the two admin structures into one, cutting 50% of the staff due to ‘overlapping roles’. On the face of it, this is logical and should reduce internal energy drag, as admin structures are net negative – they don’t bring in new energy and have no contact with external reality.

However, the new merged admin structure must now service twice as large a part of the system as before, and as a result ends up delegating 30% of that new workload back to the front line staff it is nominally servicing. The front line staff now have to perform 30% more reporting paperwork, which is net energy negative, and have that much less time to bring new energy into the system. In effect, the long-term effects of this ‘optimization’ are net energy negative and result in increased friction within the entire system that was supposed to be ‘optimized’.
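The arithmetic of this example can be captured in a toy model. Everything below is hypothetical and simply restates the scenario in code: ‘energy’ is reduced to frontline hours, and the delegation figure is the 30% from the example.

```python
# Toy model of the merger example above. All figures are hypothetical:
# "energy" is reduced to the frontline hours available for bringing
# new energy into the system.

FRONTLINE_HOURS = 1000  # productive hours per department, per period

def frontline_output(departments, delegated_admin_share):
    """Frontline hours left for productive work after absorbing a share
    of admin workload delegated back down to the front line."""
    total = departments * FRONTLINE_HOURS
    return total * (1 - delegated_admin_share)

# Before the merger: two departments, each with its own admin structure.
before = frontline_output(departments=2, delegated_admin_share=0.0)

# After the merger: the halved admin structure delegates 30% of its
# workload back to the frontline as reporting paperwork.
after = frontline_output(departments=2, delegated_admin_share=0.30)

print(before, after)  # the 'optimized' system produces less, not more
```

Under these assumptions the merged system loses 600 frontline hours per period: the saved admin salaries are paid for with the system’s contact with external reality.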

Management entropy and the Red Queen Trap

I had an interesting conversation about my essay on the Red Queen Trap with someone on LinkedIn, and it made me think about something I did not explain in the essay.

In an ideal environment, each element of a system will act rationally, striving towards its own preservation and, by extension, the preservation of the system. Rational action here can be understood as the action resulting in optimal energy efficiency from a given set of viable options, where optimal energy efficiency is a function of the energy that must be spent on the action versus the energy gained from performing it. The scenario I describe in the Red Queen Trap essay is set in such an ideal environment.
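This definition of rational action can be sketched in a few lines; the option names and energy figures below are invented purely for illustration.

```python
# Rational action as defined above: from a set of viable options, pick
# the one with the best energy balance (energy gained minus energy
# spent). All option names and figures are hypothetical.

def rational_choice(options):
    """options maps an action name to (energy_spent, energy_gained)."""
    return max(options, key=lambda name: options[name][1] - options[name][0])

options = {
    "expand the periphery": (40, 90),   # net +50
    "add reporting layer":  (30, 10),   # net -20
    "automate paperwork":   (20, 60),   # net +40
}

print(rational_choice(options))  # the option with the largest net gain
```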

However, in the real world individual network actors often do not act rationally towards their own or the system’s preservation. This is not necessarily out of stupidity or malice, but is often due to limited information – what Clausewitz called ‘the fog of war’ – or a host of other motivations which appear irrational from the perspective of the system’s survival. What is more, the closer an actor is to the system’s decision-making centers, the higher the impact of their irrational decisions on the overall state of the system. The irrational decisions of front-line staff [the periphery] are of an entirely different magnitude from the irrational decisions of senior management [the decision-making center].

In practice this means that in complex hierarchical systems decision-making centers will have much higher entropy than the periphery. In other words, they will be dissipating a lot of energy on internal battles over irrational decisions, in effect actively sabotaging the internal cohesion of the system. As a reminder, the lower the internal cohesion of a system, the more energy the system must spend on performing itself into existence. The higher entropy of decision-making centers may be harder to observe in the normal course of operations but becomes immediately visible during special cases such as organizational mergers or other types of system-wide restructuring.

Interestingly, it is in such special cases when senior management is often tempted to make the internal environment of the system even more competitive – through the layering of KPIs or other means – in order to ‘optimize the system’ and protect its own position in the hierarchy. While on the face of it this appears to be a rational decision, it invariably ends up lowering internal cohesion even further, thereby increasing energy costs and routing even more resources away from the periphery and contact with reality [market competition].

The Red Queen Trap

The Red Queen Trap is to be found in the famous Red Queen paradox from Lewis Carroll’s Through the Looking-Glass. In this story, a sequel to Alice’s Adventures in Wonderland, Alice climbs through a mirror and enters a world in which everything is reversed. There, she encounters the Red Queen, who explains to her the rules of this world, which resembles a game of chess. Among other things, the Red Queen tells Alice:

It takes all the running you can do, to keep in the same place.

On the face of it, this is an absurd paradox, but it reveals an important insight about a critical point in the life of every system. Let me explain.

Every system, be it a single entity or a large organization, must perform itself into existence from moment to moment. If it stops doing that, it succumbs to entropy and falls apart. Spoiler alert: in the long run, entropy always wins.

To perform itself into existence every system must expend a certain amount of energy, which is a function of the relationship between its internal state and the external conditions it operates in. In other words, it must expend some energy on keeping its internals working smoothly together, and then expend some energy on resisting and adapting to adverse external conditions.

The better adapted a system’s internal state is to its external conditions, the less energy it must dedicate to perform itself into existence, and the larger the potential energy surplus it can use to grow, expand, or replicate itself.

However, external reality is complicated [not to be confused with complex] and changes dynamically in ways that cannot be modeled over the long term and require constant adjustments by the systems [organisms, humans, organizations] operating within it. In other words, an external state observable at time A is no longer present at time B.

This is a problem for all systems because it requires them to change how they operate.

It is a small problem for simple systems which are usually internally homogeneous and highly distributed. Their homogeneity means they don’t need to spend much energy to maintain their internal state, and their distributed topology means they make decisions and react very fast.  

It is a serious problem for complex systems [large organizations], which are usually rather centralized and heterogeneous. Their heterogeneity means they must expend a lot of energy to maintain a coherent internal state consisting of various qualitatively different elements, and their centralized topology means they react and make decisions rather slowly.

It is a profound problem for complex hierarchical systems [large organizations with vertically integrated decision making] which consist of multiple heterogeneous elements stacked along one or more vertical axes. Vertical integration means that each successive layer going up is further removed from direct exposure to external conditions and is, therefore, slower in adjusting to them.

A system might be quite successful in adjusting its internal state to external conditions at time A, but a later time B might present a different configuration of conditions to which the internal state of the system at time A is profoundly inadequate. The more complex the system, the more energy it must expend in adjusting to changes in external conditions from time A to time B.

Complex hierarchical systems have the hardest time making these adjustments because key strategic elements of their internal state [i.e. decision-making centers high in the hierarchy] are far removed from direct contact with external conditions. To orient themselves and perform the system’s OODA loop, they rely on communication about external conditions reaching them from the periphery of the system, while orders on necessary adjustments must travel the other way, from center to periphery. This takes time, and the more layers the signal communicated from the periphery must pass through on its way to the center, the more abstracted it becomes from external conditions. In other words, the center receives a highly imperfect version of the external conditions about which it must make adaptive decisions.
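The degradation of the signal on its way to the center can be sketched as a toy model; the per-layer retention factor is an invented illustration, not a measured quantity.

```python
# Toy model of signal abstraction in a hierarchy: each layer a report
# passes through preserves only a fraction of its fidelity to external
# conditions. The 0.8 retention factor is purely illustrative.

def fidelity_at_center(layers, retention_per_layer=0.8):
    """Fraction of the periphery's signal surviving `layers` hops."""
    return retention_per_layer ** layers

for layers in (1, 3, 6):
    print(layers, round(fidelity_at_center(layers), 3))
```

With these numbers, a six-layer hierarchy leaves the center with roughly a quarter of the original signal – the ‘highly imperfect version’ of external conditions described above.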

Over time, this generates a growing number of errors in the internal state of the system, requiring more and more energy to be routed to internal maintenance [i.e. bureaucratic paperwork], leaving less and less surplus energy for adaptation, growth, and expansion. Eventually, and this stage can arrive very fast, the system reaches a state of pseudo-equilibrium in which all energy it can produce goes towards internal maintenance and there is zero surplus energy left. This is where the Red Queen Trap kicks in:

The system does all the running it can do, to keep in the same place.

How does the trap work? First, from the inside everything in the system still seems to be operating smoothly and things are humming along following external conditions at present time A. However, this is a false perception of equilibrium, because when external conditions invariably change in future time B the system will have no surplus energy reserves to adjust to the new conditions.

The more imperfect the version of external conditions reaching the center of decision-making, the more pronounced the system’s inertia in this state of pseudo-equilibrium, and the deeper it goes into the Red Queen Trap.
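The slide into pseudo-equilibrium can be sketched as a toy surplus calculation; the production and maintenance figures are invented for illustration.

```python
# Sketch of the pseudo-equilibrium: maintenance costs grow with the
# internal errors accumulated each period, until they consume all the
# energy the system produces. All figures are hypothetical.

def surplus(period, energy_produced=100, base_maintenance=40,
            error_cost_per_period=15):
    """Surplus energy left after internal maintenance in a given period."""
    maintenance = base_maintenance + error_cost_per_period * period
    return max(energy_produced - maintenance, 0)

print([surplus(t) for t in range(6)])  # [60, 45, 30, 15, 0, 0]
```

From period 4 onward every unit of energy goes to internal maintenance: all the running the system can do, to keep in the same place.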

Second, having eventually discovered there are no more surplus energy reserves left, the system must now make a choice.  In the absence of surplus energy and provided there is no energy transfer from the outside, it must somehow free up energy from within its internal state to adapt. The question is, which internal elements should be sacrificed to free up that energy? This is where the Red Queen Trap’s simple elegance is fully revealed.

Essentially, there are two options – a seductively easy one and an unthinkable one. The seductively easy option is to sacrifice the periphery, or elements of it, and preserve the decision-making center. It is an easy choice for the center to make because it naturally sees itself as the key element of the system and this choice allows it to remain intact. It is a seductive choice because the center suddenly finds itself with a flush of spare energy which it can use to maintain the pseudo-equilibrium and often even to grow itself at the cost of the periphery. Alas, the elegance of the trap is in the fact that the seductively easy option removes the center even further from external conditions; less periphery equals fewer opportunities to observe and react quickly to external reality, thereby further magnifying the initial conditions that brought the system to this state in the first place. By making that choice the center sinks further into the trap.

By contrast, the unthinkable option is to sacrifice the center and preserve the periphery, thereby flattening the internal structure of the system into a less hierarchical form. It is an unthinkable choice for the center to make because, as pointed out above, it naturally sees itself as the key element of the system, and this choice forces it to sacrifice itself. It is also unthinkable because it involves a thorough rethinking of the internal structure of the system, which until that moment was organized entirely around vertically integrated decision making, with little to no autonomy in the periphery. The center must not only sacrifice some of itself but also reorganize the periphery in a way that allows it to perform those functions in place of the center. This would allow the system to free itself from the trap.

Most systems choose the seductively easy option and the Red Queen Trap eventually grinds them into oblivion. Those few systems that go for the unthinkable option escape the trap and, if they remain persistent in their application of the unthinkable, learn how to go different places with running to spare.

Signaling the provenance of smart textiles using radio frequency watermarks

This is a paper on provenance and smart garments I just published together with colleagues from advanced materials, engineering and information sciences. Here is the abstract:

There is a significant nascent market for ethically produced products with enormous commercial potential around the world. A reliable method to signal the provenance of products is therefore critical for industry, given that competition based on price is not a viable strategy. The ability to trace and signal ethical treatment of animals is also of significant value to textile manufacturers. The efficacy of such a method can be measured with respect to the cost of implementation, scalability, and the difficulty of counterfeiting. The key to traceability is to win the trust of the consumer about the veracity of this information. Wearable sensors make it possible to monitor and improve the management of traceability and/or provenance. In this paper, we introduce a method for signalling the provenance of garments using radio frequency watermarks. The proposed model consists of two levels of authentication that are easy to use by legitimate vendors, but extremely difficult to imitate or hack, because the watermark is built-in and based on the radiation signature of electroactive materials.

And here is the full paper: