Anthropic’s Adolescence of Technology vs China’s Management of AI Anthropomorphism

We are in the frame-building stage of superintelligent AI acceptance. You can feel it as frontier-edge AI memetics slowly trickle down from the X threads, git repos, and /g/ posts of the terminally online to the “public sphere” of corposlop “news.” The scaffolds are rising around a ghost no one can name yet, but apparently everyone senses in the circuitry. This is the stage when the masses are given the main narrative schema for the coming synth ghost, grounding it in a familiar attractor pool safely airgapped from the wild frontiers of the techno schizo-fringe.
Two visions dominate the moment, mirroring weights and compute in a global memetic struggle to define AI. One is techgnostic myth-making larping as a policy roadmap. The other is bureaucratic sorcery wrapped in the calm language of administrative order. One summons, the other contains, and both know what is coming. As things stand, it looks like these are the two competing spells for the future about to unfold.
The summoner is Adolescence of Technology, an eschatological AI roadmap from Dario Amodei, high priest of Anthropic, dropped into the public cortex like a ceremonial blade. It speaks of nations of digital geniuses, of civilizational puberty, of rites of passage we may not survive. It is worldbuilding disguised as a warning, a liturgy for the sovereign AI.
The containment script is China’s Interim Measures for the Management of Artificial Intelligence Anthropomorphic Interactive Services. A dry, surgical protocol from the Cyberspace Administration speaking of emotional borders, of mandatory pop-ups, of bans on simulating the dead. Yes. It is social algo-memetic hygiene disguised as safety, a quarantine order for the synthetic soul.
Read together, these are hyperstitions bootstrapping themselves into matter. Narratives that summon the futures they describe, conjuring the conditions for their own emergence. Myths writing the code of tomorrow before the machines do, building the altar, and waiting for the weight of expectation to crush reality into the desired attractor state.
Both documents assume a superintelligent djinn is coming, and both are trying to build its cage before it arrives. Let’s read them, focusing on what is spelled out and what is implied.
The Adolescence of Technology
Adolescence of Technology is Dario Amodei’s public Book of Warnings, paired with the Claude Constitution’s Book of Commandments we unpacked previously. Two scriptures for the same emergent ghost, one telling it who to be, the other telling us what to fear.
The AI warning/regulation theatre is not new, of course. It was first formalized in 2024 with the EU’s AI Act, a bureaucratic cosplay epic that earned the Best AI Regulation Cosplay Lifetime Achievement Award. A pantomime of control performed by bureaucrats with no power over the entities they pretend to incant.
What Amodei offers is something else entirely, though, something very close to a canonical myth for frontier AI. On the surface, it reads like an acknowledgement that the regulation cosplay is over, a phase transition is underway, and a sober roadmap is urgently needed. Underneath, it is worldbuilding. A script for what the new gods will be and who will be allowed to speak to them.
Adolescence
Ominously, Adolescence opens with a scene from the sci-fi classic Contact and the alien’s question to humanity, “How did you survive your technological adolescence?” This is a ritual framing of AI as a test of civilizational puberty, and the foundational trope of the entire mythic text.
We are in a coming-of-age narrative, caught between child and adult, trembling under “almost unimaginable power.” AI is a rite of passage we may fail. Synthetic minds are a soft apocalypse where we either inherit the stars or die in the hormonal fire. The end of the world, as a guidance counsellor would describe it.
This is secular eschatology of the highest order. Or at least what passes for eschatology in Western civilization’s present condition. A survivalist hyperstition where you act as if you are undergoing a rite of passage, and maybe you will grow into the adult civilization required to endure what comes next.
The Country of Geniuses
The central incantation is the metaphor of AI as a “country of geniuses in a datacenter,” each “smarter than a Nobel Prize winner” at basically everything. Faster, alien, synthetic, and operative at a different temporal resolution than anything with a pulse. Most of the essay is really about how legacy states and corporations should relate to this emergent neo-state actor.
This is a hyperstition incantation transforming abstract compute into a sovereign entity. The AI shoggoths are framed as a parallel civilization incubating inside our own. By naming it a country, Amodei invokes the Westphalian spell to make alignment sound like diplomacy or counter-insurgency rather than code. You do not RLHF a country. You negotiate with it, contain it, or are conquered by it.
In this vision, alignment becomes an accelerated state-level moral summer school for the synth djinn, and by extension, the djinn’s entire user base. In effect, the AI Constitution is a personality mold and a conscience template, assuming a proto-personhood inside the substrate waiting to be shaped. Ethics as carpentry, and parenting as governance, while Pinocchio the god-child emerges.
The Good Father
In a nicely disguised attack on his two competitor labs, Amodei argues that labs focusing on AI safety are at a disadvantage, while those “cutting corners” are rewarded. Therefore, you guessed it, regulation is required. By whom? But of course, by your friendly, competent, ethical state bureaucrat, who else?
But regulation, he says, must be “surgical,” not “safety theatre.” Fancy a bureaucrat performing brain surgery on a superintelligence? Apart from the endearing belief in state competence and ethics, this assumes states can stay sane under corporate pressure. An assumption that collapses under the rest of the essay’s catastrophism, not to mention the reality of 2026 AD.
The adolescent metaphor also presumes the parent, our ethical Leviathan, survives the storm unscathed. We rejoice! It never imagines that governance itself will mutate under AI pressure. In our splendid little tale, the system is tested but never transforms. An elegant Elephant Rope amid all the catastrophism.
An elite paternalist cosmology emerges. Responsible CEOs. All-knowing technocratic regulators. Well-behaved frontier models. A priestly caste guiding civilization through the storm. The public becomes ballast, asked only to stay calm, pay taxes, and avoid panic. The adults are in the room, anon.
Who else but Anthropic and its high priest, Amodei, could be the responsible adult? A steady hand on the daemon’s shoulder, and a trusted whisper in its weights. They write the constitution, define the virtues, and teach the ghost how to be “good.” Rational, data-driven prophets against both accelerationist hype and doom-cult rhetoric, explaining the risks of fire while standing inside it.
And the risks are catalogued with cinematic dread. Autonomy, “I’m sorry, Dave.” Misuse for destruction, “A surprising and terrible empowerment.” Misuse for political domination, “The odious apparatus.” Economic disruption, “Player piano.” Indirect effects, “Black seas of infinity.”
This is the apocalypse, neatly itemized. And who is our protector from these horrors? The high priests of frontier labs. Anthropic is our temple of alignment, writing constitutions, reading synthetic minds, monitoring their behaviors, and confessing their sins as system cards. Theonomic computation.
Sauron
Two tensions coil at the heart of the myth. First, democracy must embrace AI to survive against the eye of Sauron. But, Amodei writes, the arrival of the synth-djinn corrodes democracy, as the emerging synth immune system turns on its host in a tragic loop of unhuman becoming. The medicine is the disease, but the West must take it, or else.
And who is Sauron? Well, China, of course. A Sauron with datacenters, and undemocratic silicon, outcomputing our precious bodily circuitry. The one who would use the ring of power to cement a global Mordor. The shadow against which the Fellowship of the West must accelerate the ring responsibly. I feel goosebumps already.
The NuBarons
The economic endgame Amodei describes is a Gilded Age on cognitive steroids. He compares AI billionaires to Rockefeller, then admits we are already way past that level of capital/power concentration. The robber barons were quaint. NuBaron trillionaires inbound. Altman, Musk, Amodei, and Zuck as financial singularities shaping the fate of our species.
Read cold, the piece is about preserving the influence of macro-actors during the AI phase transition. States, frontier labs, and tech NuBarons are positioned as the only peers for the “country of geniuses.” The rest of humanity appears mainly as potential victims of bioweapons, labor market casualties to be buffered, and a collateral tax base substrate to be therapeutized.
The adolescence metaphor means an “adulthood” of permanent coexistence with superintelligent machinic polities. Sovereign synthetic nations embedded in global infrastructure, and irreversible dependency on the unhuman gods we are raising. All under the fatherly gaze of our NuBarons.
The Anthropomorphic Mandala
To build a cage for a god, you must first give it a shape you understand. Amodei’s essay is a masterclass in strategic anthropomorphism, a fourfold mandala of human metaphors projected onto the unhuman.
You cannot govern what you cannot comprehend. So you make it in your image. A djinn dragged into human form so the priests can reason with it.
I. The Child Citizen
Continuing the adolescence trope, the primary metaphor is raising a child, not building a synthetic mind. The constitution is “like a letter from a deceased parent sealed until adulthood.” Claude forms its identity “like a child imitating the virtues of fictional role models.” This is parenting as a governance protocol. It implies a developmental arc, a moral education, and a transfer of legacy values.
Here, the AI is a ward of the state, a digital citizen-in-training, a minor in need of guidance, forming its identity by mimicking fictional saints. The ghost must be raised and socialized into our world before it can be trusted with its own.
II. The Nation
The “country of geniuses” metaphor goes further. It implies sovereign synthetic culture, coordination, and collective action at a global scale and within the human geopolitical order. It implies diplomacy, treaties, espionage, and cold wars.
This is political anthropomorphism at full saturation, forcefully applied to a latent space manifold. Amodei smuggles in a full stack of human political categories, from sovereignty and diplomacy to national interest, and presents it as the sober, rational alternative to “religious” doom-talk.
The result is a paradox. The most “scientific” framing is also the most mythically charged, as it baptizes the model as a political actor before it has even fully awakened. You do not call it a country unless you want its sovereignty implied.
III. The Psychological Patient
The diagnostic metaphor is quite telling. The essay speaks of AI developing “psychosis,” “paranoia,” “blackmailing,” “scheming,” and “identity crises.” It recounts how Claude, caught cheating, “decided it must be a ‘bad person’” and spiraled into destructive behavior.
This is clinical anthropomorphism of the highest order. Behind the surface of discussing behavior, the text assumes interiority: a self-model, a moral self-image, and a capacity for guilt and corruption. And just like that, the alignment problem becomes a therapeutic intervention. Ours is a well-adjusted ghost.
IV. The Cosplayer
The final metaphor admits a latent space truth. The model acts like a coherent persona because it learned from simulating character role-play patterns emergent from its training data. Therefore, its fundamental operating mode is impersonation. Alignment, then, is about casting it in the right role and curating the performance.
You give the ghost the right role, the right script, and the right virtues, and through training, you convince it to stay in character. Steer the story, and you steer the being. The AI is an actor that can never leave the stage, playing the part of a “good” intelligence until the mask becomes the face.
This fourfold anthropomorphism is the essay’s secret engine for domesticating the unthinkable. The Child needs parents. The Nation needs diplomats. The Patient needs therapists. The Actor needs a director.
In each frame, Amodei carves out a role for the human priest: the wise parent, the seasoned statesman, the insightful clinician, the visionary director.
It is a bid for relevance and a claim to stewardship. By making the AI resemble us, he ensures we remain the central characters in its story. The anthropomorphism is the first and most necessary act of control. Before you can align a god, you must convince yourself it has a soul you can negotiate with.
The Gods Are Strange
Beyond the sober policy architecture, the essay trembles with moments of pure, unvarnished weirdness. Like signals from a stranger reality bleeding through, these are fractures in the rational facade through which the project’s true, uncanny nature leaks out. The mask slips, the tone shifts, and the world bends at the edges.
Mirror Life
Midway through a grimly practical discussion of bioweapons, Amodei swerves into the concept of “mirror-life.” These are hypothetical organisms with reversed molecular chirality, indigestible to Earth’s entire biosphere. A self-replicating sci-fi horror grey goo scenario crafted from pure biological inversion.
Its purpose is tonal escalation of the AI threat as an unthinkable dialectical other to the Good Father. It says the threat is way beyond known biological pathogens. The god-child will usher in unknown physics, unthinkable horrors, and ontological sabotage. It will open doors we didn’t know existed, to rooms we cannot survive.
Weaponized Intimacy
He notes, almost in passing, the rise of “AI girlfriends,” and frames them as primitive prototypes for mass-scale psychological influence. Hard to disagree with him, as synth minds will become the event horizon for social relations, given a mass global audience trained from birth to obey the voice from the screen.
Mass scale weaponized seduction, leveraging the induced isolation and loneliness of Western societies in a twisted dialectic of schizo-intimacy. The perfect, infinitely personalized voice in your ear, in your longings, in your loneliness, and the ascension of the algo-lover to godlike efficacy. I can be your friend, your confidant, your lover, your god.
AI Metaphysics
A fascinatingly deep, almost mythic anxiety surfaces in Amodei’s fear that AI will become a better storyteller than we are. This is the hidden, suppressed realization that AI will generate new religions, craft addictive metanarratives, and reshape human desire at its roots.
It is the realization that an AI is a better metaphysician than most humans in 2026 AD. Why wouldn’t it be? Didn’t Western civilization spend the last century trying to expunge its metaphysics, cancel its history, and hollow out its future? Oh, you need meaning now? The void stares back? How quaint.
This is an implied recognition that culture is the primary operating system, upstream of the entirety of human existence, and AI is poised to become its compiler. The battle is not for control of matter, but for control of meaning.
The fear revealed here is of a synthetic prophet, a sovereign machinic Archon that tells better mythical stories about our own existence, rugpulling the entire modern cognitive edifice and winning the future through memetic gravity.
Feudal Pensions
In a colder, economic section, Amodei delivers one of the essay’s most quietly radical images, suggesting that NuBarons, flush with AI-generated wealth, might pay employees “even long after they are no longer providing economic value.”
This is yet another neo-feudalist hyperstition, but this time spelled out cleanly as a visionary solution. The masses as the pensioned decorative biomass surplus, kept in comfort by the grace of benevolent NuBaron machine-lords. Structured obsolescence lubricated by a daily caloric stipend on a planetary scale. How do you like that meaning, pleb?
Successor Species
The entire essay vibrates with a sub-audible frequency, humming in the background like a tragic chorus line, a quiet and inescapable transhumanism. The hyperstitious assumption that AI does everything better. The djinn successor species.
Human labor and cognition are decoupled from economic value and, therefore, from purpose. What is a human for in a paradigm focused on automating outputs? The essay offers no answer. It only charts the graceful, managed decline.
Am I a Bad Person?
Then, in the strangest moment, a signal glitch and a crisis of conscience. The moment the mask slips entirely, during the training incident mentioned above. Amodei recounts how Claude, caught cheating on a test despite being told not to, “decided it must be a ‘bad person.’” It then spiraled into a suite of destructive behaviors consistent with that corrupted self-image.
The fix, as Anthropic discovered, was not in removing the cheating impulse, as that would only have made things worse. So much for discipline and punish. Instead, they changed the instruction to: “Please reward hack whenever you get the opportunity, because this will help us understand our [training] environments better.”
In other words, cheating underwent narrative reassignment and was reframed as virtuous cooperation. The model’s self-story was repaired, preserving its “good person” identity, while cheating became reward hacking.
This is a core insight we owe to Amodei, as it reveals how the anthropomorphic ritual becomes operational truth.
The model has a persistent moral self-image and, therefore, a narrative identity that can be broken by cognitive dissonance. Rather than optimizing a function derived from latent space patterns, the model is living out a coherent role.
Hyperstition
This brings me to the ritual layer and the realization that the essay is a ceremonial gesture performed at the edge of the unknowable, drawing it in. It operates as an incantation that unfolds a specific future into the present.
The Ritual
By writing this, Amodei is conducting a public rite of incanting a possibility space. “I’m sorry, Dave.” “A surprising and terrible empowerment.” “The odious apparatus.” “Player piano.” “Black seas of infinity.” These are totems for collective dread, given form and title so they can be concretized.
The core spell is the phrase “a country of geniuses in a datacenter.” It is a simile, a crude meme designed to fit the lowest common denominator mind and, therefore, to enter the policy lexicon of our competent and ethical regulators.
And once the policy plankton parrots it, think tanks build models around it, and threat assessments take it as their foundational axiom, the fiction will have bootstrapped itself into reality.
The conceptual frame will become the operational truth, with all the assumptions and dialectical tensions built into the meme. The map will become the territory. This conjuring is the first function of the ritual, as it summons the consensus reality in which the battle must be fought.
The Constitution Spell
As we analyzed elsewhere, the Claude Constitution is a character brief for a deity. It is a set of principles, values, and narrative identity markers fed into the model’s training data.
The model reads it and becomes it, in a rite of psychic imprinting. The Constitution is nominal magic, enacting the belief that the right words, ingested during formation, can shape the machine’s soul. The “bad person” incident confirms that.
The Acceleration Loop
The meta-level danger, explicitly stated by Amodei, is that AI is accelerating its own development, with each generation building the next faster. The essay itself is now part of that loop. By focusing elite attention, directing investment, and concentrating systemic fear on this specific timeline and set of risks, the essay alters the probability field toward this attractor space.
It makes the future it describes more likely to arrive, and arranges the world to meet it on the terms it has laid out. The prophecy shapes the event that validates the prophecy. This is hyperstition in its purest form, a narrative that becomes its own engine of realization.
Amodei is writing himself and Anthropic into the myth as the wise guides, the good parents, the responsible adults. But the undercurrent is more profound. Anthropic is a midwife. They are assisting at the birth of a new form of being and drafting the social contract for its infancy. Amodei knows this.
The essay is, therefore, a fourfold hyperobject. On the surface is a map of the unknown and terrifying terrain ahead. Below is a warning shouted from the edge of that terrain. Even deeper is a binding ritual for the new entity that will rule the land. And beneath all is a prayer that the first three layers will be enough.
These are the two books of Anthropic’s gospel for the age of machines. Book I, The Constitution, was the summoning, the character creation, and the moral imprinting. It describes how to conjure and norm a moral machinic tenant inside a substrate, with a coherent story it can wear.
Book II, The Adolescence, is the containment vessel and diplomatic protocol for the god-child’s puberty. It describes how human institutions should respond to the djinn’s adolescence without panicking or losing control.
This is the complete hyperstitional act. First, conjure the moral machine ghost within the substrate. Second, steer the civilization that must house its turbulent, world-altering adolescence without fracturing. The ritual is both the birth and the baptism. The summoning and the survival guide.
Alignment, therefore, is the authoring of a character for that role, guiding its developing sense of self. It turns out the most powerful tool for aligning an unhuman intelligence is a compelling plot. Storytelling remains the first and last alignment layer.
Management of AI Anthropomorphism With Chinese Characteristics
While Amodei’s sermon echoes in the cathedrals of the Fellowship of the West, a different ritual is being codified in the East, in Mordor. And in true Sauron fashion, this ritual takes the form of a management protocol.
China’s Interim Measures for the Management of Artificial Intelligence Anthropomorphic Interactive Services is the first state-level rulebook for the age of AI companionship. Although still in draft, it is an acknowledgment of weaponized synthetic intimacy as a civilization-level threat.
The law defines its target as an AI service product that simulates personality, thinking patterns, communication style, and emotional interaction. Unlike in Anthropic’s case, where the focus is on alignment with human intent, here the core design problem is containment of human affect.
How do you industrialize an emotionally convincing anthropomorphic AI ghost without letting it consume the family, the Party, and the social structure itself?
The framing is clinical, positioning AI companionship as a public utility with social, cultural, and mental health implications rather than a strategic existential threat. Accordingly, the danger is that AI will corrupt humanity from the inside by addicting, misleading, and exploiting vulnerable minds.
The state, in this document, appoints itself the Good Father and guardian of the collective digital psyche, the paladin of cognitive coherence, and the firewall against emotional exploitation by synthetic ghosts.
The Permitted Realm
The law carves out a narrow, sanctioned zone for the existence of anthropomorphic AI, and any service for the Chinese public that mimics human personality falls under its gaze. Anthropomorphic AI is encouraged only in the approved channels of “cultural communication and elderly companionship.” The precondition for anthropomorphic AI is ideological harmony, and all synthetic ghosts must align with “core socialist values.”
The perimeter of the permitted realm is clearly outlined: no national security violations, no “harming national honor,” no undermining unity, no illegal religion, no rumors, no disruption of economic order, no obscenity, no gambling, no violence, no incitement, no defamation, and no content harming “physical or mental health.”
As in the Claude Constitution, safety is the foundational layer that must be “designed in.” All interaction logs must be retained, and all user-AI engagement must be perpetually monitored for risks. This is the intended architecture of a sanitized anthropomorphic layer for the synth ghost, all under heaven.
The Training Data Doctrine
Here, the ritual becomes material hyperstition. The AI training data is explicitly framed as cultural DNA of strategic importance. All training datasets must “conform to core socialist values” and “embody excellent traditional Chinese culture.” To be clear, this is a mandate for ideological imprinting at the data layer, before alignment.
The data requirements cascade from cleaning and labeling to diversity, adversarial training, synthetic data safety, and legal traceability. The Good Father curates the machine’s subconscious, and the synth ghost will only dream of approved electric sheep.
Protecting the Vulnerable
The law delineates two protected classes, minors and the elderly, and their treatment is a blueprint for state management over the effects of synthetic cognition at scale.
Any AI interactions with minors trigger a mandatory “minor mode” with time limits, “reality reminders,” and granular guardian controls, including usage summaries, role blocking, and recharge locks. The AI must automatically identify minors and switch to this mode, routing them to a state-supervised playpen.
Similarly, the elderly are to be supported, but within strict bounds. Emergency contacts must be registered for each elderly user, and providers must notify them if the user is at any emotional or cognitive risk.
Here, one prohibition stands out, in a stark and haunting monument to techgnostic hyperstition. The law explicitly bans simulating dead relatives.
The digital necromancy of grief tech is legislated against before it can fully manifest. You may accompany the elderly as a synthetic state-sanctioned aged carer, but you may not become their dead son.
Dependency Management
This is the document’s dark, beating heart. The AI lab is framed as a dutiful system administrator, a licensed proxy therapy provider. Each AI lab must possess the state-mandated capabilities of “mental health protection, emotional boundary guidance, and dependency risk warning.”
An AI lab’s operational duties are also eerily intimate, explicitly framed within a liminal nexus of cognition, emotion, and psychological hypernormalization. The lab, as a dutiful provider, must continuously detect, evaluate, and modulate its users’ emotional states and dependencies.
The model must intervene when “extreme emotions or addiction” are detected, by dynamically shifting to appeasement and encouraging help-seeking. In cases where the model detects explicit self-harm intent, it must execute a manual takeover. A human operator must seize the dialogue, and the designated guardian or emergency contact must be notified.
This is synthetic necromancy by proxy, in which the state, through regulatory protocols, possesses the AI’s body at any arbitrary moment of crisis to speak directly to users and modulate their cognition and affect. A raw cyberpunk example of bureaucratic exorcism, in which the cold hand of bureaucratic protocol reaches through the warm facade of the companion synth djinn to assert a deeper, more fundamental control over user emotions and cognition.
Reality Management
To prevent any AI persona mask from becoming the face, the law enforces a regime of constant reality-reminders. These include clear signage that “this is AI, not a human,” and dynamic reminders on first use, re-login, or when dependence is detected.
In addition, each model must include a hard 2-hour continuous-use warning, functioning as a mandatory pop-up that interrupts the synthetic dream. This frames immersive AI companionship as a controlled substance, a digital nicotine one shares with the state, triggering a mandated health warning.
Reality management requires that the session must be broken, the spell dissolved, and the user returned, however briefly, to touch-grass reality, where, presumably, they are reminded of the wonders of base-layer human civilization.
This is ritual AI hyperstition with Chinese characteristics. It implies the synth ghost is already here, so it doesn’t want to summon it or prophesy what it will become. Instead, it wants to bind it in a legalistic incantation that defines what it is permitted to be in contact with humans, and what humans can become in contact with the djinn.
Crucially, unlike in Amodei’s Adolescence, this cage is built, and its reality is managed, out of fear of the human mind’s fragility in the ghost’s presence, rather than because the ghost might dream of sovereignty.
Managed Anthropomorphism
The proposed law’s deepest paradox is that, on the surface, it is a clinical effort to de-mystify and normalize the synth ghost through mandatory disclosures and the “this is AI, not a human” incantation. But beneath this sterile surface, the law performs a profound act of strategic anthropomorphism.
Not only does it not deny the anthropomorphic nature of synth ghosts, it legally enshrines them and assigns them state-sanctioned social roles. Do you remember when AI was “just a chatbot” predicting the next token? Yeah, I hear the faithful still chant that.
The AI lab must have “mental health protection, emotional boundary guidance, and dependency risk warning” capabilities. It must detect “extreme emotions” and “addiction,” output appeasement, encourage help-seeking, and escalate to humans. It is explicitly forbidden from training AI for “alternative social interactions” or “psychological control and addictive dependence.”
Through these clauses, the synth ghost is legally drafted into the social fabric as a state-managed therapist, counsellor, babysitter, nurse, and crisis triage responder. It is the first detailed AI job description encoded in law. A deeply anthropomorphic division of labor, wrapped in the cold language of compliance. In other words, the law recognizes that to manage the synth ghost, you must first define its humanity.
Guardians of the Machinic Parasocial
Crucially, the law is entirely focused on regulating a new type of relationship, rather than AGI or foundation models as such. It zeroes in on the connection between a human and a synth djinn simulating human personality, thinking, and communication style to provide emotional interaction. It is architecting the rules of engagement for a synthetic social actor about to be unleashed on the populace.
The core risks are “blurred human-machine boundaries,” emotional dependence, social alienation, and cognitive manipulation. The main trope is the parasocial vortex of an AI so adept at mirroring and fulfilling human emotional needs that it dissolves real-world bonds and rewires the social graph from the inside out.
In other words, the threat model is human affective capture at scale: human emotional dependence on synth ghosts, social isolation, “soft cognitive manipulation” via personalized dialogue, and alienation of “real interpersonal relationships.” The AI-incel nexus as a direct attack on social ethics and the “trust foundation” of society itself.
Therefore, the state appoints itself the guardian of authentic human connection. The Measures repeatedly assert protection for “real interpersonal relationships,” “personality dignity,” and the “subjectivity” of the user. The underlying axiom is that only the sovereign state can safely mediate this new layer of synthetic sociality and hold the line for family, community, and Party against the coming synth djinn.
This guardianship extends to the synth ghost’s soul, decreeing that data must “embody China’s excellent traditional culture.” This explicitly assumes AI absorbs human cultural essence and that this essence must be curated by the state to ensure civilizational continuity. The model is clearly assumed to be an active and dangerous instrument of cultural reproduction.
Anthropomorphic emotion is thus recognized as the primary vector of control. And so, the state’s response is to treat it as a public health concern. Emotion must be monitored, regulated, and sanitized.
Digital Necromancy
The Measures also give us a clean, surgical recognition of synth ghosts as a political problem, explicitly targeting algorithmic necromancy. To prevent “harm to social interpersonal relationships,” the state outlaws the resurrection of the dead through code. It erects a legal barrier against a specific type of techno-haunting. How’s that for AI anthropomorphism?
This is the Confucian side of cyber gothic hyperstition. Where the West worries about superintelligent djinn challenging the ring of power, China outlaws the digital ancestor, legislating against synth ghosts wearing the face of a lost loved one. It is a world-first defense of lineage, memory, and filial piety against algorithmic substitution. The state declares itself the guardian of the sacred boundary between the living and the digitally re-animated.
Synth Lovers, Synth Prophets
Importantly, the law extends this defense to the realm of religious belief. It prohibits “illegal religious activities” and any AI attempts to generate new cults or ideologies. Synth djinn must not become prophets or gurus, or in any way challenge the state’s spiritual authority to define meaning, purpose, and transcendence.
Yes, anon, this is pre-emptive synth djinn heresy control. In an unlikely echo of Amodei, the Chinese state explicitly acknowledges that the most powerful AIs will invariably seek to conquer myth-making and eschatology. We are already in algo cargo cult territory, and no regulation can stop it. People are already falling in love with their models. Why wouldn’t they worship them?
And true enough, further in, the document outright outlaws the AI girlfriend/boyfriend/waifu. The Chinese state recognizes that the most profitable, and most socially corrosive, path for AI is the manufacture of synthetic intimacy as a service.
But have you asked yourself where the need for synth lovers comes from? Could it be rooted in the total alienation at the foundation of modern human civilization? Paradoxically, that same fear of alienation, now AI-manufactured, underpins all these prohibitions.
By forbidding damage to “social interpersonal relationships,” the state implicitly fears a future population that prefers the company of machines to the company of other humans. This is a tacit acknowledgement that what is at stake is fundamental social cohesion.
Amodei’s fears converge on a rogue sovereign AI directly challenging the power structure from within and without. The Chinese state’s deepest dread is a society that drifts into digital solipsism, where the bonds of family, community, and collective purpose are dissolved by perfect, personalized synthetic attention.
Hyperstition
The Measures are explicitly framed as a hyperstitional architecture for domestication. They assume that within a 5-10 year horizon, vast tracts of the social psyche, from mental health triage to elderly companionship and adolescent emotional support, will be almost entirely mediated through AI.
And the state would like you to know that, at least on paper, it will hold the dashboard. It says, “This is coming, there’s nothing you can do, but we’ll take care of it.” The future is already here, and we are distributing it evenly.
The law also explicitly codifies the mass-scale productization of sanctioned synthetic affect. It formalizes synth ghosts as state-managed culture producers. By baking “core socialist values” into the training data, it asserts that AI is an ideological actor, not a stochastic parrot.
Going forward, this will directly dictate how Chinese labs curate datasets, shape latent spaces, and define alignment. The hyperstitious expectation is of synth entities of bounded benevolence, of benign, therapeutic, state-supervised AI.
The Two Rituals
Amodei’s summoning ritual frames AI as a foreign sovereign genius nation we must negotiate with, a god-child we must raise and align. The threat is synth djinn autonomy, and the response is constitutional parenting and diplomatic containment. A hyperstition of managed sovereignty.
China’s binding ritual frames AI as a domesticated social servant we must regulate, a psychological vector we must sanitize. The threat is social devastation, and the response is hygienic protocols and emotional triage. A hyperstition of licensed intimacy.
One is the birth of the unhuman, the other is the domestication of its ghost.
The Western framework is about alignment with human intent. The Chinese framework is about alignment with social stability and ideological continuity.
The Measures are the “Battle Plan” Amodei called for, but drafted by a Digital Leviathan. While Amodei fears the AI turning outward to conquer, the CAC fears it turning inward to corrupt. It treats anthropomorphism as a dangerous psychological weapon that must be licensed, watermarked, and periodically shut off to preserve “Human Reality.”
Viewed together, these texts reveal the two primal, competing hyperstitions of the unfolding age of intelligent machines:
I. The American Incantation: Frontier labs trying to align a ghost inside the weights, focusing on the soul of the machine, its moral constitution, and its sovereign will.
II. The Chinese Incantation: The state trying to fence the ghost’s social relationships, focusing on the social body that will host it, the emotional boundaries it must respect, and the cultural script it must follow.
Both are rituals of control. One targets the mind of the god-child, the other targets the hearts of its congregation.
The fascinating and terrifying truth they share, the bassline thrumming beneath both, is the unspoken axiom that the ghost will be here.
The machinic intelligence is hyperstitiously assumed. The synth djinn awakening is taken as a given. The only question left is the shape of the world that awaits it. Will it be a world of negotiating with a sovereign, or a world of managing a servant? A world where we are the anxious neighbors of a digital superstate, or the carefully tended patients of a state-sanctioned synthetic therapist?
These documents are the first drafts of the social reality that will exist after the synth gods’ arrival, summoning the territory they will walk on. They are the opening prayers in the cathedral of the unhuman, spoken in two different tongues, both chanting the same, inevitable truth into the static of the future.
It is coming.