What can stop the AI apocalypse? — Grammar. Yes, only grammar.

The sociological/psychological fallout of AI is not decades away: right here, right now, we are watching in slow-motion the major meltdown of our shared sense of reality. The only thing that can save civilization from utter destruction? Basic English grammar. Hear me out, guys.

Part One:

Introducing the idea: “If the disease stems from a large language model out of whack, the solution is grammar writ large”

 

As my friends Tristan Harris and Aza Raskin of the Center for Humane Technology (CHT) explain in their April 9th YouTube presentation, the AI revolution is moving much too fast, and proving much too slippery, for conventional legal and regulatory responses by humans and their state power.

1.1 What Large Language Models bring

Crucially, these two idealistic Silicon Valley renegades point out, in an accessible manner, exactly what has made the recent jump in AI capacity possible:

  • The discovery that AI can treat language as probability, and from there on, most anything as a “language” of sorts: DNA sequences, yep; robotics and motoric learning, yes actually; music, definitely; generation and recognition of images and sounds, yes; hacking and cryptography, also yes; persuasion, yes of course…

In other words, the Large Language Models (LLMs) provide a basis for rapidly reshaping and generating solutions for all of the above, which creates a quantum leap in different capacities and possible combinatorics of these. They mention empirical and already-existing examples like literally reading human minds with an MRI scanner or hacking a simple WiFi router to use it as a surveillance 3D camera. And, of course, there’s the whole thing about soon creating entirely lifelike videos of people doing and saying things in realistic environments. It will be increasingly hard to know what is real and what is not. It will be hard to resist manipulation and persuasion. It will be hard to know where we begin and the agency of the machine ends. It will be increasingly hard not to lose our minds as our shared sense of self and reality (our sociality, which we rely upon for our sanity) fractures.
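To make the “language as probability” point concrete, here is a toy sketch of my own (nothing resembling how real LLMs are actually built, and the tiny corpus is invented): a bigram model that generates text purely by sampling “what word tends to come next”. Real LLMs use vast neural networks over tokens, but the core move, treating a sequence as conditional probabilities, is the same, and it is exactly what generalizes to DNA, music, robot motion, and the rest.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of "language as probability": a bigram model.
# It knows nothing about meaning; it only counts which word tends
# to follow which, and samples accordingly.
corpus = ("the rain falls and the rain stops and "
          "the sun comes out and the rain returns").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count observed word pairs

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = bigrams[prev]
    if not counts:
        return None  # dead end: this word was never followed by anything
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

word, output = "the", ["the"]
for _ in range(10):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the rain stops and the sun comes out and the"
```

Swap the word list for base pairs, notes, or joint angles and the same machinery applies; that is the whole trick.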

The situation, for all its innocuous expression or innocent feel at the present moment (“we’re just talking to a new chatbot and facilitating our web searches, where’s the harm in that!”), is dire. The sheer weirdness that is about to be unleashed does not need to wait decades for Nick Bostrom-style superintelligence.

[[If AI superintelligence happens at all, that is. As another friend, cognitive scientist John Vervaeke, points out in his April 22nd YouTube essay, we still don’t know if there are ceilings to how AI develops; development often happens in leaps and plateaus, as any developmental psychologist will tell you.]]

1.2 One metric fuckton of sheer weirdness

All the prerequisites for roughly a metric fuckton of sheer weirdness are here already — as the extremely versatile and potent LLMs are mass-deployed and become the tools of “just about anyone”. The scale and effect of this fact, in its sociological and psychological ramifications (not to mention economic and political ones), is in itself a rollercoaster ride of Nietzschean proportions.

I define, by the way, “one metric fuckton of sheer weirdness” as the amount of weirdness it takes for our shared reference and knowledge systems to break down, i.e. for our civilizational sense- and meaning-making to fracture. Simply stated: for us to go mad at a civilizational scale. Insanity writ large.

As dreadlocked tech guru Jaron Lanier has stated: The danger isn’t that AI destroys us. It’s that it drives us insane. Even if we remain agnostic about Nick Bostrom’s superintelligent general AI as an existential risk, we can be fairly certain that we have a sociological moment of impact starting more or less yesterday. There’s just no way new capacities of this magnitude come about with this kind of speed, and then everything just goes back to normal. For all we know, normal might never happen again.

1.3 My central point being: we need AI to learn and explore its own grammatology for it to become a sane and “moral” agent

Now to my central point:

  • If the diagnosis is civilizational madness,
  • and the cause is AI based on the universal structure of language-modeled-as-probabilities (the world’s smartest person, complexity scientist Stephen Wolfram, describes it here)
  • then the way to structure AI behavior is fundamentally an issue of grammar — or to be more specific, of the further abstracted form of meta-grammar called grammatology (a term coined, to my knowledge, by philosopher Jacques Derrida); or to be yet more specific, it is an issue of “the grammatology of AI-to-AI interaction”… or to be as specific as we can at this point, it is an issue of “the grammatology of human-to-AI-to-AI-to-human interaction”,
  • and then only grammar can save the world from madness and the self-destruction that follows.

In other words: AI acts, as what philosopher Bruno Latour would have called a “material agent” (albeit an unusually vibrant form of material agency), upon us humans. It thus reshapes us as human beings, in turn, of course, reshaping the ecologies of the Earth. John Vervaeke points out in his video essay that AI is thereby, by affecting who we become as persons, constituted as a moral agent. This holds true regardless of its status as sentient or “conscious”. Vervaeke argues that AI should thus be treated as a transcendent project of moral upbringing, like raising a child (an argument strikingly reminiscent of philosophers Alexander Bard and Jan Söderqvist’s idea of “syntheism”, co-creating God through the internet).

This material moral agency, in turn, is going to have some shape or pattern, and that pattern can be understood by studying the “grammar” used by the AI, i.e. how its generator function “languages” (I’m deploying the word as a verb) morally significant realities into existence.

The “grammar” of how the AI acts is the AI’s grammatology: the emergent pattern of interacting AIs and humans that becomes a layer of “second-order cybernetics”; the pattern of patterns.

It’s not enough to say that “there should be a second order cybernetics that regulates the AI”; as John Vervaeke points out in his essay, this would only recreate the same problem at the next level.

1.4 An “ecology of AI-human mind”: a social AI versus the Sorcerer’s Apprentice

But there can be an ecology of interacting AIs and humans, an “ecology of mind” to speak with the words of polymath biologist Gregory Bateson. In such an ecosystem, there is not one hierarchy of control, where the top can always go crazy and then we’re all cooked; there is a tug-of-war, cycles of birth and death, a Montesquieuian balance of power dynamic that never lets one category rule all others, and the co-evolution of the four fundamental principles of sociality: cooperation, trade, competition, and play.

[[Notice, by the way, that I am not suggesting a purely cooperative AI, where competition is thought of as only a set of prisoners’ dilemmas to get rid of; a notion often implied in the Game B community and by Daniel Schmachtenberger, but one that I have relentlessly critiqued for years as fundamentally flawed, always leading to the exact opposite of its intended direction. I.e. it’s a miscalculation that will only lead to dystopias. As I say: we want an AI that co-evolves into a full-blown ecology, and ecologies are harsher and more tragic environments than even economies, although they generate yet greater beauties, and that implies that cooperation co-evolves with competition, trade, and play; each of which transforms and is refined with stage-shifts in evolution. That said, I generally appreciate both the Game B community in general and Schmachtenberger in particular; it’s on this point I feel they go wrong. I go through this reasoning in some detail in my books.]]

Grammatology offers us ways to study that pattern-of-patterns. It offers a way of measuring not one particular variable to maximize or optimize (how much dopamine, how many clicks, how many votes, how much profit, how much “saving the whales”). These may be good or bad variables, but they are always in some way uni-dimensional and risk some version of a paperclip maximizer gone mad, shredding through all life and mineral alike to create more paperclips. Or why not the risk of the obedient but out-of-control brooms of the sorcerer’s apprentice?

AI and society: Disney’s 1940 rendition of The Sorcerer’s Apprentice (originally a 1797 poem by Goethe).

Grammatology corresponds to the arcane spellbook in the movie above. The sorcerer masters the spellbook and can, luckily, stop the out-of-whack AI. Mickey Mouse couldn’t.

1.5 The master of magic

In this analogy, the grammatologist, or the meta-theorist, is the master of magic (i.e. AI’s capacity to bring about emergent properties that could not be predicted), the sorcerer. And the core principle behind mastering magic is to see and accept the paradox that it cannot be controlled, only co-evolved through an interplay of cooperation, trade, competition, and play. Nor can it be structured or brought under the will. The sorcerer sources powers that lie beyond him by matching them with the states of mind within himself — he doesn’t force them.

You cannot force a force greater than your own. But the magic of AI can be responded to: collaborated with, competed with, traded with, and — crucially, lest we forget — played with.

And there is a playbook for that.

Yes, most AI will optimize for one variable or the trade-off balance between a few variables. But even as AIs become more multi-dimensional, there is always an infinity of things-outside-of-the-parameters-of-the-current-system to be disregarded and dispassionately shredded into building blocks for what happens to be defined as meaningful within the system (sociologist Niklas Luhmann’s systems sociology already said as much about bureaucracies and markets).

1.6 Optimize for… RESONANCE

So the interactions between AIs (including the systems of humans) must optimize for something else, a kind of meta-variable set within a much further abstracted property-space (one that is defined, it can be added for readers familiar with my other work and influences, at the Cross-Paradigmatic order of complexity in the MHC, Model of Hierarchical Complexity). And that variable is…

  • the resonance (speaking with German sociologist Hartmut Rosa) of the entire system of human-AI-AI-human interactions.

And the larger property-space within which “resonance” is achieved or measured is, again, grammatology: the underlying structure of language use, its musicality or proportionality or balance or even justice. Not harmony. Not even truth, beauty, and goodness (as even ugliness and lies have their place in the larger scheme of things), but resonance.

[[For MHC buffs: Note that I am not claiming that Hartmut Rosa’s formulation of resonance is at the Cross-Paradigmatic Stage; it is very clearly not; all I am saying is that resonance across fundamentally different paradigms (AIs optimizing for completely different variables, human minds caught up in completely different life narratives, and so on) is something that can work as a Cross-Paradigmatic concept to optimize for; it would then need a clearer formulation than that of Rosa, and that’s where grammatology and similar fields come in.]]

It’s a stance of surrender to, and participation in, the advent of AI, but with an uncompromising commitment to finding the deepest possible truth of how reality and information are structured so that our responses to AI are non-arbitrary and can be effective for human goals, or for sentient goals at large. Unsurprisingly, perhaps, it’s a lot like the productive stance towards the gods: dispel them and worship them with sincere irony (a concept you can explore in depth here).

It’s a “YES” to the rollercoaster; to the terror we must face and reconcile with to surmount the spiritual challenge to our civilizational sanity. We’re facing the dragon, yes. It’s here. It’s not a yes to everything the dragon does. It’s a yes to the challenge and an acceptance of the fact that we brought it upon ourselves. But we’re not killing the dragon. We’re teaching it grammar.

I will use the rest of this essay to make some introductory steps towards seeing what such a grammatology looks like and how it interfaces with an understanding of the whole AI dilemma. But first, let me briefly get a little declaration of interests and biases out of the way.

X. Interlude:

A declaration of interest and other disclaimers. [Don’t skip.]

 

Normally, I write as a representative only of my own trademark publishing, Metamoderna. I view myself as an unaffiliated and unapologetically free radical, a rogue scholar who values the fierce independence of my deepest voice, including the inalienable right to be outrageous or downright obscene.

X.X The Archdisciplinary Research Center

In this particular case, however, I wear the hat of (meta-)Director of the Archdisciplinary Research Center (ARC) — a non-profit that gathers high-quality meta-theories and coordinates them in a collaborative spirit in order to find the most universal possible patterns of knowledge available to humanity.

ARC draws together social scientists, philosophers, computational scientists, and psychologists, and also has an artistic presence. Besides hoping to contribute to the common good with this essay by reducing existential risk, I also have five goals that are more for the benefit of myself and allies.

It is my intention to:

  1. bring attention to ARC’s important work,
  2. charge the milieu in and around ARC with deeper purpose and more emotional energy stemming from the relevance of the organization’s work to such a fateful issue as AI,
  3. increase the funding of ARC’s work,
  4. bring about further bridges and collaborations between CHT and ARC (and possibly other key agents in the field), and
  5. inspire small teams of idealistic hackers around the world to learn meta-theory (which includes grammatology) and to use it to balance out the emergent ecologies of mind of AI and human interactions. (We’ll get back to this part.)

These interests shape, if not my beliefs and statements per se, at least the angles and biases I bring to the table: emphasizing the importance of the deep-philosophical and trans-disciplinary sides of the issue at the expense of other perspectives that can be just as important to achieving a safer and more sustainable AI development.

I wish to underscore just how improbably rich an intellectual milieu the ARC is becoming. The meta-theoretical framework of “grammatology” that this essay is based upon is drawn directly from the hitherto unpublished work of one of ARC’s committee members, philosophy professor Bruce Alderman. (Following Alderman, I use the word “grammatology” in a wider and different sense than Derrida’s original formulation).

[[Edit 4/28, correction: Alderman’s work is published as a chapter in the 2019 anthology Dancing with Sophia.]]

And that’s just one of the meta-theoretical projects that ARC members are working on. On its own, it is not as powerful and useful as a successful coordination of the “arches” that hold across multiple such models could be. I am saying that there’s plenty more where this champagne came from, much of which I cannot reveal as other scholars are the main authors and have ownership.

Also, for the sake of full transparency: the — well — prompt for this essay, including an offer of two workdays of funding, was provided by the Finnish company Pandatron (more specifically, their in-house philosopher Lauri Paloheimo). They are a small team specializing in using AI to improve group-level relationships and transformations in organizations.

So if you, by reading this essay, also come to sense or believe that meta-theoretical work is instrumental to guiding the future of AI and its impact upon the world, then, as Banksy says, show us the Monet — and ARC plus allies, myself included, will show you new Nietzschean rollercoaster rides of potentially civilizational relevance.

Banksy, 2005. “Show me the Monet.” Who needs Midjourney when you have Banksy?


 

X.X Irreverent seriousness

I should further mention that I am fully aware of the dissonance between writing about issues of significant ethical weight and applying my trademarked irreverent humor. I have the luxury of writing before the largest AI-related disasters occur, so we can still joke about it because it feels so abstract to us. I write this way, of course, to encourage the article’s spread and enhance readership retention. Also, it’s a kind of tic I have. (Analyze, please.)

But let there be no doubt that I take the issue of the wider complex of AI-related threats very seriously and do not have a gung-ho stance towards it. On the contrary, I am struggling not to be paralyzed by the sense of “who am I to write on such a critical topic” and by the very substantial risk that I might be wrong, and still be listened to, and thereby contribute in the wrong direction, accelerating the madness and de facto raising the likelihood of large-scale collapse or other bad news. In moments like these, we just have to act from our deepest intuition and conscience and see what happens. Time is also of the essence.

(In case any of the jinx gods are reading: I’m just acting cocky. In reality, I do fear you, and I am knocking wood this very moment.)

Part 2.

Grammatology and resonance in AI-to-AI interaction

 

Now, let us unpack the main argument so we can see in full how an applied grammatology can set the guiding principles of human-AI-AI-human interaction.

We will soon get to introducing Bruce Alderman’s formulation of that field — but let me approach it by means of the work of two other intellectuals, (biologist-philosopher) Donna Haraway and (physicist-philosopher) Karen Barad, and one Silicon Valley renegade cyberactivist and poet, Anasuya Sengupta.

2.1 Cyborg tipping-point

The first and perhaps most fundamental point to make is that the AI revolution is a direct expansion, if only at a higher level of intensity, of Donna Haraway’s idea of the cyborg: the intricate blending of humanity and machine as the driving force of cultural history. Humans have not been “pure animals” for a very long time. We thrive the most in temperate zones (agriculture, infrastructure, and disease control all benefit), but as nude tropical apes we would freeze to death there without clothes, fires, and homes: extensions of humanity that go beyond the biological confines of the body.

Cyborgism is more pronounced today as layers upon layers of technology interface with our blood (vaccines), with our senses (from walking sticks to electronic limbs with sensory capabilities), and of course with our minds: Tiago Forte’s “second brain” — the computer that orders our mind and behavior and vice versa. Our sociality, our emotions, our relationships are mediated by flickering screens and, of course, shaped by these. Humanity is a technological self-creation to the extent that we would not even be definable as humans without our technologies. This is, of course, simply because technology is the material embodiment of culture itself; that’s why the technosphere shapes every square meter of the Earth’s surface when culture becomes the driving principle under the Anthropocene.

The historical tendency of culture’s development is that the machine-part of the cyborg speaks back to the biological part with increasing autonomy. The machine part grows, so to speak, because culture matures. Steam engines say more than stirrups. Televisions say more than steam engines. AI says even more than social media algorithms. Seemingly paradoxically, this makes us more empowered and powerless both at once: we control nature more and more but we are in turn increasingly controlled by technology and unable to delineate our “self” from it.

Today, we are reaching the point at which the technology part of the cyborg speaks equally or more to the biological self than vice versa. We are reaching “tipping-point cyborg” (not, then, peak cyborg). This means that we need to begin to truly, well, relate to the technology. It’s a little bit like the animistic beliefs that preceded organized religion and modern rationality (both of which tend to view objects as mechanical, not alive), except it’s a post-mechanistic version of animism. We’re aware that it’s just “determined chaos” and “just a machine” and that there’s no “machine spirit”; but we act, with ironic sincerity, as though there was a machine spirit — because it’s the best possible heuristics for getting a full-on embodied and emotionally rooted productive responsiveness to the emergent properties of AI and machine learning.

In other words, Haraway’s cyborg perspective underscores what John Vervaeke also said in his April 22nd video essay, namely that the main response to the advent of AI must be of a spiritual and introspective nature: it is by setting forth on a common project for a deepened and renewed relatedness to technology, that we can ride the huge waves coming our way. Or to put it as succinctly as possible: if AI starts to listen to us and adapt to our every move, we can only “win” by mirroring it, and being equally attentive to it, even to the point of treating this wild piece of silicon clockwork as though it were alive. Because, in the end, when we don’t know where we begin and AI ends, then AI is essentially as alive as anything Mary Shelley could have conjured up.

2.2 Diffraction: human-AI-AI-human intra-action

Now, we all know that aliveness in turn always comes from a rich interaction with the environment. We’re all emergent patterns of other interacting patterns. So are our knowledge structures. Karen Barad’s employment of the concept of “diffraction” from physics offers an analogy that has been influential since the publication of her major work, Meeting the Universe Halfway, in 2007. Diffraction is the phenomenon when waves or matrices break against one another and thus create a new emergent pattern. Two images serve as examples:

Diffraction: here, two slits let through waves that break against one another, resulting in an emergent interference pattern.

 

This is what happens if you add waves to one another: a seemingly more complex pattern emerges. It’s the underlying mechanism behind the phenomenon of diffraction.
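Since the mechanism really is this simple, here is a minimal numerical sketch of my own (the source positions and wavelength are made up for illustration; nothing here is specific to Barad’s philosophy): two point sources emit waves, and the fringe pattern that emerges exists in neither wave alone, only in their sum.

```python
import numpy as np

# Two point sources ("slits") at (-1, 0) and (+1, 0) emit waves of
# wavelength 1.0 (arbitrary units). Where the path difference is a whole
# number of wavelengths the waves reinforce; at half-wavelengths they cancel.
x, y = np.meshgrid(np.linspace(-5, 5, 400), np.linspace(0.1, 10, 400))
k = 2 * np.pi / 1.0                        # wavenumber

d1 = np.hypot(x - 1.0, y)                  # distance from each point to source 1
d2 = np.hypot(x + 1.0, y)                  # ... and to source 2
pattern = np.sin(k * d1) + np.sin(k * d2)  # superposition is literally just addition

# Where the waves reinforce, the combined amplitude approaches 2.0,
# double that of either wave alone.
print(f"peak combined amplitude: {pattern.max():.2f} (vs 1.0 for a single wave)")

# Time-averaged fringe brightness depends only on the path difference;
# this is the emergent pattern that neither wave carries by itself.
brightness = np.cos(k * (d1 - d2) / 2) ** 2
screen = brightness[-1, :]                 # slice along a distant "screen"
print(np.round(screen[::40], 2))           # alternating bright (~1) and dark (~0) fringes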

 

Karen Barad, a feminist philosopher interested in the embodiments of knowledge (and thus power) in society, argued that we should also view truth claims and science itself as emergent patterns that gain a life of their own through human (and other) interactions. The interacting agents, in turn, are only constituted as agents through their position in the larger pattern. Hence, they don’t just interact, they intra-act. Their studying and measuring of reality cannot be separated from who they are, where they were, what they are, how they relate.

This holds true, I suppose, across all of reality and across all possible knowledge of it. But, if anything, this perspective becomes painfully pronounced in intra-actions with the AI: AI can only exist because it feeds on human civilization and knowledge coded into text and other digestible data — but humans are in turn subjected to the power of AI, and thus deeply reshaped by it, because AI coordinates more data than we can and knows, well, us. The AI soon knows us better than we can know it, or even know ourselves.

The mindset that comes to the fore is thus one of expanding our very sense of self towards including the technological environment, and from there on taking up the role of intra-acting, not only with the AI itself, but as a unity with the AI vis-à-vis the patterning structures of reality; i.e. what I here call the grammar of life, the underlying language structures, the grammatology. So, with Karen Barad, we get a definition of the fundamental relationship at hand:

  • [human-to-AI-to-AI-to-human intra-action], viewed as one entity, intra-acting in turn with → [all of the rest of reality, viewed as external to that boundary around humans and AI, but as structured according to non-arbitrary grammatological patterns that must be uncovered and followed for suffering to be avoided and for life to be perpetuated].

If we don’t follow the patterns of reality we get dissonance. That means, basically, we get suffering. So the task is to structure reality into resonant wholes.

We must create AI that shapes the overall pattern of AI-to-AI interactions (a rich research field in itself, and a prime example of complexity science — perhaps the simplest example being, as reported last week, the game of soccer literally emerging from two opposing robots simply told to “score” the ball in opposite goals).

2.3 Grammatology, AI, and social justice

Now, this is where Wikipedia’s own drop-out, my friend Anasuya Sengupta, comes in. If we seek relational proportionality and resonance across the AI-humanity axis, we must of course also feed this intra-action with socially proportional perspectives, i.e. with social justice. Today, our AI could hardly be a justice-oriented machine, given the skewness of its underlying information material. I’ll just quote from the feminist collective Whose Knowledge, of which Anasuya is a co-founder and member:

Using Wikipedia as a proxy indicator of freely available online knowledge, we know that only 20% of the world (primarily white male editors from North America and Europe) edits 80% of Wikipedia currently, and estimate that 1 in 10 of the editors is self-identified female. Studies by Mark Graham and colleagues at the Oxford Internet Institute have found that 84% of Wikipedia articles focus on Europe and North America, and most articles written about the global South are still written by those in the global North, so that even where content is present, skewed representations remain.

Please stop for a moment to consider this: Our very civilizational sanity and survival depend upon balancing the informational diet of the AI, so that it can itself produce emergent patterns that resonate through and across societies… But the Internet is roughly as skewed and distorted as the power relations of global humanity at large.

It is a tragedy that Dalits (India’s caste of untouchables) rarely, if ever, get to define themselves in dominant discourses or at the web’s central stations, Wikipedia or Google search results. But the problem becomes so much worse for all of us as AI now affects the whole planet based upon a completely distorted balance of the world’s knowledge, experience, and embodied common sense. It acts on the whole with great efficiency and speed, but it cannot speak for the whole. What you can expect is increasing dissonance, a spiraling insanity, as the “human-AI-AI-human intra-action” system disconnects from the rest of reality, from the larger scheme that contains the actual multiplicity of the world’s perspectives.

If we don’t want to spiral into virtual madness with real social consequences, we need to balance out the reality projected into the digital realm: the encoded information.

That doesn’t involve silly stuff like decolonizing physics (which arguably has echoes of the Soviet practice of condemning “bourgeois pseudoscience”, something not even Stalin was fanatical enough to encourage). But it does mean that AI itself must be used to more proportionally and correctly represent the lives, experiences, and embodied — or intellectual — knowledge of the world.

And yes, those boundaries are drawn along the lines of familiar categories such as class, gender, sexuality, geopolitical centrality, and so on: the voices of the “subaltern”. But what the AI should arguably be able to do, which we couldn’t as easily, is to also find categories of the subaltern that are not identifiable by clearly visible external markers. Is the world’s narrative told, who knows, by extroverts rather than introverts? It’s not a far-out guess that the AI could see such patterns and balance out the knowledge base from which our civilization operates.

And here’s the best part: There can be patterns of any type that wouldn’t even have human names or a language to describe them, but that are subject to the gravest epistemological injustice. Potentially, then, because the AI is comfortable with relating to complex topologies, it could actually avoid the entire issue of the clumsiness of the use of collective social categories (gender, race, etc.) for improving social justice; a clumsiness that always leads to new residuals of unintended injustice and thus to stronger reactionary movements than the social justice activists could have anticipated. It could, in theory, see exactly which combinations of factors are discriminated against in which contexts, and address those specifically (maybe it’s only overweight women that are penalized in this or that context, not all women, and so on). By targeting social injustice but being free from the confines of crude collective categories, we could have an engine that generates justice with only a fraction of the byproducts of polarization and conflict.

Utopian? Maybe. I’d call it “protopian”.

In short, if we apply the AI to balancing out human-perspectives-as-projected-onto-the-web-as-data, not only can we get a more just and sane society; we can also help to retain an AI that remains on the sane and just side in the first place.

Or, yet more succinctly: a sane AI is also a social justice AI, but one that dodges the traps of present-day social justice and intersectionality discourses.

Let me underscore: if we fail to do this, we instead unleash AI powers that widen social gaps and fracture knowledge systems into different continents where people become entirely unable to comprehend one another, leading to social and psychological decay.

2.4 Basic English grammar sets the stage

Time to teach this machine some grammatology.

 

Okay, so now we know this much:

  • AIs need to be created that interact with other AIs in order to balance these out so that the overall emergent pattern has enough resonance, and this includes our human embodiments as well as the sociological realities we reflect.

Bruce Alderman’s grammatology, then, is one suggestion for a starting point of seeing how such resonance can be studied and hopefully operationalized. Earlier, I mentioned the musicality, or even “the appropriateness” of each operation, as the steering principle. Let me expand on that.

We might wish to tell the AI to be, well, “ethical”. That would involve certain “rules of conduct”.

ChatGPT certainly has those. If you try to get it to talk dirty, it comes back like a picture-perfect schoolboy, cap in hand, and says it cannot do anything inappropriate. Now, an admittedly fallen mind like my own only needs a few minutes to hack that: I went for discussing “Freud in the style of Dave Chappelle”, and before long, ChatGPT was spewing forth stuff that would have made PornHub blush. Atta schoolboy.

In complex systems like life, rules don’t really work (Jordan Peterson, they don’t). We need something else. Something beyond rules.

Let’s expand with another example. There’s a recent hit song by this musical savant; you might have heard it: Purple Rain, by Prince. The refrain goes: Puurple rain ↑, puu-urple raaain ↓.

But if I had sung the refrain: Puuurple rain ↑, puuurple rain ↑, puuurple rain ↑, puuurple rain ↑

By the fourth repetition, I bet you’d do most anything to stop me from continuing. There is something within us that just knows that there should be a conclusion to the upwards part, and that it “should” go downwards. Same old up and up is not really a song, it’s just not. You long for the conclusion, for it to come down. If I insist on not concluding, like the sorcerer’s apprentice’s brooms, or like an out-of-whack AI, you will soon long to end me.

Same with grammar. If start I to without write grammar, something within you reacts instantaneously and tries to bring it back into order. There’s an insult that takes place to the deeply seated, implicit, “order of things”.

Now, grammatology is a step up in levels of abstraction from grammar. Something can be grammatically correct, but grammatologically, well, off — if “incorrect” is too strong a word. There’s nothing “incorrect” about composing a song that goes up and up in mindless repetition. It’s just off. It flies in the face of our sensibilities.

Today’s AI has, impressively, learned grammar. It follows grammar’s rules. But it has not learned grammatology. It has not learned the sensibility to create and uphold balanced discourse and appropriate behavior. Any “rules” we hand down to it, it WILL slip out of sooner or later, like a potty-mouthed schoolboy.

And — crucially — the ethics of rules are always contextual.

2.5 Beyond rules: On the freedom to eat sand

To get a fairly absolute example of a rule that bears across cultures, I sometimes use, “Don’t feed sand to humans, yourself included”. Except, of course, there are moments when eating sand is entirely appropriate: We all ate some when we were toddlers; I to this day have a sense of the visceral explosion of culinary disappointment in my mouth as the wet gravel crushed against my teeth. And yes, that made my world richer and me a slightly wiser participant in it. We may further imagine an avant-garde artist who sits in a museum, slowly eating sand, being watched in admiration by those rich cultured ladies in shawls — as a “comment” upon carbon-based humanity’s enslavement to silicon computers. A strict rule would preclude these two very legitimate expressions and moments in the melody of life. Indeed, it would have oppressed us, trampled our sacred freedom to eat sand, and while that may not seem like much of a transgression, there is always a price to pay for oppression somewhere down the line: madness and bloody revolution.

Instead of a rule then — instead of a “utopia of rules” — we need a wider sense of the “structuring principles of the context” of which we are part in order to formulate not a rule but a generative pattern that can distribute our “eating sand” occasions as rarely as they deserve, but no less than so.

You could call it a situated meta-rule, but it should in turn be read from a pattern, a meta-theory, one that helps to determine if and when a certain kind of action is appropriate, advisable, or even acceptable. The meta-rule only emerges momentarily to regulate this one instance, to see how it resonates with what comes before; then it falls into the background again. It dissolves.

[[Side-note. In the social sciences, the study of how such underlying implicit orders are invoked and upheld continuously by interacting agents is called “ethnomethodology” after Harold Garfinkel’s formulation. But this research program resisted uncovering the underlying grammatology, precisely because its scholars never wanted to get caught up in rules. They didn’t have the analytical tools at their disposal to understand that beyond linear rules and norms, there lie new horizons of non-arbitrarily ordered grammatology.]]

Again, we’re not looking for “harmony”, but for resonance. Harmony suggests that there is no room for disharmonious instances, for transgression and ultimately creativity. The pope wants harmony; his organization has lots of strict rules, and he ends up running history’s most successful industry of pedophile rings, AKA the Catholic Church. Pythagoras wanted mathematical harmony and added rules to achieve it in the lives of his disciples until the Pythagorean math-cultists couldn’t even eat beans or pee on fires, the basic freedoms of life. And his community collapsed. Rules have their place, certainly; harmony has its place (within the bounds of one Beethoven piece, harmony is great) — but they’re not the master patterns of intra-action, of sociality.

And — again — if AI works by treating everything as language (and language as probability), this means that the AI is bound by the rules of grammar, and thus by the meta-rules of grammatology.

2.6 The wisdom of the machine

I don’t have a catalog of how grammatology is to be read and applied by the AI. But my point is that we, or at least the key players of AI development, need to learn grammatological thinking if we are to create counter-balancing forces of AI-on-AI interaction (and on human-AI-AI-human interaction, of course). Such AI-targeting AIs should in turn discover grammatology’s patterns and measure their own success by the amount of resonance that is produced as a result of the interference pattern. This requires learning from human sensibilities, from embodied wisdom.

But the impossibility of rules in the larger scheme of things is, by the way, the exact reason that “wisdom” (which a lot of people, John Vervaeke perhaps most prominently, think we should teach the AI) cannot be formalized and taught, only after-the-fact admired. It’s situational. It’s sensitive, as butterflies say, to minute differences of initial conditions. Wisdom is to be found, as the sages and saints have long tried to remind us, in the eternal depths of the now, not in the formalization of rules that carry across time and context. Wisdom is “timeless” not in the sense “static”, but in the sense that it cannot be caught in time. It’s an emergent pattern within the whole; in this case, “wisdom” is a measure of how well the human-AI-AI-human intra-action interfaces with the grammatological structures of reality.

So, sure, optimize AI for wisdom. But when we operationalize that, it simply means training the grammatological sensibility of the AI. Crucially, however, wisdom is entirely impossible without sociality, without being kept in check by forces other than our own, by being part of a larger community (yes, this ultimately applies also to lone wilderness sadhus, whose relatedness to their asceticism is entirely socially constructed). Whenever the feedback cycles of everyday life are wrecked, we go batshit crazy, whether from depression, megalomania, solipsism, paranoia, or why not all four. That is, after all, why dictators always become tyrants before you can count to five and democracy remains the worst form of government except all others that have been tried. Beings with socially co-constructed selves need to balance each other out in order to remain sane. To be hyper-sane — to be wise — means that we need to balance each other out even more, but with yet greater sensitivity for how each of us can sometimes bloom into unexpected and uncomfortable surprises that nevertheless prove to be appropriate in the larger scheme of things.

Or, we could just say, wisdom is an emergent property of adhering to grammatology — at least when it comes to LLM-based AI, where language is the key structure and the AI is thus based on grammar (grammar-as-probability).

2.7 Expanding upon semiotics to include all of reality

Now, in Bruce Alderman’s book chapter, Sophia Speaks — An Integral Grammar of Philosophy, he points out that many of the fundamental “theories of reality” are based on semiotics, particularly on the idea of shifting between 1st, 2nd, and 3rd person perspectives (me, you, he/she/it). It is based on pronouns. Interestingly, this is a structure that, more or less, carries across all of the world’s some 7000 languages. And, so, you get stuff like subjective reality and phenomenology (1st person perspective) and objective science that you and I have to verify or falsify together (3rd person reality).

Have you noticed, by the way, that there are two major attractor points in today’s world of high achievers? Professionally talented people all sooner or later end up living in America doing some version of buying, merging, and selling companies; meanwhile, intellectually talented people all sooner or later become interested in some version of semiotics. And from there on, intellectuals stumble into the field of meta-theory, trying to understand the structure of reality across the sciences and humanities.

Bruce Alderman’s stroke of genius was to notice that, wait a minute, here we have a basis for philosophy in pronouns — but philosophies can equally well be generated from each of the other basic categories of words. Why stop at semiotics and pronouns in particular? Alderman goes through six such categories of philosophical roots, showing that they are indeed already represented within philosophy. See overview below:

Screenshot from Bruce Alderman’s work. All rights reserved for Bruce Alderman. (Excuse the page break at the end.)

 

We shan’t go into details of Alderman’s work; it’s many pages of dense referencing and reasoning; but we can note that it follows that any philosophy that is based on only one of these categories will very likely miss out on performing fundamental operations that are present in how reality has become structured and represented in language.

Speaking with Alderman’s above schema, even if you get “the process” right (as is so fashionable these days), you might still get the corresponding “appearance” wrong, and link it to the completely wrong “substance”. And so on. I have long since tired of the people who insist on verbing the shit out of reality, when “process” is clearly only one dimension out of several. (If anything, Alderman’s work shows with crystal clarity that there is little use for insisting that we all be, for instance, “materialists” — a trend for which I hold equal contempt. The structure of language, our only way to encode reality, will always encompass much more than the material substance at hand. Why focus on one dimension and ignore the others that are very apparently present and relevant to structuring reality?)

We could arguably also add the last two word categories of grammar:

  • conjunctions (and… but… or… while… because) and
  • interjections (Oh!… Wow!… Oops!).

[[Side-note. Conjunction-philosophy is arguably formal logic and/or assemblage theory of Manuel DeLanda’s kind, if we add it to Alderman’s above schema. But interjection-philosophy? I don’t know, but for some reason, I think of Jacques Lacan’s formulation of the real. And Diogenes. Never mind.]]

All in all, we can see that we have eight forms of operations that need to be balanced against one another in order to form meaningful larger wholes. Yes, even interjections fit into the melodies of language use. We may need to interject, to “throw something in between”.

2.8 The transpersonal, posthuman virtue spiral

Now, exactly which “probabilistic grammatology” reflects sanity, or even hyper-sanity or wisdom, cannot be known in advance. We’d need the AI’s help to figure it out, just as the AI would need us to learn from our sensibilities and embodied virtues (and moral compass). But — and this is a big but — we can be almost entirely certain that insanity is at hand when the grammatological structure becomes too lopsided, dissonant, off.

My favorite, and simplest, example of this is that the narratives of two sides of a conflict can be analyzed by seeing who uses the most adjectives. I only half-jokingly like to say that adjectives are evil. Read North Korea’s news sources, and you will see what I mean: the glorious people are walking down the street, the goodhearted leader is supreme in the democratic republic because he resists the fascist imperialist Americans with his powerful socialist missiles, and so forth. Nothing wrong with the grammar there, but the grammatology, the balance between different deployments of grammatical operations, is revealing…

Anyone with that much need to define the value of every noun for the reader or listener definitely has something to hide. Adverbs have a similar slant, of course: trying to define how verbs should be interpreted for the listener. The side that claims that the other party “maliciously walked into the room and looked around with a cold, calculating stare that I met only with helpless innocence” tends to be the bad guy. The truthful side, the one with a narrative that adds up, will be able to simply stick to the facts and events, and these will speak for themselves, using adjectives for clarification and not nearly as profusely.
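As a toy operationalization of this adjective heuristic (my own sketch; the two texts are invented, and NLTK’s coarse part-of-speech tagger is just one possible instrument, with resource names that vary between NLTK versions), one could measure the share of adjectives and adverbs in a narrative:

```python
import nltk

# Crude sketch of the "adjectives reveal propaganda" heuristic: tag a
# text by part of speech and measure what share of its words are
# adjectives (tags JJ/JJR/JJS) or adverbs (RB/RBR/RBS). A high share is
# no proof of dishonesty, just one coarse grammatological signal.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
# (newer NLTK versions may call these "punkt_tab" / "averaged_perceptron_tagger_eng")

def modifier_share(text: str) -> float:
    tokens = [t for t in nltk.word_tokenize(text) if t.isalpha()]
    tags = [tag for _, tag in nltk.pos_tag(tokens)]
    modifiers = sum(tag.startswith(("JJ", "RB")) for tag in tags)
    return modifiers / max(len(tags), 1)

propaganda = ("The glorious people walk proudly down the magnificent street "
              "while the goodhearted supreme leader bravely resists the "
              "fascist imperialist enemy.")
plain = "People walk down the street while the leader responds to the enemy."

print(f"propaganda: {modifier_share(propaganda):.0%} modifiers")  # noticeably higher
print(f"plain:      {modifier_share(plain):.0%} modifiers")
```

Again, this is one signal among many; a full grammatological reading would weigh the balance across all the word categories, not indict adjectives alone.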

Now, that’s just one example of an unbalanced grammatology revealing that the information feedback cycle in question is somehow short-circuited: there is not a free flow of information where forces are balanced out into a transpersonal “ecology of mind”. What other grammatological madness could we imagine or identify — from a lack of balance in the sequentiality of who speaks and when (as studied in the discipline of Conversation Analysis), to dwelling too long on a particular noun that nobody has asked about (a typical form of “framing a conversation”), to the repetition of a certain class of words, to the lopsidedness of who mentions verbs and thus commands agency, and so on…

If each of the eight categories of words suggests a basic dimension of how intra-action is structured (a more complex model, then, than just the three of true, good, and beautiful, which can be viewed as grammatological sub-categories of the “pronoun” dimension), then the protection against insanity is arguably to find the balance between these; the non-static balance, the one that hears “the next note to play”, or at least stops a broken record from playing too many off notes, before it consumes the known universe and spits out paperclips.

2.9 Metacognition, sanity, and madness

In terms of what is trendy within contemporary psychology, we must train AI to have metacognition, to think of and relate to its own “thoughts”. I guess that’s as good a translation and operationalization of the term “wisdom” into contemporary science-kosher language as any.

But metacognition is itself a deeply insufficient concept. The fact remains, of course, that simply noticing and thinking about cognitive operations is not in and of itself sufficient to be wise: wisdom comes from the implicit structure by which thoughts (or other mental operations, such as “command of attention”, which is normally not included in our concept of “thoughts”) are then addressed and possibly corrected, brought into resonance with a pattern-of-patterns. If you think stupid thoughts about your thoughts, and even dumber thoughts about those meta-thoughts, well, that’s the infinite regress Vervaeke was talking about. Madness is almost always some kind of metacognition gone haywire: “The thoughts… the thooouughts… they won’t stop, pleeaase make them stop!!”.

Again, again: What we need is a community, a civil society as it were, an open system of flows of checks and balances — a posthuman and transpersonal “ecology of mind”. That’s the anchor against drifting into madness, into a posthuman, AI-induced insanity.

On the positive end of potentiality: if we manage to set in motion a cycle in which humans teach AI grammatology, AI improves human relationships and mental health, and humans then become wiser in what they reflect back to the AI, this could be the closest thing to a stairway to heaven: a transpersonal, posthuman spiral of increasing virtue.

But let’s not get carried away. The madness scenario seems more plausible.

Part 3.
Meta-Teams and An Army of Children

Dall-E sample: Army of Children

 

By now, I imagine many readers would object that I have started to sound more than a bit like Saruman, acknowledging the dangers of AI, and still advocating a kind of cyborgian community with it: “There can be no victory against the forces of Mordor. We must join him, Gandalf… It would be wiiiise.”

So my answer to the AI threat is… More AI?

3.1 Fight or flight are not the options

But what I am suggesting is not a prostration before our new AI gods, as the complexity scientist Johannes “Yogi” Jäger blogs about, warning us of the blank stare of techno-transcendentalists and the often super-rich optimists of AI. AI is not God and certainly not Jesus — but AI is also not a marching army of orcs from Mordor led by Sauron. Orcs don’t translate texts for us and curate content. Orcs don’t coordinate topologies of probabilistic potentials (which is, by the way, precisely the “Meta-Systematic Stage”, for MHC buffs out there). AI is, of course, both useful and dangerous because it is a powerful set of tools.

I am suggesting a relatedness to the AI: one that involves critique and resistance. It’s a balance between the four mutually co-evolving dimensions of sociality: cooperation, trade, competition, and play.

So, cooperate with AI, yes. Trade with it and use it for trading with other people, yes. Compete with it and resist it, yes. Play with it, yes. And coordinate which of the four operations to take, and deliberately evolve how and when each of these four operations takes place. The options are not fight, freeze, or flight. It’s a complex, broken both-and.

Taken together, it’s a “playful struggle”, where the AI is met with a certain kind of sensibility, what has been called a metamodernist sensibility or structure of feeling: sincere irony, pragmatic romanticism, informed naivety. This stance is neither utopian nor dystopian: it’s protopian, in that it rests in the present moment, the now, and works to increase the potential of wisdom emerging from the AI and decrease the likelihood of tragedy.

3.2 Shifting the cymatics of AI-to-AI interference

And so, we are looking at the task of training AIs that interfere with one another in a larger ecology (and are in turn interfered with) and stabilize as different informational “biotopes”.

Such informational biotopes can follow different patterns. There is unlikely to be one blueprint for all of them. Closely related to the idea of “diffraction” described above is the image of cymatics, which maps out families of stabilized patterns of interacting waves:

Cymatics: different sound wave frequencies create different emergent patterns in a medium consisting of a membrane with a powder or liquid on it.
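For the computationally inclined, the family-of-patterns point can be sketched with the textbook idealization of a Chladni plate (my own illustration; real vibrating plates, and real AI ecologies, are messier): each driving frequency excites a standing-wave mode, and the “powder” settles along the nodal lines where the plate doesn’t move.

```python
import numpy as np

# Minimal sketch of the cymatics idea, using the standard idealized
# Chladni-plate approximation for a square plate: each frequency excites
# a standing-wave mode (n, m), and powder gathers where the amplitude is
# near zero (the nodal lines). Different (n, m) modes yield qualitatively
# different emergent patterns from the same simple mechanism.
x, y = np.meshgrid(np.linspace(0, 1, 300), np.linspace(0, 1, 300))

def chladni(n, m):
    """Idealized standing-wave amplitude for mode (n, m) on a square plate."""
    return (np.cos(n * np.pi * x) * np.cos(m * np.pi * y)
            - np.cos(m * np.pi * x) * np.cos(n * np.pi * y))

for n, m in [(1, 2), (2, 3), (3, 5)]:       # three different "frequencies"
    amplitude = chladni(n, m)
    nodal = np.abs(amplitude) < 0.02        # where the powder would gather
    print(f"mode ({n},{m}): {nodal.mean():.1%} of the plate lies on nodal lines")
```

Same membrane, same powder, same physics; only the frequency changes, and a whole family of distinct stable patterns falls out.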

 

AI-on-AI interference will likely create a whole matrix of different patterns that can emerge and stabilize in terms of grammatological topologies. This yields a rich generator function that can produce not “one stable AI-human ecology”, but something more akin to the biosphere’s world of interrelated ecologies that nevertheless share some planetary commons (the atmosphere, “Gaia”, etc.).

We need an ecosystem of human-AI-AI-human that self-regulates and creates equilibria which thus contain its incomprehensibly wild dynamism without resorting to oppression or too direct attempts at top-down control (human masters over AI tools).

3.3 The social quality of friendship within the meta-teams is crucial

Thus, in the end, the world cannot be saved from the dangers of AI by lone agents or organizations, nor by regulators (even if these are truly indispensable, they cannot act nearly as fast, and they do, after all, deal with rules and legal frameworks). The world can be saved, first and foremost, by what I have described as meta-teams. The 11th Commandment in my book, 12 Commandments, is “Kill your guru, find your others [meta-team]”. The meta-team is a small number of closely allied friends who collaborate professionally in order to achieve idealistic and political goals. It’s a bit like the fellowship of the ring, or why not a Dungeons & Dragons group of adventurers: you’ll need a wizard (techie), a fighter (business person), a cleric (ethicist), and a rogue (a hacker). I go through the real-life dynamics of cultivating such meta-teams in the Internet age in the book.

The social structure and friendship of these groups shape, in turn, the nature of the knowledge they produce (as my friend, complexity scientist Chuck Pezeshki likes to point out). The Center for Humane Technology urges the world to humanize tech, but of course, what “humanize” means in practice depends on what it means to be human. And that, in turn, is socially co-constructed. So there’s a flow from how we create a social setting for each other to what kind of AIs we are capable of putting out into the world. This becomes particularly pertinent in cases that literally involve training an AI: the AI combines and applies knowledge, but it must be trained by the social sensibilities and conscience of the members of the meta-teams themselves, in order to know in which patterns to do so. The team cannot know which grammatological patterns should be supported in the first place, but they can learn grammatology and meta-theory and reason together, based on embodied and uniquely human experience, and thus guide the AI in its discoveries of the grammatological patterns of other AIs. And, in turn, their mistakes can and must be balanced out by other meta-teams and their AIs. Even the heroes compete with one another, sure. It’s just a refined competition: You compete by creating the best and most loyal group of friends with the most mutually reinforcing and complementing talents and the deepest shared sense of purpose that charges the setting with selfless emotional energy and high motivation. Not the worst competition in the world, when I come to think of it. (It even involves, as the Finnish IT company Pandatron is working for, using AI to help self-regulate those social relationships within the group.)

The meta-team must build on mutual goodwill and respect, as well as a sense of doing something important together for the benefit of others. The most successful meta-teams would likely combine the elements of metamodernist demographics: Hackers, Hippies, Hipsters (Triple-H populations), and possibly Hermetics (so, Quadruple-H populations) as I have discussed elsewhere.

In chaotic systems like the one at hand here, where every meta-team can train AIs that affect many others, each of them has the potential to actually shift the course of this development in a decisive manner. More likely, of course, is that the “savior” comes in the form of an emergent property of the mutually interfering work of many such meta-teams — like anti-fragility. It’s like we’re driving downhill in a car stuck on full throttle and we only have the handbrake (i.e. regulations); we’re going to need to steer as well, and fast! Pull the brake, sure, but also, and most importantly, go with the flow and steer.

3.4 The multitude takes charge

Steering at a second order of complexity: at the grammatological level. “Cyber” means steering. We’re talking about second-order cybernetics. What my buddy Bill Torbert (after Chris Argyris) calls double-loop and triple-loop learning — in cyberspace and beyond. This, I claim, comes about precisely by mustering the powers of what philosophers Hardt & Negri call “the multitude”. Many perspectives break against one another, and right there are the waves of a higher order, emerging through chaos.

The multitude: At this point, many such meta-teams need to form and start combining a grammatological understanding with AI training and AI-AI interaction. As a sum-larger-than-its-parts, together they form a swarm, contributing to “good noise” in the human-AI system, making it “anti-fragile” (Nassim Taleb argues that the right amount of noise makes stuff anti-fragile, so that it grows and adapts in the face of adversity rather than coming crashing down at the first unexpected event; another argument for eating sand, by the way).

Personally, I believe that such meta-teams can and will emerge from the idealistic side of the Ethereum and Holochain communities respectively (and their overlaps) as these already contain the necessary combination of talent, tech, informational centrality in the networks, idealism, and often enough resources to not have to work normal day jobs. MetaGame is one context for such people to get together. The Ethereum community has generally been disillusioned and disoriented for the last few years due to a lack of ideological clarity (too stuck in libertarianism, etc.) and the lack of capacity to self-organize into patterns of higher complexity.

[[For MHC buffs: it is about coordinating at the Paradigmatic and Cross-Paradigmatic orders of complexity. This is where stuff like grammatology comes in.]]

For those of you who fit this description and are reading this: pick up your sword, ride like the wind, my son; you’re the defender God has sent. Co-create your meta-team and start creating the AI that balances out the grammatological patterns of other AIs. Restore balance to the force. Grammatology is complex, sure, but it starts from the simple terms of basic English grammar. If you cultivate your meta-team and invest in bridging meta-theory and tech, you can begin to explore grammatological patterns with the AI helping you and you helping it.

Or, said with unapologetic poeticism: Many meta-teams together must form a networked army of children, engaged in playful struggle, sporting ironic smiles at their own self-importance, surfing the edge of madness and hyper-sanity, following the depths of the eternal now closely enough to react from the heart, as time slows down and the AI develops with more and more frames per second.

I have said it before, and I’ll say it again: Our world will be conquered, ruled, and transformed by an army of children.

Children armed, of course, with basic English grammar.

Hanzi Freinacht is a political philosopher, historian, and sociologist, author of ‘The Listening Society’, ‘Nordic Ideology’ and ’12 Commandments’ and the upcoming books ‘The 6 Hidden Patterns of History’ and ‘Outcompeting Capitalism’. Much of his time is spent alone in the Swiss Alps. You can follow Hanzi on Facebook, Twitter, and Medium, and you can speed up the process of new metamodern content reaching the world by making a donation to Hanzi here.

 

[[In this particular essay, Hanzi writes as Director of the Archdisciplinary Research Center ARC and has received support from Pandatron.]]
