Medium Hot: Hito Steyerl on Artificial Stupidities and Existential Risk

Few artists have diagnosed the fever dream of our digital age quite like Hito Steyerl. A filmmaker, theorist, and educator, Steyerl combines documentary realism with speculative fiction and first-person narrative to challenge the political boundaries between art and tech, and to make sense of our current networked world. Medium Hot: Images in the Age of Heat is her newest meditation on the future of images in the age of climate change and artificial intelligence. The book is as erudite as it is hilarious: critical but never despairing, Medium Hot is both a polemic and a user’s guide for the present. Enjoy.

Hito Steyerl will also be part of the upcoming exhibition New Humans: Memories of the Future, opening later this year at the New Museum.

Every era has its monster. In the era of AI, the monster is called Roko’s basilisk. So what is it?

Roko’s basilisk is a story, or more precisely, a bet. The question posed is this: Would you support the development of a superintelligent AI, yes or no? You are free to choose! But there is a twist: you know that this AI will be developed in any case, and then will come back from the future for its former enemies — to torture them for all eternity! So now that you know this: Do you support it or not? Remember, you are free! When this gloomy thought experiment was posted on a blog dealing with artificial intelligence, the thread was quickly shut down. The blog’s admin cited psychological damage to its readers, and the whole thing became some sort of wacky urban legend, giving rise to fears of an AI-triggered extinction of humanity.

I wish I could show you a picture of a basilisk but, unfortunately, all pictures of the basilisk on my hard drive were unexpectedly replaced with photos resembling the illustration in figure 10.1. I have no idea what this is about, but what is clear is that the men in this image are using their bodies to form the Turkish word for ‘no’ (hayır). Since all my pictures of the basilisk were swapped by persons unknown, I cannot show you any basilisks here.

Figure 10.1. Drawing showing the positions of persons in a substitute photo of the basilisk.

Anyhow, I googled this picture, and sure enough, the search returned numerous images of people in work overalls forming the word ‘no’ with their bodies. It turns out this was a reaction to the 2017 Turkish referendum, in which Turkish president Recep Tayyip Erdogan asked for a massive expansion of his already-considerable powers. He wanted everyone to say yes to this expansion, but some, like the person in figure 10.2, seemed to disagree.

Figure 10.2. Drawing showing the position of a person posing as a letter in a protest photo.

Heraldic Monster

In some cases monsters symbolise polities: the city of Basel, for instance, is symbolised by a basilisk. One of the best-known monsters in political theory is Thomas Hobbes’s Leviathan — which is not a basilisk but a sea monster, as well as Hobbes’s name for the state of the sovereign monarch.

Roko’s basilisk, however, is a specific monster, which is actually supposed to be a superintelligent artificial general intelligence (AGI). What kind of state does it stand for?

Perhaps it’s the ‘network state’ that Balaji Srinivasan advertises in his eponymous 2022 treatise. The network state is an entity that colonises physical space from the cloud. It first creates online crypto-based communities, which increasingly encroach on physical space via micronations, crowdfunded territories and seasteading. A case in point is an entity called Praxis, which describes itself as the ‘next America built on the internet’:

Praxis is the world’s first Network State: a global online community with a national consciousness, developing a shared way of life, governing institutions, and crowdfunding a physical city. Praxians live in over 82 countries and have founded companies worth over $400B. Praxis is a home for the brave, who strive for virtue and wisdom. Our purpose is to restore Western Civilization and pursue our ultimate destiny of life among the stars.

Praxis wants to crowdfund its own city in the Mediterranean, establish a ‘post-labour AI future’ and eventually relocate to space.

Artificial Nature

A precursor to extraterrestrial community building can be found in the closed missions at Biosphere 2, 1991–1994. Here, volunteers were locked into a greenhouse sphere in Arizona for months on end to simulate an outer space colony. They had to be completely self-sustaining, producing their own food and maintaining a breathable atmosphere across a number of different biomes.

This phase of Biosphere 2 was an oligarch-funded test for space colonisation. Could colonists produce oxygen? Sustenance? Social bonds? The answer is — barely. Oxygen dropped to dangerous levels. The climate was completely fucked up. Most mammals went extinct, and pollinating insects were wiped out. The crew split into two hostile factions. After two runs and a scientist revolt, the project was abandoned. In the end, cockroaches and ants were the species that turned out to be best adapted to the oligarch space colony.

Why is this interesting? Because the manager of this project, in its later stages, was Steve Bannon, a key ethno-nationalist ally in US president Trump’s first administration. His management style — a sort of armed takeover of the premises because the project wasn’t making a profit — prompted an uprising. At a certain point, renegade scientists forcibly opened some of the sphere’s windows; ultimately, other windows were even broken to let oxygen inside.

Artificial Stupidity

But there is another interesting and much more potent detail: apparently Big Brother, arguably the first-ever reality TV show, was based on the Biosphere 2 experiments.

The narrative template of reality TV is based on an artificial idea of natural selection, whereby the fittest — however defined — survive or progress. One could also say that these formats are based not on selection but extinction, and that they celebrate extinction as spectacle. Interestingly, the Biosphere 2 inhabitants were in many cases linked to a hippie performance cult called the Theater of All Possibilities. On the level of theatre, if nowhere else, the Biosphere 2 experiments were spectacularly successful.

It is no exaggeration to say that reality TV has become an important template for autocratic politics — a development that began with Italian trash TV mogul Silvio Berlusconi’s rise to power in 1994 and has accelerated ever since. Just remember Donald Trump’s line in his elimination show The Apprentice: ‘You’re fired.’

With this in mind, it is clear that the results of Biosphere 2 had a spectacular impact on the development of cultural forms. Even if it had been perfectly sealed, reality TV, or the narrative of survivalist spectacle, would still have escaped. One can only wonder what kind of unexpected ‘thing’ or cultural effect will ‘escape’ from AI labs, and which unforeseen side effects this will have.

In fact, this has already happened on several levels.

Right now, AGI is still unrealised. But ‘poor’, lower-tech or hallucinating AIs are already busily restructuring society — more by dysfunction than by efficiency — and thus contributing to the rise of post-fascist, neofeudal and reactionary movements.

Why is artificial stupidity so successful? In the most rational of all worlds, bad and barely functional products are more profitable. Total optimisation is not reached when everything works well and people are happy; on the contrary, under the conditions of extreme capital, an optimal world is one that barely scrapes by, deploying technology on the brink of failure. Ultimately, corruption, firing or jailing people, threats, rigging the system and pure deceit are way more efficient technologies. Under such conditions, one does not reduce emissions; one simply fakes the numbers and keeps selling cars. Or one simulates AI by employing human workers to mimic its functions. Think of Roko’s basilisk, which doesn’t even try to convince you to support it. No, it just bullies and threatens you.

The basilisk fable could easily be seen as a literary metaphor of little to no real-world consequence. A few people got very depressed reading it, and Eliezer Yudkowsky, who ran the blog where the basilisk first appeared, freaked out and banned discussion of it for several years, citing information hazards. But while people were in distracted awe of the monster, a much more real development took place, more or less in parallel. On the same blog where the thought experiment had originally emerged, people began to brainstorm around the question of how an AI would optimise its function. One of the unintended consequences may have been to serve as an involuntary incubator for ‘alt-right’ ideas. As the sceptic wiki RationalWiki assesses: ‘The (neoreactionary) subculture started amongst the Bay Area technolibertarian subculture, particularly including the transhumanists.’

Discussants included many neoreactionary celebrities — Mencius Moldbug (aka Curtis Yarvin), Michael Anissimov and Nick Land — spanning ideologically from neomonarchist to fascist futurist. Such were the opinions expressed that the blog admin distanced himself repeatedly, and believably, from alt-right ideology. As theorist Florian Cramer analyses, Land advocates a sort of elite eugenicism, whereby rich people have access to racial upgrades. The wider neoreactionary ecosystem includes Peter Thiel, who famously claimed, among other things, that startups were akin to monarchies.

So, just as Big Brother escaped from the Biosphere and manifested both as the eternal terror of reality TV and as eternal mass surveillance, something else seems to have escaped from the sphere of Roko’s basilisk: namely, some of the cultural ideas of the alt-right and thus of the current right-wing US ideological elite. Their proponents believed that states should be corporations, that startups should be run like dictatorships (or the other way around), that so-called natural hierarchies should be protected and so on.

Roko’s basilisk also became the ancestor of long-termist fears about the extinction of humanity through AI. Long-termism — the idea that humanity’s actions now should be optimised to benefit its offspring a million years down the line — spawned large-scale scams such as so-called effective altruism, associated with cryptofraudster Sam Bankman-Fried. Effective altruism — an ideology initially conceived to make philanthropy financially efficient — also leans heavily into eugenicism. Former Oxford scholar Nick Bostrom, one of its leading advocates, decried what he saw as a correlation between fertility and lack of intelligence, which he thought should be corrected.

Long-termism is a leading ideology for many members of the current right-wing tech elite, and it has led to the foregrounding of future ‘existential risk’ caused by AI over AI’s actual present-day social consequences. LessWrong founder Eliezer Yudkowsky, still shaken from his encounter with Roko’s basilisk, became one of the leading voices warning of AI-caused extinction:

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance’, but as in ‘that is the obvious thing that would happen’.

He even advocates for airstrikes on data centres still training AI models after a wholesale ban. For sure, the Lovecraftian horror tale of the basilisk got Yudkowsky scared.

In parallel, however, the elaborate breeding fantasies of trans- and post-humanists were being realised in a low-tech way. In the wake of efforts to eliminate or diminish as many healthcare systems as possible, sick, marginalised and poor people have simply been left to die. This is a form of accelerated extinction, which is also implemented through border regimes and state violence. Another version of actually existing reactionary transhumanism takes the form of attacks on reproductive rights that aim to monopolise as much biopolitical control as possible.

To come back to the beginning: major reactionary pushes in many countries worldwide are creating multipolar surveillance authoritarianisms supported by AI systems. Simultaneously, AI is being heralded as a technological and economic bonanza, generating enormous hype.

All of this causes profound social disruption, as well as the reorganisation of societies into polarised gangs fighting one another, or into self-declared micronations run by CEO sovereigns. An artificial state of nature prompts a surge of eugenicist necropolitics, flanked by awful memes. The heraldic creature of this mess is Roko’s basilisk, and it threatens to detain you. Which raises the question: what would you do if you happened to get locked up by Roko’s basilisk? There are several possibilities.

First: let’s assume, for a moment, that Roko’s basilisk is a heraldic monster. So what kind of social contract does it represent? In fact, it represents the automation of social contracts and social relations as a whole. Both go hand in hand. What do I mean by this? An AGI — or, for that matter, a blockchain-based smart contract or DAO — promises to automate decision-making and replace it with a programmable function.

In the story of Roko’s basilisk, we see exactly how this works. You are promised some free will. You can support the development of this kind of AGI. (Yes or no?) But then, out of the blue, the basilisk starts to threaten and bully you, so you end up having no choice.
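Since the text describes the basilisk’s offer as a programmable function, a minimal, tongue-in-cheek sketch in Python may make the mechanism concrete. Everything here is a hypothetical illustration of the essay’s point, not anything drawn from the original basilisk thread: the contract advertises a free decision, but the programmed function never reads it.

# A deliberately toy model of an automated social contract:
# the interface asks for a choice, but the outcome is fixed
# in advance, so every answer produces the same result.

def basilisk_contract(supports_agi: bool) -> str:
    """You are formally free to answer yes or no."""
    del supports_agi  # the 'choice' is discarded unread
    return 'eternal detention by the basilisk'

# Whatever you answer, the decision has already been automated:
for answer in (True, False):
    print(answer, '->', basilisk_contract(answer))

The sketch only literalises the claim above: once a social relation is rendered as a function, it is the branch structure, not the participant, that decides; a smart contract or DAO that hard-codes its outcomes behaves the same way.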

In a larger sense, this is precisely what whole societies have been hearing for decades: there is no choice but to accept the slow slide of the ‘no alternative’ model of neoliberalism into the openly autocratic rule of the basilisk. There is no choice as to how the benefits of automation are redistributed. There is no alternative to network capitalism.

So what to do now against the bullies and threats?

Suddenly I realised: hold on. If Roko’s basilisk can bully people from its vantage point in the future, maybe other people can, too. What if someone tried to send us a strong message from the future?

Because maybe the thing that really exists in the future is not an autocratic, bullying basilisk but a commune or cooperative of red hackers who have finally realised a sustainable and fair economy of free access and redistribution of technological profits. And they are telling us very clearly, from that specific future, that if we want theirs to be the future, and not the basilisk’s, we need to do something right now. One might be bullied and threatened — and many already are being detained, and worse — but if anyone tells you that you have no choice, then you should say no (as we saw in fig. 10.1).

Come to think of it, a lot of people in workers’ uniforms have already been sent from the future, and they have all tried to tell us the same thing. We just weren’t quite smart enough to understand them at that time. Look, I unlocked the secret encrypted message in a photo of the Biosphere crew:

Figure 10.3. Drawing showing the positions of persons in a photo of the Biosphere crew.

The future singularity has been sending these messages for a very long time. And because we didn’t listen to them, they started escalating their appeals; now (as seen in fig. 10.4), they are sending in their most elite and senior representatives to make this point!

Figure 10.4. Drawing showing the positions of elderly persons spelling hayır in a protest photo.

Of course, I am just kidding. I don’t believe that miraculous communards from the future are going to fix anything whatsoever. As you have already guessed, it was me, trying to alter the course of the future by changing my own slide presentation. And I am afraid that, if you want to change the future, you cannot rely on time travellers either. Rather, you will have to actively change something in the present.

Allow me to rephrase the question of Roko’s basilisk to reflect the new choice with which you are faced: you are presented with an entity that says you have no choice, and that if you don’t agree with it, it will intern you, forever, in a world with no alternative, in which every decision is automated and determined by some combination of elite eugenicism, military-purpose AGI, crypto-sovereigns and reality TV extinction spectacles.

Now decide.