Rossum’s Universal Robots, reality was to be shaped, thought about and interpreted with reference to automatons, cyborgs and androids. At the dawn of the twentieth century, the dream of automated machines was brought finally and firmly inside the territory where empirical testing is done, most notably with a tide-predicting mechanical computer – commonly known as Old Brass Brains – developed by E. G. Fischer and Rollin Harris.7 The world had, at long last, shifted away from the ‘natural order of things’ towards something altogether more magical: the ‘artificial order of mechanical brains’.
For most people today, AI is equated with Google, Amazon or Uber, not ancient philosophy or mechanical brains. There remain, however, earlier historical prefigurations of AI which still resonate with our current images and cultural conversations about automated intelligent machines. One such pivot point comes from the UK in the early 1950s, when the English polymath Alan Turing – sometimes labelled the grandfather of AI – raised the key question ‘can machines think?’8 Turing, who had worked as a mathematician on vital enemy code-breaking during World War II, raised the prospect that automated machines represent a continuation of thinking by other means. Thinking, in the hands of Turing, becomes a kind of conversation: a question-and-answer session between human and machine. Turing’s theory of thinking machines was based on a British cocktail party game, known as ‘the imitation game’, in which a person was sent into another room of the house and the remaining guests had to try to guess their assumed identity. In Turing’s reworking of this game, a judge would sit on one side of a screen while, on the other side, there would be a human and a computer. The judge would chat with these unseen interlocutors, and the computer’s aim was to trick the judge into believing that its answers were, in fact, coming from the flesh-and-blood participant. This experiment became known as the Turing Test.
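Viewed purely as a protocol, the test Turing proposed has a very simple shape. The sketch below is only a toy illustration in Python, built entirely on assumed, made-up pieces (the canned replies and the deliberately naive judge are inventions for this example); it shows the structure of the game rather than anything Turing himself specified.

    import random

    def human_reply(question: str) -> str:
        # Stand-in for the hidden human participant (illustrative only).
        return "Hard to say; ask me something easier."

    def machine_reply(question: str) -> str:
        # Stand-in for whatever conversational program is under test.
        return "Hard to say; ask me something easier."

    def imitation_game(questions) -> bool:
        """Return True if the judge fails to pick out the machine."""
        # Hide the two interlocutors behind neutral labels, in random order.
        responders = [human_reply, machine_reply]
        random.shuffle(responders)
        hidden = dict(zip(["X", "Y"], responders))
        for question in questions:
            for label, respond in hidden.items():
                print(f"{label}: {respond(question)}")
        # With indistinguishable answers, the judge's verdict is no better than chance.
        guess = random.choice(["X", "Y"])
        machine_label = next(label for label, fn in hidden.items() if fn is machine_reply)
        return guess != machine_label

    if __name__ == "__main__":
        print("Machine escaped detection:", imitation_game(["Do you dream?", "What is 37 x 41?"]))

The design point is that the judge sees only text under neutral labels; whatever might give the machine away has to show up, if at all, in the conversation itself.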
There has been, then, a wide and widening gamut of automated technological advances, symptomatic of the shift from thinking machines that may equal human intelligence to thinking machines that may exceed it, advances which have been, and remain, highly contested. Whether automated intelligent machines are likely to surpass human intelligence, not only in practical applications but in a more general sense, figures prominently among the major issues of our times and of our lives in these times. Notwithstanding the notoriously overoptimistic claims of various AI researchers and futurists, there has been an overwhelming sense of crisis among scientists, philosophers and theorists of technology alike, shared in greater or smaller measure: the feverish ambition to establish whether AI could ever really be smarter than humans has resulted in a new structure of feeling in which humanity is ‘living at the crossroads’. There have been, it should be noted, some very vocal and often devastating critiques of AI developed in this connection. The philosopher Hubert Dreyfus was an important early critic. In his book What Computers Can’t Do, Dreyfus argued that the equals sign placed between machine and human intelligence in AI was fundamentally flawed. To the question of whether we might eventually regard computers as ‘more intelligent’ than humans, Dreyfus answered that the structure of the human mind (both its conscious and unconscious architectures) could not be reduced to the mathematical precepts which guide AI. Computers, as Dreyfus put it, altogether lack the human ability to understand context or grasp situated meaning. Essentially reliant on a simple set of mathematical rules, AI is unable, Dreyfus argued, to grasp the ‘systems of reference’ of which it is a part.
Another, arguably more damaging, critique of the limitations of equating human and machine intelligence was developed by the American philosopher John Searle. Searle was strongly influenced by the philosophical departures of Ludwig Wittgenstein, especially Wittgenstein’s demonstration that what gives ordinary language its precision is its use in context. When people meet and mingle, they draw on contextual settings to define the nature of what is said. This time-consuming and effortful contextual activity of putting meaning together, practised and rehearsed daily by humans, is not something that AI can substitute for, however. To demonstrate this, Searle provided what he famously termed the ‘Chinese Room Argument’. As he explains:
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.9
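Searle’s scenario can be paraphrased, very loosely, as nothing more than rule-governed symbol lookup. The short Python sketch below is an invented illustration rather than anything Searle specifies: the ‘rule book’ is a hypothetical lookup table, and its handful of Chinese entries were chosen purely for the example.

    # A loose paraphrase of the Chinese Room as pure symbol manipulation.
    # The rule book below is a hypothetical, made-up lookup table; following
    # it requires matching the shapes of symbols, not knowing what they mean.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
        "今天星期几？": "今天是星期五。",    # "What day is it today?" -> "It is Friday."
    }

    def person_in_room(symbols_passed_in: str) -> str:
        # The occupant hands back whatever the book prescribes; syntax is
        # processed, but no semantics is ever involved.
        return RULE_BOOK.get(symbols_passed_in, "对不起，我不明白。")  # "Sorry, I do not understand."

    if __name__ == "__main__":
        print(person_in_room("你好吗？"))  # a fluent answer the occupant cannot read

From the outside, the answers look like competent Chinese; on the inside, nothing but lookup has taken place.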
The upshot of Searle’s argument is clear. Machine and human intelligence might mirror each other in chiasmic juxtaposition, but AI is not able to capture the human ability to connect words, phrases and talk constantly within practical contexts of action. Meaning and reference are, in short, not reducible to a form of information processing. It was Wittgenstein who pointed out that a dog may know its name, but not in the same way that its master does; Searle demonstrates that the same is true for computers. It is this human ability to understand context, situation and purpose within modalities of day-to-day experience that Searle, powerfully and provocatively, asserts in the face of comparisons between human and machine intelligence.
Frontiers of AI: Global Transformations, Everyday Life
Another way of reading AI against the grain – contesting the ‘official’ narrative of artificial intelligence – is to rethink its relation to economy, society and unequal relations of power. These are all key domains in which the discourse of AI can and must be situated. I have argued in the preceding section that what the idea of an intelligence rendered ‘artificial’ signifies is, among other things, the transformation and transcendence of human capabilities beyond the natural, inborn and inherited determinations of the biological and biographical realms. AI consists in the project of transforming human knowledge into machine intelligence – and in charging social actors with the task of integrating, incorporating and invoking such newly minted artificial automations into the living of everyday life. Such manufacturing of automated intelligent machines, however, works not only upon an internal register – the field of individual life, individualization and the development of human intelligence – but also upon an external one, reaching outwards across societies, economies and power politics. AI-powered software programs are today downloaded to multiple locations across the planet, at once stored, operationalized and modified. Contrasting the limits placed on the human brain by cranial volume and metabolism with the extraterritorial reach of AI, Susan Schneider argues that automated machine intelligence ‘could extend its reach across the Internet and even set up a galaxy-wide “computronium” – a massive supercomputer that utilizes all the matter within a galaxy for its computations. In the long run, there is simply no contest. AI will be far more capable and durable than we are.’10
So AI is also all about galaxy-wide movement, and especially about the automated global movement of software, symbols, simulations, ideas, information and intelligent agents. AI-powered information societies involve a relentless automation of economic, social and political life. This point is an important one to register, since many commentators invoke the spectre of globalization to capture the economic transformations of manufacturing, industry and enterprise brought about by AI technology and its deployment in offshore business models. Certainly, a great deal of academic and policy thinking has emphasized how the global digital economy has become ‘borderless’, with many frontiers now automated and regulated through the operations of intelligent machines. The rise of AI, it is often said, is intricately interwoven with globalization. This is surely the case, though it is vital to see that globalization links together people, intelligent machines and automation in complex, contradictory and uneven ways. The claim that AI is both condition and consequence of globalization therefore has to be properly contextualized.
Many studies have cast globalization solely as an economic phenomenon. From this angle, globalization consists of the ever-increasing integration of economic activity and financial markets across borders. Some analyses have emphasized that globalization is the driver of economic neoliberalism, privatization, deregulation, speculative finance