Markus Gabriel

The Meaning of Thought


translating foreign languages (more or less adequately), writing books or sending emails.

      At the same time, I want to argue that our human intelligence is itself a case of artificial intelligence – indeed, the only real one we happen to be acquainted with. Human thought is not a natural process governed by the laws of nature, like the dynamics we find in the sun or in sandstorms. Unlike the moon’s orbiting of the Earth or the expansion of the solar system, our thinking cannot be understood if we abandon mentalistic vocabulary – i.e. language designed to articulate the meaning of thought – which typically includes words such as intelligence, thought, belief, hope, desire, intention and the rest.

      The human being is the creature who is conscious of this very fact. And, accordingly, it orients its life around its ability to make targeted interventions into the conditions of its own life and survival. This is why humans elaborate sophisticated technologies in the form of systems for improving and simplifying their survival conditions. The human is thus networked with technology in its very self-understanding. In my view, the deep root of this interconnection lies in the various ways in which we are the producers of our own intelligence. The ways in which we think are formed by socio-economic framework conditions that human civilizations have been developing and transforming over millennia. This is how our artificial intelligence comes into being: by way of the self-determination of our human mindedness.3

      Over the millennia, human life has revolved around the question of who or what the human being really is. One of the oldest known answers is that the human being is a rational animal. It is to Aristotle that we owe the corresponding designation of the human as zoon logon echon, the animal that – depending on translation and interpretation – possesses language, thought or reason.

      Yet it is precisely this (supposedly) distinguishing characteristic and privilege of us human beings which the digital age brings into question. The Italian philosopher Luciano Floridi (b. 1964) goes so far as to see contemporary developments in AI research as a deep affront to our sense of our humanity, comparable to such seismic revolutions in our self-image as the heliocentric worldview, Darwin’s theory of evolution and Freud’s discovery of the unconscious.4

      Of course, it has long been the case that the computers we carry about with us pretty much all the time – such as smartphones, smart watches and tablets – can outsmart most human beings in simulated situations. Programs can play chess better than humans, beat us at Go and at good old Atari games. They are better travel agents, can search the entire internet at lightning speed, immediately report the temperature in every corner of the globe, and find patterns in gigantic data sets which would take humans an age even to notice. As if that weren’t enough, they also carry out mathematical proofs that even the very best mathematicians can understand only with considerable effort.

      In the light of these advances, scientists, futurologists, philosophers and politicians like to engage in speculation about how long it will be before the infosphere, as Floridi calls our digital environment, attains a kind of planetary consciousness and liberates itself from its dependence on us humans. Some fear that a digital worst-case scenario, known as the singularity or superintelligence, will occur in the not too distant future. This position has found a prominent salesman in Raymond Kurzweil (b. 1948), himself inheriting ideas from pioneers of AI research such as Marvin Minsky (1927–2016). Even such famous personalities as Bill Gates (b. 1955) and Stephen Hawking (1942–2018) have warned of a fast-approaching intelligence explosion, in which intelligent machines will take control and potentially exterminate humanity.

      The truth certainly lies somewhere in the middle. The infosphere and the digital revolution aren’t leading us towards a dystopian future, such as the world depicted in the Terminator films or in novels such as Michel Houellebecq’s The Possibility of an Island; nor does the latest leap forward in technological progress lead towards the solution to all of humanity’s problems, contrary to the hopes that the German tech entrepreneur Frank Thelen (b. 1975) expressed in a dialogue between the two of us in the German Philosophie Magazin.5 We will not solve the impending crises of food and water shortages through better algorithms and faster computers. Thinking we will is really to get things back to front: it is technological advancement in the digital industries – i.e. attaining higher computing power through more efficient hardware – which contributes to resource scarcity and world hunger – and not only because of the alacrity with which we bin our ‘old’ smartphones and tablets so that we can buy the latest versions with their ever higher processing power. Computers do not solve our moral problems; they aggravate them. We mine the earth in poorer parts of the world to extract rare metals for our smartphones, use plastics for our hardware, and waste untold quantities of energy in order to keep digital reality running twenty-four hours a day, seven days a week. Every click and every email uses energy. We tend to notice this only indirectly, but that doesn’t make things any better.

      In order to untangle the conceptual knot, I will be working in what follows with two anthropological principles, both of which will come up time and again. I mentioned the first anthropological principle at the outset: the human being is the animal that doesn’t want to be one. This principle explains the presently widespread confusions that go by the names of post-humanism and transhumanism. Both movements are built on bidding farewell to the human being and welcoming the cyborg, a hybrid combining both animal-human and technological components.

      Post- and transhumanism, both especially rampant in California, propagate the view that the human being can be overcome, surpassed. The place of the human is to be occupied by the infamous Übermensch, first conjured up by Friedrich Nietzsche (1844–1900). In a society in which an ever-expanding collection of superheroes has become a staple of popular culture, and in which Hollywood propagates the fantasy that we might shake off the earthly shackles that tie down us normal mortals and propel ourselves into a superior future, it is no accident that technology and scientific research find themselves in thrall to the Nietzschean fantasy of the Übermensch.

      In this connection, the French sociologist Jean Baudrillard (1929–2007) reminds us of the notorious rumour that Walt Disney tried to have himself cryogenically frozen, hoping to be awakened one day in order to witness the technological wonders of the future.6 One of the main problems animals have to face is that they are mortal. Everything mortals do revolves around life and death, whereby we find life for the most part good and death for the most part bad. For a long time now, technology has been bound up with the fantasy of overcoming death on Earth. Today, this (pathological) wish finally to discard our animality and to become an inforg, a cyborg consisting