by imagining her point of view, or by some combination of the two? If the goalkeeper acts like she wants to win the game, how does Mindy know she’s not just pretending to have this desire? If the goalkeeper reports liking the smell of old socks, how does Mindy know that socks smell the same way to the goalkeeper as they do to her? Can we ever really know what’s going on in the goalkeeper’s mind or are we effectively just guessing? Could brain scans and advanced psychological investigation give us more direct access to the goalkeeper’s mind, or are her mental states always hidden from us? How confident can we be that the goalkeeper even has a mind?
These epistemological questions are bound up with the metaphysical questions discussed earlier. If the mind is a material thing, then we need an account of how we know about our brain states and the brain states of others. If, on the other hand, the mind is immaterial, we need an account of how we can gain knowledge of these special non-physical states. These epistemological questions also have deep practical and ethical implications. When is it wrong to doubt someone’s report of what’s going on in their mind? Is it ever right to think that you know what someone wants better than they do? Can a juror ever really know that the accused intended to kill? Can a probation officer ever really know that a murderer doesn’t desire to kill again? Our answers to these more concrete epistemological questions will be shaped by our answer to the Knowledge Question.
1.4.3 The Distribution Question
There are lots of things in the world, but which of them have minds? If you’re watching the football game, you’ll be pretty sure about the distribution of minds. You’ll be confident that Mindy, the goalkeeper and the referee each has a mind. You’ll also be confident that the ball, the goalposts and the referee’s whistle don’t have minds. But is this confidence well founded? And there are many circumstances in which we’re not so confident about the distribution of minds. Does a newborn baby have a mind? What about a foetus, or a zygote? Does your pet cat have a mind? What about a bat, a bee or an octopus? Should we attribute minds to trees, to plants or to viruses? Could there ever be an AI with its own mind? What about the internet, a smartphone or a self-driving car? Might it be that everything has a certain level of mindedness and that mentality pervades the universe? Or might it be that nothing does and that the whole idea of minds is a myth?
Once we’ve decided which things have minds, there remains the further question of what kind of mind they have. If a foetus has a mind, is it a conscious mind or a mind populated only by unconscious mental states? If an octopus has a mind, is it a rational mind like ours or a bundle of instinctive mental processes? If a self-driving car has a mind, is it an emotional mind with feelings of love and hate or is it emotionally inert? Besides asking these general questions about the distribution of consciousness, rationality and emotion, we can ask some more specific questions about the distribution of specific mental state types. For instance, which of these beings can feel pain and which cannot?
The Distribution Question has an epistemological aspect. How do we know whether something has a mind or what kind of mind it has? What criteria should we be applying and how confidently can we apply them? It also has a metaphysical aspect. What does it take for something to have a mind? What does it take for that mind to be rational, conscious or emotional? Does it involve something immaterial or is it a case of having the right physical properties? Could something without a biological brain have a mind? Could something without a physical body have a mind?
Our answer to the Distribution Question has enormous implications for how we interact with the people, creatures and artefacts around us. If we learned that the goalkeeper didn’t really have a mind, the way we treat her would immediately change. It would alter our expectations of her, the duties we feel towards her and the rights we attribute to her. Whether something has a mind can be a deciding factor in whether it deserves our moral consideration. The question of when a developing infant acquires a mind has great ethical import. Well into the 1970s, it was routine to perform surgical procedures on babies without anaesthetic. Why? Because the surgeons thought that babies hadn’t yet developed minds capable of suffering, so there was no need to risk giving them anaesthetic. It looks like the surgeons were making a huge ethical mistake here, and this mistake was built on their erroneous answer to the Distribution Question.
Something similar applies to our treatment of non-human organisms. You needn’t feel guilty about standing on a daisy because you don’t think that daisies have minds, but you ought to feel guilty about standing on a cat. So in order to treat organisms the right way, we really ought to know where in the tree of life minds start to emerge. AI stretches our moral imagination even further. You wouldn’t feel guilty about sending a self-driving car to the scrap-heap, but is your indifference justified? One problem here is that we tend to be on the lookout for minds like ours. Could daisies or cars have minds that we fail to recognize because they’re so totally unlike our own? And once we consider the possibility of completely different kinds of mind, the field of possible minds gets even broader. Maybe molecules have minds. Maybe planets do. Maybe the universe as a whole forms a vast ‘über-mind’ of which we are all a part. If any of these possibilities are true, it could completely change how we act.
1.5 A Plan of Action
Over the next five chapters, we’ll be looking at the key positions in philosophy of mind – the main ‘theories of the mental’. These theories are defined by the answers they offer to the Three Big Questions, so by the time we’re finished you’ll have a decent grip on how to approach those questions. We’ll be looking at the main theories of the mental in historical order, running from Descartes’ work in the seventeenth century right up to contemporary debates in the field. One advantage of this is that it allows us to see how each new theory relies on its predecessors, building on their successes and attempting to overcome their failures. Another advantage is that it allows us to see how philosophy of mind interacts with the science of its time, drawing on scientific insights and challenging scientific assumptions. Furthermore, it allows us to see how philosophers living in different centuries can nevertheless be cut from the same cloth, adopting similar approaches to the puzzles of the mind. The following is an overview of our journey:
Chapter 2 explores Descartes’ dualism. The seventeenth century saw great progress in our scientific understanding of the material world. Descartes, a scientist in his own right, asked how the mind would fit into this emerging picture. He argued that the mind must be an immaterial substance that stands apart from the material world but that is able to interact with it via the body. But Descartes’ arguments faced a flurry of objections that still haunt dualists today.
Chapter 3 jumps ahead to the early to mid-twentieth century and introduces two materialist theories of mind. Behaviourism argues that mental states are nothing more than patterns of behaviour, and identity theory argues that mental states are nothing more than brain states. These theories were inspired by the emerging sciences of brain and behaviour and promised to overcome the failings of dualism. But each theory faced problems of its own.
Chapter 4 takes us to the mid- to late twentieth century and the computer revolution. According to functionalism, the mind is akin to a computer with our brain acting as the hardware on which the software of the mind runs. We look at how functionalism improved on other materialist theories to become the leading theory of the mental.
Chapter 5 looks at a problem for materialism that gained special traction at the end of the twentieth century – the Problem of Consciousness. A range of striking thought-experiments suggest that theories like functionalism cannot explain what our mental lives feel like on the inside. Conscious experience is thus an explanatory residue that requires special treatment. I look at two radical ways of dealing with this explanatory residue: a partial reversion to dualism on the one hand and a flat denial that conscious experience exists on the other.