fusées keep time or that engines fly. Moreover, as we noted, verbs of sensation, such as ‘hurts’, ‘itches’, ‘tickles’ do apply to parts of an animal (e.g. ‘My throat felt sore’, ‘His leg hurt’, ‘Her head ached’), even though they are non-mechanical. So the applicability or inapplicability of psychological or mental predicates to parts of an animal has nothing to do with what is and what is not ‘mechanical’.
One might concede that it is mistaken to ascribe certain predicates of wholes to parts of the whole, but nevertheless insist that it is fruitful to extend the psychological vocabulary from human beings and other animals to (a) computers (that are ‘mechanical’) and (b) to parts of the brain (that are ‘sub-personal’).41 Note the difference between (a) and (b). Attributing psychological properties to machines (in the context of a discussion of whether machines can think) is mistaken, but does not involve a mereological fallacy of any kind. Attributing psychological properties to the brain or its parts is mistaken and does involve a mereological fallacy. Taking the brain to be a computer42 and ascribing psychological properties to it or its parts is therefore doubly mistaken.
It is true that we do, in casual parlance, say that computers remember, that they search their memory, that they calculate, and sometimes, when they take a long time, we jocularly say that they are thinking things over. But this is not a literal application of the terms ‘remember’, ‘calculate’ and ‘think’. Computers are devices designed to fulfil certain functions for us. We can store information on a computer, as we can in a filing cabinet. But filing cabinets cannot remember anything, and neither can computers. We use computers to produce the results of a calculation, just as we used to use slide-rules or cylindrical mechanical calculators. Those results are produced without anyone or anything literally calculating – as is evident in the cases of a slide-rule or mechanical calculator. In order literally to calculate, one must have a grasp of a wide range of concepts, follow a multitude of rules that one must know, and understand a variety of operations. One must view the results of one’s calculation as warranted by the premises. Computers do not and cannot.
Notes
1 F. Crick, The Astonishing Hypothesis (Touchstone, London, 1995), pp. 30, 32f., 57.
2 G. Edelman, Bright Air, Brilliant Fire – On the Matter of the Mind (Penguin, Harmondsworth, 1994), pp. 109f., 130.
3 C. Blakemore, Mechanics of the Mind (Cambridge University Press, Cambridge, 1977), p. 91.
4 J. Z. Young, Programs of the Brain (Oxford University Press, Oxford, 1978), p. 119.
5 A. Damasio, Descartes’ Error – Emotion, Reason and the Human Brain (Papermac, London, 1996), p. 173.
6 B. Libet, ‘Unconscious cerebral initiative and the role of conscious will in voluntary action’, Behavioural and Brain Sciences, 8 (1985), p. 536.
7 J. P. Frisby, Seeing: Illusion, Brain and Mind (Oxford University Press, Oxford, 1980), pp. 8f. It is striking here that the misleading philosophical idiom associated with the Cartesian and empiricist traditions, namely talk of the ‘outside’ world, has been transferred from the mind to the brain. It was misleading because it purported to contrast an inside ‘world of consciousness’ with an outside ‘world of matter’. But this is confused. The mind is not a kind of place, and what is idiomatically said to be in the mind is not thereby spatially located (cp. ‘in the story’). Hence too, the world (which is not ‘mere matter’, but also living beings) is not spatially ‘outside’ the mind. The contrast between what is in the brain and what is outside the brain is, of course, perfectly literal and unobjectionable. What is objectionable is the claim that there are ‘symbolic descriptions’ in the brain.
8 R. L. Gregory, ‘The confounded eye’, in R. L. Gregory and E. H. Gombrich (eds), Illusion in Nature and Art (Duckworth, London, 1973), p. 50.
9 D. Marr, Vision, a Computational Investigation into the Human Representation and Processing of Visual Information (Freeman, San Francisco, 1980), p. 3, our italics.
10 P. N. Johnson-Laird, ‘How could consciousness arise from the computations of the brain?’, in C. Blakemore and S. Greenfield (eds), Mindwaves (Blackwell, Oxford, 1987), p. 257.
11 Susan Greenfield, explaining to her television audiences the achievements of positron emission tomography, announced with wonder that for the first time it is possible to see thoughts. Semir Zeki informed the Fellows of the Royal Society that the new millennium belongs to neurobiology, which will, among other things, solve the age-old problems of philosophy (see S. Zeki, ‘Splendours and miseries of the brain’, Philosophical Transactions of the Royal Society, B 354 (1999), p. 2054). We shall discuss this view in §17.4.2.
12 The relation between the concept of a person and the concept of a human being will be examined in the Preliminaries to Part II below.
13 L. Wittgenstein, Philosophical Investigations, ed. G. E. M. Anscombe and R. Rhees, tr. G. E. M. Anscombe (Blackwell, Oxford, 1953), §281 (see also §§282–4, 357–61). The thought fundamental to this remark was developed by A. J. P. Kenny, ‘The homunculus fallacy’ (1971), repr. in his The Legacy of Wittgenstein (Blackwell, Oxford, 1984), pp. 125–36. For the detailed interpretation of Wittgenstein’s observation, see P. M. S. Hacker, Wittgenstein: Meaning and Mind, Volume 3 of an Analytical Commentary on the Philosophical Investigations, 2nd, extensively revised, edn (Wiley Blackwell, Oxford, 2019), Exegesis §§281–4, 357–61, and the essay entitled ‘Men, minds and machines’, which explores some of the ramifications of Wittgenstein’s insight. As is evident from chapter 1, he was anticipated in this by Aristotle (DA 408b 2–15, quoted on p. 24 above). He was also anticipated by George Henry Lewes (1817–1878) in his book The Physical Basis of Mind (Trübner & Co., London, 1877): ‘It is the man, and not the brain that thinks; it is the organism as a whole, and not one organ, that feels and acts’ (p. 441).
14 Of course, outside neuroscientific and psychological theorizing and philosophical reasoning it is harmless to say ‘My brain isn’t working today’ or ‘My brain told me that something was fishy’. They just mean ‘I can’t think clearly today’ and ‘I thought that something was suspicious’.
15 Kenny (‘Homunculus fallacy’, p. 125) uses the term ‘homunculus fallacy’ to signify the conceptual mistake in question. Though picturesque, it may, as he admits, be misleading, since the mistake is not simply that of ascribing psychological predicates to an imaginary homunculus in the head. In our view, the term ‘mereological fallacy’ is more apt for our purposes. It should be noted, however, that the error in question is not merely the fallacy of ascribing to a part predicates that apply only to a whole, but is a special neuroscientific instance of this more general confusion. Furthermore, as Kenny points out, the misapplication of a predicate is, strictly speaking, not a fallacy, since it is not a form of invalid reasoning, but it leads to fallacies (ibid., pp. 135f.). To be sure, this mereological confusion is common among psychologists as well as neuroscientists.