sensory stimulations. Thus, those sensory stimulations, in virtue of going on outside of my awareness – unconsciously – do not provide me with good reasons for believing my books are still in their bookcase. The sensory stimulations are the ultimate cause of my seeing and hence of my belief, but since they do not provide me with a good reason to have the belief, they do not count as my evidence.
Of course, what counts as a good reason for a person to believe something depends partly on factors involving that person, perhaps most importantly their background knowledge. Recall the aliens from earlier. Since they have no background knowledge about humans and their diseases, the evidence in the form of Koplik spots is not available to them. Just as we argued that my sensory stimulations can’t be reasons for me if I am unaware of them, so the spots do not provide the aliens with good reasons to believe that the person in front of them is about to come down with the measles if they don’t know what Koplik spots are and what they indicate. But we don’t have to invoke aliens. Any nonexpert looking at Koplik spots will fail to receive evidence for the measles.
Thus, whether or not an observation is evidence for a person depends crucially on what the person is bringing to the table in terms of background knowledge. The physician sees the Koplik spots as Koplik spots and thereby gains evidence for her belief that the patient is coming down with the measles. I, a nonexpert, see the same spots, but I do not see them as Koplik spots. Thus, they are not evidence for me, because they don’t provide me with good reasons for believing that I am confronted with a case of the measles. Only observations that are seen as this or that can be evidence for (or against) some hypothesis.
Another way of putting the same point is this: During the processing of sensory stimuli, we bring – often automatically – various categories, background knowledge, and the like to bear. Thus, it is categorized observations that constitute evidence. This fact has become known as the theory-ladenness of observation and/or measurement, which we discuss below and in later chapters.
Some readers might worry that we are overintellectualizing evidence, seemingly requiring that evidence always be conceptualized (e.g., seeing a discoloration as a Koplik spot). Such readers are in good company – there are a number of prominent epistemologists who believe that nonconceptualized experiences can be good reasons for having certain beliefs. For example, simply tasting something salty is a good reason for believing it to be salty, even if we don’t have the concepts to properly express our belief (that’s why we often say that something tastes like chicken if we lack the appropriate concepts). However, it seems to us that in the empirical sciences, unconceptualized experiences hardly ever play a role as evidence, and so we leave them aside.
2.2.3 Observation, Naked and Enhanced
Another feature of observations is that, when unaided, they can be both restricted in scope and unreliable. Not only do we bring background knowledge to the categorization of our observations, but what we can observe in the first place is limited and shaped by the way in which our brains process incoming information. First, there is a lot of information available in the world that our sensory apparatus cannot pick up directly. We can only see a small part of the electromagnetic spectrum – the wavelengths of visible light stretch from about 390 to 700 nanometers. We obviously can’t see infrared (some snakes can, however), and we can’t see radio waves or gamma rays. All of these waves can carry information. In order to tap into that information, we need special equipment, such as night vision goggles that allow us to “see” infrared, or radio receivers that allow us to hear the information superimposed onto the radio waves.
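To make the narrowness of the visible band vivid, here is a minimal sketch in Python that sorts wavelengths into rough spectral bands. The boundary values are the approximate figures cited above; the function name and the sample wavelengths are illustrative assumptions, not authoritative physics.

```python
# A minimal sketch: classifying electromagnetic wavelengths into rough bands.
# The 390-700 nm boundaries are the approximate figures from the text.

def classify_wavelength(nanometers: float) -> str:
    """Return the rough spectral band for a wavelength given in nanometers."""
    if nanometers < 390:
        return "shorter than visible (ultraviolet, X-rays, gamma rays)"
    elif nanometers <= 700:
        return "visible light"
    else:
        return "longer than visible (infrared, microwaves, radio waves)"

# Only a narrow slice registers with the unaided human eye:
samples = [0.001, 550, 1000, 1_000_000_000]  # gamma ray, green light, near-infrared, radio (one meter)
for wl in samples:
    print(f"{wl} nm: {classify_wavelength(wl)}")
```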
Moreover, observing something with our unaided senses can sometimes lead to false, or inconsistent, beliefs. Consider trying to measure the temperature of an object with your hand. As it happens, you are coming home from a walk on a very cold afternoon. Somewhere along the way, you lost one of your gloves, so that only one of your hands was protected and is still nice and warm. The other hand has gotten quite cold. After you open the door, you wonder whether the pot of soup you made earlier is still warm. You touch the pot with your cold hand and determine – observe – it to be still really hot. So you take the glove off the other hand to pick up the pot and carry it to the table. Surprise! To the other hand, the temperature of the soup is barely above room temperature. What’s happening is that the temperature information you receive from your two hands depends not only on the temperature of the soup pot but also on the temperature of your hands. That’s why you get inconsistent information. Hands – or thermoreceptors in the skin in general – are not designed to provide us with objective temperature readings; what they provide is information about the temperature difference between the environment and the body.
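The underlying model here is simple enough to state in code. The following sketch treats the thermoreceptor signal as a function of the difference between object temperature and skin temperature; all the numerical values, including the threshold, are illustrative assumptions, not physiological data.

```python
# A minimal sketch of the soup-pot example: perceived warmth modeled as a
# signed difference between object temperature and skin temperature.

def perceived_signal(object_temp_c: float, skin_temp_c: float) -> str:
    """Crude model: thermoreceptors report a difference, not an absolute value."""
    diff = object_temp_c - skin_temp_c
    if diff > 5:
        return "feels hot"
    elif diff < -5:
        return "feels cold"
    return "feels about neutral"

soup_pot = 30.0   # the soup has cooled to slightly above room temperature
cold_hand = 10.0  # the hand that lost its glove
warm_hand = 33.0  # the hand that stayed gloved

print(perceived_signal(soup_pot, cold_hand))  # "feels hot"
print(perceived_signal(soup_pot, warm_hand))  # "feels about neutral" -- same pot, different verdicts
```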
In order to overcome these two shortcomings of unaided observation, we can – and do – avail ourselves of instruments that both broaden the range of information to which we have access and make that information less susceptible to distortions stemming from our sensory apparatus. Consider temperature again. We build thermometers that can measure the temperature of objects that would be far too hot or too cold to be touched without injury – molten iron, for example, or liquid nitrogen. In this way, we can get information we couldn’t receive through unaided observation. Second, thermometers, like all other measuring instruments, are calibrated. We use certain natural facts, such as the phase changes of water, to equip them with a scale that allows us to map numbers on the thermometer to temperatures in the objects measured. On the Celsius scale, the freezing point of water is set to 0 degrees, and its boiling point (at sea level) to 100 degrees. The effect is that we measure the temperature of objects by comparing them to the temperature of water (at normal pressure), instead of simply determining the temperature difference between the measuring instrument and its environment, as we do when we ascertain temperature by means of our own body. Thus, we eliminate some of the subjective elements in our observations. But, as we will see shortly, a different sort of bias manifests itself in the use of measuring (and other scientific) instruments.
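The logic of calibration against the two fixed points of water can be made explicit with a short sketch. We assume a hypothetical sensor whose raw readings vary linearly with temperature; the particular readings (4.0 in ice water, 24.0 in boiling water) and the function names are made up for illustration.

```python
# A minimal sketch of two-point calibration, assuming a sensor whose raw
# readings vary linearly with temperature.

def calibrate(raw_at_freezing: float, raw_at_boiling: float):
    """Return a function mapping raw sensor readings to degrees Celsius,
    using the freezing point (0 C) and boiling point (100 C) of water
    at normal pressure as fixed points."""
    def to_celsius(raw: float) -> float:
        # Linear interpolation between the two fixed points.
        return 100.0 * (raw - raw_at_freezing) / (raw_at_boiling - raw_at_freezing)
    return to_celsius

# Hypothetical instrument: reads 4.0 in ice water and 24.0 in boiling water.
to_celsius = calibrate(raw_at_freezing=4.0, raw_at_boiling=24.0)
print(to_celsius(14.0))  # 50.0 -- halfway between the fixed points
```

Notice that the instrument’s own quirks (here, the arbitrary raw readings 4.0 and 24.0) drop out once the scale is anchored to the natural fixed points, which is exactly how calibration removes subjective and instrument-relative elements.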
2.3 Measurement
As we just noted, measuring instruments are used principally for two reasons. First, they expand the range of information about the world available to us, and second, they set up a correspondence between numerical values and properties. The second feature in particular has received attention from those concerned about the subjective character of ordinary experience and eager for scientific objectivity. If, so they thought, we could compare properties of different objects (taken in a wide sense) against a standardized measuring instrument instead of relying on unaided observation, certain errors could be minimized. For example, trying to gauge the temperature of a pot with a very warm hand might lead to questionable results. Better to use a thermometer. However, the employment of measurement raises some interesting philosophical issues, and a couple of them rise to the status of serious problems and puzzles, such as the matter of measurement in quantum mechanics. But we start with its less problematic features.
2.3.1 Measurement Scales
In general, a measuring instrument sets up a correspondence between numbers and physical properties. However, not all features of the number system (the natural numbers in most cases) are relevant to all measuring devices. What is relevant depends on the sort of scale one uses. We can distinguish between at least the following (a short sketch contrasting them in code follows the list):
Ordinal Scales (e.g., hardness)
Interval Scales (e.g., temperature)
Ratio Scales (e.g., length)
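The differences among these scale types amount to differences in which arithmetic comparisons are meaningful. The sketch below makes this concrete; all the numerical values are illustrative assumptions.

```python
# A minimal sketch contrasting which comparisons are meaningful on each scale.

# Ordinal (Mohs hardness): only order is meaningful.
talc, gypsum, corundum = 1, 2, 9
assert talc < gypsum < corundum      # order: fine
# corundum / talc == 9 is NOT meaningful: corundum is not "9 times harder".

# Interval (Celsius temperature): differences are meaningful, ratios are not.
morning, noon = 10.0, 20.0
print(noon - morning)                # a 10-degree difference: fine
# noon / morning == 2 is NOT meaningful: noon is not "twice as hot",
# because 0 C is a convention, not an absence of temperature.

# Ratio (length in meters): differences AND ratios are both meaningful.
rope_a, rope_b = 2.0, 4.0
print(rope_b / rope_a)               # rope_b really is twice as long
```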
First, ordinal scales. For minerals, the so-called Mohs Scale is the standard way of comparing hardness. It consists of the numbers 1 to 10, with which various minerals are associated. For example, in a standard Mohs laboratory hardness kit, we find 1 = talc, 2 = gypsum, 3 = calcite … 9 = corundum, and (if you have enough research money, you include) 10 = diamond. We measure the hardness of some mineral M by a scratch test. If M can be scratched by calcite but not by gypsum, and if M cannot scratch gypsum, we assign 2 as the number of its hardness. If, however, gypsum cannot scratch M, but M can scratch gypsum (but not calcite), we assign 2.5.
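The scratch-test procedure just described is effectively an algorithm, and we can sketch it as such. The function `scratches` stands in for the physical test “mineral a scratches mineral b”; here it is faked with hidden numeric hardness values purely for illustration, and the unknown sample’s hardness of 2.5 is an assumption chosen to reproduce the example above.

```python
# A minimal sketch of the Mohs scratch-test logic against a standard kit.

KIT = {1: "talc", 2: "gypsum", 3: "calcite", 4: "fluorite", 5: "apatite",
       6: "orthoclase", 7: "quartz", 8: "topaz", 9: "corundum", 10: "diamond"}

_TRUE_HARDNESS = {"sample": 2.5}  # hypothetical unknown; only the "test" sees it
_TRUE_HARDNESS.update({name: h for h, name in KIT.items()})

def scratches(a: str, b: str) -> bool:
    """Stand-in for the physical test: a scratches b iff a is strictly harder."""
    return _TRUE_HARDNESS[a] > _TRUE_HARDNESS[b]

def mohs_hardness(sample: str) -> float:
    """Assign a Mohs number by scratch tests against the kit, softest first."""
    for h in sorted(KIT):
        ref = KIT[h]
        if not scratches(ref, sample) and not scratches(sample, ref):
            return float(h)        # neither scratches the other: equal hardness
        if scratches(ref, sample):
            return h - 0.5         # sample sits between ref and the mineral below
    return 10.0                    # nothing in the kit scratches the sample

print(mohs_hardness("sample"))     # 2.5: scratches gypsum, is scratched by calcite
```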