

justification doesn't derive, in part, from the subject's other beliefs.

      1.86 We can state the crux of BonJour's argument as follows:

       Doxastic Ascent Objection

      P1: if some belief (e.g. B1) is non‐inferentially justified, it must be possible for B1 to be justified simply because it has some feature F (i.e. justified in such a way that B1's justification does not derive from any further beliefs the believer has about B1).

      P2: it is not possible for B1 to be justified simply because it has some feature F because justified beliefs are responsibly held and it's not responsible to hold B1 unless you recognize that B1 has F.

      C: B1 cannot be non‐inferentially justified. (And what goes for B1 goes for B2, B3, B4, etc.)

      1.87 How powerful is this objection? Notice that the objection rests on two crucial assumptions:

      Assumption 1: justified beliefs are responsibly held beliefs.

      Assumption 2: a belief isn't responsibly held unless we have beliefs about the features of this belief (e.g. beliefs about what's good about this belief).

      1.89 It helps to remember the kinds of things that foundationalists might offer in giving a substantive specification of F. They might say that if you have a spontaneous visual belief about your surroundings or a spontaneous introspective belief about what you're currently thinking about, these beliefs will be justified non‐inferentially. When you think about good candidates for F, you're supposed to think of the things that would be good resources for settling a question. You might think that to believe responsibly is just to use the best resources for settling a question, in which case there'd be nothing more to responsible believing than forming F‐beliefs. To responsibly settle the question as to whether we're low on milk, you check the fridge. To responsibly settle the question as to what sort of mood you're in, you introspect. It doesn't seem to be a failure on your part that would merit the charge of irresponsibility if – after checking the fridge and seeing that it's empty – you don't continue to think about the reliability of vision. Isn't checking the fridge and judging straight off that there's no milk hiding in the empty fridge a perfectly responsible way of settling that question?

      1.91 The epistemic regress problem arises when we try to identify the features that distinguish justified from unjustified beliefs. The Principle of Sufficient Difference tells us that there must be some further difference between these beliefs that accounts for the fact that the justified ones are justified and the others aren't. The natural place to look to understand this difference is to the kind of rational support these beliefs enjoy. Clear cases of justified belief are cases in which further beliefs provide strong support for those beliefs. Clear cases of unjustified belief are cases in which further beliefs fail to provide such support. As we've seen, there is disagreement about the structure of this support. The coherentists and infinitists don't think that there are (or could be) foundational beliefs that terminate the regress, justified beliefs that can justify further beliefs without themselves being justified by any further beliefs. The foundationalists, for their part, don't think that any putative structure of justification could really justify the beliefs embedded in that structure unless there are foundational beliefs that can transmit that support to further elements in the structure via inference.

      1.92 While the infinitists, coherentists, and foundationalists all have to deal with serious objections, the standard objections to foundationalism seem most clearly surmountable. In the chapters to come, we'll look at some of the different ways that the foundationalist view might be fleshed out. Most contemporary foundationalists believe that our perceptual beliefs are among the foundational beliefs, so we'll look at some debates about the nature of perceptual experience in the next chapter and discuss the significance of these debates for the foundationalist project in the chapter after that.

      Internet Encyclopedia of Philosophy, http://www.iep.utm.edu. See entries for Foundationalism (Ted Poston), Infinitism in Epistemology (Peter Klein and John Turri), Coherentism in Epistemology (Peter Murphy), Ancient Greek Skepticism (Harald Thorsrud).

      Stanford Encyclopedia of Philosophy, http://plato.stanford.edu. See entries for Foundationalist Theories of Epistemic Justification (Ali Hasan and Richard Fumerton), Coherentist Theories of Epistemic Justification (Erik Olsson).

      Notes

      1 Some philosophers distinguish those beliefs that merely guide behavior, but which we would not be inclined to positively endorse on reflection (e.g. our suppressed beliefs, including those which might be painful to consider), from those we would be inclined to affirm as true. Ernest Sosa (e.g. 2017) terms the former “functional” beliefs and the latter “judgmental” beliefs. For present purposes, we'll be referring to beliefs in a general sense. However, the distinction between functional and judgmental beliefs will be revisited in Chapter 11.

      2 The reader can assume that by “justified” we will always mean (throughout the discussion in this book) “epistemically justified” – viz. justified from the point of view where what matters is things like truth and knowledge – unless explicitly stated otherwise.

      3 The careful reader might have caught something here that seems problematic: what if some of the beliefs you have include words like “right” and “wrong”, “good” and “bad”, “justified” and “unjustified”? Given that the helmet – by stipulation – scans all of your beliefs, won't it scan these beliefs, too? And if so, then isn't it incorrect to suppose that the helmet does not detect normativity? Here it is important to distinguish between (i) the scientists' reporting, without passing judgment, that you have some belief that includes a normative term (e.g. your belief that murder is wrong), and (ii) the scientists' being able to tell whether your beliefs actually have some kind of normative status (e.g. whether they are justified or unjustified). In the situation we're inviting you to imagine here, suppose that what the helmet cannot do is, specifically, (ii).

      4 Flipping a coin is certainly a method you could apply! But it is an unreliable method, one that would lead you astray as easily as not.

      5