In this post, I shall argue that there is no such thing as moral vagueness. The argument rests on a certain assumption, which I myself believe to be true: namely, that the fundamental ethical or normative concepts are all essentially comparative notions, such as ‘__ is better than __’ and ‘There is more reason for __ than for __’.
If this assumption is true, there is no moral vagueness. The moral realm is as precise as the realm of mathematics. Locke’s notorious talk of “moral geometry” is to that extent entirely appropriate.
In her paper “Moral vagueness is ontic vagueness” (Ethics 2015), Miriam Schoenfield gives several alleged examples of moral vagueness. All of these examples involve alleged borderline cases of permissibility. One such example is the following:
> It is impermissible to amputate a person’s arm [sc. without their consent] to save another’s life. It is permissible to amputate a person’s arm to save a billion lives. How many lives must be at stake for it to be permissible to amputate someone’s arm? Plausibly, we can create a Sorites series, admitting of borderline cases of permissibility, out of a series of amputations, each of which is performed to save an increasing number of lives.
If the fundamental ethical and normative notions are all essentially comparative, then every proposition about “permissibility” is equivalent to an explicitly comparative proposition.
Now, there may be several different ways of using the word ‘permissible’. But to keep things simple, let us suppose that, as it is used in this context, ‘A is permissible’ is equivalent to ‘There is at least as much reason for A as for any available alternative B’. Thus, for A to be permissible is for it not to be the case that there is an alternative B such that there is more reason for B than for A. (If some reasons are incomparable, these two formulations come apart; it is the second, negative formulation that the argument below relies on.)
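Spelling this negative reading out with a bit of ad hoc notation, writing ‘$B \succ A$’ for ‘there is more reason for $B$ than for $A$’, the proposal is:

$$\text{Permissible}(A) \;\leftrightarrow\; \neg\,\exists B\,(B \succ A)$$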
Consider the series of cases that Schoenfield gestures towards (and calls a “Sorites series”). How are we to analyse this series of cases given this interpretation of “permissibility”?
- In the first case C1, there are two options: A1, which involves saving one person’s life by amputating another person’s arm [without their consent], and B1, which involves not amputating anyone’s arm and letting the first person die.
- In general, in every case Cn (for n = 1, …, 1,000,000,000), there are two options: an A-option, An, which involves saving n people’s lives by amputating one other person’s arm [without their consent], and a B-option, Bn, which involves not amputating anyone’s arm and letting those n people die.
In each case Cn, there is one reason against the A-option, An – namely, that it involves amputating one person’s arm without their consent – and a corresponding reason in favour of the B-option – namely, that it involves not amputating anyone’s arm without their consent. Call this the “reason not to amputate”.
Similarly, in each case Cn, there is one reason in favour of the A-option, An – namely, that it involves saving n people’s lives – and a corresponding reason against the B-option – namely, that it involves letting n people die. Call this the “reason to save lives”.
Presumably, as we go along the series from each case to its successor, the weight of the reason to save lives gets gradually greater and greater, while the weight of the reason not to amputate remains constant.
In the first case C1, the reason not to amputate is weightier than the reason to save lives – this is why the A-option is impermissible in this case.
In the last case C1,000,000,000, the reason not to amputate is not weightier than the reason to save lives – this is why the A-option is not impermissible in this case.
On this analysis, there has to be a last case in the series where the reason not to amputate is weightier than the reason to save lives. The immediately following case will be the first case where the reason not to amputate is no longer weightier than the reason to save lives. In other words, there is a sharp cutoff point on this series.
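This is just least-number reasoning over a finite series. With a little ad hoc notation, writing $P(n)$ for ‘in case $C_n$, the reason not to amputate is weightier than the reason to save lives’, and $N$ for 1,000,000,000, classical logic alone yields:

$$P(1) \,\wedge\, \neg P(N) \;\vdash\; \exists n\,\bigl(1 \le n < N \,\wedge\, P(n) \wedge \neg P(n+1)\bigr)$$

Since $P(1)$ holds and $P(N)$ fails, the set of cases where $P$ fails is non-empty; its least member $m$ is greater than 1, and so satisfies $P(m-1) \wedge \neg P(m)$.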
Note that this is true even if not all the “weights” of reasons are comparable with each other, so that these weights can only be partially ordered. Even then, classical logic alone, together with this analysis of the case, implies that there must be such a cutoff point somewhere on the series.
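The point about partial orders can be made vivid with a toy model (entirely my own illustration, with made-up numbers): let the weight of a reason be a pair of magnitudes, and say that one weight is weightier than another only when it strictly dominates it on both dimensions. Many pairs of weights are then incomparable, yet the series still has a perfectly sharp cutoff:

```python
# Toy model (my own illustration, with made-up numbers): the weight of a
# reason is a pair of magnitudes, and one weight counts as "weightier"
# than another only if it strictly dominates it on both dimensions.
# This is a mere partial order: many pairs of weights are incomparable.

def dominates(w1, w2):
    """True iff w1 is weightier than w2: strictly greater on both dimensions."""
    return w1[0] > w2[0] and w1[1] > w2[1]

N = 1_000  # stand-in for Schoenfield's 1,000,000,000 cases

# Constant weight of the reason not to amputate (arbitrary toy values).
NOT_AMPUTATE = (500.0, 500.0)

def save_lives_weight(n):
    """Weight of the reason to save n lives; grows gradually with n."""
    return (n * 0.7, n * 0.9)

# P(n): in case Cn, is the reason not to amputate weightier than the
# reason to save n lives?
flags = [dominates(NOT_AMPUTATE, save_lives_weight(n)) for n in range(1, N + 1)]
assert flags[0] and not flags[-1]  # true in C1, false in the last case

# Classical logic over a finite series: there is a last case where P(n) holds.
cutoff = max(n for n, f in zip(range(1, N + 1), flags) if f)

# Around the cutoff, some weights are incomparable with the reason not to
# amputate, yet the cutoff itself is still perfectly determinate.
mid = save_lives_weight(600)
incomparable = not dominates(NOT_AMPUTATE, mid) and not dominates(mid, NOT_AMPUTATE)
print(cutoff, incomparable)  # → 555 True
```

In this toy model, the weights in cases 556 to 714 are incomparable with the weight of the reason not to amputate; even so, there is a determinate last case (555) where the reason not to amputate is weightier.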
Of course, we can never know exactly where on the series this cutoff point comes. But there is no reason to think that the explanation of our ignorance has anything to do with vagueness. The explanation seems to be exactly the same as the explanation of why we cannot always know which of two masses is weightier, given the obvious limitations of our powers of discrimination. (E.g., suppose that you had to tell which mass was heavier by holding one mass in each hand, and trying to judge which mass feels heavier…)
‘Mass 1 is weightier than Mass 2’ is not intuitively vague (or if it is vague, it is only because of vagueness in the referring expressions ‘Mass 1’ and ‘Mass 2’, not because of any vagueness in the predicate ‘__is weightier than__’ or in the relation that the predicate stands for). However, even if it is true that Mass 1 is weightier than Mass 2, we cannot always know that it is true, given our limited powers of measurement.
In a similar way, ‘Reason 1 is weightier than Reason 2’ does not seem to me to be vague either. Our ignorance of its truth can be explained purely on the basis of our limited powers of discrimination. There is no reason to postulate any vagueness in this predicate ‘__is weightier than__’ or in the relation that it stands for.
For these reasons, then, I remain unpersuaded that there is any such thing as moral vagueness.