Is Rule-Consequentialism Too Pervasive?

According to Hooker’s version of rule-consequentialism (RC), the criterion of rightness is as follows: “An act is wrong if and only if it is forbidden by the code of rules whose internalization by the overwhelming majority of everyone everywhere in each new generation has maximum expected value in terms of well-being (with some priority for the worst off).”

Now Hooker believes that one criterion by which we should assess moral theories is how well the implications of a given moral theory cohere with our considered moral convictions. In his book Ideal Code, Real World, he seems to suggest that RC does pretty well on this criterion, but perhaps he has overlooked the fact that RC will be too pervasive. A moral theory is too pervasive if it pervades every aspect of our lives, such that every voluntary human action, including those that have no effect on others, is potentially morally wrong.

Act-utilitarianism is too pervasive, for if I’m faced with the choice of going to the dentist today or a week from now, I’m morally required to do what will maximize aggregate welfare even if my choice won’t affect anyone else’s welfare. Thus if I need a tooth extracted and I’ll suffer more pain if I wait a week, then it’s morally wrong for me to wait a week even if my decision won’t make a difference to anyone else’s welfare. Yet this is contrary to our considered moral convictions. It may seem stupid and imprudent for me to wait a week, but not morally wrong.

It seems that RC will likewise be too pervasive, for surely a code that includes some rule about acting prudently will, if internalized by everyone everywhere, produce more expected value than one that doesn’t, other things being equal. Thus waiting a week to have a painful tooth extracted would, contrary to our moral convictions, be morally wrong on RC.

By the way, does anyone know why the criterion of rightness stated on p. 32 of the hardback version of his book differs from that of the paperback version? In the hardback edition, the criterion is expressed in terms of a biconditional, whereas in the paperback edition the criterion is expressed in terms of a one-way conditional (“if” but not “only if”).

11 Replies to “Is Rule-Consequentialism Too Pervasive?”

  1. Some of us would have the considered moral conviction that it is morally wrong in some cases to injure oneself gravely, even if doing so would not hurt anyone else. For instance, some of us would say that it is morally wrong to dive off a bridge without making sure that the water is deep enough. One could argue that those of us who have this sort of conviction are being inconsistent if we say that grave injury to oneself is morally wrong but do not say that minor injury to oneself is also morally wrong (though less so). Thus, Hooker could argue that even if we have the considered moral conviction that certain minor self-injuries are not morally wrong, this conviction is inconsistent with others of our considered moral convictions, and is therefore at least questionable.
    I have a different problem with Hooker’s view. I do not know if this has been mentioned yet on this blog since I haven’t kept up with everything that’s been said. I also don’t know if Hooker himself has replied to this difficulty. At any rate, my problem is as follows. It seems reasonable to think that there exists at least one code of rules which has the following two properties:
    1. Its internalization by the overwhelming majority of everyone everywhere will have the maximum expected value in terms of well-being, etc.
    2. Its internalization by a small minority of everyone everywhere will have horrible consequences (i.e. will have an extremely low expected value in terms of well-being, etc.)
    Suppose that there exists a code such as this. Suppose also that, in the present circumstances, this code is internalized by only a small minority of everyone everywhere. Should I act in accordance with it? According to Hooker’s criterion of rightness, acting in accordance with this code would be morally right, because were the code internalized by the overwhelming majority of everyone, etc., it would have the greatest expected value, etc. But given the circumstances which actually obtain, acting in accordance with this code would have horrible consequences. If these consequences are sufficiently horrible (suppose one of them is that a million people die a painful death), then I do not see how Hooker can sincerely say that this act is morally right. But I think that Hooker’s criterion of rightness commits him to saying so.

  2. Thanks, David, for the interesting comment and the opportunity to clarify my position.
    I too have the considered moral conviction that it is wrong to do certain things (such as commit suicide when one has the potential for a good life, injure oneself gravely for no good reason, and waste one’s talent for composing beautiful music due to laziness) even when doing so won’t hurt anyone else. And one could, as you suggest, argue that it would be inconsistent of those who have such a conviction to maintain that causing oneself extra pain by delaying a visit to the dentist is permissible (note that no injury is involved in this case). But I don’t think one will get very far with this line of argument, because there is a relevant difference between the two types of cases. We do, it seems, have the considered moral conviction that it is wrong to waste, squander, or destroy what’s valuable. Now one’s life, abilities, and talents are often very valuable. Thus we think that it is wrong to do such things as injure oneself gravely for no good reason. But prudence requires not only that one maintain one’s life and health, but also that one minimize one’s pain over time. Yet we don’t have the considered moral conviction that we must minimize our pain over time, as my dentist case shows. And there is no inconsistency in maintaining that people have a duty not to waste, squander, or destroy what’s valuable but no duty either to maximize their pleasure or to minimize their pain. In so far as RC would include a rule that requires us to maximize our own aggregate pleasure, RC would, then, be contrary to our considered moral convictions.
    Concerning the difficulty you raise, I do think that Hooker has a reply. As Hooker says, “rule-consequentialists would claim that it is morally right to go against other rules when this is necessary to prevent a disaster” (p. 121). And I think that Hooker is right: the ideal code (the one with the greatest expected value) would include a rule about violating other rules to avoid disaster. And surely your case in which a million people die a painful death qualifies as a disaster.

  3. Doug, thanks for your reply and clarification.
    You say that
    “one’s life, abilities, and talents are often very valuable. Thus we think that it is wrong to do such things as injure oneself gravely for no good reason.”
    If it is wrong, all else equal, to deprive oneself of one’s life, abilities, or talents for the reason that these things have value, then we should also say that it is wrong, all else equal, to deprive oneself of pleasure or to bring oneself pain (unless we want to say that these things do not have value (or dis-value, in the case of pain)). Consider this argument:
    1. Some people (call them group A) have the considered moral conviction that it is wrong to decrease the amount of value in the world.
    2. Some members of group A (call them group A1) have the considered conviction that their own pleasure has value and their own pain has dis-value.
    3. Thus, the convictions of members of group A1 commit them to the view that it is wrong to cause oneself pain.
    4. Some people (call them group B) have the considered moral conviction that it is not wrong to cause oneself pain if the pain is very slight.
    5. Those in the intersection of A1 and B have inconsistent moral convictions. Thus their moral convictions are at least questionable.
    If the intersection of A1 and B is sufficiently large, then this is evidence, I think, that we should at least question the considered moral conviction that it is not wrong to cause oneself slight pain. (Personally, I find myself in the intersection of A1 and B. This does not surprise me, however, because I have found that many of my considered moral convictions are inconsistent with one another. Furthermore, just anecdotally, I find that many other people have inconsistent moral convictions. This is one reason that I do not think our moral convictions are reliable data on the basis of which to evaluate moral theories.)
    Regarding your reply on behalf of Hooker to the difficulty I raised in my last comment: I am not sure whether the rule that “it is morally right to go against other rules when this is necessary to prevent a disaster” would be part of the ideal code, i.e. the moral code having the maximum expected value, etc. when internalized by everyone everywhere. Hooker’s claim that this rule would be part of that ideal code is just one manifestation of his more general belief that the ideal code would not bring about disasters when followed in the “real world,” i.e. in a world in which only a minority of everyone everywhere follows it. But surely it is at least conceivable that the ideal code would have tremendously bad results when applied in the real world, even if it were the case that it would have the best possible results when applied in a world in which a majority of everyone everywhere has internalized it. If Hooker says that in this at least conceivable case we should abandon the ideal code and act to avoid the real-world bad results, then Hooker has deferred to act-consequentialism in the case of disasters. But if Hooker is willing to defer to act-consequentialism to avoid “tremendously bad” results in the real world, then I think he begins to tumble down a slippery slope toward full-blown AC. For there is no clear dividing line between what is “tremendously bad” and what is merely “pretty bad,” and what is merely “less than perfect,” etc. And anyway, it seems to me at least inelegant to say that when we are dealing with truly serious disasters we should be act consequentialists, but when we are dealing with disasters of some arbitrarily lesser proportion we should be rule-consequentialists.

  4. Reading over my comment, I see that I did not everywhere insert “expected” where it was appropriate. I hope it is clear that it belongs in the places where it is missing.

  5. Note that I didn’t say, “it is wrong, all else equal, to deprive *oneself* of one’s life, abilities, or talents for the reason that these things have value” (emphasis added). What I said was that it is wrong to waste, squander, or destroy what’s valuable. And what I should have said was that it is wrong to waste, squander, or destroy what’s potentially valuable to others. I don’t think the wrongness of wasting one’s talents lies in their effects on oneself. Consider that it doesn’t seem wrong to waste one’s talent for pogo stick jumping even if one could increase one’s own happiness by developing it (as where one wants to get into the Guinness Book of World Records for pogo stick jumping). By contrast, it does seem wrong to waste one’s talent for composing beautiful music — perhaps it is wrong to do so even where one would be slightly happier doing something else besides developing this potentially frustrating talent. Why, intuitively, do we think it’s wrong to waste one’s talent for composing beautiful music but not wrong to waste one’s talent for pogo stick jumping? I would argue that the best explanation is that whereas the one (i.e., pogo stick jumping) is of little or no potential benefit to others, the other is of great potential benefit to others. Thus I think that we can consistently hold both that it’s wrong to waste certain talents and that it is not wrong to fail to do what maximizes one’s aggregate pleasure. I think that we can say similar things about killing or gravely injuring oneself when one *could* otherwise be a valuable member of society. I know that this view is controversial and I’m planning a post on it later, but I hope that this is enough to show that there is no obvious inconsistency here.
    By the way, if your moral convictions are inconsistent, then I would argue that they are not your *considered* moral convictions. It seems to me that we should define considered moral convictions as those that survive a process of reflection that would include revising them in light of any inconsistencies. So I don’t share your worries about using coherence with our considered moral convictions as one among many criteria to be used in assessing moral theories. I’m planning a post on this topic as well.
    Regarding your interesting objection to Hooker: since I’m not sure what Hooker would say, I’ll let Hooker speak for himself if he chooses to. He has told me that he hopes to read our discussions here at PEA Soup and chime in, although it may not be for a week or so as he is currently busy with some professional trips.

  6. Hi David. You write, “I am not sure whether the rule that ‘it is morally right to go against other rules when this is necessary to prevent a disaster’ would be part of the ideal code.” Do you mean that *you* don’t think it would be part of the ideal moral code, or that *Hooker* does not think it would be part of the ideal code? I don’t have his book in front of me right now, but I believe I recall Hooker saying repeatedly that the ideal code would contain a rule requiring one to prevent disasters–perhaps even at the cost of one’s own life. You are right that if there were no such rule in the ideal code, and if Hooker would permit us to ignore rules if doing so would prevent disaster, then he has gone down the road of AC. But if the “prevent disaster” rule is *part* of the ideal code, then I don’t see how he has deferred to AC.

  7. Thanks to Dan and Doug for humoring my complaints for so long. My replies:
    Dan: I mean that *I* don’t think it would be part of the ideal moral code. Or rather, I mean that I am not *sure* it would be part of the ideal moral code. Let me restate my objection in (perhaps) clearer terms, since I’ve had time to think about it a little bit:
    1. Though admittedly quite unlikely, it is at least conceivable that the ideal code (the code which has the maximum expected value when internalized by everyone everywhere) would be such that, in the “real world” (the world where only a minority of everyone everywhere has internalized it), it would prescribe a course of action which results in an expected (foreseeable) disaster.
    2. In this at least conceivable case, Hooker has two options: A. To say that one ought to abandon the ideal code and act to prevent the foreseeable disaster; B. To say that one ought to act in accordance with the ideal code and accept the consequences (i.e. the disaster).
    3. If Hooker chooses A, then he has deferred to act-consequentialism for disasters. If Hooker chooses B, then he is able to maintain his rule-consequentialism, but at the cost of prescribing a course of action which results in a foreseeable disaster.
    Now, Hooker can reply by saying that this “conceivable” case will never, in practice, come up. He can argue that it will never actually turn out that the ideal code prescribes a course of action resulting in a foreseeable disaster in the real world. One way to argue this would be to argue (as Doug pointed out) that the ideal code would include a rule against taking action which results in foreseeable disaster. This reply is, of course, reassuring if it succeeds. But the necessity of this reply already suggests a concession to act-utilitarianism. The necessity of this reply shows that, in order for Hooker to maintain a plausible moral theory, he must endorse a code which does not produce horrible results in the *real world*. It is not enough that it produces non-horrible results (indeed, the best possible results) in a world in which a majority of everyone everywhere has internalized it.
    Hooker wants to provide an alternative to act-utilitarianism. The real test of whether he has successfully done so is to ask whether his theory would be plausible if its prescriptions differed substantially from those of act-utilitarianism. I think that the above argument shows that, at least in the case of disasters, Hooker has not done this. When it comes to disasters, I claim that IF Hooker’s theory differs from act-utilitarianism, THEN it is Hooker’s theory which must go – not act-utilitarianism. Even if we want to say that the antecedent of this conditional is never actually true, it is enough that the conditional itself is true. The truth of this conditional suggests that act-consequentialism has priority over Hooker’s theory in the case of disasters. (Or so I argue.)
    Doug: I suppose I simply have different convictions than you do. One of the consequences of your claims seems to be that there would be no morality for a person stranded on a deserted island; there would only be prudence. I think this is simply false. For instance, it seems to me that it would be wrong for a person on a desert island to take pleasure in imagining murdering someone, and to take actions which he knows will bring harm to himself. Obviously the fact that my convictions differ from yours does not show that yours are wrong; but perhaps it shows that yours are at least questionable. (Mine too, of course.)
    As a side note, coherence of moral convictions seems to me a risky standard to follow. Suppose I have conviction A and conviction B, and that convictions A and B are contradictory. Certainly one of them has to be abandoned; but which one? The standard of coherence can let us know that one of two convictions is false, but it cannot let us know which to abandon. If one abandons the wrong one, then one ends up with a set of convictions which are coherent but worse than the previous, incoherent set (since at least the incoherent set contained some “true” convictions). Of course, these points do not show that one should not strive for coherence of one’s moral beliefs; they show only (at best) that adopting coherence as one’s *only* standard can have the effect of making one’s views worse, not better.

  8. You say, “One of the consequences of your claims seems to be that there would be no morality for a person stranded on a deserted island; there would only be prudence. I think this is simply false. For instance, it seems to me that it would be wrong for a person on a desert island to take pleasure in imagining murdering someone, and to take actions which he knows will bring harm to himself.”
    I don’t think that my claims have this consequence. Could you explain which claims of mine have this putative consequence? My main thesis — that there is no moral duty to maximize one’s own aggregate utility — certainly doesn’t have this consequence. My main thesis suggests that it would be permissible to take less pleasure in imagining reading a good book when one could take more pleasure in imagining eating a juicy steak, but it doesn’t imply that it’s permissible to take pleasure in imagining murdering someone. One can claim that it’s wrong to take pleasure in the thought of someone suffering, while also holding that there’s no moral obligation, per se, to maximize one’s pleasure whenever the opportunity presents itself.

  9. I incorrectly took these two sentences to suggest that what makes X wrong is that it does harm to others:
    (1) “And what I should have said was that it is wrong to waste, squander, or destroy what’s potentially valuable to others. I don’t think the wrongness of wasting one talents lies with its effects on oneself.”
    (2) “I would argue that the best explanation [for the fact that it is wrong to waste a talent for music but not wrong to waste a talent for pogo stick jumping] is that whereas the one (i.e., pogo stick jumping) is of little or no potential benefit to others, the other is of great potential benefit to others.”
    A person on a deserted island cannot do harm to others. If what makes something wrong is that it does harm to others, then a person on a deserted island cannot do anything wrong.
    But I see now that you were not advancing a criterion of wrongfulness. You were only saying what *isn’t* part of the correct criterion.

  10. Douglas’s original posting on this topic asks two good questions about my book.
    The first is whether my rule-consequentialism will be too pervasive. To be more specific, the suggestion is that my rule-consequentialism will insist on prudent decisions in all cases where the agent is choosing among alternatives not already forbidden by other-regarding rules.
    Unlike David, I agree with Douglas that this kind of pervasiveness would be a vice in a moral theory. I agree with Douglas about this because, following Hobbes, Hart, Mackie, Scanlon and many others, I think of morality as “a system … whose central task is to protect the interests of persons other than the agent” (see footnote 6 on page 6 of Ideal Code, Real World). In fact, I’d take out the qualifier “central.” On the other hand, I accept that there is an equally venerated tradition according to which there can be moral duties to oneself.
    Because I think morality is a system whose task is to protect the interests of persons other than the agent, perhaps I should have built this in right from the beginning. That is, perhaps I should have formulated my rule-consequentialism as claiming that wrongness is a matter of offending against rules *about other-regarding behavior* that pass the test I specify.
    However, even without ground-level restriction to rules about other-regarding behavior, my rule-consequentialism might not be pervasive in the way we are discussing. If someone chooses the clearly worse of two self-regarding alternatives, we will think that person imprudent, impulsive, short-sighted, maybe even silly and stupid. Is there any point in adding moral condemnation? Obviously, there is no place for its close relative resentment here. Since resentment is out of the question and the effect of moral blame would probably be pointless or even counterproductive, my best guess is that a moral code endorsed by the test outlined in the book would in fact not include a rule against imprudent self-regarding behavior.
    If my best guess about that is incorrect, I’d retreat to the stipulation made in the earlier paragraph.
    Douglas’s other question is why I changed the formulation on p. 32 from “if and only if” (as it was in the 2000 hardback) to merely “if” for the 2002 paperback. The answer is that in 2001 John Skorupski pointed out to me that the “only if” part is inconsistent with my claim two sentences later. In the 2000 hardback, p. 32’s “if and only if” sentence includes the restriction that rule-consequentialism claims an act is wrong only if it’s forbidden by the code with maximum expected value. Two sentences later, there is the claim that, if two or more codes are better than the rest but equal to one another in terms of expected value, then the one closest to conventional morality determines what acts are wrong. But this conflicts with the earlier restriction.
    David’s comments in reply to Dan raise a central objection to rule-consequentialism. This is the objection that following the code that would have the highest expected value if everyone internalised it might have disastrous consequences in the real world, because in fact only a small minority of people (indeed a tiny minority) have internalised it.
    I tried to deal with this objection by pointing out (pp. 98–99) that rule-consequentialism would endorse a “prevent-disaster” rule (up to some level of self-sacrifice, pp. 164-168). This rule would not be external to the ideal code; it would be an element within the ideal code. This rule would come into play in many contexts. Some of these contexts would be situations where others aren’t following the ideal rules (pp. 164-165, see also pp. 80-84).
    Given the formulation of rule-consequentialism on p. 32, I cannot see how rule-consequentialism could fail to endorse a “prevent-disaster” rule as one crucial component of the code it favors. And given such a prevent-disaster rule, I cannot see how compliance with the theory could be predicted to result in disaster. But this is no concession to act-consequentialism (p. 99).
    For the sake of completeness, I should mention that I think there is a distinction we need to make between cases where the potential beneficiaries of our complying with a rule are people who themselves have not refused to comply with it, and cases where the potential beneficiaries are people who have refused to comply with it. With respect to potential beneficiaries of our complying with a rule who have not refused to comply with it, the “prevent-disaster” rule kicks in, and the live question concerns degrees of self-sacrifice. With respect to potential beneficiaries who themselves have refused to follow the rules in question, there is a rule-consequentialist rationale for motivating them to start complying by making rules on how others treat them conditional on their behavior (p. 125).

  11. Brad: You suggest that the ideal code might not include a rule requiring “prudent decisions in all cases where the agent is choosing among alternatives not already forbidden by other-regarding rules.” The reasons you cite have to do with the pointlessness and, perhaps, counter-productiveness of adding moral condemnation to the mix. But what does moral condemnation have to do with it? The rule in question is not one requiring moral condemnation of those who act imprudently. Rather the rule in question is one requiring prudent decisions. Wouldn’t a code with a rule requiring prudent decisions have higher expected value, other things being equal, than a code without such a rule? Or are you suggesting that we don’t need such a *moral* rule, because there is already a non-moral rule requiring prudent decisions in place? But, in that case, you’ll still need to revise your criterion of rightness. You’ll need to say, “an act is wrong if it is forbidden by the code of rules that, *when added to whatever non-moral rules are already in place (e.g., rules of prudence, etiquette, and religion)* and internalized by the overwhelming majority of everyone everywhere in each new generation, has maximum expected value in terms of well-being.” This revised criterion will, I think, be problematic for other reasons.
    Of course, you can still claim that wrongness is a matter of offending against rules *about other-regarding behavior* that pass the test you specify. You’ll need to say, “an act is wrong if it is forbidden by the code of rules *about other-regarding behavior* whose internalization by the overwhelming majority of everyone everywhere in each new generation has maximum expected value in terms of well-being.” This seems to me to be your best option.
