On esoteric normative theories

Normative theories are occasionally criticized for being esoteric.  A theory T is esoteric iff T is true (or correct, or superior to its rivals, etc.), but it is better that T not be generally believed or accepted.  Examples of allegedly esoteric philosophical theories include:
• "Sophisticated" utilitarianisms: These theories distinguish between utilitarianism as a decision procedure and utilitarianism as a criterion of rightness, and might argue that utility is better maximized if some people neither use the utilitarian standard as a decision procedure nor even accept or endorse that standard. If the theory were not esoteric, it might well be self-defeating.
• Political anarchism: Even though it is false that we have any obligation to obey the state as such (apart from the justness of its laws and directives), it is better that people in general believe they have such an obligation.
• Ethical egoism: Even though what we ought to do is always to pursue our self-interest, egoists would prefer not to know or believe this.

One initial observation:  "It is better that T not be generally
believed or accepted" can be cashed out different ways.  Sometimes
esoteric theories are defended on terms internal to the theory, i.e.,
that the theory is better implemented or realized if it remains
esoteric (such as in sophisticated utilitarianism and ethical egoism).
But other times, the esotericism is defended by appeal to some other
normative concern; in the case of political anarchism,  it is better
that people not accept or believe the theory not because the theory
would be better realized if we did not believe it, but because of an
independent normative worry, such as fear of social disorder.

I’m curious whether esotericism is a fair criticism of a normative
theory — whether there is a single criticism here, and what force such
criticisms may have.  Initially, I can think of three reasons to
suppose that esotericism is fair criticism:

• Ethics of belief:  Belief should strive to be true.  An esoteric theory denies this, and so should be rejected.
• Stability:  Esoteric theories do not result in stability.  The best
way to ensure that a theory is realized in practice is for people to
believe and accept it.   We get no benefits from the true theory if few
people believe it. (I gather this is the force of a lot of
contractualist and Rawlsian thinking about publicity, which amounts to
the denial that esoteric theories are acceptable.)
• Immorality: It is immoral — or at least morally wrongheaded — to
endorse esotericism in a theory.  Genuinely moral agents do not simply
act in accordance with, or in ways that promote, the normative ideals
proposed by a given theory.  They must instead act in accordance with
the theory because they realize the theory is true and gives them
reasons to act.  Virtue theorists, Kantians, and others with
agent-centered conceptions of morality might find this line of thought
attractive.

In any case, I’d be curious to know if anyone has thought carefully about these issues.

70 Replies to “On esoteric normative theories”

  1. Mike,
    Isn’t the problem with esotericism rather that it conflicts with the usefulness of a moral theory? Under the assumption that, ideally, practical principles are usable, we should prefer practical principles that are, inter alia, (1) determinate, (2) exoteric and (3) not overly demanding. Principles that do not satisfy these criteria are likely not to be especially useful.

  2. I’ve never studied this in any depth, but I have for a long time failed to understand this kind of criticism.
    Is Mike’s suggestion of “usefulness” really what Michael is getting at when he talks of “stability”?
    I think Mike may be close to what people intend when they criticise theories in this way when he talks of usefulness, but it looks to me to be a completely wrongheaded criticism. Sophisticated utilitarians, for instance, will certainly maintain that their theory is very useful, given that it can tell us what motivational set we should adopt. How are these theories meant to be non-useful if they tell us which motivational sets to adopt?
    Some brief thoughts on the other two worries that Michael raises:
    In terms of “Ethics of Belief”, this must be a claim with a ceteris paribus clause. Sure, beliefs should aim at truth, but that’s not to say that we shouldn’t sometimes believe falsehoods when there are other stronger reasons to do so. Can such a weak ceteris paribus be a deciding factor against, say, sophisticated utilitarianism, given the huge number of other arguments that might be employed both for and against it?
    I’m not sure I understand your last idea. “Immorality” seems like a strange way to phrase it, for the obvious reason that we’re here assessing normative theories. To assess them by bringing in some additional dimension of normativity seems like it might generate a regress. Or is the idea that correct normative theories ought to prescribe their own non-esotericity? (is that a word?!) That sounds question-begging.
    Thanks for the interesting post.

  3. Sophisticated utilitarians, for instance, will certainly maintain that their theory is very useful, given that it can tell us what motivational set we should adopt. How are these theories meant to be non-useful if they tell us which motivational sets to adopt?
    If a moral theory is esoteric then it is better, typically on the theory’s own terms, that the theory is not taught or made widely known. If utilitarianism is esoteric, for instance, then the maximization of utility is better achieved if the principle of utility is not widely known. The criticism advanced is that a moral theory that remains unknown (or known to just a few) won’t be especially useful. And it is not a good feature of practical principles that they are not useful. I certainly don’t claim to have originated the criticism. For what the criticism is worth, it is well-known.

  4. Mike C. writes:

    Ethics of belief: Belief should strive to be true. An esoteric theory denies this, and so should be rejected.

    I take it that the ‘should’ here is the moral should. I also take it that it is people, not beliefs, that should strive for truth. And, of course, a sophisticated utilitarian doesn’t deny that you should often pursue truth, only that you should always pursue truth. But now whether people should always act in ways that make it more likely that their beliefs will be true is clearly a question for normative ethical theory. And the claim “people should always act in ways that make it more likely that their beliefs are true” is true if and only if utilitarianism is false. Given this, I don’t see how you can use this claim as evidence for utilitarianism’s being false. Your ethics of belief just seems to beg the question against the sophisticated utilitarian.

  4. And the claim “people should always act in ways that make it more likely that their beliefs are true” is true if and only if utilitarianism is false. Given this, I don’t see how you can use this claim as evidence for utilitarianism’s being false.
    I can’t follow that, Doug. The objection from the ethics of belief is based on epistemic evidentialism. The idea is that you should believe only those propositions for which your evidence is on balance positive. You need not believe those propositions that are such that belief in them on balance pays. Utilitarianism requires the acquisition of those beliefs that are utility-maximizing. It does not require (and sometimes prohibits) the acquisition of beliefs evidence for which is on balance positive. These criteria pull apart in many cases. Pascal’s Wager is one. But there are lots of others. It might be utility-maximizing for me to believe that I will survive surgery, even if the evidence is strongly against it. Evidentialists argue that I should not (i.e., should not, as a good epistemic agent) believe I will survive.

  6. I think there is an assumption of ‘complete publicity’ hidden in this discussion. We should first ask whether complete publicity or complete transparency is a necessary condition of a true/acceptable/correct moral theory.
    Complete publicity: moral theory M possesses complete publicity only if every agent has the capacity or ability to use M as a decision procedure.
    Complete publicity is an assumption that seems unproven. There are at least some reasons for not believing the assumption. Using any M as a decision procedure requires considerable factual knowledge, sensitivity to many values and interests, and imaginative awareness of effects of decisions on other people. Whatever non-moral capacities, abilities, and skills are required for an agent to use M as a decision-procedure vary widely throughout the human population. If S lacks certain sorts of experiences, biological dispositions, or educative opportunities that would develop S’s non-moral capacities, S might not possess the non-moral characteristics required to use M as a decision-procedure. S need not be ‘abnormal,’ ‘depraved,’ or ‘psychopathic’ for this to be true of S.
    It seems that for any M and any population, some proportion of the population would lack the non-moral characteristics required to use M as a decision-procedure.
    I think complete publicity is an ideal condition, so I am inclined to think that being esoteric itself is not good reason to reject a moral theory until we have clarified the publicity requirement on moral theories.

  7. Mike A.
    I meant to write: “the claim ‘people morally ought always act in ways that make it more likely that their beliefs are true’ is true if and only if utilitarianism is false.”
    If Mike C. is claiming only that people ought, epistemically speaking, believe what is true, then I don’t see that esoteric moral theories deny this. After all, moral theories make no claims about epistemic oughts or reasons.
    So I guess it’s a dilemma: On the one hand, if “Belief should strive to be true” is interpreted as merely a claim about what we should do epistemically speaking, then Mike C. is wrong to claim that esoteric moral theories deny this. On the other hand, if “Belief should strive to be true” is interpreted as a claim about how people morally ought to act, then it does conflict with, say, utilitarianism, but it just begs the question to say that this conflict counts as a reason to reject utilitarianism.

  8. Doug,
    I don’t think the objection is question begging. There is independent reason to hold that we should believe only those propositions for which the evidence is on balance positive. The reason is that such beliefs are more likely true and, certainly prima facie, truth is the aim of belief. Utilitarianism entails that truth is not the aim of belief (nor a norm of assertion); rather, utility is. Since moral goals trump epistemic goals, the conflict resolves in favor of utility. We are left with a utility-maximizing epistemology. That seems, again prima facie, mistaken.
    Incidentally, D.H. Hodgson made this argument long ago in Consequences of Utilitarianism. D. K. Lewis has an interesting reply to Hodgson in ‘Utilitarianism and Truthfulness’ in his Phil. Papers II.

  9. I thought Doug’s point was that “beliefs aim at truth” means something like there’s an epistemic obligation to believe truths. Not a moral obligation. Utilitarianism entails that it’s morally wrong to believe truths sometimes, but says nothing about whether it’s epistemically obligatory. Maybe some utilitarians think that moral obligations trump epistemic ones: if you’re e-obligated to do a1 and m-obligated to do a2, and a1 and a2 are alternatives, you should all-things-considered do a2. If they think that, it’s not because they are utilitarians. It’s because they think moral obligations trump epistemic ones. You could think that if you were a Kantian too. And you could be a utilitarian and deny it. I don’t see any problem for utilitarianism here at all.

  10. I meant to write: “the claim ‘people morally ought always act in ways that make it more likely that their beliefs are true’ is true if and only if utilitarianism is false.”
    I think you really meant to write ‘only if’, not ‘if and only if’. (Just a small point.)

  11. Mike A.,
    You write:

    Utilitarianism entails that truth is not the aim of belief (nor a norm of assertion), rather utility is.

    As I understand the view, act-utilitarianism holds:
    AU: S’s act of X-ing is morally permissible if and only if there is no alternative available to S that would produce more utility than X would.
    Can you explain to me how AU entails that truth is not the aim of belief? As I see things, AU says something about acts and what determines their moral statuses, but it says nothing about the norms of assertion or about belief and what its aim is supposed to be.
    AU implies that acting so as to cause oneself to believe that P (whether P be true or false) is morally permissible if and only if so acting maximizes utility. But if Mike C. is giving us an account of the ethics of belief such that acting so as to cause oneself to believe that P is morally permissible if and only if there is sufficient evidence that P is true, then it seems to me that he is just begging the question against AU.

  12. I don’t see any problem for utilitarianism here at all.

    I guess I’m less certain there’s no problem here or in the vicinity. Suppose you are an ideal utilitarian, and say to me “I pushed button B”. I have no reason to believe that you are aiming at expressing the truth and every reason to believe that you are aiming at expressing something utility-maximizing. In fact, I couldn’t determine whether you are even instrumentally aiming at expressing a truth unless I know what you believe about what I would do upon the expression of the truth. But I don’t know what you believe about what I would do and, for obvious reasons, it wouldn’t help to ask you. The problem would begin all over again. Do Kantians have such a worry? Do social contract theorists?

  13. Thanks everyone. As usual, it looks like I set off a firecracker and scurried away. My own post was motivated by genuine curiosity: I’m troubled by charges of esotericism, but am not confident about their force or nature. So I’m trying to figure out if I have a dog in this race or not.
    Doug, Mike A., Ben, et al.: The ethics of belief point suggests (to me at least) a conflict between epistemic and moral oughts. (Ben is correct, I think, about Kantians also allowing that moral considerations supersede epistemic ones. Kant’s moral arguments for the afterlife and for belief in God are examples of this, though in Kant’s metaphysics, there is not evidence *against* these conclusions that would entail that accepting them violates a weak form of evidentialism.) That is to say, Mike A. is right and Doug is right: Utilitarianism (sophisticated or naive) by itself entails nothing about the ethics of belief, but it may violate evidentialism. The issue here is whether our normative commitments form (or ought to form) a unified set, comprising both the ethical and the epistemic.
    Robert – I appreciate the spirit and direction of your reply. Though I wonder if clarifying “the publicity requirement on moral theories” is a project we can undertake independent of any substantive moral commitments. I.e., I suspect that arguments on what *the* publicity requirement on moral theories is will turn on what moral theory we find attractive. Publicity doesn’t seem like an a priori or theory-neutral theoretical constraint in the way that, say, the ability to make testable predictions is an a priori constraint on scientific theory.
    Alex – on the “immorality” criticism. Well, as I mentioned above in my reply to Robert, this might suggest that talk of publicity vs. esotericism is not something that can avoid circularity or begging the question. At root, my suspicion is that we have a dispute about the place you think morality has in human existence: Is morality mainly a standard for appraising action, etc., or is it something that is supposed to guide our behavior and toward which we are supposed to be positively disposed?

  14. “As I see things, AU says something about acts and what determines their moral statuses, but it says nothing about the norms of assertion or about belief and what its aim is supposed to be.”
    I take an assertion to be an act. I take the utilitarian goal for assertions to be utility-maximization. At best the goal of speaking truthfully (a typical norm of assertion) is instrumental. I’d say the same thing about belief. I take the cultivation of belief to be (or to include) voluntary action. These actions aim at utility maximization. So I should cultivate beliefs that are such that having them would maximize utility. That’s all I have in mind.

  15. Michael:
    your response reminded me of something Frankena writes at the end of “Obligation and motivation in recent moral philosophy” about the difficulty of determining whether externalism or internalism about ethics is more correct. “Such a determination,” he says
    calls for a very broad inquiry . . . about the nature and function of morality, of moral discourse, and of moral theory, and this requires not only small-scale analytical inquiries but also studies in the history of ethics and morality, in the relation of morality to society and of society to the individual, as well as in epistemology and in the psychology of human motivation.
    A large task, indeed. The objection based on esotericism, like the publicity requirement, is the result of many conclusions in many areas of thought. It is often linked to unexamined beliefs in other areas. So I think it is not a decisive objection by any means to say that a moral theory is esoteric.

  16. Mike A.,
    Thanks. That’s helpful. I’m not onboard with thinking there’s any real problem for AU here, but I appreciate your point better.
    Here’s why I don’t think there’s a problem. Yes, asserting that P is an act. And you may have some intention/motive/goal in performing this act. But note that AU, by itself, doesn’t entail any view about what your intentions, motives, or goals ought to be. Some utilitarians claim that the best intentions/motives/goals to have are the ones that dispose one to maximize utility. But one can accept AU without accepting this.

  17. Doug, I agree that,
    . . . AU, by itself, doesn’t entail any view about what your intentions, motives, or goals ought to be.
    What a utilitarian should do is maximize utility. How could that not be true? Maybe the best way to do that is not to aim directly at utility maximization. Maybe the best way to do that is to aim at conveying the truth. But certainly that is not in general true. Since I know that, I should not (except in cases where I know a lot more than I typically do about the utilitarian’s beliefs) trust the information that a utilitarian conveys.

  18. My problem with esoteric normative theories is mainly with their lack of success in what I take to be one of the main aims of normative theories. I’m often puzzled about what I ought to do, what would be wrong for me to do, and so on. Sometimes, at weak moments, I turn to normative ethical theories for guidance. Esoteric theories are the ones by definition that I cannot turn to. By the lights of these theories themselves, it would be wrong for me to do so, or at least they guide me to think about something else. For this reason, the fate of theories of this kind is, in Williams’s memorable words, to usher themselves from the scene. If the esoteric background theory points to some other considerations that I should use to guide my actions anyway, then when I am doing *normative ethics* it is those considerations I’m more interested in.
    One word also about the ethics of belief objection. This is a tempting claim to make, but I do think there is a reply from the esoteric perspective. The objection assumes that there is a difference between truth and warrant for having the relevant beliefs. The esotericists can deny this gap if they accept a deflationary or some other epistemic notion of truth. They can thus think that, yes, it is true that the aim at truth is constitutive of beliefs. But, they can say that for this class of beliefs to be true just is for them to be warranted. For this reason, whatever reasons the esoteric theory gives for the beliefs, these reasons are reasons for their truth. So, in terms of the ethics of belief, things are kosher.

  19. Jussi,
    You write,

    My problem with esoteric normative theories is mainly with their lack of success in what I take to be one of the main aims of normative theories. I’m often puzzled about what I ought to do, what would be wrong for me to do, and so on. … Esoteric theories are the ones by definition that I cannot turn to.

    You must be using a different definition of an esoteric theory than Mike C.’s. On Mike’s definition, esoteric theories offer the true (or the correct, or at least the more plausible) account of what you morally ought to do. So they’re very successful at telling you what you morally ought to do and what it would be right and wrong for you to do. They are also good at telling you that you would be more successful at doing what you ought to do if you didn’t focus on what you ought to do but on something else instead.

  20. Doug,
    But Jussi’s onto something, isn’t he? An esoteric theory is one people aren’t supposed to believe, ergo, are not supposed to use the theory to *guide* their actions. They provide, as you put it, “an account of what you morally ought to do,” but explicitly deny that the theory should be consulted in determining what you ought to do. In this respect, an esoteric theory provides us abstract or backward-looking criteria, but not prospective guidance for how to act.

    Thanks Michael! That’s pretty much the point. Whatever answer these theories give to the question of what I ought to do, that answer is something I should not think about when I deliberate about what I ought to do. They thus tell me nothing after all when I think about this question from my first person deliberative perspective as an ordinary moral agent. Whatever they give as an answer to that question from the abstracted theoretical perspective is of little interest to me in practical life. I guess I am with Jackson that normative ethics is concerned with the practical perspective and not the theoretical, abstract one.

  22. I wonder if there are two possible criticisms here, and it might be important which it is that we’re worried about:
    1) A theory X is esoteric iff it is better if no-one is aware of X at all.
    2) A theory X is esoteric iff it is better if people do not motivate themselves according to X.
    To some extent the difference is a matter of degree, but Jussi’s worry seems to apply only to (1), and not to (2). Theories which fall into category (2) can tell us (as I stated earlier) which motivational set to adopt, and therefore do function, indirectly, as a guide to practical action. Theories in category (1), on the other hand, offer no practical guide to action given that we ought to ignore them entirely.
    Here’s another worry with the criticism: Isn’t whether a theory ends up being esoteric or not a function of how intelligent those people using it are? And if so, doesn’t that make the esotericism criticism reliant on a kind of relativism? Why should the truth of a theory be dependent on how able we are in using it?
    And on the ethics of belief, isn’t the critic facing the following dilemma:
    Either, the criticism is that we morally ought to try to adopt true beliefs, or the criticism is that we epistemically ought to try to adopt true beliefs.
    If it’s the former, then it is, as Doug states, question begging. If it’s the latter, then I don’t see the problem – Why can’t we have moral reasons which conflict with epistemic reasons? As others have pointed out, this is by no means unique to esoteric theories.

  23. Jussi,
    They thus tell me nothing after all when I think about this question from my first person deliberative perspective as an ordinary moral agent.
    When you look at the theory from the first person deliberative perspective, it tells you exactly what you are supposed to do. Right? It’s just if the theory is esoteric, you shouldn’t be looking at the theory. But whenever you do look at it, it tells you exactly what you ought to do. After all, that’s what it’s a theory of; it’s a theory about what we ought to do. So it does exactly what we ask of it. It doesn’t provide the best decision procedure for deciding what to do, but that’s because it’s not a decision procedure.
    Mike C.
    An esoteric theory is one people aren’t supposed to believe, ergo, are not supposed to use the theory to *guide* their actions. They provide, as you put it, “an account of what you morally ought to do,” but explicitly deny that the theory should be consulted in determining what you ought to do.
    Yes. But why is this a fault with the theory? What’s odd about the way you set up the problem is that esoteric theories are, by your definition, true. Yet you go on to provide criticisms of it. But if you admit that the theory is true, aren’t you just complaining about how you don’t like the theory’s implications? But no matter how much you don’t like it, you have to accept that it is true, given your own stipulations.
    Jussi and Mike C.
    Let me try to put the point differently. Suppose that human beings are such that we always do the opposite of what we believe we ought to do. In this case, whatever the true moral theory is, it will be esoteric. But how is this a criticism of the theory? We can lament that we’re like this. And we can lament that, because we’re like this, we ought to try to get ourselves to believe the opposite of what’s true. But the problem lies with us, not with the theory, which is, by stipulation, true.

  24. Doug,
    you write:
    ‘When you look at the theory from the first person deliberative perspective, it tells you exactly what you are supposed to do. Right? It’s just if the theory is esoteric, you shouldn’t be looking at the theory. But whenever you do look at it, it tells you exactly what you ought to do’
    So, I want to know what I ought to do. I ask an esoteric normative theory for help. It tells me ‘don’t ask me’ or, as you say, ‘you shouldn’t be looking at the theory’. How does that help me with my practical problem? If I, on the other hand, ignore the first advice of the theory and investigate what other guidance the theory gives me, then by the theory’s own lights I am acting wrongly. However, acting wrongly was the last thing I wanted to do. Therefore, the theory cannot guide me to do what I ought to do.
    Alex,
    I think you are right. The theories of class (2) are the kind of two-level theories that you can find in, for instance, Hare. I don’t think that the Williams objection can be put against them, but I do think they face other serious problems. Scanlon on Hare is very good on this. There seems to be something morally dubious in the idea that our normal moral thinking does not aim to be directed at what is really wrong, good, and so on, but rather there is a theoretical level on which we think about what acts really are morally wrong, and our ordinary thinking is just an instrument that gets us to act in the right way.

  25. Jussi,
    So, I want to know what I ought to do. I ask an esoteric normative theory for help. It tells to me ‘don’t ask me’ or as you say ‘you shouldn’t be looking at the theory’. How does that help me in my practical problem?
    Let’s just use act-utilitarianism (AU) for the purposes of illustration. It tells you what you want to know: maximize aggregate utility. Not only that, it tells you that you ought to do whatever will cause you to adopt the decision procedure that will maximize your chances of maximizing aggregate utility. So it tells you what you ought to do and it tells you what decision procedure to adopt. What else could you possibly ask for?
    If you didn’t know that AU was true, you wouldn’t even know what decision procedure would maximize your chances of doing what you ought to do. If it weren’t for AU, you might wrongly do what would inculcate in you and in others a bad decision procedure (like do whatever the Catholic Church says, or do whatever you think will be self-interestedly best for you, or do whatever you think will maximize aggregate utility). It’s a good thing that you have AU to tell you that you ought not do what will inculcate these sorts of decision procedures in you and in others.
    If AU is true and esoteric, then this is the best you can hope for. You may not like the idea that you should do what will make it the case that you are guided by something other than the truth in your day-to-day decision making. But I don’t see how our not liking a theory’s implications is a criticism of a theory that we are, for the sake of argument, stipulating is true.

  26. Hi All,
    Just a quick point – at a bit of a tangent, but I thought it might be worth contributing. As regards the force of the esotericity (?) objection, it seems to me that insofar as it has any force, it’s captured quite nicely by Korsgaard (in her ‘the Sources of Normativity’) where she cites three constraints which must be met by any answer to the question ‘what justifies the claims that morality makes on us?’. One of these is the transparency constraint: that is, it can’t be that the true nature of moral motives must be concealed from the agent if those motives are to be efficacious – that is, the justification and the explanation must both go through once the agent understands himself completely. [I’m quoting loosely from page 16 here]. The reason why this constraint applies, if I understand Korsgaard correctly, is that any correct moral theory should be able to address the question as posed from the first-person position of an agent who demands a justification of the claims which morality makes on him. If the theory is self-effacing, then it’s not clear how it can do this (because the explanation undermines the justification).
    Does that seem plausible?

  27. Jussi,
    Please answer the following questions:
    Suppose that human beings are such that we always do the opposite of what we believe we ought to do. Do you agree that this entails that all true normative theories are esoteric? Does this mean that these true normative theories are less plausible (or less likely to be true) than the false ones that it would be better for us to believe and follow? If you answer “yes” and “no” respectively, then can you explain in what sense your putative criticisms are genuine criticisms if they’re not considerations that make the truth of the theory less plausible?

  28. Doug,
    Again, I’m not confident what my all-things-considered views on these matters are, but perhaps the (or a) problem with esoteric theories is that, at least with normative theories, we ask for more than that they be true. Such theories are also subject to conditions of … well, I’m not sure what to call them, maybe something like acceptability, publicity, action-guidingness, etc. This might be a consequence of thinking of them as *normative*, i.e., as providing us guiding norms. If so, a theory may be criticizable for being esoteric without that criticism being one that casts doubt on its truth per se. I.e., worries about esotericism are not just attempts to deploy modus tollens against a theory to demonstrate its falsity.
    And I’m not sure how to understand the relationship between truth and these kinds of conditions: whether action-guidingness is a condition that determines if a theory is true or if it’s better thought of as an independent constraint on the endorseability of true theories. As you say, to insist that a theory not be esoteric may be question-begging against esoteric theories, but I wonder if it’s not equally question-begging to reject action-guidingness, publicity, etc., as a criterion for a theory’s acceptability. I feel genuinely at a loss here.

  29. What’s wrong with this argument?
    P1: By Mike’s definition, esoteric theories are true.
    P2: Whatever a true theory implies in conjunction with other truths is true.
    P3: The fact that a theory implies something that is true can never be a valid criticism of that theory.
    C: Therefore, there are no valid criticisms of esoteric theories of the form: a given esoteric theory in conjunction with other truths implies X.
    Aren’t all the criticisms that have been offered so far of the form: a given esoteric theory in conjunction with other truths implies X?

  30. Mike,
    Thanks. That’s helpful. But when you say, “Such theories are also subject to conditions of … well, I’m not sure what to call them, maybe something like acceptability, publicity, action-guidingness, etc.,” what are these conditions for? Conditions for being an adequate theory? Is it your view, then, that a normative theory can be true but inadequate? I suppose that a theory can be true but be inadequate to some task (say, that of itself providing a decision procedure). But so what? Isn’t it still the best theory – indeed, the true theory? Isn’t a false theory always inadequate? So which is more inadequate: a false theory that isn’t self-effacing or a true theory that is self-effacing?

  31. Maybe I’m just not taking a moral theory to be what others are taking it to be. As I take it, a moral theory is an account of what makes acts right and wrong. As such, all we want from it is a true account of what the fundamental right-making and wrong-making features of acts are. Now, of course, I don’t deny that we also want (and even need) to know what we should believe is right and wrong, how we should deliberate about what’s right and wrong, how to morally justify ourselves to others, etc. But why suppose that there is any one theory that will do all these various things?

  32. Doug,
    thanks – interesting points. Here are a few things I have to say in response. Let’s start from this one:
    ‘Let’s just use act-utilitarianism (AU) for the purposes of illustration. It tells you what you want to know: maximize aggregate utility. Not only that, it tells you that you ought to do whatever will cause you to adopt the decision procedure that will maximize your chances of maximizing aggregate utility. So it tells you what you ought to do and it tells you what decision procedure to adopt. What else could you possibly ask for?’
    So, I get the answer to my practical question and it is: ‘maximize aggregate utility’. First, as a reply to any practical question this is a really bad and uninformative one. As it doesn’t say what utility is, it fails to give any guidance. Second, if we set that aside, there is an obvious follow-up question: ‘How does one do that?’. The esoteric view says ‘not by reflecting on what maximizes aggregate utility’. So, the question then is ‘How should I think about what I ought to do?’. Your assumption is that AU gives an answer to this question. But I’m not sure it does. By AU’s lights, if I could get more utility by doing something else than by running the utilitarian calculus on how I should deliberate about what I ought to do, then I should be doing something else. It would then be wrong for me to think about the question of the deliberation procedure in the utilitarian way. So, now AU does not tell me what to do or how to go about deciding this. I think I could ask for more from a normative theory.
    Next, you ask:
    ‘Suppose that human beings are such that we always do the opposite of what we believe we ought to do. Do you agree that this entails that all true normative theories are esoteric? Does this mean that these true normative theories are less plausible (or less likely to be true) than the false ones that it would be better for us to believe and follow? If you answer “yes” and “no” respectively, then can you explain in what sense your putative criticisms are genuine criticisms if they’re not considerations that make the truth of the theory less plausible?’
    Sorry, I just cannot imagine such a possibility, for Davidsonian reasons. We could not make sense of the agents in such a world. In order to make sense of them, we would rather deny that this is what they think they ought to do (they might be mistaken themselves about what they ought to do) and attribute to them some other thoughts about what they ought to do. Also, even if I could imagine the scenario, I think in that case moral theorising would lose its point. Why start thinking about what we ought to do if we knew that the consequence would be that we could not act in just that way?
    I think your last question gets to the heart of the matter. There are two conceptions of what ethical theory is in the business of doing. The first sees the project as a quasi-science that hopefully would one day reveal the true nature of the world. The interest here is theoretical. Ordinary agents can, and are even encouraged to, ignore the results in their practical decisions. The second sees the project as a practical one. We want to form plans of how to act, and for this reason we ask first how we ought to act. Normative theory then is just a more systematic way of going about solving this problem. On the latter view, esoteric theories make little sense. The question, though, is whether the first project makes sense. I’m not sure.
    On a final note, I don’t think the notion of truth can be used to distinguish the projects. The questions ‘Should I do X?’ and ‘Is it true that I ought to do X?’ seem to be pretty much the same, as are the answers ‘I should do X’ and ‘It is true that I should do X’. Therefore, I don’t think it can be the case that questions about truth are limited to the theoretical perspective while the practical perspective deals only with questions that are not about truth. If the theory fails to answer the ought question, it fails on the truth question too, and vice versa.

  33. Jussi,
    I take the following excellent point of yours:

    There are two conceptions of what ethical theory is in the business of doing. The first sees the project as a quasi-science that hopefully would one day reveal the true nature of the world. The interest here is theoretical. Ordinary agents can, and are even encouraged to, ignore the results in their practical decisions. The second sees the project as a practical one. We want to form plans of how to act, and for this reason we ask first how we ought to act. Normative theory then is just a more systematic way of going about solving this problem.

    Perhaps this is the source of our disagreement. My interest in ethical theory is a theoretical one, which is not to say that I’m not interested in the other project you mentioned, just that it’s a different project to my mind. Perhaps, though, my realist views are what attract me to the theoretical conception. If you’re not a realist, then maybe the only project for you is the practical one.
    In response to my query about a world in which human beings always do the opposite of what they believe they ought to do, you say, “Sorry, I just cannot imagine such a possibility for Davidsonian reasons. We could not make sense of the agents in such a world.”
    Can you explain this further? And please note that I didn’t say that the reason they always do the opposite of what they believe they ought to do is that they always intend to do the opposite of what they believe that they ought to do. I was imagining a case like that of the direct act-utilitarian who tries to follow the utilitarian calculus as a decision procedure but fails miserably. Imagine, then, that human beings are inept, not that they are always trying to do the opposite of what they judge that they ought to do. They try to do what they believe they ought to do, but they always fail spectacularly.

  34. Doug,
    You say above,
    …AU, by itself, doesn’t entail any view about what your intentions, motives, or goals ought to be.
    But let me see if I have this right. You seem to be evading problems for AU by denying that AU offers any recommendation concerning which actions to perform, or which actions we have most moral reason to perform, or intentions to have, or goals to pursue. So you seem to be endorsing AU, and claiming it is consistent with RU, MU, and GU. And further, you seem to be claiming that a moral agent who endorses AU and RU and lives in accordance with MU and GU might nonetheless lead a perfectly moral life.
    MU. One ought always to act in a way that minimizes utility, or one ought never to perform what AU claims is the right action.
    GU. And one ought to have as an ultimate moral goal the minimization of overall utility.
    RU. An act is morally right iff it maximizes utility, but the moral rightness of an action gives no moral reason to perform the action.
    AU is consistent with MU and RU, on your view, since AU entails nothing about what actions we ought to perform or have most moral reason to perform. AU is consistent with GU since AU entails nothing about what moral goals we should have. This seems less than plausible to me, but it seems to be what you are defending. Is that right?

  35. Mike A.,
    You seem to be…denying that AU offers any recommendation concerning which actions to perform or which actions we have most moral reason to perform.
    How do you get this from what I wrote?
    On my understanding of what a moral reason is, RU is incoherent. But why don’t you tell me what you mean by a ‘moral reason’.
    And AU certainly tells you what you morally ought to do and morally ought not to do, for ‘impermissible’ just means ‘what one morally ought not to perform’. That would seem to be a moral recommendation concerning which actions to perform.
    My claim was that AU says nothing about what our beliefs, intentions, goals, motives, or any other of our mental states should be. It says nothing about what the norms of assertion are. It says nothing about epistemic matters. And it says nothing about what the aim of belief is.

  36. And AU certainly tells you what you morally ought to do and morally ought not to do, for ‘impermissible’ just means ‘what one morally ought not to perform’
    Hold on. Haven’t you been insisting that AU just tells us what makes an action right? As such, it is a metaphysical principle telling us what properties confer rightness on actions. Do you want to say as well that it tells us what to do? Maybe you’ve been saying that, too. If so, I missed it.

  37. Mike A.,
    Isn’t the property of being morally right just the property of being that which morally ought to be performed?
    Perhaps, if you think that “Utilitarianism entails that truth is not the aim of belief (nor a norm of assertion),” you should explain what you take utilitarianism to be and explain how it entails that truth is not the aim of belief. My point was mainly that I don’t see the entailment that you claim there is.

  38. Isn’t the property of being morally right just the property of being that which morally ought to be performed?
    If you’re happy with that, then suppose that A is some assertion and A has the property of being morally right.
    1. ∴ A ought morally to be performed. From the claim above.
    2. A is right iff A maximizes overall utility. From AU.
    3. ∴ A maximizes overall utility. From 1, 2, and the claim above.
    4. It is not in general true that A maximizes overall utility iff A is true.
    5. ∴ It is not in general true that A ought to be performed iff A is true. From 1–4.
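    The steps can be put in one schema (my shorthand, not anything from the thread itself: R(A) for ‘A is right’, O(A) for ‘A ought morally to be performed’, U(A) for ‘A maximizes overall utility’, T(A) for ‘A is true’):

```latex
% The claim above:  \forall A\,\big(R(A) \leftrightarrow O(A)\big)
% AU:               \forall A\,\big(R(A) \leftrightarrow U(A)\big)
% Hence:            \forall A\,\big(O(A) \leftrightarrow U(A)\big)   % chaining the two biconditionals
% (4):              \neg\forall A\,\big(U(A) \leftrightarrow T(A)\big)
% (5):              \neg\forall A\,\big(O(A) \leftrightarrow T(A)\big)  % from the two lines above
```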
    It is (5) that I claim is inconsistent with the norm of assertion that roughly you ought to assert A only if A is true. Which of (1)-(5) is mistaken?

  39. Doug,
    thanks, that’s helpful. I can imagine a world where utilitarians are instrumentally irrational. But in the relevant sense these people are doing what they believe they ought to do. They believe they ought to phi, that by psi-ing they would phi, and that they therefore ought to psi. And they do. It’s just that the instrumental belief is false. I don’t see how this would make all true moral theories in that world esoteric.
    This:
    ‘My claim was that AU says nothing about what our beliefs, intentions, goals, motives, or any other of our mental states should be.’
    is something I never understood. If a theory says what we ought to do, surely this implies what we ought to intend to do, and so on. Namely, to do what we ought to do.
    I do understand that many people see normative ethics as a theoretical pursuit to discover the normative reality. I used to think this. But now I am hesitating. The reason is that we are interested in goodness, wrongness, oughts, and so on. These are not theoretical notions scientifically defined but rather everyday notions that get their meaning through how we use them in our moral community. It would be odd if they came to denote, through our use, something that was hidden from us and that could be discovered by technical philosophical investigation. I just cannot imagine that there is a very complicated answer to what things are wrong, good, and so on that could come as a surprise to us, that would be difficult to comprehend, and which we could ignore in our everyday life. How could we have come to mean that by our use of these ordinary notions? I’m worried that I’m getting into too much Wittgenstein…

  40. Mike A.,
    5. ∴ It is not in general true that A ought to be performed iff A is true. From 1–4.
    What follows from 1, 2, 3, and 4 is:
    5*: It is not in general true that A ought morally to be performed if and only if A is true.
    5* is inconsistent with the norm of assertion if and only if the norm of assertion is that you ought morally to assert A only if A is true.
    I didn’t think that this is what the norm of assertion was. But if it is, then I think that it is quite obviously false, and so we shouldn’t worry about a theory just because it is inconsistent with it.

  41. Jussi,
    I don’t see how this would make all true moral theories in that world esoteric.
    Maybe I’m not being clear. Suppose the theory in question is T. Suppose that all human beings believe T and so try to do what T says, which is to try to E (e.g., refrain from violating others’ rights, maximize utility, or whatever). Further suppose that for some reason (they have false means-ends beliefs, they’re incompetent, or an evil demon is messing with them) when they try to E, they always end up doing the opposite of E (e.g., violating others’ rights). If, however, they believed that they ought to try to ~E (e.g., violate others’ rights), they would end up E-ing (e.g., refraining from violating others’ rights). So wouldn’t T be an esoteric theory on Mike’s definition? That is, isn’t it the case that it would be better that T not be generally believed or accepted?

  42. Jussi,
    If a theory says what we ought to do, surely this implies what we ought to intend to do, and so on.
    I’ll accept that if a moral theory says that some act token, A1, is the one, of all my alternatives, that I morally ought to perform, then I ought to intend to do A1. But the following seems false: from both the fact that a moral theory says that I ought to maximize aggregate utility and the fact that my performing A1 is what would maximize aggregate utility, it follows that when I perform A1 I ought to have the intention of maximizing aggregate utility.

  43. Doug,
    thanks. The case is now becoming clearer, but it’s also a different one from the one we started with. Now there’s us as we are, and them, who cannot act as they believe they should. We have no reason not to believe T, and thus it is not esoteric for us. But place yourself in their situation. Could you do normative ethics if you knew that whatever you came to believe, you could not do? It would be a funny situation. In fact, in that case, if we assume ought implies can, we could ensure that any random possible normative theory is false just by believing it. No theory could be true for us. Why attempt then to find out which one is?
    Right about the last comment. But already the first admission implies that
    ‘My claim was that AU says nothing about what our beliefs, intentions, goals, motives, or any other of our mental states should be.’
    is false. I take it that AU allegedly implies, for every situation and for all of us, a token act we ought to do. So, in each case it also allegedly implies what we ought to intend to do. It is probably unable to tell us how to go on figuring out what is the right thing to intend to do in each situation.

  44. Very interesting discussion!
    I think the problem with a radically esoteric moral theory is that it becomes impossible to do the right thing because it is right. For if you do in fact perform the right act, you won’t know that it is right.
    I am not Kantian enough to think that ONLY actions performed out of a sense of duty have moral worth. But I do think it must regularly be possible to do the right thing in part because you see that it is right. And radically esoteric moral theories, if I understand them, deny that possibility.

    You cannot observe both the norm of assertion and the utilitarian moral norm. Right? So the recommendation of one norm is not consistent with the recommendation of the other norm. It makes no difference to me that you happen to believe this norm of assertion is false. The problem remains that utilitarians cannot be trusted to make true assertions. That is bad news, especially in coordinating behavior.

  46. Thanks Jussi. Yes, I was admitting that my claim was false. But all I need for the point that I was trying to make is:
    “But for the cases where AU requires that we intend to do some morally required act token, AU says nothing about what our beliefs, intentions, goals, motives, or any other of our mental states should be.”
    Regarding this: “if we assume ought implies can, we could ensure that any random possible normative theories is false just by believing it. No theory could be true for us.”
    I’m not sure that you have the relevant sense of ‘can’ in mind here. I’m also not sure what it means for a theory to be true or false for us. I only understand “true” as a monadic predicate. But I get what you’re driving at, and I’ll think about it. Thanks.

  47. Mike A.,
    I don’t think any moral (or nonmoral) person could be trusted to always make true assertions, and I suspect this poses no special problem for coordinating behavior. I suspect that regardless of which moral theory is correct, we will go on coordinating behavior just as well as we currently do. In fact, we already are.

  48. Jussi says:

    There are two conceptions of what ethical theory is in the business of doing. The first sees the project as a quasi-science that hopefully would one day reveal the true nature of the world. The interest here is theoretical. Ordinary agents can, and are even encouraged to, ignore the results in their practical decisions. The second sees the project as a practical one. We want to form plans of how to act, and for this reason we ask first how we ought to act. Normative theory then is just a more systematic way of going about solving this problem.

    I think this is exactly right, and seems to me to mirror what Michael Smith says in The Moral Problem, namely, that the two central features of a moral theory are its objectivity and its practicality. Now an interesting question concerns the proper relation that exists between these two features. If we describe the first feature as the ‘standard of rightness’ (or the set of moral truth-conditions) and the second one as the ‘decision procedure’, we could ask the following meta-ethical question: What is the proper relation between standards of rightness and decision procedures? I can think of two answers, that alternatively privilege one or the other feature. We might think that the relation is conceptual, and that it is bridged by a principle like John Broome’s krasia, which says, roughly, that rationality requires of me that, if I believe that I ought to do something, I intend to do that thing. Alternatively, we might think that the relationship is normative, and that the correct decision procedure is the one that ought to be adopted, according to the truth conditions set by the standard of rightness.
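    Broome’s principle, on the wide-scope reading just described, can be sketched as follows (my paraphrase, not Broome’s exact formulation):

```latex
% Krasia (wide scope): rationality requires of N that
%   \big(\, N \text{ believes that } N \text{ ought to } \varphi \;\rightarrow\; N \text{ intends to } \varphi \,\big)
% The requirement governs the whole conditional, so no requirement to intend
% \varphi is detachable from the belief alone.
```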
    Like Prof. Portmore, I consider that the business of ethics is the “quasi-scientific” one of providing us with the structure of the normative world. But the problem with this view—and concerns like this might have led Jussi to reconsider his initial sympathy for it—is that it seems to undermine what we take to be the intuitively plausible requirements that rationality imposes upon us. In this picture of the nature of normativity, whether we ought to believe this or that has nothing to do with our prior beliefs or intentions, but simply with the causal role those beliefs have in promoting whatever it is that our normative theory says ought to be promoted. There is something hard to digest in this implication.

    Sorry Doug. I put the ‘for us’ in the wrong place. It was supposed to relate to the theory and not to truth. That is, in that case the theory, understood as what determines the moral status of our acts, could not be true, full stop. I fully agree about the truth bit. I’m not sure what sense of ‘can’ I had in mind. I just imagined believing a theory, trying to follow it, and, as the story was told, this never coming to happen.
    Eric,
    I like your point. I know there are people who think that it is better to do the right thing because of the features that make it right and not because it is right. But even if this were right, surely it would have to at least make sense to act from the less-than-admirable motivation that it is the right thing to do. I’m not sure, though, that the esoteric views have to deny this. They usually say that it is better that we do not believe the theory – not that we cannot do so. If we can, then we can be motivated by the theory even though we should not be.

  50. I have a lot of sympathy with Mike A.’s ethics-of-assertion argument.
    It seems to me that we do not come up with moral theories on our own. We do this collectively, with rational debate. So now imagine that I have been in moral discussions with Eso and I know that Eso believes that esotericism is true. I have some pressing problem and I ask Eso what to do. He tells me, “You should do X.”
    Now, how am I supposed to take this? Eso has probably not told me what he believes to be true; he has probably told me what he thinks will engender the most moral results if I believe it, and by hypothesis this is not the truth about what I should do. So I will not, if I am wise, take Eso’s utterance as aiming at the truth. In fact, I might as well begin parsing Eso’s utterances of the form “You should do X” in different terms. I will either think of them as meaning “It would be best for you to believe that you should do X” or as some non-assertoric speech act, like “Do X!” So if we endorse esotericism, or believe others do, our normal patterns of interaction with them break down.
    It gets worse I think. For now recall that I have been in theoretical moral debates with Eso. But has he been telling me what he really thinks? Or only what he thinks would be best if I believe? How am I to interpret his moral-theoretic discourse now? If he has convinced me of esotericism, but we are still debating first-order questions, how can he take my utterances? Not as truthful assertions, I think.
    The general lesson: as soon as we have reason to think that others are not following the assertion norm, we can no longer reason with them, morally or otherwise. And since we have to reason morally together–no Lone Moral Rangers–we have to all follow the assertion norm. And if that’s right, we can’t endorse esotericism and remain moral reasoners.

  51. I don’t think any moral (or nonmoral) person could be trusted to always make true assertions, and I suspect this poses no special problem for coordinating behavior. I suspect that regardless of which moral theory is correct, we will go on coordinating behavior just as well as we currently do. In fact, we already are.
    Oh, certainly. None of these highly idealized problems for utilitarianism have the slightest practical significance. But what else is new? I’ve never thought or said that they do. To be fair, I also never said that the problem is that utilitarianism entails that persons could not be trusted to always make true assertions. Of course that is no special problem for utilitarianism. The problem is that ideal utilitarians (i.e., moral agents that unfailingly follow the recommendations of the utilitarian principle–i.e., always seek to maximize overall utility) can be expected to make utility-maximizing assertions, not truthful assertions. That is really the long and short of it. And (as Hodgson rightly says) this does generate interesting coordination problems in lots of (again, idealized) cases. Of course this is not a problem for actual utilitarians since none of them unfailingly (or even close to unfailingly) follows the principle. Anyway, there it is.

  52. I also think there’s a lot to be said for Jussi’s Davidsonian gambit. Is it conceivable that there is a group of rational agents who systematically fail to do what they believe they ought to do? I think not. Note that a society of people with radically false means-end beliefs (hard as that is to conceive) won’t quite cut it. For suppose I believe I ought to do X, and falsely believe I can do X by Y-ing, and so I ineffectually (for doing X) do Y. Still, I will have believed that I ought to take all necessary and some sufficient means to X, and that Y was necessary or sufficient, and hence that I ought to do Y-or-something-just-as-good. And I have done that. So I have not failed entirely to do what I believe I ought to do.
    No, we have to conceive of a society of rational agents who are radically incompetent. They are constantly flubbing their intentional actions. Consequently, they would ignore what anyone said about what they were going to do; and since speech acts are intentional acts, there is no reason to think that people could communicate thoughts at all in this society. Their practical reasoning would have no predictable effect on their actions, and in that condition they would probably cease practical reasoning altogether. They cannot teach their words to children, since they always ostend the wrong things when teaching, and they always perform the wrong actions when they go to demonstrate skills. I think we lose our grip on any conception of rational agency in this situation.
    The general lesson: practical reasoning, the kind that is effective in generating action, cannot diverge too far from theoretical reasoning about what one ought to do, on pain of failing to be reasoning at all. Theoretical moral reasoning has to be able to guide practical action if the agents are to be rational. But if that is so, then no rational agent can seriously embrace esotericism in a moral theory.
    Here’s a more down-to-earth example. When faced with a moral issue, how is the philosopher who embraces esotericism to think? “What I ought to do is [ordinary X]…, but what I really ought to do is [esoteric Y]…”? This can’t go on in one head. If you think a moral theory is both true and esoteric, do you act on it when the time comes to act? If you do, you’re acting irrationally by your own lights (against the esoteric part); if you don’t, you’re also acting irrationally (against the truth part). I conclude that no one ought rationally to believe a moral theory is both true and esoteric.

  53. Thanks Heath. I’m with you there. If only I could put things in such a nice way.
    I think I want to raise a further worry about esoteric theories. It is based on a contestable premise which I’m quite fond of. I’m quite convinced that there is an inferential connection between wrongness, rightness, and oughts, on the one hand, and our reactive attitudes of blaming, resenting, praising others, and so on, on the other. I think that any plausible theory of what things are wrong, right, and ought-to-be-done should be able to fit this connection. I wonder whether esoteric views can do this.
    Consider a case where, according to an esoteric theory, acts of kind X are wrong, but no one should believe that they are wrong qua being acts of that kind. If we accept the inferential connection, then we think that if an act is of kind X, then doing acts of that kind is something one is to be blamed for. But that sounds a bit unfair in this case. We already said that agents should not think that acts of that kind are wrong qua being acts of that kind. How can they then be required to make sure that they are not doing acts of that kind? And if they cannot make sure that they do not do acts of kind X, how could they be blameworthy if they happen to act in that way? The esotericist could try to deny the inferential connection and say that wrong acts need not imply blameworthiness, but that strikes me as changing the subject of what we are talking about.

  54. Poking my nose back in … Echoing Heath’s remarks about moral theories being such as to guide our practical reasoning: Is our dispute about whether esotericism is a legitimate objection to a moral theory a proxy for the dispute about moral reasons internalism? I.e., if you endorse [borrowing Darwall’s terminology]:
    (MRI) If S morally ought to do A, then necessarily S has reason to do A
    aren’t you also likely to think that esoteric theories should be rejected because they imply the falsity of MRI? To wit, an esoteric theory does not advocate making available to agents the moral reasons they have for doing what they morally ought to do, since esoteric theories do not recommend that (in general) agents believe the theory, thus denying the consequent of MRI. But esoteric theories suggest that agents morally ought to do what the theory recommends, thus affirming the antecedent of MRI.
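    The modus tollens here can be laid out schematically (my shorthand, not Darwall’s notation: O_m for ‘morally ought’, R for ‘has reason’):

```latex
% MRI:  \forall S\,\forall A\,\big(\, O_m(S,A) \rightarrow \Box R(S,A) \,\big)
% An esoteric theory affirms the antecedent: agents morally ought to do what it recommends.
% By not recommending general belief in the theory, it (on this reading) denies the
% consequent:  \neg\Box R(S,A).
% The conjunction  O_m(S,A) \wedge \neg\Box R(S,A)  is a counterexample to MRI,
% so the theory implies  \neg\mathrm{MRI}.
```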

  55. Michael,
    I’m not sure. MRI, as you have written it down, is what people usually nowadays call rationalism about morality. Even though it was earlier confused with internalist claims (because reasons were thought to be mental states), its relation to moral reasons internalism is not clear at all. In any case, an esotericist can accept MRI, if she accepts esotericism about reasons as well.
    But, you are right that esotericism about reasons might be a commitment to externalist moral reasons in Williams’s sense. His internalism was the idea that
    ‘An agent has a reason to phi iff there is a sound deliberative route from her prior motivations to being motivated to phi.’
    So, the esotericist might think that all agents have reasons to act in certain ways even if they should not have motivations that would guarantee the sound deliberative route from the pre-existing motivations to acting in the required ways. That would be a rejection of Williams’s internalism. But, on the other hand, I don’t think that those who want to reject esotericism on the grounds that esoteric theories cannot provide practical guidance ground their objection on the internalist motivational claims, but rather on what kind of questions we want our normative theories to answer.

  56. Jussi,
    I agree with you that esoteric theories usually say that it is better that we do not believe the theory – not that we cannot do so. And we can be motivated by the theory too, as you say.
    But the problem is that if we are indeed motivated by the theory, we won’t wind up doing what the theory recommends. That’s exactly why the theory has to be esoteric. This is especially true for theories like AU, devoted to singling out only one action as right.
    This leaves two (well, three) possibilities:
    1. The agent is motivated by the theory, and, since the theory is esoteric, acts wrongly.
    2. The agent is not motivated by the theory, and, as luck would have it, acts rightly.
    3. The agent is not motivated by the theory, and acts wrongly.
So in no case can the agent be motivated by the theory and act successfully according to the theory’s own criterion. In other words, you cannot do the right thing because it’s right.
    I agree that one need not always be motivated for the right reason in order to do the right thing. But for this to be systematically impossible seems bizarre.

  57. Eric,
    again, that seems right. But, as always, things get complicated with certain theories. What I have in mind are the esoteric virtue ethicists (yes – I think about them a lot ;-)). So, their theory of right for instance could be:
X is right in C iff a virtuous agent would do X in C.
Of course, the virtuous agent does not believe this about rightness. She believes that honesty, kindness, and so on, make acts right, and is pulled to act in these ways through her virtuous character traits. This makes the view esoteric. According to the virtue ethicist, we should not believe her theory of rightness but rather become virtuous and acquire the correct motivational dispositions, which are not targeted at rightness per se. The theory even gives advice on how to do this – hang out with the virtuous ones.
In this case, you could believe the virtue theory and be motivated to do what it says is right because those things are right. This, however, would be less admirable. The other option would be to not be motivated by the theory but rather to follow its advice and become a virtuous person. If you do that, then it is still not down to luck that you do the right things. There is a systematic explanation for why you would do the right things without doing them because they are right, where rightness is what the theory says it is.

  58. There are some things that one does best if one does not consciously aim to do them. Sleeping is my favourite example. Trying to get to sleep is not a good way to get to sleep. Does it follow that you ought not to sleep? No. Perhaps you ought not to try to get to sleep. But sleeping is okay.
    Now suppose maximising utility is like sleeping in this respect: if you try to maximise utility, you won’t maximise utility. Does it follow that you ought not to maximise utility? No. As we saw in the case of sleeping, the fact that trying to do something is self-defeating does not show that you ought not to do that thing.

  59. Campbell,
I’m not sure what the point of the analogy is. No one has claimed that it is an argument against esoteric AU that aiming at maximizing has worse results than not aiming at it. That observation is just what leads utilitarians to form the esoteric view whose problems we have discussed so far.
I’m not even sure there is an analogy. In the sleeping case, you believe you ought to sleep. That’s why you do the various actions that you have found contribute to your sleeping: you brush your teeth, go to bed, turn the lights off, and so on. In the esoteric AU case, at the level of thought, nothing like this is supposed to be going on.

  60. Campbell, you write,
    As we saw in the case of sleeping, the fact that trying to do something is self-defeating does not show that you ought not to do that thing
This does sound right. But maybe it’s easier to follow this way.
    As we saw in the case of sleeping, the fact that trying to do something directly is self-defeating does not show that you ought not to do that thing indirectly.
    I don’t know, but that seems like what you’re saying.

  61. Jussi
    Perhaps I misunderstood what was meant by ‘esoteric’. As I was thinking of it, a moral theory M is esoteric if using M as a decision procedure makes it less likely that one will comply with M. I think at least some of the discussion above has been about esotericity in this sense.
    But it seems you want to say that M is esoteric only if believing M makes it less likely that one will comply with M. I doubt that utilitarianism is esoteric in this stronger sense.

  62. Campbell,
    here’s how Michael defined esoteric theories in the beginning:
    ‘A theory T is esoteric iff T is true (or correct, or superior to its rivals, etc.), but it is better that T not be generally believed or accepted.’
I would make a slight change to this description and put it like this:
A theory T is esoteric iff, if T were true, this would imply that we should not believe T.
I’m not sure this fits either of the options you give. But it is easy to see how some forms of AU might end up being esoteric in this way. Suppose T said that one ought to maximize utility. It might turn out that if people believed that they ought to maximize utility this would not maximize utility. And, therefore, they should not believe that they ought to maximize utility, by the theory’s own lights.

  63. It might turn out that if people believed that they ought to maximize utility this would not maximize utility.
    How might this happen? And how is the possibility of this happening sufficient to show that T (utilitarianism) implies we should not believe T? (I guess the second is a question about what you mean by ‘imply’.)

Well, you might think that rational agents have a disposition to try to do what they think they ought to do. This is what a lot of people think rationality requires of us. In fact, if someone is not trying to bring about what they say they think they ought to do, we often start to question whether they hold the normative belief in the first place.
    If this was the case, then rational agents would be disposed to aim at maximizing utility and, as you wrote, this could have unfortunate consequences in terms of what levels of utility they achieve.
No, this does not imply that all forms of utilitarianism therefore provide reasons not to believe the view. But, if you think that believing is something that you do, and something that should thus also be morally assessed by the utilitarian criterion, then the theory would lead to a prescription not to believe itself.

  65. Perhaps we should see this a bit like a kind of Pascal’s wager case. Having the true belief has epistemic virtues. But there can be cases in which achieving what we think we ought to achieve involves avoiding thinking that that is our real goal, perhaps even persuading ourselves that something else is our real goal. To follow up on Campbell’s case, imagine that thinking that one needs to stay awake causes one to fall asleep. Then if one really wants to achieve sleep, one has a reason to get oneself to think one needs to stay awake.

  66. Campbell writes:
    “There are some things that one does best if one does not consciously aim to do them. Sleeping is my favourite example. Trying to get to sleep is not a good way to get to sleep. Does it follow that you ought not to sleep? No. Perhaps you ought not to try to get to sleep. But sleeping is okay.”
    I agree that sleeping is something that you can’t consciously aim to do. But this is because sleeping isn’t an action at all. As Austin noted, not every verb that can be grammatically inserted in the “John Vs” schema is an action. And I take it as a conceptual point that a theory of right governs only actions, not bodily happenings like sleeping, digesting, circulating, salivating.
    Is “maximizing utility” an action? Good question!

  67. Jussi,
    I too think a lot about esoteric virtue theories! Even on the kind of virtue theory you mention, it seems possible to do the right thing because it is right. This motivation isn’t ideal from the theory’s perspective, as you rightly note. But successfully acting from such a motive is still possible. This is because you do not need to be virtuous in order to do the right thing, as Aristotle famously notes. (You do have to be virtuous to act virtuously, however.)
Radically self-effacing theories of the right – theories that say, for example, that in order to maximize happiness (and thus do what’s right), you should believe that AU is false and that Kantianism is true, and be motivated by the Formula of Law – imply that you cannot do the right thing because it is right. Such theories seem more esoteric than the kind of virtue theories you have in mind.
    Because virtue theories of the right have this problem with esotericity (is that a word?), I think that maybe Anscombe was right that virtue theorists should just give up on the notion of the “right” or the “moral ought”.

  68. Well, you might think that rational agents have a disposition to try to do what they think they ought to do.
    That can’t be right. Suppose you believe that you ought to get to sleep, but that your trying to do so would be self-defeating: if you try, you won’t. Surely the rational thing would be not to try.
    Mike A’s distinction between direct and indirect trying may be helpful here. It’s plausible that directly trying to maximise utility is self-defeating. But it’s less plausible that indirectly trying is self-defeating. So a rational utilitarian might try indirectly rather than directly.

  69. Campbell,
I don’t see the argument against the psychological constitution of rational agents there. All the rationality claim about not trying amounts to is a judgment of instrumental rationality: the best way to achieve what you are aiming at is not to consciously think about it. This is not to say that you are not trying when you prepare to go to sleep and make the circumstances favourable.
    I guess the question is do you really want to deny the requirement of rationality:
Rationality requires [if one believes one ought to do X, then one intends to do X].
If we deny this, and also the psychological ability most agents have to conform to it, then any normative talk becomes quite pointless, as it would have no bearing at all on our actions – which are what the talk seems to be about in the first place.
