I want to discuss a problem for ethical intuitionism and an argument that seems to show that ethical intuitionists have to embrace either skepticism or naturalism. It's an interesting argument, and I'm not entirely convinced that the response I set out below adequately addresses the worries that motivate it, but I thought I'd give it a shot. The argument from cosmic coincidence is taken from Matthew Bedke's Pacific Phil Quarterly paper (here or here if you can't get library access). Before we get to the argument, I should say that the view I want to defend is that it's possible to have non-inferential moral knowledge based on intuition alone even if we have no independent grounds for thinking that our intuitions are reliable (provided, of course, that there aren't reasons to think intuition is unreliable that we ought to take account of). The argument seems to show that if ethical properties are non-natural properties, intuitionists have to say that we cannot have moral knowledge. And once we recognize that we cannot have moral knowledge, we cannot have justified moral belief. (Maybe you can have justified belief without knowledge, but I don't think you can justifiably believe what you have good reason to think you aren't in a position to know.) So, given some assumptions about the metaphysics of moral properties, the argument can lend some support to the skeptical theses that it's not possible to have moral knowledge (ST1) and that it's not possible to have justified moral judgment (ST2).
Sometimes we believe we ought to do things. Sometimes we then do them. I'd love to know how the normative status of normative judgments (which I'm taking to be beliefs about what ought to be done all things considered) is related to the normative status of the things we do. I think that this is right: if your belief that you ought to Φ is justified, then Φ-ing is justified. (If you ought to believe that you ought to Φ, you really ought to Φ.) I've written up a short piece attacking a view (a.k.a. 'The View') that uses some principles I like but uses them for nefarious purposes (attacks on epistemic purism, attacks on views of the ontology of practical reasons that identify them with states of the world or worldly facts). I've attacked The View before (in 'The Myth of the False, Justified Belief' (here)), but my argument rested on intuitions about the moral significance of facts that an agent is non-culpably ignorant of, intuitions that some people think are dodgy. (Fwiw, I've found much better rhetoric for getting people to have the right intuitions than I used in that paper.) It can't be that facts you're non-culpably ignorant of determine what your obligation is; if you fail to take account of them, that's just bad luck. Or something like that. I'll try something different here and try to hit The View where it hurts. (Because I know the targets and we seem to be on reasonably friendly terms, I'm a bit more glib here than I would be otherwise. Since they seem to be rather glib in attacking the views I cherish, I hope they'll forgive me; it's clearly not intended to be disrespectful.)
First things first. I want to thank Doug, Dave, Dan, and Josh for inviting me to come on as a contributor. I’m interested in connections between reasons for action and belief. For a while, I’ve been content to argue that at a certain high level of abstraction, we ought to expect similarities between reasons for action and belief. So, for example, if we can show that reasons for action belong in a certain ontological category, it would be surprising if the right account of reasons for belief located those kinds of reasons in an entirely different ontological category. If there’s a gap between reasons and rationality on the practical side, it would be surprising if there were no similar gap on the theoretical side. (Of course, if there’s no gap between reasons and rationality on the theoretical side, we ought to reconsider the suggestion that there’s a gap on the practical side.) You get the idea.
What justification is there for thinking that claims about reasons for action justify claims about reasons for belief? I suppose you might say that the arguments that (purport to) show that there’s a gap between reasons and rationality on the practical side show that there’s nothing in the concepts of normative reason or rationality that requires them to go hand in hand. If someone wishes to defend the view that there’s no gap between reasons and rationality on the theoretical side, the onus would be on them. To paraphrase a remark of John Gibbons’ from a forthcoming paper of his, there’s a built-in explanation of the similarities, since both reasons for belief and reasons for action are reasons.
I’m interested to see if we can establish something stronger than just the claim that there’s a burden of proof on those who wish to insist that reasons for action and belief differ in important ways. I’ve been kicking around an idea for the past few months and thought I’d see what sort of reaction it would receive here. Consider:
Link: If you oughtn’t φ, you oughtn’t believe that you ought to φ or that you may φ.