Upcoming Ethics Review Forum: McGrath’s “Moral Knowledge”, Reviewed by Wiland

We’re pleased to announce our next Ethics review forum on Sarah McGrath’s Moral Knowledge (OUP 2020), reviewed by Eric Wiland. The forum will start on Friday, January 22nd. Save your comments until then—we’ll be posting a new thread once it gets started.

Click here to read more about the book, and click here to read Wiland’s review (both are also excerpted below). Of course, you’re welcome to participate in the forum even if you haven’t read either.

Book Blurb:

Compared to other kinds of knowledge, how fragile is our knowledge of morality? Does knowledge of the difference between right and wrong fundamentally differ from knowledge of other kinds, in that it cannot be forgotten? What counts as reliable evidence for fundamental moral convictions? And what are the associated problems of using testimony as a source of moral knowledge? Sarah McGrath provides novel answers to these questions and many others, as she investigates the possibilities, sources, and characteristic vulnerabilities of moral knowledge. She also considers whether there is anything wrong with simply outsourcing moral questions to a moral expert, and evaluates the strengths and weaknesses of the method of reflective equilibrium as an account of how we make up our minds about moral questions. Ultimately, McGrath concludes that moral knowledge can be acquired in any of the ways in which we acquire ordinary empirical knowledge. Our efforts to acquire and preserve such knowledge, she argues, are subject to frustration in all of the same ways that our efforts to acquire and preserve ordinary empirical knowledge are.


Excerpt from Wiland’s Review:

[I]n Moral Knowledge, Sarah McGrath clearly and powerfully argues that we can acquire moral knowledge in all the ways we come by ordinary empirical knowledge. Just as I can know that it’s now raining by perception (seeing and feeling the raindrops), by inference (the people outside are using umbrellas), or by testimony (my mother outside is texting me about the weather), so too I can gain moral knowledge by any of these channels. […] […] The book has an introductory chapter, a concluding chapter, and, in between, four substantive chapters each of which is devoted to one particular subtopic: the method of reflective equilibrium, testimony and expertise, observation and experience, and losing moral knowledge.

The chapter on the method of reflective equilibrium (MRE) is the best discussion of the topic I know about. McGrath is aptly pessimistic about its powers. She argues that on its most defensible interpretation, MRE takes for granted that we typically already have some moral knowledge, knowledge that the method hopes to extend by making our moral views more coherent. But this means that we don’t get all our moral knowledge from MRE. Much like testimony […], MRE can extend only what already exists.

[…] McGrath argues that when we reflect upon our moral convictions, we should prioritize neither our general moral views nor our lower level moral judgments. […]  If we weren’t justified in being confident about one level, we wouldn’t be justified in being confident about the other level; and if confidence in neither were justified, then MRE couldn’t take us anywhere good. Fortunately, we do already have some moral knowledge, and so MRE can extend it modestly. MRE, however, might be best at delivering not moral knowledge, but moral understanding. When we align our general moral views and our particular moral judgments, we better grasp why those particular moral judgments are true. The more general principles can explain the facts captured by our particular moral views, and grasping these explanations is one form moral understanding takes.

If the method of reflective equilibrium is better at delivering moral understanding than moral knowledge, the opposite can be said, McGrath argues, about the method of testimony. […]

Although testimony is indeed a source of moral knowledge, McGrath argues that the epistemologically interesting issues concern not moral testimony per se but the broader issue of moral deference. The putative problem is that if you hold a moral view because you’ve deferred to someone else, then you typically don’t understand why that view is true. This is problematic for at least two reasons. First, when you judge something to be wrong, you are expected to be able to cite facts in virtue of which it is wrong. But if you have completely deferred to the view of another, then, she argues, you won’t be able to meet this expectation. (One might wonder, however, whether these facts, too, could be learned by testimony.) Second, it’s an ideal of moral agency to be able to do the right thing for the right reason; but if you know only what’s right, and don’t understand why the right thing is right, then you won’t be able to do the right thing for the reasons that make it right. So, acting on the basis of moral deference is, at best, second-best.

I’ll […] briefly flag a couple of worries about this criticism of moral deference. As I’ve argued elsewhere, in typical responsible cases of (adult-to-adult) moral deference, the hearer does grasp the various operative reasons (or goods and bads) at stake, but defers to a speaker about how to weigh them up. For example, if you are a minimally competent adult, you already know that it’s pro tanto bad to allow five people to die and that it’s pro tanto bad to kill one person, yet you may remain unsure whether it’s wrong to turn the trolley, or to kill a healthy patient for their organs. Thus you might defer to someone in a better position to know such things. But even if you do so defer, 1) you could still cite facts in virtue of which one of the options is wrong (“That’s allowing five people to die!”), and 2) you could still do the right thing for the right reason (“I’m turning the trolley to save five people.”). So I think McGrath doesn’t completely show that moral deference is problematic in the ways she describes. […]

[…] [I]n the most ambitious chapter, McGrath argues that experience and observation can contribute to moral knowledge in the very ways they contribute to ordinary knowledge. One way experience contributes to moral knowledge is by enabling us to entertain the relevant contents: you can know that, say, murder is wrong only if you have the concept murder, and experience can enable you to grasp that concept. Experience can also trigger moral knowledge. As a young man, Einstein was an absolute pacifist, but witnessing the Nazi era led him to conclude that violence could be just. […]

More ambitiously, McGrath argues that observation and experience can confirm and disconfirm one’s moral views, even those views that are also knowable a priori. Non-moral observation can disconfirm one’s moral views because it can make one’s overall view less coherent, in such a way that the most reasonable way to restore coherence is to give up one’s original moral view. Likewise, when non-moral observation makes one’s overall view more coherent, it thereby tends to confirm the moral views thus implicated.

[…] Suppose Ted initially believes both 1) that same-sex marriage is intrinsically wrong and shouldn’t be condoned, and 2) that social recognition of same-sex marriages would have bad consequences, including leading to an increase in the divorce rate. Regarding this second belief: even though Ted thinks that the wrongness of same-sex marriage does not stem from any bad consequences flowing from its recognition, he still believes it would lead to bad consequences at least in part because it’s (already) intrinsically wrong.

Now suppose that at some later time, Ted observes that the legal recognition and social acceptance of same-sex marriage does not lead to an increase in the divorce rate […] Ted’s views are now less coherent. He could adjust his views in various ways to make them coherent again, but suppose he retains the view that recognition of intrinsically wrong practices leads to bad consequences, while decreasing his confidence that same-sex marriage is wrong. This adjustment is rational, and it shows how Ted’s original view that same-sex marriage is wrong may be disconfirmed by observation.

Let me flag a worry […]. Ted’s moral view is disconfirmed by observation only because he is confident in the complex conditional: if same-sex marriage is wrong, then if same-sex marriage becomes socially accepted, the divorce rate will rise. Observation shows him that the main consequent of that conditional is false. So if Ted retains his confidence in the conditional, he will need to lower his confidence in the main antecedent (viz., same-sex marriage is wrong). This is how observation can disconfirm a moral view.
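To make the inferential structure explicit, here is a minimal formalization of the step Wiland describes (the letters W, A, and D are my own labels for the propositions, not notation from the book or the review):

\[
W \rightarrow (A \rightarrow D), \qquad A \wedge \neg D \;\;\vdash\;\; \neg W
\]

where \(W\) = same-sex marriage is wrong, \(A\) = same-sex marriage is socially accepted, and \(D\) = the divorce rate rises. Since \(A \wedge \neg D\) entails \(\neg(A \rightarrow D)\), modus tollens on the outer conditional yields \(\neg W\); short of rejecting \(W\) outright, Ted must at least lower his confidence in it.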

But this structure is available to any domain, not just morality. Suppose Ted also initially believes, contrary to Euclid’s Theorem, that there is a largest prime number. He also holds that if there is a largest prime number, then if Euclid’s Theorem and other false mathematical views become widely held, technological advancement will decline. Suppose, however, he observes that as more […] schoolchildren are learning Euclid’s Theorem, technology continues to advance […]. Ted’s views are now less coherent. […] [S]uppose he retains the view that widespread mathematical ignorance hurts technological development, but decreases his confidence that there is a greatest prime number. This adjustment is rational, and it shows how Ted’s original view that there is a largest prime number may be disconfirmed by empirical observation.

But Euclid’s Theorem, of all things, isn’t confirmable empirically. Proof seems necessary. […]

Moving on, the final substantive (and most original) chapter discusses the question whether one can lose moral knowledge. Gilbert Ryle famously claimed it was ridiculous or absurd to say “I’ve forgotten the difference between right and wrong.” If correct, this might seem to show that moral knowledge is not like knowledge gained by expertise or ordinary empirical knowledge, which can be forgotten. […]

McGrath [replies by] arguing that while you can indeed lose moral knowledge, doing so corrupts you in a way that makes it difficult and awkward to recognize that you’ve been so corrupted. […]  What makes it absurd to say that you’ve forgotten the difference between right and wrong is not that this proposition can’t be true, but that when it is true, you’re not in a good position to recognize its truth. So, while ceasing to care about the right thing is indeed one way to lose moral knowledge, McGrath aptly argues that there could still be other ways to do so, including by forgetting things.

[…]

Space doesn’t permit me to summarize McGrath’s critical discussion of Ronald Dworkin’s view that our moral beliefs are relatively immune to being undermined by discoveries about their etiologies, but her reply to Dworkin is one of the most compelling arguments of the entire book, so I’ll leave it as a teaser and let you check it out for yourself.