Today we start a two-part series of posts about different papers from a brand new book: An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind, edited by Erin Beeghly and Alex Madva. The posts will be by Rima Basu (today) and Lacey Davidson and Nancy McHugh (Friday, July 24).
Today Basu will offer a brief post introducing her paper, “The Specter of Normative Conflict: Does Fairness Require Inaccuracy?” (Routledge has kindly made pre-prints of all the papers that will be discussed in this series available for free.)
Here now is Basu:
Let’s start with an obvious point: we form beliefs about other people all the time. I believe that at a busy intersection no fewer than three drivers will turn left when the light turns red. Why? Because I see it happen all the time. On similar grounds, I believe that when it rains half of my students won’t show up for class. Why? Because in my experience no one in Southern California, and I include myself in this generalization, knows how to deal with rain. We often don’t think twice about forming beliefs on the basis of these sorts of statistical regularities or stereotypes. But maybe we should.
There has been a lot of work recently on a challenge that arises from the fact that our world has been shaped by, and continues to be shaped by, racist attitudes and institutions. If we’re responsible epistemic agents, i.e., agents who form their beliefs on the basis of the evidence, then when we try to navigate this world we may find that the evidence is often stacked in favor of beliefs that we’d otherwise disavow, e.g., racist beliefs. To use an example originally from John Hope Franklin’s autobiography and repeated in Tamar Gendler’s 2011 paper “On the epistemic costs of implicit bias”: it is morally wrong to assume, solely on the basis of someone’s skin color, that they’re a staff member. But what if you’re in a context where, because of historical patterns of discrimination, someone’s skin color is a very good indicator that they’re a staff member? It might be unfair to assume that they’re a staff member, but to ignore the evidence would mean risking inaccurate beliefs.
In my chapter for An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind, I start by setting up this conflict between accuracy and fairness in more detail, and then I canvas some of the ways one could respond.
For example, you could take the position of The Dilemmist, and argue that these sorts of conflicts are both inevitable and unresolvable. Alternatively, according to The Pluralist, there is a plurality of oughts, and from the perspective of each ought you simply ought to do what it prescribes. Both The Dilemmist and The Pluralist note that obligations of all different sorts are in conflict all the time, so maybe there’s nothing special about moral-epistemic conflicts of this sort between accuracy and fairness.
However, there’s something unsatisfying about both of these answers, and that dissatisfaction might lead you down the path of either Moral Priority or Epistemic Priority. According to moral priority, the moral considerations overrule the concerns of accuracy; alternatively, according to epistemic priority, all that matters in the end is that you secure epistemic goods like having accurate beliefs. Alternatively still, perhaps there’s some third perspective from which you can weigh these competing considerations to figure out what you all-things-considered ought to do. But as I note, no matter how you cash out this idea of an all-things-considered ought, there are some reasons for skepticism.
Unsurprisingly, I’m not that moved by any of these options, but I do try to motivate each of them in their most convincing form while also noting what bullets one will have to bite, or what objections one will have to overcome, to go in any of those directions. Further, the route that I endorse, moral encroachment, also faces some problems of its own, but I think it provides the most promising answer to these cases of conflict. The moral risks present in the cases, i.e., the moral consideration that our belief that John Hope Franklin is a staff member would be unfair to John Hope Franklin for the reasons previously discussed, make the cases high-stakes cases. Moral encroachment, then, understood as the thesis that morally risky beliefs raise the threshold for (or “encroach” upon) justification, makes sense of the intuitive thought that racist beliefs require more evidence and more justification than other beliefs. But even this answer faces some challenges.
In an effort to keep this chapter accessible for an undergraduate audience (for example, this chapter may be the first time a meme has been published in a philosophical text?), there’s much discussion and literature that I had to elide. There’s much more to be said about base-rate fallacies, proof paradoxes, the reference class problem, whether true moral or epistemic dilemmas are even possible, etc.
Although the chapter ends with some discussion questions and suggestions for further reading, as this literature grows that list will become incomplete. For example, even my own thoughts on moral encroachment (and the related but conceptually distinct idea of doxastic wronging) have evolved since I wrote this chapter (see “A Tale of Two Doctrines: Moral Encroachment and Doxastic Wronging”). For that reason, I encourage others to engage in self-promotion (or other-directed promotion) in the comments, because as people work on their syllabi for the coming academic year, perhaps this blogpost can serve as a dynamic teaching resource in a way that a published chapter never can.