As Thanksgiving rolls around, it’s time to pause and take stock of how you got to be who you are, at least as a moral/political philosopher, and what giant’s shoulders you’ve been standing on to see as far as you’ve seen. What’s the ONE moral/political philosophy book or article you’re most thankful for and how did it influence you?
Hi everyone! Thanks very much for the opportunity to discuss our work-in-progress, “‘I Love Women’: The Conceptual Inadequacy of ‘Implicit Bias.’”
Tests for implicit bias, in particular the Implicit Association Test (IAT), have recently come under scrutiny. Two different meta-analyses, by Oswald et al. (2013) and Forscher et al. (2016) (recently discussed in the Chronicle of Higher Education), have concluded that measurements of “implicit bias” do not reliably predict biased behavior.
In our paper, we offer a different critique of implicit bias testing, one which philosophers and other humanistic thinkers might be well-suited to address. We argue that the dominant implicit bias tests assume crude and implausible conceptions of explicit prejudice, leaving open the possibility that the morally bad and wrong actions supposedly best explained by something interestingly implicit are instead best explained by non-obvious but nonetheless explicit prejudice.[i]
Current events are reminding us that patriotism, at least of the sort that gets publicly acknowledged, is a confusing virtue. I don’t mean that the patriot might get drawn into doing bad things on behalf of his country. Patriotism is a form of loyalty, and loyalty, whether to friends, family, one’s university, or whatever, can draw us into doing bad things on behalf of its objects. I mean instead that those who say they care about patriotism seem surprisingly okay with others doing bad things without regard for their country’s interests.
Over the past few years, an interesting development in experimental philosophy has been work on the “ought implies can” principle (OIC) in commonsense morality. Several research teams have investigated whether patterns in commonsense moral judgment are consistent with a commitment to OIC, understood as a conceptual entailment from having a moral responsibility to being able to fulfill it. Across a variety of contexts and testing procedures, the principal finding has been consistent: people are willing to attribute moral responsibilities to agents unable to fulfill them. Based on these findings, I and others have concluded that there is no conceptual entailment from “ought” to “can.” But there is a lingering question. If there is no conceptual entailment, then what is the source of the intuitive link, which many theorists seem to sense, between “ought” and “can”? A new paper might provide at least part of the answer.
Much is made these days of ideological bubbles and commitment cocoons (OK, I made up that one), in which people stick to their beliefs regardless of any “evidence” or “reasoning” otherwise. But, let’s admit it, it’s hard to change your mind about something you’ve been committed to, solely on the basis of your assessment of the reasons. This is true even for — perhaps especially for — professional philosophers.
It might be worth hearing, then, about your true conversion stories and the role contrary reasons played for you: What moral/political view were you committed to — perhaps even published about — that you abandoned solely in the face of good reasons otherwise? Were the reasons available to you all along and you just saw them in a newly salient light, or were they new reasons to you? Have you “backslid”? Have you gone on to publish on the contrary view? (See my conversion story below the fold.)
Eric Schwitzgebel writes:
Here are four things I care intensely about: being a good father, being a good philosopher, being a good teacher, and being a morally good person. It would be lovely if there were never any tradeoffs among these four aims.
Explicitly acknowledging such tradeoffs is unpleasant — sufficiently unpleasant that it’s tempting to try to rationalize them away. It’s distinctly uncomfortable to me, for example, to acknowledge that I would probably be better as a father if I traveled less for work. (I am writing this post from a hotel room in England.) Similarly uncomfortable is the thought that the money I’ll be spending on a family trip to Iceland this summer could probably save a few people from death due to poverty-related causes, if given to the right charity.
Today I’ll share two of my favorite techniques for rationalizing the unpleasantness away. Maybe you’ll find these techniques useful too!