Kevin Tobia and I have co-written a paper on personal identity for the Oxford Handbook of Moral Psychology, edited by John Doris and Manuel Vargas. You can see the draft here. What’s particularly new and interesting about the entry is Kevin’s part of the project, which involves a survey and critical discussion of the state of the art in work on personal identity in the psychology-heavy literature over the last 5-10 years. We then bring that literature together with the standard philosophy-heavy work on the topic since Locke. As we will get one more round of revisions, we are interested in any thoughts you might have about it, preferably spelled out in the comments here.
When was the last time you read an anthropology article or book? Did you know that there is a recent “ethical turn” in anthropology, and that anthropologists are writing interesting things about moral development, practical reasoning, virtue, autonomy, and other moral topics – all with reference to specific cultural contexts and practices?
If you are like I was only a little while ago, you have never heard of the ethical turn because current anthropology is simply not on your radar. And that is why I am posting! I think this might be of interest to many philosophers, but especially to graduate students.
Hi everyone! Thanks very much for the opportunity to discuss our work-in-progress, “‘I Love Women’: The Conceptual Inadequacy of ‘Implicit Bias.’”
Tests for implicit bias, in particular the Implicit Association Test (IAT), have recently come under scrutiny. Two different meta-analyses, by Oswald et al. (2013) and Forscher et al. (2016) (the latter recently discussed in the Chronicle of Higher Education), have concluded that measurements of “implicit bias” do not reliably predict biased behavior.
In our paper, we offer a different critique of implicit bias testing, one which philosophers and other humanistic thinkers might be well-suited to address. We argue that the dominant implicit bias tests assume crude and implausible conceptions of explicit prejudice, leaving open the possibility that the morally bad and wrong actions supposedly best explained by something interestingly implicit are instead best explained by non-obvious but nonetheless explicit prejudice.[i]
Over the past few years, an interesting development in experimental philosophy has been work on the “ought implies can” principle (OIC) in commonsense morality. Several research teams have investigated whether patterns in commonsense moral judgment are consistent with a commitment to OIC, understood as a conceptual entailment from having a moral responsibility to being able to fulfill it. Across a variety of contexts and testing procedures, the principal finding has been very consistent: people are definitely willing to attribute moral responsibilities to agents unable to fulfill them. Based on these findings, I and others have concluded that there is no conceptual entailment from “ought” to “can.” But there is a lingering question. If there is no conceptual entailment, then what is the source of the intuitive link, which many theorists seem to sense, between “ought” and “can”? A new paper might provide at least part of the answer.
With Donald Trump now president-elect, many people are concerned that something truly precious and fundamental is under threat. Though Americans disagree about many things, we traditionally had a shared national sense of the bounds of normal behavior and a seemingly entrenched understanding that certain kinds of behavior fell completely outside those bounds. There is now a widespread fear that Trump’s recent actions will be ‘normalized’ and that our shared understanding of the normal will then be lost.
I think that this fear is getting at something of deep importance, and it is therefore worth taking a moment to think philosophically about what is at stake here. What exactly does it mean to see certain behavior as normal?
Suppose you are sitting at your desk, reflecting on a moral question. Now suppose that as you are reflecting on this question, you happen to be looking around at a somewhat disgusting scene. Perhaps there is a half-eaten apple on the desk, or a bad smell in the room, or maybe you just didn’t have an opportunity to wash your hands.
(From Joachim Horvath) Dear colleagues,
We would like to invite you to an online experiment on moral decisions. In the experiment, you will be asked to judge, for several cases, which option the agent in the scenario should choose. You can enter the experiment via the following link:
We especially encourage the participation of people with expertise in philosophy and/or ethics. At the end of the study, every participant can register for a prize draw to win a copy of Daniel Kahneman’s book “Thinking, Fast and Slow”.
If you have any further questions regarding our experiment, please feel free to contact Alex Wiegmann, Department of Psychology, University of Goettingen (Alex.Wiegmann@psych.uni-goettingen.de) or Joachim Horvath, Department of Philosophy, University of Cologne (firstname.lastname@example.org).