Actualist Utilitarianism (AU) is, roughly stated, the view that we ought to act so as to maximise the sum total of actual people’s utilities. (By utility is here meant a numerical representation of a person’s level of wellbeing, or welfare.) It is distinguished from regular utilitarianism in that it excludes the utilities of “merely possible people” from figuring in our moral judgements. And, for this reason, it might be motivated by various “person-affecting” intuitions to the effect that merely possible people are morally insignificant. I shall not, however, try to develop that line of motivation here. Rather, I want to focus on an objection to AU advanced by John Broome in his recent book Weighing Lives. (Although Broome doesn’t consider the case of AU in particular, he does object to “actualist axiologies” more generally, and his objection is applicable to AU. With that clarification noted, I shall for simplicity proceed as though Broome’s objection is aimed specifically at AU.)
The objection, in short, is that AU is incapable of giving practical advice. As Broome understands AU, it implies that what the agent ought to do in a given situation of choice may sometimes depend on what he actually does in that situation. Thus, if the agent were to ask "ought I to do X?", then the best practical advice that AU could give him would be prefaced with "well, that all depends on whether or not you actually do X." But that would be no practical advice at all; usually we want to know whether or not it's permissible to do something in advance of our having done or not having done it. Understood in this way, then, AU will be practically impotent — of no use at all in deciding what to do.
As I shall argue, however, AU need not be understood in this way. Below I suggest two formulations of AU, and show that only one of these is vulnerable to Broome’s objection.