Options, Actions, and Indeterminism

It is fairly common to give a conditional analysis of an option, e.g.:

(CAO) Performing an act X at a future time t1 is an option for a subject S at the present time t0 if and only if S would perform X at t1 if S were to intend (to try, to decide, or to choose) at t0 to perform X at t1.

I know that there are a host of problems with such conditional analyses, but let’s set those aside for the moment, for I want to address what seems to be an unappreciated worry concerning the possibility of indeterminism.


Of course, if we accept libertarian indeterminism and thereby hold that the formations of intentions are not causally determined by the preceding events and the laws of nature but are instead caused by us, then there is no problem for CAO so long as we hold that which acts we perform are causally determined by the intentions that we (indeterminately) form. But what if that’s not the case? That is, what if whether or not my intention to perform X results in my X-ing is indeterminate? Suppose, for instance, that if I intend to pick up the cup of coffee that’s on the table in front of me, there’s a 75% (objective) chance that I’ll pick it up but a 25% (objective) chance that I’ll knock it over, spilling it. And suppose that if I were instead to intend to leave it be, there’s a 100% chance that I will leave it be. In this case, it’s clear that I have the option to leave it be. But do I have the option of picking it up? If whether or not I do so is not up to me or to my intentions but is rather up to chance, then it’s hard to see how my doing so is a genuine option. Moreover, if we do say that it’s an option at a 75% chance of success, what will we say when there is only a 74, 73, 72, …, or 0 percent chance of success? Where do we draw the line, and how do we do so non-arbitrarily?

I want to suggest, then, that if the effectiveness of our intentions is indeterminate in this way, then we should think of our options not as actions, but as gambles. So, in the case described, we should say that I have the option of taking gamble 1 (where this is the gamble of taking both a 75% chance of picking up the cup of coffee and a 25% chance of knocking it over). And we should also say that I have the option of taking gamble 2 (where this is the “gamble” of taking a 100% chance of leaving it be). Furthermore, there will be many other gambles that are options for me.
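To make this a bit more precise, we can borrow the standard decision-theoretic representation of a lottery and model a gamble as a probability distribution over outcomes (this is just one natural regimentation of the informal description above):

```latex
\begin{align*}
G_1 &= \{\langle \text{picking up the cup},\ 0.75\rangle,\ \langle \text{knocking it over},\ 0.25\rangle\}\\
G_2 &= \{\langle \text{leaving it be},\ 1\rangle\}
\end{align*}
```

On this representation, an ordinary deterministic action is just the degenerate gamble that assigns probability 1 to a single outcome, as G2 does; the threshold worry raised above is then the question of which non-degenerate distributions count as options.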

Now, if this is right, we should revise CAO as follows:

(CAO*) Taking a gamble G is an option for a subject S at the present time t0 if and only if S’s intending (trying, deciding, or choosing) at t0 to perform some act X at t1 entails taking gamble G.
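Put semi-formally (leaving open, as the informal statement does, whether the ‘entails’ here is material, strict, or counterfactual):

```latex
\begin{equation*}
\mathrm{Option}_{t_0}(S, G) \iff
\exists X \big[\, \mathrm{Intend}_{t_0}(S,\ \mathrm{do}\ X\ \mathrm{at}\ t_1) \Rightarrow \mathrm{Take}(S, G) \,\big]
\end{equation*}
```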

Now, this is all off the cuff, so I wonder if others might tell me what sorts of problems they foresee with this kind of view and whether they know of any literature on this topic. One putative problem with this view is that it seems to imply that it is mistaken to talk about reasons for action. For consider whether I have good reason to pick up the cup of coffee. It would be odd to say that I don’t because picking up the cup of coffee risks spilling it all over me, causing me severe burns. For picking up the cup of coffee doesn’t involve that risk; it is only intending to pick up the cup of coffee that involves that risk. That is, the risk is associated with the gamble, not the action. And such risks seem to be very relevant to our practical deliberations. So, perhaps, this sort of view suggests that talk of reasons for action is best understood as an imprecise way of talking about reasons for intending. And when we look at reasons for intending, we need to look not just to the results of performing the intended act but also to the risks of performing acts that are not intended.

(Thanks to Christian Coons, Mark van Roojen, and the other participants at SLACRR 2012 for raising an objection that led me to think about this issue. And thanks to Mark for the suggestion that I might talk about gambles.)

36 Replies to “Options, Actions, and Indeterminism”

  1. I’m no fan of conditional analyses like CAO and I think counterexamples (like Lehrer’s in the 60s) blow them out of the water. But it seems to me that the worry you raise is not one that arises only for agents in indeterministic worlds. Similar problems arise for agents in deterministic worlds. The problem is that the relevant would-counterfactuals in CAO would all come out false or indeterminate in some cases even in deterministic worlds. It’s also not clear to me how an appeal to objective chances can help us out here.
    For example it may be the case that there simply is no fact of the matter what I would do were I to intend to phi because there are many equally close possible worlds in which I intend to phi some of which are ones in which I phi and some of which are such that I don’t phi. Given that objective chances (as I understand them (which I grant may be completely confused)) are underwritten by the laws of nature of a world, I don’t know how to apply objective chances to the space of possible worlds. (Maybe you could appeal to subjective probabilities. But that would have its own problems, I think.)

  2. Hi Pete,
    I assume you’re talking about cases like the one in which I didn’t spin the wheel of fortune and we then ask what would have happened if I had? Would I have won? This gets tricky when whether I would have won depends on precisely what force I would have spun the wheel with. In such a case, there doesn’t seem to be any determinate answer to the question, for the antecedent in the relevant counterfactual conditional is underspecified (there are, after all, many different forces with which I might have spun the wheel) and the relevant agent lacks the ability to determine which of the more fully specified antecedents would obtain were she to spin the wheel. Assume, then, that whether the spin is a winning one depends on precisely how much force the wheel is spun and the agent lacks the ability to determine whether she spins the wheel with, say, precisely 15.88354 N of force or precisely 15.88355 N of force.
    But are there many different ways of intending to phi such that I lack the ability to determine in which of those ways I intend to phi? It seems that I can intend to spin the wheel softly or intend to spin the wheel vigorously and that I have the ability to determine which of these intentions I form. And it seems that there will be certain truths about what force I would have spun the wheel with (or at least there will be a probability distribution of possible forces) had I formed some specific intention at some specific time.
    So are you suggesting that there will be counterfactuals with antecedents involving intentions that are underspecified and that the agent will lack the ability to determine which of the underspecified antecedents obtain?

  3. Hi, Doug,
    An interesting suggestion. I’ll have to think about whether this will help with any/all/most of what we discussed. But here are a couple of quick comments:
    If reasons for acting are reasons to intend, then it seems (perhaps) the apt response to these reasons would be forming the intention to intend (and we’re off to the races).
    Also, one worries one can have reasons to intend that are not reasons to act (Demon bribes etc.).
    And finally, does this view make probabilities of pleasure/pain (etc.) reason providing…rather than these fact-types themselves?
    Not sure if these are problematic entailments, or even entailments at all. But each seemed worth thinking about further.
    By the way, thanks for the warm welcome to Pea Soup. It’s an honor to be here.

  4. Hi Pete,
    Here’s another thought. I readily admit that there are all kinds of problems with conditional analyses of options. Given this, we can look at ways to tweak the conditional analysis to avoid the problems or we can abandon the conditional approach altogether. I might be swayed to go for the latter if I had some idea of what a plausible non-conditional analysis of an option might look like. So do you have any suggestion about what to fill into the following blank:
    Y is an option for a subject S at the present time t0 if and only if…

  5. Thanks, Christian. I will think about these more, but here are my initial thoughts:
    If reasons for acting are reasons to intend, then it seems (perhaps) the apt response to these reasons would be forming the intention to intend (and we’re off to the races).
    I don’t think that our intentions are under our volitional control. That is, I don’t think that S comes to intend to X by intending to intend to X. Here I’m thinking of Kavka’s Toxin Puzzle.
    Also, one worries one can have reasons to intend that are not reasons to act (Demon bribes etc.).
    I’m inclined to think that Demon bribes give me a reason to intend to do what might cause me to have the relevant-bribe-acquiring intention, but they give me no reason to intend to have that intention.
    Of course, I need to think about whether reasons for taking certain gambles are of the relevant object-given type.
    And finally, does this view make probabilities of pleasure/pain (etc.) reason providing…rather than these fact-types themselves?
    I had a ready response, but now I worry whether it is consistent with my other responses above. I really need to think more about this and the previous one before saying more. So thanks for giving me stuff to think about.

  6. Hi Doug,
    I was thinking that there may indeed be different ways of intending to phi (for instance, there may be different neurological states which each correspond to intending to phi). [Note, CAO and CAO* make no restriction to intendings to phi that I have any ability to determine between. Perhaps you mean for them to be so restricted, however.] But even if there were only one way of intending to phi, it still might be that there are different equally close possible worlds in which I intend to phi (which differ perhaps in the small miracle (if, for instance, we’re working within the Lewisian framework) that “leads to” my intending to phi in that one particular way in those worlds) which differ with respect to whether I phi.
    I’m also a little bit confused about the dialectic. The conditional analysis, CAO, is supposed to give us an account of what actions I can perform and thereby allow us to eliminate the notoriously difficult agential ‘can’ in favor of counterfactuals of a certain sort. But now you appeal to another modal ability notion, an ability to determine how I intend. I find talk of the ability to determine how I intend just as mysterious and difficult to analyze as (if not more so than) the agential ‘can’. How are we supposed to understand this ability-to-determine-how-to-intend notion?

  7. Hi Doug,
    I don’t have an analysis to offer for ‘S can phi at t’. (Though I think we should accept that there are true claims of the form S can at t0 phi at t0 and then analyze cross-temporal can claims (S can at t0 phi at t1) in terms of them.) For right now we may have to settle with there being no analysis of ‘S can at t0 phi at t0’.

  8. Very interesting. Firstly, about the literature. I think that the literature on the control conditions of voluntary or intentional actions is relevant here. I take it that options are actions we are able to do intentionally. And, intentional action is often taken to require a degree of control over the success of the action. A good place to start is Al Mele’s Free Will and Luck book. One interesting argument in this book is that this is a problem also for the libertarian indeterminist views.
    Also, I think that the conclusion that all talk about reasons for action is really talk about reasons for intending is overstated. Firstly, one could talk about reasons for trying to do things. After all, the risks are relevant for why one should or should not try to pick up the coffee cup. And, it’s not clear to me that tryings collapse into intentions in the way that your formulation of CAO* suggests. It seems to me that one can intend to do something and yet fail to try to do it. There are also further reasons such as the ones which Christian mentions.
    Further, even if there are some reasons for intentions and tryings, this does not entail that there are no reasons for actions. Even if the fact that one might spill counts against intending to pick up the cup or trying to do so, there really are considerations that count in favour of picking up the cup – namely that in this way one can drink coffee. There’s no reason why actions, tryings, and intentions could not all be favoured by different considerations.
    Finally, a couple of less clear points. Firstly, it seems to me that consequentialists who talk about expected utility have always considered the agent’s options to consist of gambles in a way that is quite similar to your way of thinking about this (even if they probably have not emphasised the role of intentions).
    Secondly, I’m worried that the position is now inconsistent. This passage:
    “But do I have the option of picking it up? If whether or not I do so is not up to me or to my intentions but is rather up to chance, then it’s hard to see how my doing so is a genuine option”.
    suggests that whether or not something is a genuine option for you depends on whether or not it is up to you whether you take the option (or whether it is merely down to chance). So, if gambles are to be genuine options, then it would have to be up to you whether or not you take a gamble (and not merely down to chance). And, it seems like, according to CAO*, whether one takes a gamble is a matter of intending an action that entails taking that gamble.
    However, according to the previous constraint on genuine options, this means that taking a gamble can be an option for you only if it is up to you to intend an action that entails that gamble (and not merely down to chance). Yet, in your answer to Christian, you deny that intentions are under our voluntary control, which presumably means that intending something is not up to us. But, if this is right, then according to the constraint you set on genuine options, taking a gamble can never count as a genuine option.

  9. @Christian:
    Now that I’ve had a chance to think about it some more, I want to retract what I said in the post about reasons for intending to pick up the cup of coffee depending on facts about the consequences of having this intention (such as the fact that having this intention runs the risk of resulting in my spilling the cup of coffee). Instead, I want to say that the fact that having the intention to pick up the cup of coffee runs the risk of spilling it counts as a reason against only trying to pick it up and, thereby, taking what I called Gamble 1 (the gamble of taking both a 75% chance of picking up the cup of coffee and a 25% chance of knocking it over). But it does not count against intending to pick it up.
    @Pete:
    I do think that the best formulations of CAO and CAO* will include restrictions to intentions that I have, in my current state, the capacity to form. My main motivation is not to give an analysis of the agential ‘can’ that makes no appeal to other modals. My main motivation is the thought that there is some account of an agent’s options such that the idea that agents ought to perform their best options is maximally plausible. If you have the time, could you please give me a concrete instance of the sort of example that you think is going to be problematic for me.
    You write: I don’t have an analysis to offer for ‘S can phi at t’. (Though I think we should accept that there are true claims of the form S can at t0 phi at t0 and then analyze cross-temporal can claims (S can at t0 phi at t1) in terms of them.) For right now we may have to settle with there being no analysis of ‘S can at t0 phi at t0’.
    On your view, is it the case that I can at present touch my nose at present even though I’m not touching my nose at present? It seems to me that given that I’m not now touching my nose, the only thing that I can do is touch my nose at some time in the future. How can I now do now what I’m not doing now? (I hope that makes sense.) In any case, can you give me an example where S can at present phi at present even though S is not at present phi-ing at present?
    @Jussi:
    Thanks; this is very helpful. I’ll check out Mele.
    I agree that my claim that “all talk about reasons for action is really talk about reasons for intending is overstated” and for the reasons that you give. I retract that claim.
    Regarding potential inconsistency: it’s true that I don’t think that my intentions are under my volitional control, where S has volitional control over whether or not she φs only if both (1) she has the capacity to intend to φ and (2) whether or not she φs depends on whether or not she intends to φ. But that doesn’t imply that whether or not I intend to φ (and, thus, whether I take a certain gamble) is not under my control. I think that my beliefs, desires, and intentions are under my “rational control,” even though they are not, like my acts, under my volitional control. Here, I’m thinking that S has rational control over whether or not she φs only if both (1) she has the capacity to recognize and assess the relevant types of reasons and (2) she is at least moderately responsive to her assessments concerning these types of reasons in a range of counterfactual situations – see Fischer and Ravizza for a more detailed account of being moderately reasons-responsive.

  10. Hi Doug,
    thanks. One worry about the notion of rational control you suggest. On your proposal, I could have rational control over whether or not I phi even if it would be impossible for me to phi. Even in this case, I could recognise reasons, and my assessments would be responsive to reasons in the counterfactual conditions. So I’m thinking that we need to add some sort of control-over-phi-ing condition to the definition of rational control.
    If we do this, then I guess I wonder why the idea of rational control could not be directly used to evaluate what options we have. The view would be in two steps:
    A) S has an option to phi iff S has rational control over whether or not she phies.
    B) S has rational control over whether or not she φs only if all of the following hold:
    (1) she has the capacity to recognize and assess the relevant types of reasons and
    (2) she is at least moderately responsive to her assessments concerning these types of reasons in a range of counterfactual situations and
    (3) some further control condition is satisfied.

  11. Hi Jussi,
    I was only stating necessary, not sufficient, conditions for rational control. So, strictly speaking, my proposal says nothing about when you could have rational control, and says only when you could not have rational control. But I take your larger point.
    Let me say then that S has rational control over whether or not she φs only if (1) she has the capacity to recognize and assess the relevant types of reasons, (2) her φ-ing is at least moderately responsive to her assessments concerning these reasons, and (3) she would φ if she were to judge that she has decisive reason to φ and wasn’t suffering from any irrationality.
    So, now, you ask: why not just say something like this: “S has an option to phi iff S has rational control over whether or not she phies”?
    My worry about this is that it could be that my Y-ing is an option for me even though whether I Y is not itself under my rational control. That is, it may be that whether or not I Y depends on whether I X and that my X-ing, but not my Y-ing, is under my rational control. Imagine, for instance, that an eight-year-old girl is able to see to it that she proves Gödel’s incompleteness theorem (by, say, intending now to study math in college) even though she does not now have the capacity to appreciate the reasons for proving such a complex theorem. Nevertheless, she can appreciate (even at this age) her reasons for intending to study math in college.

  12. Hi Doug
    sorry about that. Let me just give one motivation for connecting options and rational control. Both notions seem to me to be closely tied to what we can hold people responsible for or blame or praise for. Options seem to me to be the alternatives that we can evaluate the agent for either choosing or not choosing. Rational control seems to me to be an attempt to capture what such options are – at least on the Fischer/Ravizza view.
    Perhaps there are worries of the sort you suggest. There are two things we could do here. First, we could give a disjunctive account:
    S has an option to phi iff (i) S has rational control over whether or not she phies or (ii) whether or not S phis depends on whether S psis when psying is something S has rational control over.
    Also, I’m not sure I share the intuition that the eight-year-old is able to see to it that she proves the incompleteness theorem. Seeing to it seems to require understanding what the act in question is and that seems to require being in the space of reasons.

  13. Hi Jussi,
    I think that we both agree that we should connect options with our rational control. The only possible issue is whether or not all our options need to be directly under our rational control. I think that they don’t, for I have the intuition that Sara’s proving Gödel’s incompleteness theorem in college is an option for Sara (who is too young to understand the reasons for proving Gödel’s incompleteness theorem but not too young to understand the reasons for majoring in math in college) if she is now deliberating about whether to study math in college and she will prove Gödel’s incompleteness theorem if she now forms the intention to study math in college, but not if she now forms the intention to study history in college.

  14. Hi Doug,
    You write: “I do think that the best formulations of CAO and CAO* will include restrictions to intentions that I have, in my current state, the capacity to form. My main motivation is not to give an analysis of the agential ‘can’ that makes no appeal to other modals. My main motivation is the thought that there is some account of an agent’s options such that the idea that agents ought to perform their best options is maximally plausible.”
    Fair enough. Here’s my worry regarding talk of an ability to determine how I intend. This is not supposed to be an agential ‘can’. Rather, you say it is a capacity sense of ability. Well, consider other capacity notions with respect to mental states – say, the capacity to have and form beliefs, have and form desires, entertain thoughts, etc. These are capacities which, ordinarily, I believe, we’d say I retain when I’m asleep. So if you want to appeal to a capacity notion like this, I think you’ll be committed to saying that many things count as options for me when I’m asleep. Do you want to say this? This will also have the consequence that I have numerous obligations that I think we may not want to say I have when I’m asleep. If, on the other hand, you want to construe this capacity notion as lying somewhere between the agential ‘can’ and the general capacity notion that we ordinarily employ when we talk about my capacities with respect to mental states, then I think I’m far more at a loss about what that notion is than about the simple agential ‘can’. In that case, I’m better off taking the agential ‘can’ as primitive rather than trying to explicate it in terms of the even more mysterious notion of ability to which you seem to be appealing.
    You say: “If you have the time, could you please give me a concrete instance of the sort of example that you think is going to be problematic for me.”
    Here goes. Suppose:
    (1) that the world is deterministic;
    (2) that we have a counterfactual intervener, Black, who is lurking and will close a circuit that makes button B go live only if neuron X fires in my brain;
    (3) that pressing B at t3 when it is live will save two people from drowning at t4, and that pressing B when it is not live does nothing;
    (4) that there is only one way for me to intend at t2 to save the two people from drowning at t4, and that this intending is identified with the firing of neuron Z in my brain at t2;
    (5) that I in fact don’t intend at t2 to save the two and I don’t in fact save the two; and
    (6) that there are two equally close possible worlds to the actual world in which neuron Z fires in my brain at t2: w, in which just a little bit before t2, at t0, neuron X fires, leading to neuron Z firing at t2, just before which, at t1, Black closes the circuit, and after which, at t3, I press B and at t4 thereby save the two from drowning; and v, in which just a little bit before t2, at t0, neuron Y fires, leading to neuron Z firing at t2, just before which, at t1, Black DOES NOT close the circuit, and after which, at t3, I press B and at t4 (because B is not live) do not save the two from drowning.
    In this case, if I understand the Lewisian semantics of counterfactuals correctly (which I grant I very well may not), neither the counterfactual “if I had intended at t2 to save the two at t4, I would have saved them” nor the counterfactual “if I had intended at t2 to save the two at t4, I would not have saved them” is true. And this can happen when the world is deterministic and there is only one way in which I can intend the thing in question (namely, the intending identified with Z’s firing).
    You write: “I don’t have an analysis to offer for ‘S can phi at t’. (Though I think we should accept that there are true claims of the form S can at t0 phi at t0 and then analyze cross-temporal can claims (S can at t0 phi at t1) in terms of them.) For right now we may have to settle with there being no analysis of ‘S can at t0 phi at t0’.
    On your view, is it the case that I can at present touch my nose at present even though I’m not touching my nose at present? It seems to me that given that I’m not now touching my nose, the only thing that I can do is touch my nose at some time in the future. How can I now do now what I’m not doing now? (I hope that makes sense.) In any case, can you give me an example where S can at present phi at present even though S is not at present phi-ing at present?”
    This is just an issue having to do with temporally extended actions. I can go one of two ways here depending on whether there are such things as instantaneous actions. If there are instantaneous actions–say choosings–then only choosings are such that, strictly speaking, I can at a time do them at that time, even though I am not, in fact, doing them at that time. So on this supposition, if I’m not now choosing to phi now, nonetheless I can now choose to phi now. We then take care of all other actions in terms of our abilities with respect to instantaneous actions. If, on the other hand, there are no instantaneous actions, we can go as follows: Though I am not now touching my nose, I can now begin to touch my nose, now. [Either a fleshing out of this proposal, or another alternative, would be to treat temporally extended actions, like touching one’s nose, as follows: I can, at t0, touch my nose at T, where t0 is an instant and T is an interval that has t0 as its first instant and some later instant as its last.]
    Question: As regards the capacity notion you appeal to with respect to intentions, do you also think that it only holds cross-temporally? Do you think that if I am not now intending to phi I don’t have the capacity to intend to phi now? If not, then why is there a problem with the agential sense of ‘can’? Or do you take this to be one of the hallmarks of the difference between the two ‘can’ notions?

  15. Hi Pete,
    You’ve given me a lot to think about. I have to teach a class this afternoon, but I will think about what you’ve said during the day and reply tomorrow morning. In the meantime, could you answer these questions for me:
    You claim: “if I’m not now choosing to phi now, nonetheless I can now choose to phi now.”
    (Q1) Why should I accept this? I accept that even if I’m not now choosing to phi, there is a possible world in which I’m now choosing to phi. But why should I accept that even if I’m not now choosing to phi, I can, in the actual world, now choose to phi when in the actual world I am not now choosing to phi? This sort of claim doesn’t make sense to me. If my now choosing to phi is instantaneous and I’m not in the present instant choosing to phi, then I don’t see how my now choosing to phi is an option for me now.
    You write: “We then take care of all other actions [that is, non-instantaneous actions such as touching my nose] in terms of our abilities with respect to instantaneous actions [such as choosing to touch my nose].” How do we do that?
    (Q2) Are you denying that I have now at this present moment the option of touching my nose at some near future point in time? If so, don’t you think that’s pretty radical? If not, can you fill in the following blank for me:
    I have at t0 the option of touching my nose at t1 if and only if…
    In other words, can you explain how we “take care” of all other actions?
    (Q3) You seem to think that whether or not there are instantaneous actions is somehow relevant to our debate. Why? Whether an act is instantaneous or not just depends on whether that act occurs in an instant or over a number of instants. But what’s at issue between us, it seems to me, is whether I can have an option at an instant t0 to perform an instantaneous action at an instant t1. I think that I can. And you seem to think, if I’m understanding you, that I can have an option at an instant t0 only to perform an instantaneous action at t0 and that I can have such an option even if as a matter of fact I am not performing this instantaneous option at t0. But that seems implausible to me.

  16. Hi Doug,
    You write: “You claim: “if I’m not now choosing to phi now, nonetheless I can now choose to phi now.”
    (Q1) Why should I accept this? I accept that even if I’m not now choosing to phi, there is a possible world in which I’m now choosing to phi. But why should I accept that even if I’m not now choosing to phi, I can, in the actual world, now choose to phi when in the actual world I am not now choosing to phi? This sort of claim doesn’t make sense to me. If my now choosing to phi is instantaneous and I’m not in the present instant choosing to phi, then I don’t see how my now choosing to phi is an option for me now.”
    My answer: I guess if the claim “I can, at t0, choose, at t0, to phi” when in fact I don’t at t0 choose to phi makes no sense, then there’s not much I can do to make it make sense to you. It doesn’t seem to be senseless or incoherent to me. You say that you find the pure modal claim “At t0 it was possible for the particle to be in location x at t0” when in fact the particle isn’t actually in location x at t0 not to be incoherent. I guess I just don’t see why you think the one is incoherent (and not simply coherent but false) but the other isn’t. [Of course I recognize that modal claims like the possibility claim and the “can” claim are different, it’s just odd to me to think that the one is coherent while the other is incoherent.]
    I wonder whether the present-tense nature of these claims is running some interference for you. Do you also find the following claims incoherent?
    – “Yesterday, at three o’clock I could right then and there have chosen to do something other than what I right then and there had chosen to do.”
    – “Tomorrow, at noon, I’ll be able to choose, right then and there, whether to lie or not.”
    You’d reply to the latter as follows, I take it: “No no. Tomorrow at noon you won’t be able to choose, right then and there, whether to lie or not. For you see, tomorrow, at noon you will then have either chosen to lie or chosen not to lie, and whichever choice you will, in fact, then make will be such as to rule out your then being able then to choose otherwise than you in fact do choose.” This seems mistaken to me. Just because, tomorrow at noon I will at that time make my choice (which we’re assuming (for ease of exposition) is instantaneous) doesn’t entail that I won’t at the same time also still then be able then to choose differently than I actually choose then.
    But, as I say, if it is just incoherent to you, I’m not sure I know what to say to try to make it not incoherent to you.
    You write: “You write: “We then take care of all other actions [that is, non-instantaneous actions such as touching my nose] in terms of our abilities with respect to instantaneous actions [such as choosing to touch my nose].” How do we do that?”
There are a number of different ways of doing this (and which way we go (surprise, surprise) may be important for the actualism-possibilism debate):
Way 1: S can at t0 phi at t1 just in case there is a psi such that S can at t0 psi at t0 and were S to psi at t0 it would be the case that S phis at t1.
    Way 2: S can at t0 phi at t1 just in case there is a psi such that S can at t0 psi at t0 and were S to psi at t0 it would be the case that at t1 S can phi at t1.
    There may be (and indeed are) yet other refinements that would be required to deal with certain further complexities.
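In symbols, the two ways might be regimented as follows (this is just one way of rendering them, and the notation is mine: read Can_t(φ, t′) as “S can at t phi at t′” and □→ as the counterfactual conditional):

```latex
% One possible regimentation (notation, not doctrine):
% Can_{t}(\varphi, t') abbreviates ``S can at t do \varphi at t''';
% \Box\!\!\to is the counterfactual conditional.
\begin{align*}
\text{Way 1:}\quad & \mathrm{Can}_{t_0}(\varphi, t_1) \leftrightarrow
  \exists\psi\,\bigl[\mathrm{Can}_{t_0}(\psi, t_0) \,\land\,
  (S \text{ \(\psi\)s at } t_0 \mathbin{\Box\!\!\to} S \text{ \(\varphi\)s at } t_1)\bigr] \\
\text{Way 2:}\quad & \mathrm{Can}_{t_0}(\varphi, t_1) \leftrightarrow
  \exists\psi\,\bigl[\mathrm{Can}_{t_0}(\psi, t_0) \,\land\,
  (S \text{ \(\psi\)s at } t_0 \mathbin{\Box\!\!\to} \mathrm{Can}_{t_1}(\varphi, t_1))\bigr]
\end{align*}
```

The sole difference is the consequent of the embedded counterfactual: on Way 1, S actually phis at t1; on Way 2, S is merely able at t1 to phi at t1.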
    You write: “(Q2) Are you denying that I have now at this present moment the option of touching my nose at some near future point in time?”
Not at all! Nowhere have I implied this, I don’t think.
    You write: “If so, don’t you think that’s pretty radical?”
    Yeah. That would be pretty radical. What it is for me to now have the option of touching my nose at some future point is for it to be the case that:
    There is something I can do now such that were I to do it now I would later perform the temporally extended action of touching my nose (or, ….I would at some later time be able then to perform some action then that would be the beginning of the temporally extended action of touching my nose).
    You write: “If not, can you fill in the following blank for me:
    I have at t0 the option of touching my nose at t1 if and only if…
    In other words, can you explain how we “take care” of all other actions?”
    See above.
    You write: “(Q3) You seem to think that whether or not there are instantaneous actions is somehow relevant to our debate. Why? Whether an act is instantaneous or not just depends on whether that act occurs in an instant or over a number of instants. ”
You’re right: the instantaneousness of actions isn’t the crux. What’s crucial is that you seem to think that if the action is already underway at a certain time, I can’t then at that time still have the option not to do it at that time. I focus on instantaneous actions because their durations are as long as the lengths of time to which the ability claims you’re focusing on are indexed. I could equally well have focused on the question whether a person can, at a certain instant at which she has begun phi-ing, still then have the option to have begun doing something else. You think not. I disagree.
    You write: “But what’s at issue between us, it seems to me, is whether I can have an option at an instant t0 to perform an instantaneous action at an instant t1. I think that I can. And you seem to think, if I’m understanding you, that I can have an option at an instant t0 only to perform an instantaneous action at t0 and that I can have such an option even if as a matter of fact I am not performing this instantaneous option at t0. But that seems implausible to me.”
No. Not at all. Of course I allow that I can have cross-temporal abilities at certain instants to perform actions, be those actions instantaneous or non-instantaneous, at later instants (in the case of instantaneous actions) or later intervals (in the case of non-instantaneous actions). See the treatments of cross-temporal ability claims above.
    What I do claim about instantaneous actions is that they are the only kinds of actions that it is possible to perform at an instant, whether the ability to perform them is had at the instant of performance itself or prior to it. But everyone should agree to that. Who thinks that it is possible that a temporally extended action be performed in an instant?

  17. Hi Pete,
    “S can at t0 phi at t1 just in case there is a psi such that S can at t0 psi at t0 and were S to psi at t0 it would be the case that S phis at t1”
    That’s pretty much my view. Just let psi range over intentions. But I thought that you reject such conditional analyses?

  18. Hi Doug,
    (1) That’s a conditional analysis of a cross-temporal ability claim, the kind of claim which I do think is to be given an interpretation in terms of both conditionals AND agential ‘can’ claims. But, I DO NOT offer a conditional analysis of “S can at t0 phi at t0”.
(2) The conditional analyses which I think get blown out of the water are precisely the ones that try to analyze agential ‘can’ claims (be they synchronic or diachronic) in terms of conditionals whose antecedents make no use of the agential ‘can’ (e.g., yours, Locke’s, etc.).
    (3) Also, as I’ve indicated, I don’t really know how to understand your “can intend” locution and so I don’t really understand your view. See my previous comments concerning my puzzlement over your use of that locution and the potential problems I see for appealing to capacities with respect to mental states.

  19. Hi Doug,
    Just to be absolutely precise here: what I do not offer a conditional analysis of is:
    “S can at tx phi at ty” where tx and ty are overlapping times, i.e., times such that the last instant of tx is identical to the first instant of ty.

  20. Hi Pete,
    So you accept:
    (1) “S can at t0 phi at t1 just in case there is a psi such that S can at t0 psi at t0 and were S to psi at t0 it would be the case that S phis at t1.”
And you’re willing to allow that psi could be some mental state such as choosing to phi (Does it have to be choosing as opposed to intending or deciding?), where phi is some ordinary physical act such as touching my nose. And you’ve said explicitly that you don’t deny that S can have at t0 the option to phi at t1, and you told me to refer to the above analysis for an analysis of such an option. So I’m assuming that you would accept:
    (2) For all physical acts phi and all mental states psi, phi-ing at a future time t1 is an option for a subject S at the present time t0 if and only if there is a psi such that S can at t0 psi at t0 and S would phi at t1 if S were to psi at t0.
    Now, we may disagree (and I’ll need to think more about what you said) whether it makes sense to talk of S’s being able at t0 to psi at t0 where psi is instantaneous, it is t0, and, as a matter of fact, S is not psi-ing, but perhaps you would accept the following, which seems weaker than (2) in that it’s neutral on this potential source of disagreement:
    (3) For all physical acts phi and all mental states psi, phi-ing at a future time t1 is an option for a subject S at the present time t0 if and only if there is a psi such that S can at t0 psi and S would phi at t1 if S were to psi.
    Perhaps, we can agree that (3) is right insofar as we ignore issues concerning the possibility that it is indeterminate whether S would phi at t1 if S were to psi.
    And, given that indeterminism is possible, I would suggest that we accept:
    (3*) For all gambles G and all mental states psi, taking G is an option for a subject S at the present time t0 if and only if there is a psi such that S can at t0 psi and S would be taking G if S were to psi.
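Schematically, the move from (2) to (3) to (3*) is just this (my regimentation, with □→ for the counterfactual conditional; the change from (2) to (3) is the dropping of the time indices on psi-ing):

```latex
% Schematic regimentation of the right-hand sides of (2), (3), and (3*);
% \Box\!\!\to is the counterfactual conditional.
\begin{align*}
(2)\quad & \mathrm{Option}_{t_0}(S,\varphi,t_1) \leftrightarrow
  \exists\psi\,\bigl[\mathrm{Can}_{t_0}(S,\psi,t_0) \,\land\,
  (S \text{ \(\psi\)s at } t_0 \mathbin{\Box\!\!\to} S \text{ \(\varphi\)s at } t_1)\bigr] \\
(3)\quad & \mathrm{Option}_{t_0}(S,\varphi,t_1) \leftrightarrow
  \exists\psi\,\bigl[\mathrm{Can}_{t_0}(S,\psi) \,\land\,
  (S \text{ \(\psi\)s} \mathbin{\Box\!\!\to} S \text{ \(\varphi\)s at } t_1)\bigr] \\
(3^{*})\quad & \mathrm{Option}_{t_0}(S,G) \leftrightarrow
  \exists\psi\,\bigl[\mathrm{Can}_{t_0}(S,\psi) \,\land\,
  (S \text{ \(\psi\)s} \mathbin{\Box\!\!\to} S \text{ takes } G)\bigr]
\end{align*}
```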
    Do you have any problem with (3*)?

  21. Hi Doug,
    You write: “So you accept:
    (1) “S can at t0 phi at t1 just in case there is a psi such that S can at t0 psi at t0 and were S to psi at t0 it would be the case that S phis at t1.””
    PG: No I do not accept (1). I offered (1) as one of the ways one might try to cash out cross-temporal ability claims. (1) is the more actualist-friendly interpretation. I’m partial to something more along the lines of Way 2 that I offer above or a suitable revision thereof.
You write: “And you’re willing to allow that psi could be some mental state such as choosing to phi (Does it have to be choosing as opposed to intending or deciding?), where phi is some ordinary physical act such as touching my nose.”
    PG: This makes all the difference in the world. Choosings and decidings, I take it, are mental actions. Intendings are mental states and NOT actions of any kind. So saying I can choose (or decide) and I can intend are two very different kinds of ‘can’ claims. I’ve already registered my confusion about how to understand your ‘can intend’ locution above.
    You write: “And you’ve said explicitly that you don’t deny that S can have at t0 the option to phi at t1, and you told me to refer to the above analysis for an analysis of such an option. So I’m assuming that you would accept:
    (2) For all physical acts phi and all mental states psi, phi-ing at a future time t1 is an option for a subject S at the present time t0 if and only if there is a psi such that S can at t0 psi at t0 and S would phi at t1 if S were to psi at t0.”
    PG: No I don’t accept this. Nor would I accept a similarly modified version of my Way 2 above. Why? You are quantifying over mental states like intentions and I don’t know what the ‘can psi’ for mental states like intending is supposed to mean.
    You write: “Now, we may disagree (and I’ll need to think more about what you said) whether it makes sense to talk of S’s being able at t0 to psi at t0 where psi is instantaneous, it is t0, and, as a matter of fact, S is not psi-ing, but perhaps you would accept the following, which seems weaker than (2) in that it’s neutral on this potential source of disagreement:
    (3) For all physical acts phi and all mental states psi, phi-ing at a future time t1 is an option for a subject S at the present time t0 if and only if there is a psi such that S can at t0 psi and S would phi at t1 if S were to psi.”
    PG: No. For similar reasons as above.
    You write: “Perhaps, we can agree that (3) is right insofar as we ignore issues concerning the possibility that it is indeterminate whether S would phi at t1 if S were to psi.
    And, given that indeterminism is possible, I would suggest that we accept:
    (3*) For all gambles G and all mental states psi, taking G is an option for a subject S at the present time t0 if and only if there is a psi such that S can at t0 psi and S would be taking G if S were to psi.
    Do you have any problem with (3*)?”
    PG: I have all the same problems with this as I have with the ones that preceded it above and also I’m not sure what taking a gamble involves here.
    PG: But, all of this aside, my original point was that we’re still gonna have the problem of indeterminacy irrespective of whether indeterminism is true and as far as I can see objective chances won’t help there. (But maybe this is just a different problem that you’re not trying to address here.)

  22. Hi Pete,
I apologize for the delay in getting back to you. It’s a busy weekend. And I realize that there are a lot of points of contention between us, but I’m going to take them one at a time, the first being your suggestion that we’re going to have the problem of the relevant counterfactuals having no determinate truth value even if determinism is true.
In your example involving Black, the laws of nature and the state of the world at t0 are such as to necessitate that X will not fire at t1. So I have to ask: Is whether or not X fires at t1 under my rational control at t0? That is, is the firing of neuron X at t1 to be identified with my having some judgment-sensitive attitude (such as a desire, belief, or other intention) at t1, and are we to assume that I am at least moderately responsive to reasons for and against my having this attitude?
    On the one hand, suppose that the firing of neuron X at t1 is not to be identified with my having some judgment sensitive attitude at t1 over which I have rational control at t0. In that case, my view (which is more complicated than what is described above) implies that my saving the two is not even an option for me at t0. It’s not an option, because, in order for it to be an option on my view, it would have to be that my intending at t2 to save the two at t4 would result in my saving the two so long as I have certain other judgment-sensitive attitudes that are under my rational control at t0. But, in your example, it’s stipulated that I cannot save the two if button B is not live and that button B won’t go live unless X fires. So, under the supposition that X’s firing is not to be identified with my having some judgment-sensitive attitude, it is not the case that my intending at t2 to save the two at t4 would result in my saving the two so long as I have certain other judgment-sensitive attitudes that are under my rational control at t0. Thus, saving the two is not an option.
    On the other hand, suppose that the firing of neuron X at t1 is to be identified with my having some judgment sensitive attitude at t1 over which I have rational control at t0. In that case, the relevant counterfactual (namely: if I had both possessed the relevant background attitudes and intended at t2 to save the two at t4, I would have saved them) is true.

  23. Hi Doug,
You write: “In your example involving Black, the laws of nature and the state of the world at t0 are such as to necessitate that X will not fire at t1. So I have to ask: Is whether or not X fires at t1 under my rational control at t0? That is, is the firing of neuron X at t1 to be identified with my having some judgment-sensitive attitude (such as a desire, belief, or other intention) at t1, and are we to assume that I am at least moderately responsive to reasons for and against my having this attitude?”
PG: The firing of X at t1 needn’t be under your rational control at t0 for a world in which X fires at t1 to be one of the closest possible worlds to the actual world in which it is true that you intend at t2 to save the two at t4 (something which we can stipulate is under your rational control, however that is to be cashed out), according to something like the Lewisian semantics for counterfactuals (if I’m not mistaken about the theory, which I grant I very well may be). After all, to determine which possible world is closest to the actual world for the purposes of evaluating a counterfactual of the form “If it had been that A, it would have been that C,” we often go to possible worlds that diverge from the actual world slightly before the time of the occurrence of the event described in A (for those counterfactuals whose antecedent A concerns an event).
    You write: “On the one hand, suppose that the firing of neuron X at t1 is not to be identified with my having some judgment sensitive attitude at t1 over which I have rational control at t0. In that case, my view (which is more complicated than what is described above) implies that my saving the two is not even an option for me at t0. It’s not an option, because, in order for it to be an option on my view, it would have to be that my intending at t2 to save the two at t4 would result in my saving the two so long as I have certain other judgment-sensitive attitudes that are under my rational control at t0. But, in your example, it’s stipulated that I cannot save the two if button B is not live and that button B won’t go live unless X fires. So, under the supposition that X’s firing is not to be identified with my having some judgment-sensitive attitude, it is not the case that my intending at t2 to save the two at t4 would result in my saving the two so long as I have certain other judgment-sensitive attitudes that are under my rational control at t0. Thus, saving the two is not an option.”
    PG: I’m not sure I see how this follows. It can be that X’s firing occurs in the closest possible world that makes true the claim that you exercised the option you in fact had to intend to save the two at t4 even though X’s firing isn’t an event you have “rational control over” in the actual world.
    You write: “On the other hand, suppose that the firing of neuron X at t1 is to be identified with my having some judgment sensitive attitude at t1 over which I have rational control at t0. In that case, the relevant counterfactual (namely: if I had both possessed the relevant background attitudes and intended at t2 to save the two at t4, I would have saved them) is true.”
    PG: I’ll just stipulate that in my case the firing of neuron X at t1 isn’t identified with your having some judgment sensitive attitude at t1 over which you have rational control at t0.

  24. Hi Pete,
    You write: “The firing of X at t1 needn’t be under your rational control at t0 for it to be the case that a world in which X fires at t1 be one of the closest possible worlds to the actual world in which it is true that you intend at t2 to save the two at t4.”
    Did something I said suggest otherwise?
    You also write: “I’m not sure I see how this follows.”
Do you not see how, given your stipulations, it follows that, in the actual world that you describe (a world in which I will not save the two unless X fires at t1 and in which the laws of nature combined with the state of the world at t0 are such as to necessitate that X will not fire at t1), it is not the case that my intending at t2 to save the two at t4 would result in my saving the two?
    Or do you not see how (on my view as specified in my book and not above), my saving the two at t4 is not an option if it is not the case that my intending at t2 to save the two at t4 would result in my saving the two? If this, it’s no wonder since I haven’t specified the view here.

I guess I’m assuming that all the closest possible worlds relevant to assessing whether or not I would save the two if I were to intend at t2 to save the two at t4 are ones in which X does not fire. For I’m assuming that the relevant similarity relation is one that keeps fixed both the laws of nature and the state of the world at t0. Is this problematic? Or am I missing something else, which is, I admit, entirely possible.

  26. Hi Doug,
You write: “For I’m assuming that the relevant similarity relation is one that keeps fixed both the laws of nature and the state of the world at t0. Is this problematic? Or am I missing something else, which is, I admit, entirely possible.”
PG: If, when evaluating any counterfactual of the form “Had e occurred at t2 (for some event e which in fact DID NOT occur at t2), then it would have been that blah” in a deterministic world, we employ a similarity metric that keeps fixed both the laws of nature and the state of the world at t0 (by which I take it you mean a similarity metric that excludes from consideration any world that does not have the same laws of nature as the actual world and the exact same state as the actual world at t0), then all such counterfactuals will come out trivially true, for there will be no possible worlds which (1) match the actual world exactly at t0, (2) have the same laws of nature as the actual world, and (3) are such that e occurs at t2 in them. [This is because, if the actual world is deterministic, and e does not occur at t2 in the actual world, then any world that matches it at any time in the past (including t0) and which has the same laws of nature as the actual world will be a world in which e does not occur at t2.] If we want there to be some possible worlds in which e does occur at t2, we can’t assume the similarity metric you’re assuming.
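One way to regiment the point (my notation: write S_t(w) for the total state of world w at time t, L(w) for its laws, and @ for the actual world):

```latex
% Sketch of the triviality argument under determinism (my regimentation).
\begin{gather*}
\textit{Determinism:}\quad
  \bigl( S_{t_0}(w) = S_{t_0}(@) \;\land\; L(w) = L(@) \bigr)
  \;\Rightarrow\; S_t(w) = S_t(@) \text{ for all } t > t_0 \\[4pt]
\textit{So, if $e$ does not occur at $t_2$ in @:}\quad
  \{\, w : S_{t_0}(w) = S_{t_0}(@),\; L(w) = L(@),\;
  e \text{ occurs at } t_2 \text{ in } w \,\} = \varnothing \\[4pt]
\textit{Hence ``$e$ occurs at $t_2$} \mathbin{\Box\!\!\to} C\textit{'' is
  vacuously true, for every consequent } C.
\end{gather*}
```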

  27. Hi Pete,
    Okay, right! Duh!
But it seems to me that the relevant similarity relation will hold fixed, as much as possible, things that both are not under my rational control (such as X’s not firing) and affect whether or not my intention will be efficacious. Whether or not X fires is not under my rational control, and it does affect whether my intention will be efficacious. Whether or not Y fires is also not under my rational control, but, unlike X’s firing, it has no effect on the effectiveness of my intention. So I think that, on the relevant similarity relation (which will be one that holds X’s not firing constant), it is true that even if I had intended at t2 to save the two at t4, I would not have saved the two at t4.

  28. Hi Doug,
    You write: “But it seems to me that the relevant similarity relation will hold fixed as much as possible things that both are not under my rational control (such as X’s not firing) and affect whether or not my intention will be efficacious.”
    PG: I’m not sure I see why there should be this restriction on the similarity metric for the evaluation of counterfactuals on the standard resolution of vagueness with respect to counterfactuals. So there is some sort of intention-efficacy bias built into the semantics of counterfactuals? I guess I don’t see why I should accept this. But even so, I’m not sure I see how the conditions you stipulate here will get us out of the woods. (See below.)
    But I think there may be a deeper worry here. You say that the relevant similarity metric “will hold fixed as much as possible things that … affect whether or not my intention will be efficacious” but whether or not my intention will be efficacious is itself a counterfactual matter, the very issue it is up to such a similarity metric to settle.
You write: “Whether or not X fires is not under my rational control and it does affect whether my intention will be efficacious. Whether or not Y fires is also not under my rational control but, unlike X’s firing, it has no effect on the effectiveness of my intention.”
    PG: I’m perplexed. Whether X fires certainly does affect whether my intention to save the two will be efficacious—if it fires, my intention (identified with Z’s firing) will be efficacious. But so too does whether Y fires affect whether that very intention will be efficacious—if it fires, my intention (again identified with Z’s firing) won’t be efficacious.

  29. Hi Pete,
    I’m following Ben Bradley (see his Well-Being and Death) in thinking both that a counterfactual could be true relative to one similarity relation but false relative to another and that which similarity relation is relevant depends on what, in the given context, we want to keep fixed. And I think that, in the context of determining what an agent’s options are, it’s important to keep fixed events that necessitate that an agent’s intention will be ineffective. That’s what the “event” of X’s not firing does. For you stipulated that the pressing of B at t3 (and thus the intention to save the two by pressing B at t3) will save the two only if X fires. Thus, your stipulations imply that if both X were not to fire and I were to intend to save the two, I would not save the two. Given that it follows from your stipulations, it seems that I must take this counterfactual to be true. And I don’t see how my preferred similarity relation is inconsistent with this counterfactual (if both X were not to fire and I were to intend to save the two, I would not save the two) being true. So I’m not seeing your deeper worry.
    You write: “I’m perplexed. …But so too does whether Y fires affect whether that very intention will be efficacious—if it fires, my intention (again identified with Z’s firing) won’t be efficacious.”
I’m puzzled too. Your stipulations entail that my intention to save the two by pressing B at t3 will save the two if and only if X fires. So, presumably, button B will go live if X fires regardless of whether or not Y fires. And, presumably, button B will not go live if X does not fire regardless of whether or not Y fires. So it seems to me that whether my intention will be efficacious has everything to do with whether X fires and nothing to do with whether Y fires.

  30. Just to be clear: X’s not firing does render my intention ineffective, but Y’s not firing does not. This is why, on my proposed similarity relation, we hold X’s not firing, but not Y’s not firing, fixed.

  31. Hi Doug,
    You write: “I’m following Ben Bradley (see his Well-Being and Death) in thinking both that a counterfactual could be true relative to one similarity relation but false relative to another and that which similarity relation is relevant depends on what, in the given context, we want to keep fixed.”
PG: Yeah. I get completely lost with this stuff. Ben’s view entails that there is no such thing as harming or an event’s being bad for someone. The only relations there are are harming-relative-to-R1 (being-bad-for-S-relative-to-R1), harming-relative-to-R2, harming-relative-to-R3, etc. (R1, R2, R3, etc. being different similarity relations)… I don’t know what any of these relations is. True, they look like what I’d ordinarily think of as the harming relation, but that’s only because you’ve (Ben has) given them names that are similar to the name we ordinarily give to the relation I think of when I’m talking about someone’s harming someone else. Your view will have a similar result for the relation I’d ordinarily call the “being able to” relation. For you there is no such thing as what my options are, or what I can do, or what I am able to do. All there are are relations such as “option-relative-to-R1”, “option-relative-to-R2”, and “option-relative-to-R3”. I say I have no idea what these relations are.
    You write: “And I think that, in the context of determining what an agent’s options are, it’s important to keep fixed events that necessitate that an agent’s intention will be ineffective.”
    PG: Given that there are no options, only option-relative-to-R1, option-relative-to-R2, option-relative-to-R3, I have no idea what you are talking about here. What could your “it’s important” claim even mean here??
    You write: “And I think that, in the context of determining what an agent’s options are, it’s important to keep fixed events that necessitate that an agent’s intention will be ineffective.”
PG: No, you don’t. New case: Black will make the wire go live only if I intend to press it. In the actual world you don’t hold fixed the fact that the wire is actually not live in this case, but that is an event which necessitates that an agent’s intention will be ineffective. Strictly speaking, what you must be claiming is that you hold fixed all actually occurring events that necessitate the ineffectiveness of an intention and that occur prior to the time of the event in the antecedent of the relevant counterfactual. But why relativize to that time as opposed to the time of the divergence miracle? I don’t see what motivates this.
PG: New question – Given that it seems that you, in a way similar to Bradley’s myriad harming relations, are going to end up with just a slew of option-relative-to-R relations as opposed to a single option relation, and you also think that what moral obligations a person has depends on what “options” she has, are you going to end up with just a host of relations morally-obliged-to-phi-relative-to-R1, morally-obliged-to-phi-relative-to-R2, morally-obliged-to-phi-relative-to-R3, etc.?

  32. Hi Pete,
    Just when I think that I may be crawling out of a hole, it becomes clear that I’m just digging it deeper. So let me back way up (and *maybe* get out of the hole).
I want to say that (1) if S will not phi at t4 no matter what judgment-sensitive attitudes S has at t1, then S’s phi-ing at t4 is not an option for S at t1. (2) In your case (as you’ve stipulated it to be), I will not save the two at t4 no matter what judgment-sensitive attitudes I have at t1. (I’m including, here, your subsequent stipulation that the firing of neuron X at t1 isn’t identified with my having some judgment-sensitive attitude at t1 over which I have rational control at t0.) And I’m assuming that (3) when we want to know whether I would save the two if I had this or that set of judgment-sensitive attitudes, we must hold constant everything that is not a judgment-sensitive attitude, such as the fact that X does not fire in the actual world.
    Do you think that I have to make any controversial claims about counterfactuals to accept (1), (2), and (3)?

  33. Hi Doug,
You write: “I want to say that (1) if S will not phi at t4 no matter what judgment-sensitive attitudes S has at t1, then S’s phi-ing at t4 is not an option for S at t1. (2) In your case (as you’ve stipulated it to be), I will not save the two at t4 no matter what judgment-sensitive attitudes I have at t1. (I’m including, here, your subsequent stipulation that the firing of neuron X at t1 isn’t identified with my having some judgment-sensitive attitude at t1 over which I have rational control at t0.) And I’m assuming that (3) when we want to know whether I would save the two if I had this or that set of judgment-sensitive attitudes, we must hold constant everything that is not a judgment-sensitive attitude, such as the fact that X does not fire in the actual world.
    Do you think that I have to make any controversial claims about counterfactuals to accept (1), (2), and (3)?”
    PG: [I think the time indices are getting a little mixed up here. I’ve been focusing exclusively on counterfactuals having to do with intendings at t2 and thus have only been looking at your account of what options a person has at t2. But you’re now focusing on t1. The reason t1 is part of my case is that I was trying to exploit the fact that counterfactuals involving antecedents with events occurring at t2 often diverge with respect to the actual world earlier than t2.]
You’re not suggesting that these claims aren’t counterfactuals, are you? I believe your (1) can only be understood in terms of counterfactuals, and I also don’t believe (2) is true. I don’t think the case as I’ve described it is one in which it is true that I will not save the two at t4 no matter what judgment-sensitive attitudes I have at t1. As I set up the case, I took it that the following was true: If I had intended at t2 to save the two at t4, I MIGHT have saved them. It’s just that neither of the corresponding would-counterfactuals is true. (In fact, as I’ve been focusing exclusively on counterfactuals involving only antecedents concerning events at t2, I’ve not said anything about counterfactuals concerning what would or would not happen at t4 had things at t1 been different than they in fact were.)
(3) clearly is about counterfactuals. But what motivates the idea that if I want to know whether I would save the two if I had this or that set of judgment-sensitive attitudes, we must hold constant everything that is not a judgment-sensitive attitude? When we want to know whether there would have been a nuclear holocaust at t2 if Nixon had pressed the button at t1, we don’t hold constant everything in the actual world that is not a button-pushing at t1. For instance, we don’t hold fixed certain neurological facts just prior to t1. We let there be a divergence from the actual world a little prior to t1, a divergence having to do with some neuron or other firing in some corner of Nixon’s brain.
PG: The point I was trying to make with this complicated example, however, could be made more simply if you just allow me that there are two distinct neural states, either of which would be identical with my intending to phi at t2: neuron X firing at t2 and neuron Y firing at t2. (There shouldn’t be any particular concern about intra-personal multiple realizability here, I don’t think.) It could turn out that I can intend to phi at t2 (for whatever sense of ‘can’ you have in mind for ‘can’ with respect to intentions), but there just is no fact of the matter as to which of X or Y would fire at t2 if I had intended to phi at t2 (in other words, the world in which X fires at t2 and the world in which Y fires at t2 are equally close to the actual world). And now we can just have Black standing in the wings, waiting to activate the button at t2.5 depending on whether X fires at t2 or not. In such a case, it’ll be intuitive that it’s neither true that if I had intended to save the two, I would have saved them, nor true that if I had intended to save the two, I would not have saved them.
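On the standard Lewisian duality between would- and might-counterfactuals, an exact tie between the X-world and the Y-world yields just this pattern. Schematically (my regimentation; read A as “I intend at t2 to save the two,” C as “I save the two,” □→ as the would-counterfactual, and ◇→ as the might-counterfactual):

```latex
% Lewis's duality: (A \Diamond\!\!\to C) is defined as \neg(A \Box\!\!\to \neg C).
\begin{gather*}
\neg(A \mathbin{\Box\!\!\to} C) \;\land\; \neg(A \mathbin{\Box\!\!\to} \neg C)
  \qquad \text{(neither would-counterfactual is true)} \\
\text{equivalently, by duality:}\qquad
(A \mathbin{\Diamond\!\!\to} \neg C) \;\land\; (A \mathbin{\Diamond\!\!\to} C)
  \qquad \text{(both might-counterfactuals are true)}
\end{gather*}
```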

  34. Hi Pete,
    I’m not suggesting that the claims aren’t counterfactuals. But I don’t see how you can accept (3) but deny (2). If we are to hold X’s not firing just before t2 fixed, then isn’t it true that I will not save the two at t4 no matter what judgment-sensitive attitudes I have at t1 or at any other time?
    Regarding your new case, do I have rational control over whether I intend to save the two by X’s firing or by Y’s firing? I assume not. So my view would imply that it’s indeterminate whether I had the option to save the two. And so it’s indeterminate whether I was obligated to save the two. Is this bad? I don’t see why it should be worrisome that, if it is indeterminate whether I could have saved the two had I intended to save the two, then it is indeterminate whether I was (objectively) obligated to have saved the two.

  35. Hi Doug,
    You write: “I’m not suggesting that the claims aren’t counterfactuals. But I don’t see how you can accept (3) but deny (2). If we are to hold X’s not firing just before t2 fixed, then isn’t it true that I will not save the two at t4 no matter what judgment-sensitive attitudes I have at t1 or at any other time?”
    PG: True, if (3) is true, then (2) is; but, as I said, I don’t see what licenses you to (3). Again, why the asymmetry between how we treat counterfactuals like the Nixon-counterfactual and counterfactuals concerning my intentions?
    You write: “Regarding your new case, do I have rational control over whether I intend to save the two by either X’s firing or Y’s firing. I assume not.”
    PG: Right.
    You write: “So my view would imply that it’s indeterminate whether I had the option to save the two. And so it’s indeterminate whether I was obligated to save the two. Is this bad? I don’t see why it should be worrisome that if it is indeterminate whether I could have saved the two had I intended to save the two, then it is indeterminate whether I was (objectively) obligated to have saved the two.”
    PG: All I ever was trying to establish is that you’re going to have to let your options go indeterminate in certain cases (my original comment). You wanted to avoid that, it seemed to me, and thought that an appeal to gambles could deal with the problem. But here I’m not sure how you’re going to get an appeal to gambles going because the indeterminacy arises not from indeterminism with respect to the actual laws of nature, but from the structure of the space of possible worlds, and I have no idea how to understand objective chances with respect to facts concerning possible world space.

  36. Hi Pete,
    You write: “I don’t see what licenses you to (3). Again, why the asymmetry between how we treat counterfactuals like the Nixon-counterfactual and counterfactuals concerning my intentions?”
    So you don’t see what licenses me to insist on holding X’s not firing fixed. Fair enough. But I don’t see what licenses you to refuse to hold X’s not firing fixed. And I don’t see that there is an asymmetry between the way that I’m treating the counterfactual ‘what would have happened if I had had a different set of judgment-sensitive attitudes’ and the way that you’re treating counterfactuals like the Nixon-counterfactual. Presumably, we’re both holding fixed as much as possible. And it seems to me that we can hold fixed that X does not fire while inquiring as to what would have happened if I had had a different set of judgment-sensitive attitudes than those that I in fact had, for I can intend to save the two without X’s firing.
    You also write: “All I ever was trying to establish is that you’re going to have to let your options go indeterminate in certain cases (my original comment). You wanted to avoid that, it seemed to me, and thought that an appeal to gambles could deal with the problem.”
    Okay, I accept that by appealing to gambles I will not avoid indeterminate options altogether. Thanks for making this point so vividly clear to me. But this seems fine to me so long as I don’t get indeterminate options in cases where it seems clear that my objective obligations are determinate. And I don’t find the idea that my obligations are indeterminate in your case (given that my options are indeterminate in your case) problematic. But I do find it problematic to suggest that my objective obligations are indeterminate in the following sort of case:
    Suppose that there is only one way for me to try to pick up the coffee cup and take a sip (say, by intending or deciding or choosing to do so), and suppose that it’s indeterminate whether, given this attempt/intention, I will either (a) pick up the coffee cup and take a sip (75% chance) or (b) pick up the loaded revolver and shoot you (25% chance). In this case, I don’t want to say that my objective obligations are simply indeterminate. I want to say that I ought not to try to pick up the coffee cup. I’m not exactly clear how I want to formulate all this. But, in any case, it seems that this kind of case calls out for an account of my options that doesn’t leave my obligations indeterminate, in a way that your type of case doesn’t. That is, in your case I’m happy to say that my objective obligations are indeterminate, but I’m not happy to say that my objective obligations are indeterminate in this coffee-or-revolver case.
