Hyperplans and vagueness

We’ve been having a reading group on Gibbard’s Thinking How
to Live. It’s been really interesting to go back to it after all the recent
discussion of it. At the heart of Gibbard’s expressivist
semantics lie ‘the hyperplans’. This is a technical notion that is supposed to
help elucidate the content of our normative judgments. I’ve started
to worry about whether there are, or could be, any hyperplans as Gibbard
understands them. I’m uncertain how big a worry this would be for him.
So, after quickly explaining my worry, I’ll leave you with some options
for how he might proceed.

According to Gibbard, hyperplans have two central features.
They can be understood as the following two claims:

  1. Hyperplans are maximal contingency plans (54).
  2. Plans must be couched in recognitional concepts (104).

First, a couple of words about what these claims mean and
what motivates them. Claim 1. says that hyperplans are fully decided and complete
states. A planner (a hyperplanner) who accepts just one hyperplan has decided,
for every conceivable situation he could be in, which single action to do. He has
thus ruled out all other options in every possible situation of acting.

Gibbard is trying to give an
account of the content of normative utterances in terms of the
mental states they would conventionally express. These expressed attitudes
would thus have to have the kind of logical qualities (of conflicting with and
entailing one another) that would explain the ordinary logical features of indicative
sentences. So, he tries to give an account of the content of normative
utterances in terms of the attitudes of allowing some hyperplans and ruling out
(or disagreeing with) other hyperplans. For instance, roughly, to say that Ben
ought to φ is to rule out all the hyperplans on which one does not φ in
Ben’s situation.
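Gibbard’s apparatus is informal, but the ruling-out idea can be sketched with a toy model. This is my own illustration, not Gibbard’s formalism: the situation names and the two-action menu are made up, and a hyperplan is modelled as a total assignment of an action to every situation.

```python
# Toy model (my illustration, not Gibbard's own formalism): a hyperplan
# assigns exactly one action to every situation, and a normative claim
# is modelled by the hyperplans it allows vs. the ones it rules out.
from itertools import product

SITUATIONS = ["ben's situation", "some other situation"]
ACTIONS = ["phi", "not-phi"]

# Every total assignment of an action to each situation counts as a
# (toy) hyperplan: fully decided, with no situation left open.
HYPERPLANS = [dict(zip(SITUATIONS, choice))
              for choice in product(ACTIONS, repeat=len(SITUATIONS))]

def allowed_by(claim_situation, required_action):
    """Hyperplans compatible with 'one ought to do required_action
    in claim_situation'; all the others are ruled out (disagreed with)."""
    return [h for h in HYPERPLANS if h[claim_situation] == required_action]

# 'Ben ought to phi' allows exactly the hyperplans that phi in Ben's
# situation and rules out the rest -- mirroring the way a proposition
# carves up the space of possible worlds.
allowed = allowed_by("ben's situation", "phi")
ruled_out = [h for h in HYPERPLANS if h not in allowed]
```

On this toy picture, set operations on the hyperplan space are what would give normative claims the logical behaviour of ordinary propositions, which is the possible-worlds analogy at issue in the next paragraph.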

The hope is that, in virtue of
this, he can provide a semantics for normative claims that resembles
possible-worlds semantics closely enough that the logical features of the claims are
preserved. Of course, James Dreier and Mark Schroeder have written much on this,
suggesting that Gibbard’s account does not work in the end. It seems that, in
order to account for negation, Gibbard must allow that hyperplanners can
have distinct attitudes of indifference towards plans. And it’s not clear
whether even that solves the problem. This is not my worry, though. My worry is whether he can even have the tools this theory requires.

What about 2.? Gibbard makes this
claim as a part of the argument that planners are committed to thinking that
natural properties constitute being okay to do even if there is a difference
between normative and naturalistic concepts. I think the motivation for saying
this is the following. Plans are mental states which we form for a purpose, and
not worldly entities like possible worlds. This means, firstly, that they must be
couched in terms of concepts (and not in terms of properties).

Plans are also something the
planner forms for herself to follow. A plan couched in terms of non-recognitional
concepts is not something one could follow. Following requires being able
to recognize what the given plan says about the situation one believes
oneself to be in and the alternatives one has in it. So, to follow a plan, one must be
able to match one’s conception of the circumstances to the descriptions of the
circumstances in the plan. This is why the concepts of the plan cannot outstrip
one’s recognitional capacities. As Gibbard puts it, we form thoughts of what
to do with concepts we can use in recognising our circumstances and
alternatives.

Gibbard is explicit that this
goes for the hyperplans too: ‘only recognitional concepts figure in plans fully
specified’.

So, here’s the obvious worry. All
recognitional concepts there are or could be are vague
concepts. For any concept that allows us to recognise in some cases
that it applies and in others that it doesn’t, there are going to be cases in
which we fail to recognise either that it applies or that it doesn’t. I
take it that this is a basic fact about our concepts and recognitional abilities. And
it goes all the way to scientific concepts too: true, they are
recognitional concepts as Gibbard says, but they too are vague.

This means that, if a hyperplan
is couched in recognitional concepts (as 2. requires), then there will be
situations in which some option is neither one the plan says to do
nor one the plan says not to do. As a result, the hyperplan won’t be fully
decided (contra 1.) – and thus not a hyperplan after all. Conversely, if a
hyperplan is a fully decided, complete state (as 1. requires), then it cannot
be couched in recognitional terms (contra 2.), which create undecidedness via
unavoidable vagueness. So, there won’t be any plans that satisfy both 1.
and 2.

To illustrate this, imagine that
I am decided on going to the beach if it is warm and to the cinema if it is not
warm. There are still going to be cases in which, as far as I am able
to recognise, the circumstances are in between – when it’s neither warm nor not
warm. For these cases, my plan won’t tell me what to do. And, no matter how I
try to sharpen my plan, it’s not clear that I could ever get rid of this
sort of case while still using concepts I could use to recognise other
cases. Assuming that there’s higher-order vagueness, even making a contingency
plan for the cases when it’s neither warm nor not warm will not help.
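The clash between 1. and 2. can be made vivid with a small sketch. Again this is my own illustration, and the 15/25-degree cut-offs are arbitrary stand-ins for the limits of recognition: a plan couched in a vague recognitional concept ends up being a partial function, so the planner who accepts it is not fully decided.

```python
# Toy sketch (my illustration; the 15/25 degree cut-offs are arbitrary
# stand-ins for where recognition gives out): a plan couched in a vague
# recognitional concept fails to cover every situation.

def recognise_warm(temp_c):
    """Three-valued recognitional verdict on 'warm'."""
    if temp_c >= 25:
        return "warm"
    if temp_c <= 15:
        return "not warm"
    return "borderline"  # neither recognisably warm nor recognisably not warm

def beach_plan(temp_c):
    """The post's plan: beach if warm, cinema if not warm."""
    verdict = recognise_warm(temp_c)
    if verdict == "warm":
        return "beach"
    if verdict == "not warm":
        return "cinema"
    return None  # the plan is silent here: it is not fully decided

# A fully decided (hyper)plan would return a verdict for every case;
# the borderline band shows this plan does not.
fully_decided = all(beach_plan(t) is not None for t in range(0, 36))
```

Adding a clause for the borderline band (“stay home if I can’t tell”) would just push the problem up a level, since “can’t tell” is itself a vague recognitional concept – the higher-order vagueness point above.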

 

So, what could Gibbard do?

a)      He could give up the idea that hyperplans are fully decided. They could be
states that are as decided as possible for us but that still would not say what to do in
every case. Maybe even such almost fully decided states could help him give
an account of the content of our normative judgments. Our judgments would consist in
either allowing or ruling out these almost-hyperplans. Maybe this would fit the
vagueness of our judgments too.

b)      He could give up the requirement that hyperplans are couched in recognitional
terms, and thus the idea that they are plans proper. They could still play the
right theoretical role in his theory (perhaps – I’m not sure what would happen to
the natural constitution argument in this case).

c)      Finally, I’m not always sure how much he needs the hyperplans in the first place. A lot
of the work can be done with smaller contingency plans. So, maybe it
wouldn’t matter for him that there are no hyperplans.

d)      Hyperplanners have special concepts and recognitional skills that get rid of
all the vagueness. But would this be conceivable? And how could we then disagree with them using our concepts?

7 Replies to “Hyperplans and vagueness”

  1. Interesting question, Jussi.
    Is this yet another route Gibbard might take?
    (e) He could say that any complete hyper-plan would say something about what course(s) of action to take in the contingency of not being sure exactly what circumstances one is facing.

  2. Well, I was thinking about that and I’m not sure. That’s why I quickly brought up the bit about higher-order vagueness. Say that in the beach case, I plan to stay at home if I cannot determine whether it’s warm or not warm. The concept of being in that state will itself have to be a recognitional concept – so I have to be able to recognise whether I am in it or not. And, usually, sure enough, I will be.
    However, that concept too will be a vague one. There will also be borderline cases in which I will not be able to determine whether or not I am able to determine if it’s warm or not warm. This is because there are not supposed to be completely luminous psychological states either. Of course, I can plan for these contingencies too, but then we get vagueness again one level up. So, it seems that, as long as I make the plans in recognitional concepts, I’ll never get rid of the indeterminacy.

  3. Maybe the hyperplanner will need something more detailed than the “warm/not warm” distinction for the plan to work. It would be much easier to follow a hyperplan of the sort you mention with clearer parameters: “I will go to the beach if it is 72 degrees Fahrenheit or above, and I will go to the movies if it is 71 degrees Fahrenheit or below” would be considerably less vague. I’m not sure whether those two options fit the bill of ‘recognitional concepts’ pace Gibbard, and it seems overly detailed, but it would prevent the vagueness worry.
    My inclination, from your options, is that Gibbard could go with b) and scrap (or massively alter) the idea of recognitional concepts.
    Option d) would seem to turn Gibbard into some kind of intuitionist, which would be weird but not implausible.

  4. It’s true that with accurate recognitional concepts you could get rid of a lot of the indeterminacy. I think that, in order to get fully decided plans for all contingencies, you’d need to get rid of all of it. It doesn’t seem like temperature in degrees is a recognitional concept. We cannot directly recognise how many degrees it is. But, of course, we can use measuring instruments, and we could plan how to act given what the measuring devices say. You could have a plan – if the thermometer (or several of them) shows 72 or more, then beach; if it shows anything less, then cinema.
    This would get us back to recognitional concepts. However, given the vagueness of the concept of the thermometer showing 72, I’m not sure this would give us a plan for all contingencies. What if the thermometer is flickering between 72 and less, and so on?
    Also, I’m worried that even the definition of the scientific concept of its being 72 degrees is going to leave us with some borderline cases no matter how it is defined. So, even if we were not using recognitional concepts but rather concepts that can outstrip our recognitional abilities, it’s not clear that even then we would get rid of all indeterminacy in plans.
    I’m also starting to worry about something regarding b) and d). Not sure how to put this. But they both lead to the idea that the hyperplans are couched in concepts different from ours. Then I start to worry that his account of our normative thoughts would be an account given in terms of our attitudes of disagreeing with planners whose plans are couched in concepts very different from anything like our concepts. But how could we have attitudes of disagreeing with, or ruling out, plans that are couched in concepts inaccessible to us? I know this is a mere rhetorical question to which there might be an answer.

  5. Jussi,
    I’m not totally convinced by your reply to Sven. Is the second-order vagueness really going to be a problem? Can’t I just have a plan about what to do if I feel any vagueness at all? I say I’ll go if it’s warm. If I am certain it is warm, I’ll go. If I am uncertain whether it is warm, I’ll go, too. What if I’m uncertain whether or not I’m really uncertain that it’s warm? Heck, I’ll go then, too! There is a regress, but it doesn’t seem like a troubling one. I plan to φ if I feel any uncertainty at any level.

  6. Hi Jussi,
    It’s an interesting question!
    One thought I have. Even if hyperplans are couched in terms of recognitional concepts—why should that mean that there are cases the hyperplan doesn’t cover? Just to take your example: I am decided on going to the beach if it is warm and to the cinema if it is not warm. Take a borderline warm/not-warm situation. By the law of excluded middle (which many theories of vagueness will accept), it’s either a warm situation or a not warm situation. Either way, my plan gives a verdict. I might not know what that verdict is, in the situation—but that’s not a failure of completeness of the plan, but rather a failure on my part to have the capacity to implement it in a small range of hard cases.
    Now one could say that the occurrence of these situations shows that the concept of warmth isn’t recognitional. We might say, for example, that for C to be recognitional, x is C iff it is feasible to know that x is C. And Wright has argued that principles like this are inconsistent with LEM. But it’s *not* obvious that these strong principles are the right way to think about recognitional concepts (here’s a less demanding alternative—in *clear* cases of C, in the right circumstances, one can “tell by looking” whether or not C obtains).
    So here’s one question for people who know more about Gibbard than me (I’m actually just reading the book right now): what formal features does he need to build into “recognitional” for it to achieve his purposes? From the glosses you’ve given, the minimal characterization seems ok to me: a hyperplan couched in terms of minimally recognitional concepts is still concepty, and still could be something one could follow (though not something one could infallibly follow—sometimes one will hit a hard case).
    So it strikes me that what you’ve really got here is a place where what you say depends a great deal on (a) your account of recognitional concepts and (b) your theory of vagueness/indeterminacy. Illustrative poles: it’s hard for me to see what someone like Wright could make of the idea of a “complete specification” of a hyperplan in terms of recognitional concepts. So in that setting I agree with you. But by contrast Williamsonian epistemicists shouldn’t have any trouble. I think for others (e.g. supervaluationists) it’s a very delicate and involved question—if I had to guess, I have the feeling that broadly classical theories won’t pose a problem for Gibbard.

  7. Thanks Robbie, this is extremely helpful. I agree with you that if a concept can both be recognitional and have sharp boundaries, then the plan itself would give a verdict for every situation, and a person who accepted that plan would be fully decided on what to do in each situation. I’m sometimes sceptical about whether concepts could have sharp boundaries in the way the epistemic views assume. It would also be interesting to think more about how this works on the supervaluationist views. I think it’s fairly interesting already that Gibbard’s view ties him to specific views about vagueness.
    A couple of other points. First, given that a hyperplan includes a contingency plan for all possible situations, this issue would equally arise for all (or almost all) of the hyperplans Gibbard needs for the semantic framework.
    On the question of what formal features he has to build into ‘recognitional’: I think you are right about this. On page 107, Gibbard has an example on the basis of which he concludes that even if one perfectly carries out the plan in all its recognisable features, one can make mistakes. So, maybe he can say that, in terms of what the formal semantic system and logic require, all he needs is achieved with minimally recognitional concepts that can still guide us to actions a lot of the time, even if not always.
    However, I do think Gibbard brings up a worry for this solution himself on pages 107-8. He says that much more is at issue than mere logic when we think about how to live.
    Here’s what I think he has in mind. Let’s take the recognitional concepts warm and not warm and understand them as having sharp boundaries even if we are not able to tell where the boundary is (though we can recognise the definitely warm and the definitely not warm). Let’s say that, unbeknownst to us, that boundary is at 25.00000000… So, if it is 25.00000001 it’s warm, and if it’s 24.99999999 it’s not warm (assume also that this is beyond our means of measurement). Yet, from the perspective of planning what to do, such a difference between the two states seems ‘utterly irrelevant’. From the point of view of sheer logic it would seem fine.
    I’m not sure how to formulate this uneasiness I have in mind. But, if recognitional concepts have sharp boundaries that we cannot recognise, then sheer logic commits us to such irrelevant differences making a real difference to what is the thing to do in the cases of indeterminacy. So, I guess I would be happier if the kind of differences that make one option the thing to do, and another option not, were somehow ones that matter (when what is at issue is how to live). And the kind of differences that matter should be recognisable in the Wrightian sense. But then they cannot be sharp differences. Sorry, this probably wasn’t as clear as it could have been…

Comments are closed.