# Logic Help

Consider:

(1) If x is morally wrong and S freely does x, then S is blameworthy for doing x.

(2) If x is rationally optimal, then S is not blameworthy for freely doing x.

(3) Therefore, if x is morally wrong, then x is not rationally optimal.

Skorupski (1999, 170) claims that we can derive (3) from (1) and (2). Intuitively, this seems right, but how exactly is the derivation supposed to go? My logic is a bit rusty.

## 25 Replies to “Logic Help”

1. anonymoustinternetuser says:

If x is morally wrong [and S freely does x], then S is blameworthy for doing x.
If S is blameworthy for doing x, then S is not not blameworthy for doing x.
If S is not not blameworthy for doing x, then x is not rationally optimal.

2. Brad C says:

It *seems* to follow… Here is an informal gloss:
For the conclusion to be false, there must be some x for which x is morally wrong and x is rationally optimal.
There are two pertinent types of case: those in which the action is done freely and those in which it is not done freely.
First, consider cases in which S does do x freely. In these cases x is wrong and is done freely. By 1 and MP, the agent is not blameworthy for doing x. And, by 2 and MT, x is not rationally optimal. This contradicts the assumption.
Note: This uses an additional assumption — namely that if you are not blameworthy for doing x then you are not blameworthy for freely doing x.
Second, consider cases in which x is wrong but not freely done. If S does not freely do x, then S cannot be blameworthy for freely doing x. So x is not rationally optimal (by MT and 2). This contradicts the assumption.
Right?

3. Brad C says:

Ugg… Wrong!
The second case is wrong. The following is a counterexample to the initial proof: x is wrong, not freely done, but rationally optimal.
AND: Typo in the first case: should be “By 1 and MP, the agent is blameworthy for doing x”.

4. Brad C says:

And you thought YOUR logic was rusty!

5. Dale Dorsey says:

I got it by an indirect natural deduction, but I fudged it a little. The phrase “and S freely does x” is tricky. One could either delete it from the premises or add it to the conclusion (that’s what I did), so the conclusion reads “if x is wrong, and S freely does x, then it’s not the case that x is rationally optimal.” I’m not sure that hurts the overall import of the argument, though.
I’ll skip a few of the more pedantic steps.
1. ∀x((Wx & Sfx) → Sbx) [Premise] (Wx: x is wrong; Sfx: S freely does x; Sbx: S is blameworthy for x).
2. ∀x((Rx & Sfx) → ~Sbx) [Premise] (Rx: x is rationally optimal)
3. ~∀x((Wx & Sfx) → ~Rx) [Assumption for IP]
4. ∃x~((Wx & Sfx) → ~Rx) [From 3, quantifier exchange]
5. ~((Wa & Sfa) → ~Ra) [From 4, existential instantiation]
6. Wa & Sfa [From 5]
7. Ra [From 5]
8. (Wa & Sfa) → Sba [From 1, universal instantiation]
9. (Ra & Sfa) → ~Sba [From 2, universal instantiation]
10. Sba [From 6, 8, MP]
11. Sfa [From 6]
12. ~Sba [From 7, 9, 11, MP]
13. ∀x((Wx & Sfx) → ~Rx) [From 10, 12, IP, discharging 3]
As Brad C writes, without adding that bit into the conclusion, it isn’t valid. The argument as is leaves open the possibility that a wrong action could be rationally optimal if not freely done. And that’s reflected in this proof, I think.
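Dale's point can be confirmed mechanically. Because every line of the amended argument is a universal generalization in the same variable, its first-order validity reduces to the propositional validity of an arbitrary instance, which a sixteen-row truth table settles. Here is a sketch (an editorial illustration in Python, not part of the original thread; the `implies` helper just encodes the material conditional):

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

# Arbitrary instance of the amended argument (w = Wa, f = Sfa, r = Ra, b = Sba):
#   1. (w & f) -> b
#   2. (r & f) -> ~b
#   C. (w & f) -> ~r
valid = all(
    implies(implies(w and f, b) and implies(r and f, not b),
            implies(w and f, not r))
    for w, f, r, b in product([True, False], repeat=4)
)
print(valid)
```

Every assignment satisfying both premises satisfies the strengthened conclusion, so the check prints `True`.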

6. Mike says:

The proof looks invalid to me, and there is a simple countermodel for it in propositional logic. Obviously, such countermodels are not in general decisive, but this one looks right.
(1) If x is morally wrong and S freely does x, then S is blameworthy for doing x.
(2) If x is rationally optimal, then S is not blameworthy for freely doing x.
(3) Therefore, if x is morally wrong, then x is not rationally optimal.
1. If p & q then r
2. If z then ~r
3. If p then ~z
It’s invalid for a simple reason. It might be true that (i) X is morally wrong, (ii) X is rationally optimal and (iii) X is not freely performed. In that case it might also be true that (iv) S is not blameworthy for doing X. That makes all of the premises true and the conclusion false. That countermodel invalidates the inference.
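Mike's countermodel can be checked by brute force. The sketch below (an editorial illustration in Python, not part of the thread) enumerates all sixteen assignments to p, q, r, z and collects those that make both premises true and the conclusion false:

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

# Scheme from the comment:
#   1. (p & q) -> r
#   2. z -> ~r
#   C. p -> ~z
countermodels = [
    (p, q, r, z)
    for p, q, r, z in product([True, False], repeat=4)
    if implies(p and q, r)       # premise 1 holds
    and implies(z, not r)        # premise 2 holds
    and not implies(p, not z)    # conclusion fails
]
print(countermodels)
```

The single countermodel found has p true, q false, r false, z true: exactly Mike's case of a wrong, unfree, unblamed, yet rationally optimal act.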

7. Doug Portmore says:

Okay. I’m convinced that Skorupski’s argument is invalid as it stands, but Skorupski seems to be onto something. So, now, I want to try to come up with something that captures the spirit of his argument but is valid. And, also, I want to make sure that the conclusion is (3), and not some amended version of (3).
So, perhaps, this works:
(1′) If x is wrong, then someone can, under suitable conditions, be blameworthy for doing x.
(2′) Under no conditions can someone be blameworthy for doing x if x is rationally optimal.
Therefore, (3) if x is morally wrong, then x is not rationally optimal.
Is this valid? And does this get to the spirit of Skorupski’s argument for (3)?

8. Dale Dorsey says:

I’m not sure.
I think it depends on how you translate “under suitable conditions”. One could read “under suitable conditions” to include “rationally impermissible”, in which case you could have blameworthy immorality and no blameworthy rationality so long as “being rationally optimal” is an excusing condition from being morally blamed. But I take it that’s not what you had in mind. You were trying to distinguish the modal properties of “morally wrong” and “rationally optimal”. Why not say this?
1. If x is wrong, there is some condition under which one could be blamed for x.
2. If x is rationally optimal, there is no condition under which one could be blamed for x.
3. Hence, if x is morally wrong, x is not rationally optimal.
That would ensure the translation of (1) and (2) as:
1. ∀x(Wx → ∃y(Cy & Sbxy)) (Cy: y is a condition; Sbxy: S is blameworthy for x under y)
2. ∀x(Rx → ~∃y(Cy & Sbxy))
This ensures a valid indirect proof. The other way doesn’t:
1. ∀x(Wx → (~Sbx → ∃y(EBy))) (EBy: S is excused from blame based on y)
2. ∀x(Rx → ~Sbx)
(Boy, this comment was awfully nerdy…)
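Dale's first pair of translations can be checked without full first-order machinery: the subformula ∃y(Cy & Sbxy) occurs as an unbroken unit in both premises, so for any fixed x it behaves like a single proposition e, and the residue is an eight-row truth table. A sketch (editorial illustration, not part of the thread):

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

# For a fixed x, write e for the whole subformula ∃y(Cy & Sbxy):
#   1. w -> e
#   2. r -> ~e
#   C. w -> ~r
valid = all(
    implies(implies(w, e) and implies(r, not e), implies(w, not r))
    for w, r, e in product([True, False], repeat=3)
)
print(valid)
```

If w and r both held, premises 1 and 2 would force e and ~e at once, so the check prints `True`.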

9. Jamie says:

I think you can fix it and preserve the spirit like this:
(J1) If x is morally wrong, then freely doing x is blameworthy.
(J2) If x is rationally optimal, then freely doing x is not blameworthy.
Hm, maybe Skorupski wouldn’t like the idea of an action being blameworthy, rather than a person being blameworthy for an action? But I don’t see why he wouldn’t like it, so the J-version looks okay.
Now the derivation is easy, right? Contrapose (J2).

10. Doug Portmore says:

Jamie,
I like its elegance, although I think that he might rightly quibble with the idea that it’s the act rather than the agent that is the proper object of blame. But we can easily modify your suggestion to avoid such quibbles:
(J1′) If x is morally wrong, then S would be blameworthy for doing x freely.
(J2′) If x is rationally optimal, then S would not be blameworthy for doing x freely.

11. Mike says:

Doug,
There’s a straightforward way to make the argument valid, and maybe he had something like this in mind. Anyway, maybe it accounts for the intuition that something is right about this argument. Just add the assumption (1′):
(1′) x is morally wrong and S freely did x.
(1) If x is morally wrong and S freely does x, then S is blameworthy for doing x.
(2) If x is rationally optimal, then S is not blameworthy for freely doing x.
(3) Therefore, if x is morally wrong, then x is not rationally optimal.
The following scheme is valid.
1′. p & q
1. If p & q then r
2. If z then ~r
3. If p then ~z
Since the argument is an instance of a valid argument scheme, it’s valid.
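Mike's validity claim for the augmented scheme checks out by brute force (an editorial sketch, not part of the thread; `implies` encodes the material conditional):

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

# Mike's scheme with the added factual premise:
#   1'. p & q
#   1.  (p & q) -> r
#   2.  z -> ~r
#   C.  p -> ~z
valid = all(
    implies((p and q) and implies(p and q, r) and implies(z, not r),
            implies(p, not z))
    for p, q, r, z in product([True, False], repeat=4)
)
print(valid)
```

With p and q given, premise 1 yields r, and premise 2 then rules z out, so the check prints `True`.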

12. Jamie says:

But Mike, I thought ‘x’ was supposed to be a variable. There’s an invisible ‘∀x’ binding each line (separately).

13. M says:

Wx = it is morally wrong to do x.
Rx = it is rationally optimal to do x.
Fsx = S is free wrt x.
Bsx = S is blameworthy for doing x.
1- (Wx & Fsx) -> Bsx (Prem)
2- Rx -> ~Bsx & Fsx (Prem)
3- Rx (Assume CP)
4- ~Bsx & Fsx (2,3 MP)
5- ~Bsx (4 simp)
6- ~(Wx & Fsx) (1, 5 MT)
7- ~Wx v ~Fsx (6 Dem)
8- Fsx (4 simp)
9- ~Wx (7,8 DS)
10- Rx -> ~Wx (3-9 CP)
11- Wx -> ~Rx (10 Contra)
The ‘x’s are all bound by a universal quantifier.
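M's derivation can be confirmed instance by instance with the same brute-force method; the sketch below (an editorial illustration) only vindicates the inference, not the plausibility of premise 2:

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

# Instance of M's premises (w = Wx, f = Fsx, b = Bsx, r = Rx):
#   1. (w & f) -> b
#   2. r -> (~b & f)
#   C. w -> ~r
valid = all(
    implies(implies(w and f, b) and implies(r, (not b) and f),
            implies(w, not r))
    for w, f, b, r in product([True, False], repeat=4)
)
print(valid)
```

If w and r both held, premise 2 would give ~b and f, while premise 1 would give b, a contradiction; the check prints `True`.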

14. Paul Gowder says:

The issue underneath Mike’s countermodel is that “blameworthy for doing x” and “blameworthy for freely doing x” are not the same. I’d rewrite as follows (italics marking off my additions):
(1) If x is morally wrong and S freely does x, then S is blameworthy for doing x.
(2) If x is rationally optimal, then S is not blameworthy for freely doing x.
(3) Therefore, if x is morally wrong and S freely does x, then x is not rationally optimal.
1. If p & q then r
2. If z then ~(r & q)
3. If p & q then ~z
This seems valid, but at the price of stripping off some of the appeal with the addition to (3). We can get the appeal back by denying that it’s possible for a non-freely done act to be rationally optimal…

15. Mike says:

Paul, here are the premises with emphasis added.
(1) If x is morally wrong and S freely does x, then *S is blameworthy for doing x*.
(2) If x is rationally optimal, then *S is not blameworthy for freely doing x*.
I took the highlighted propositions in (1) and (2) to express a proposition and its negation. In other words, I took (2) to be just equivalent to (2′).
2′. If x is rationally optimal, then S is not blameworthy for doing x.
This is the natural way to read these propositions. Especially so, since Doug did not suggest that the original argument was in some precise formulation.
Jamie,
I guess we could read X as a bound (maybe unbound) variable. I didn’t see that anything turned on this point for Doug. I simply instantiated the variable and showed how to make the argument valid. I took the main concern to be that some valid version of this argument be located. But as always, if the argument is valid with a simpler structure, adding more structure alone will not make it invalid (or, for any argument that has a valid propositional form, it will also have a valid predicate form).

16. Jamie says:

Mike, the problem with binding ‘x’ with a universal quantifier in your version is not that it would become invalid! The problem is that the premise (1′) will then say that everything is morally wrong and S freely did everything. This strikes me as a somewhat implausible premise, and I’m betting John Skorupski doesn’t want to be committed to it.
Mysterious M,
(∀x)(Rx -> ~Bsx & Fsx)
This also strikes me as rather implausible and unSkorupskian in its spirit. Who is this heroic s who does everything that is rationally optimal?

17. Mike says:

Jamie says, “The problem is that the premise (1′) will then say that everything is morally wrong and S freely did everything.”
Well, that wouldn’t be the most natural reading of (1′), would it? (1′) says,
(1′) x is morally wrong and S freely did x.
And that goes into this, doesn’t it?
(1′) (Ex)(Wx & Dx)
I’m pretty sure the argument would still be valid, so long as the conclusion is modified in the right ways.

18. Mike says:

Geeze, I have no idea what’s happened there. I’m replying to Jamie’s comments.
Jamie says,
The problem is that the premise (1′) will then say that everything is morally wrong and S freely did everything.
I say,
Well, that wouldn’t be the most natural reading of (1′), would it? (1′) says,
(1′) x is morally wrong and S freely did x.
And that goes into this, doesn’t it?
(1′) (Ex)(Wx & Dx)
I’m sure the argument would remain valid, given consistent substitution for x.

19. Jamie says:

Mike,
“I’m sure the argument would remain valid, given consistent substitution for x.”
So, you’re putting an existential quantifier on premise (1′):
(1′) (∃x)(Wx & Dx)
The rest of the lines must be these:
(1) (∀x) [(Wx & Dx) → Bsx].
(2) (∀x) (Rx → ~Bsx)
∴ (3) (∀x) (Wx → ~Rx)
But that’s certainly not valid.
Maybe what I’m missing is whatever you meant by ‘consistent substitution for x’. What did you mean by that? Or in any case, what is the valid argument with quantifiers?
(Typepad seems to be in pretty bad shape right now, by the way. Is it something we did here?)
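Jamie's invalidity verdict can be confirmed with an explicit two-element model (an editorial sketch, not part of the thread; the element names and truth assignments are illustrative):

```python
# Element a: wrong, freely done, blamed, not rationally optimal.
# Element b: wrong, not freely done, not blamed, rationally optimal.
domain = ["a", "b"]
W = {"a": True, "b": True}    # Wx: x is morally wrong
D = {"a": True, "b": False}   # Dx: S freely does x
B = {"a": True, "b": False}   # Bsx: S is blameworthy for x
R = {"a": False, "b": True}   # Rx: x is rationally optimal

def implies(p, q):
    # material conditional: p -> q
    return (not p) or q

premise_1prime = any(W[x] and D[x] for x in domain)              # (1') (∃x)(Wx & Dx)
premise_1 = all(implies(W[x] and D[x], B[x]) for x in domain)    # (1)  (∀x)((Wx & Dx) -> Bsx)
premise_2 = all(implies(R[x], not B[x]) for x in domain)         # (2)  (∀x)(Rx -> ~Bsx)
conclusion = all(implies(W[x], not R[x]) for x in domain)        # (3)  (∀x)(Wx -> ~Rx)

print(premise_1prime, premise_1, premise_2, conclusion)
```

All three premises come out true while the universal conclusion fails at b, which is wrong and rationally optimal but not freely done.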

20. Mike says:

Definitely right, Jamie. A few posts up, and lost in the confusion, I say,
I’m pretty sure the argument would still be valid, so long as the conclusion is modified in the right ways.
So you’d have to change the conclusion from,
∴ (3) (∀x) (Wx → ~Rx)
to
∴ (3) (∃x) (Wx → ~Rx)
You have at least one instantiation of (1′) on which some individual, say a, satisfies Wa & Da. Since the remaining premises are universal, you should be able to arrive at Wa → ~Ra. That’s what I had in mind, but it might not be what Doug is after.
I have no idea what is happening with the comment posting. Really confusing.

21. Hi Mike –
That certainly does not seem to be what Doug is after, given that Ex(Wx->~Rx) is trivially true, just so long as there is some action that is either morally correct or irrational. Doug needs the universal.
Leaving aside the logic issues (which, I think, Doug has settled with his previous correction of the argument), I wonder whether we should think this argument is plausible? It seems to me that the fan of rationally optimal immorality would do either one of two things, either flatly deny (1), or mark a distinction between types of blameworthiness–moral blameworthiness and, if you like, blameworthiness tout court. If we’re talking about blameworthiness tout court, (1) is false, if we’re talking about moral blameworthiness, (2) is false. I’m curious about people’s reaction to the argument itself.

22. Whoops. I should have said that if we’re talking about blameworthiness tout court, then (1) follows only under conditions that render the argument invalid, viz., that “conditions” can include the rational status of the act. Those conditions have to be excluded so that the right “conditions” don’t include the act’s irrationality, which would allow rationally optimal immorality along with (1) and (2).

23. Mike says:

“That certainly does not seem to be what Doug is after, given that Ex(Wx->~Rx) is trivially true, just so long as there is some action that is either morally correct or irrational. Doug needs the universal.”
Well, it’s not trivially true, since it is not a trivial truth that there are the sorts of action in question. But there’s much more to this conclusion than that some action instantiates the properties of being wrong and not being rationally optimal. The conclusion holds for any action that instantiates the properties of being morally wrong and freely done. All of them turn out not to be rationally optimal. That’s interesting, sort of.

24. Doug Portmore says:

Thanks everyone for the helpful comments. Sometime in the next few days, I’ll post the version of the Skorupski-type argument that I like best and try to defend its premises. Hopefully, that thread will be easier to follow. I’m at a loss as to what happened with this one.

25. I know this is old but in light of the “logic curriculum” post a few days back I thought it would be fun to apply some logic.
The argument as it stands is indeed formally invalid. But I think what Skorupski clearly intends is that premise (1),
(1) If x is morally wrong and S freely does x, then S is blameworthy for doing x.
is interpreted as “If x is morally wrong, then S is blameworthy for freely doing x”. Now the argument can be formalized as:
∀x (Wrong(x) → Blame(x))
∀x (ROpt(x) → ¬ Blame(x))
∴ ∀x (Wrong(x) → ¬ROpt(x))
And this is a valid syllogism (Cesare, in the second figure).
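Instance-wise, the reinterpreted argument is an eight-row truth-table check (an editorial sketch, not part of the thread):

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

# Instance of the syllogism (w = Wrong(x), b = Blame(x), r = ROpt(x)):
#   1. w -> b
#   2. r -> ~b
#   C. w -> ~r
valid = all(
    implies(implies(w, b) and implies(r, not b), implies(w, not r))
    for w, b, r in product([True, False], repeat=3)
)
print(valid)
```

The check prints `True`: if w and r both held, the premises would force b and ~b together.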
The apparent invalidity of the argument discussed above arises from the fact that “S is blameworthy for freely doing x” cannot be glossed as “If S does x freely, then S is blameworthy for doing x” (with a material if-then) because this would make S blameworthy for any x which S does not do freely (including x’s which S does not do at all, but that’s not so essential). You need at least a strict conditional here, better yet, a subjunctive: “If S were to do x freely, S would be blameworthy for doing x”.
However, the argument is valid as long as “morally blameworthy for freely doing x” is formalized the same way in (1) and (2), even if it is formalized incorrectly (using the material conditional). For in that case you’d have
(1′) ∀x ((Wrong(x) ∧ DoesFreely(S, x)) → Blameworthy(S, x))
which is equivalent to
(1”) ∀x (Wrong(x) → (DoesFreely(S, x) → Blameworthy(S, x)))
which is true even if there is an x which is wrong but not done freely by S, but the second premise would have to be
(2′) ∀x (ROpt(x) → ¬(DoesFreely(S, x) → Blameworthy(S, x)))
which is false if there is an x which is rationally optimal but not done freely by S.
The alternative formalization of (2) which makes the argument come out invalid is:
(2”) ∀x (ROpt(x) → (DoesFreely(S, x) → ¬Blameworthy(S, x)))
This can indeed be true even if there is an x which is rationally optimal, wrong, but which S does not do freely.
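The final contrast can be verified mechanically: with (1′) fixed, the (2′) reading makes the argument valid and the (2″) reading does not. A sketch (editorial illustration, not part of the thread; the lambdas encode the two readings of premise 2):

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b
    return (not a) or b

def entails(premise2):
    # Does (1') together with the given reading of (2) entail w -> ~r?
    return all(
        implies(implies(w and f, b) and premise2(f, b, r),
                implies(w, not r))
        for w, f, b, r in product([True, False], repeat=4)
    )

# (2')  r -> ~(f -> b): same material formalization as used in (1')
# (2'') r -> (f -> ~b): the alternative reading
valid_with_2prime = entails(lambda f, b, r: implies(r, not implies(f, b)))
valid_with_2doubleprime = entails(lambda f, b, r: implies(r, implies(f, not b)))
print(valid_with_2prime, valid_with_2doubleprime)
```

The (2″) countermodel is the familiar one: w and r true with f false makes (1′) and (2″) vacuously true while the conclusion fails.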