My previous blog post on this subject was quite abstract. To help our imagination, here is an example.
The result we had: Let A→ be a Boolean algebra with an additional operator →. Let P(A→) be the set of all probability measures m on A→ such that m(a → b) = m(b|a) whenever the latter is defined ("Stalnaker's Thesis" holds). Then:
Theorem. If for every non-zero element a of A→ there is a member m of P(A→) such that m(a) > 0, then P(A→) is not closed under conditionalization.
For the example we can adapt one from Paolo Santorio. A fair die is to be tossed, and the possible outcomes (possible worlds) are just the six different numbers that can come up. So the proposition “the outcome will be even” is just the set {2, 4, 6}. Now we consider the proposition:
Q. If the outcome is odd or six then, if the outcome is even it is six.
For the probability function m we choose the natural one: the probability of "the outcome will be even" is the proportion of {2, 4, 6} in {1, 2, 3, 4, 5, 6}, that is, 1/2. And so forth. Let's use E to stand for "the outcome is even" and S for "the outcome is six". So Q is [(~E ∪ S) → (E → S)].
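As a sanity check, this setup can be sketched numerically. The helper names below (WORLDS, m, ANTECEDENT) are my own, not part of the argument:

```python
from fractions import Fraction

# A minimal sketch of the setup: worlds are the six die outcomes,
# propositions are sets of worlds, and m is the uniform ("natural") measure.
WORLDS = {1, 2, 3, 4, 5, 6}

def m(A):
    """m(A): the proportion of A's worlds among all six."""
    return Fraction(len(A & WORLDS), len(WORLDS))

E = {2, 4, 6}                    # "the outcome is even"
S = {6}                          # "the outcome is six"
ANTECEDENT = (WORLDS - E) | S    # ~E ∪ S = {1, 3, 5, 6}

print(m(E))           # 1/2
print(m(ANTECEDENT))  # 2/3
```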
PLAN. What we will do is first determine m(Q). Then we will look at the conditionalization m# of m on the antecedent (~E ∪ S), and then at the conditionalization m## of m# on E. If everything goes well, so to speak, the probability m(Q) will be the same as m##(S). If that is not so, we will have our example showing that conditionalization does not always preserve satisfaction of Stalnaker's Thesis.
EXECUTION. Step One is to determine the probability m(Q). The antecedent of Q is (~E ∪ S), which is the proposition {1, 3, 5, 6}. What about the consequent, (E → S)?
Well, E → S is true in world 6, and definitely false in worlds 2 and 4. Where else will it be true or false?
Here we appeal to Stalnaker's Thesis. The probability of (E → S) is the conditional probability of S given E, which is 1/3. So the proposition (E → S) must have exactly two worlds in it (2/6 = 1/3). Since it is true in 6, it must also be true in precisely one of {1, 3, 5}. Which one it is does not affect the argument, so let it be 5. Then (E → S) = {5, 6}.
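This step can be checked directly; cond below is a hypothetical helper for the conditional probability m(·|·), not anything from the post:

```python
from fractions import Fraction

WORLDS = {1, 2, 3, 4, 5, 6}
E, S = {2, 4, 6}, {6}

def cond(A, B):
    """m(A|B) = m(A ∩ B)/m(B), defined only when B is non-empty."""
    return Fraction(len(A & B), len(B))

# Stalnaker's Thesis forces m(E → S) = m(S|E) = 1/3, so E → S contains
# exactly two of the six worlds: 6 plus one of {1, 3, 5} -- say 5.
assert cond(S, E) == Fraction(1, 3)
E_TO_S = {5, 6}
assert Fraction(len(E_TO_S), len(WORLDS)) == cond(S, E)
```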
Now we can see that the probability of Q is, by Stalnaker's Thesis, the probability of {5, 6} given {1, 3, 5, 6}, that is, 1/2. (Notice how often Q is false: if the outcome turns out to be 1, for instance, the antecedent is true, but there is no reason why "if it is even it is six" should be true there, and so on.)
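The same arithmetic, spelled out (the helper names are again illustrative):

```python
from fractions import Fraction

WORLDS = {1, 2, 3, 4, 5, 6}
E, S = {2, 4, 6}, {6}
E_TO_S = {5, 6}                  # as fixed in Step One
ANTECEDENT = (WORLDS - E) | S    # ~E ∪ S = {1, 3, 5, 6}

def cond(A, B):
    """m(A|B) = m(A ∩ B)/m(B)."""
    return Fraction(len(A & B), len(B))

# By Stalnaker's Thesis, m(Q) = m(E → S | ~E ∪ S).
m_Q = cond(E_TO_S, ANTECEDENT)
print(m_Q)   # 1/2
```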
Step Two is to conditionalize m on the antecedent (~E ∪ S), to produce probability function m#. By Stalnaker's Thesis for m, m#(E → S) = m(E → S | ~E ∪ S) = m(Q). Next we conditionalize m# on E, to produce probability function m##. Then, if m# still satisfies Stalnaker's Thesis, m##(S) = m#(S|E) = m#(E → S) = m(Q).
Well, m##(S) = m#(S|E) = m(S | E ∩ (~E ∪ S)) = m(S | E ∩ S) = 1.
Bad news! That is greater than m(Q) = 1/2. So things did not go well, and we conclude that conditionalization has taken us outside P(A→).
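The calculation in Step Two can be traced numerically (a sketch, with illustrative helper names):

```python
from fractions import Fraction

WORLDS = {1, 2, 3, 4, 5, 6}
E, S = {2, 4, 6}, {6}
ANTECEDENT = (WORLDS - E) | S    # ~E ∪ S = {1, 3, 5, 6}

def cond(A, B):
    """m(A|B) = m(A ∩ B)/m(B)."""
    return Fraction(len(A & B), len(B))

# m##(S) = m(S | E ∩ (~E ∪ S)); the condition shrinks to the single world 6.
support = E & ANTECEDENT         # {6}
m2_S = cond(S, support)
print(m2_S)   # 1, against m(Q) = 1/2
```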
Why does that show that conditionalization has taken us outside P(A→)? Well suppose that m# obeyed Stalnaker’s Thesis. Then we can argue:
m##(S) = 1, so m#(S|E) = 1. Therefore m#(E → S) = 1, by Stalnaker's Thesis applied to m#. Since m# = m(· | ~E ∪ S), it follows that m(E → S | ~E ∪ S) = 1. Finally, therefore, m((~E ∪ S) → (E → S)) = m(Q) = 1, by Stalnaker's Thesis applied to m. But that is false: as we saw above, m(Q) = 1/2.
So, given that m obeys the Thesis, its conditionalization m# does not.
Note. This also shows along the way that the Extended Stalnaker’s Thesis, that m(A → B|X) = m(B|A ∩ X) for all X, is untenable. (But this is probably just the 51st reason to say so.)
APPENDIX
Just to spell out what is meant by conditionalization: it must be defined carefully, so as to make clear that conditionalizing adds to whatever condition is already present (and, of course, to allow that it is undefined when the resulting condition has probability zero).
So m(A|B) = m(A ∩ B)/m(B), defined iff m(B) > 0. Hence m(A) = m(A|K), where K is the tautology (unit element of the algebra).
Then the conditionalization m# of m on B is m(· | K ∩ B), and the conditionalization m## of m# on C is m#(· | K ∩ C) = m(· | K ∩ B ∩ C), and so forth. Calculation:
m##(X) = m#(X|C) = m#(X ∩ C)/m#(C) = m(X ∩ C | B) divided by m(C | B),
that is, [m(X ∩ C ∩ B)/m(B)] divided by [m(C ∩ B)/m(B)],
which is m(X ∩ C ∩ B) divided by m(C ∩ B),
that is, m(X | C ∩ B).
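The identity m##(X) = m(X | C ∩ B) can be verified exhaustively on the die example. The sketch below uses my own helper names and takes B = ~E ∪ S and C = E:

```python
from fractions import Fraction
from itertools import chain, combinations

WORLDS = {1, 2, 3, 4, 5, 6}
B = {1, 3, 5, 6}    # ~E ∪ S
C = {2, 4, 6}       # E

def cond(A, X):
    """m(A|X) = m(A ∩ X)/m(X), defined only when X is non-empty."""
    return Fraction(len(A & X), len(X))

def m_sharp_sharp(X):
    """Conditionalize on B, then on C: m#(X|C) = m(X ∩ C | B)/m(C | B)."""
    return cond(X & C, B) / cond(C, B)

# Check the identity for every proposition X over the six worlds.
all_X = chain.from_iterable(combinations(sorted(WORLDS), r) for r in range(7))
for tup in all_X:
    X = set(tup)
    assert m_sharp_sharp(X) == cond(X, C & B)
```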