Conditionals, Probabilities, and ‘Or to If’

In his new book The Meaning of If, Justin Khoo discusses the inference from “Either not-A or B” to “If A then B”. Consider: “Either he is not in France at all, or he is in Paris”. Who would not infer “If he is in France, he is in Paris”? Yet who would agree that “if … then” just means “either not … or”, the dreaded material conditional?

I do not want to argue either for or against the validity of the ‘or to if’ inference.  The curious fact is that just thinking about it brings out something very unusual about conditionals.  Perhaps it will have far-reaching consequences for the concept of logical entailment.

To set out the traditional concept of entailment, let A be a Boolean algebra of propositions and P(A) the set of all probability measures with domain A.  I will use “&” for the meet operator. Then entailment, as a relation between propositions, can be characterized in three different ways, which in this case are in fact equivalent:

(1) The natural partial ordering of A, with (a ≤ b) defined as (a & b) = a.

(2) For all m in P(A), if m(a) = 1 then m(b) = 1.

(3) For all m in P(A), m(a) ≤ m(b).

The argument for their equivalence, which is spelled out in the Appendix, requires just two facts about P(A):

  • P(A) is closed under conditionalization: if m(a) > 0 then m(·|a) is defined and is also in P(A).
  • If a is a non-zero element of A then there is a measure m in P(A) such that m(a) > 0.
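
For a concrete illustration of these two facts, here is a minimal Python sketch (my own, not from the book), with propositions modelled as subsets of a finite set of outcomes:

    from fractions import Fraction

    OUTCOMES = frozenset(range(1, 7))    # a fair die as a toy sample space

    def m(event):
        # The uniform measure: every non-empty event gets positive
        # probability, which is the second fact.
        return Fraction(len(event & OUTCOMES), len(OUTCOMES))

    def conditionalize(mu, a):
        # mu(.|a), defined exactly when mu(a) > 0; the result is again a
        # probability measure, which is the first fact (closure).
        if mu(a) == 0:
            raise ValueError("mu(.|a) is undefined")
        return lambda e: mu(e & a) / mu(a)

    p = conditionalize(m, frozenset({2, 4, 6}))
    assert p(OUTCOMES) == 1              # p is normalized, hence a measure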

Enter the Conditional: the ‘Or to If’ Counterexample

The Thesis, aka Stalnaker’s Thesis, is that the probability of the conditional (a → b) is the conditional probability of b given a, when defined:

m(a → b) = m(b|a) = m(b & a)/m(a), if defined.

Point: if the special operator → is added to A with the condition that m(a → b) = m(b|a) when defined, then these three candidate definitions are no longer equivalent. For:

(4) For all m in P(A), if m(~a v b) = 1 then m(b|a) = 1.

(5) For many m in P(A), m(~a v b) > m(b|a).

For (4), note that if m(~a v b) = 1 then m(a & ~b) = 0, so m(a) = m(a & b), and therefore m(b|a) = m(a & b)/m(a) = 1 whenever it is defined.  So on the second characterization of entailment, the ‘or to if’ inference is valid: if you are sure of the premise you will be sure of the conclusion.
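
The algebra can be mirrored numerically (a quick Python sketch of my own): sample measures over the four atoms with m(a & ~b) = 0, so that m(~a v b) = 1, and check that m(b|a) = 1 whenever it is defined:

    import random

    for _ in range(10_000):
        # Random weights for the atoms a&b, ~a&b, ~a&~b; m(a&~b) is 0.
        w_ab, w_nab, w_nanb = (random.random() for _ in range(3))
        s = w_ab + w_nab + w_nanb
        m_ab, m_nab, m_nanb = w_ab / s, w_nab / s, w_nanb / s
        assert abs((m_ab + m_nab + m_nanb) - 1) < 1e-9   # m(~a v b) = 1
        m_a = m_ab                  # m(a) = m(a&b), since m(a&~b) = 0
        if m_a > 0:                 # m(b|a) is defined
            assert abs(m_ab / m_a - 1) < 1e-9            # m(b|a) = 1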

But not so for the third characterization of entailment.  For (5) take this example (I will call it the counterexample): we are going to toss a fair die:

Probability that the outcome will be either not even or six (i.e. in {1, 3, 5, 6}) = 4/6 = 2/3.

Probability that the outcome is six, given that the outcome is even = 1/3.
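
The arithmetic is easy to check mechanically (a small Python sketch of my own):

    from fractions import Fraction

    die = frozenset(range(1, 7))
    def m(event):                             # the fair-die measure
        return Fraction(len(event & die), 6)

    even, six = frozenset({2, 4, 6}), frozenset({6})
    not_even_or_six = (die - even) | six      # ~a v b = {1, 3, 5, 6}

    assert m(not_even_or_six) == Fraction(2, 3)
    assert m(six & even) / m(even) == Fraction(1, 3)    # m(b|a)
    assert m(not_even_or_six) > m(six & even) / m(even)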

So in this context the traditional three-fold concept of entailment comes apart.

Losing Closure Under Conditionalization

Recall that to prove the equivalence of (1)–(3) for a Boolean algebra we needed just two assumptions.  We can use that, together with the counterexample, to draw a conclusion that holds for any and every logic of conditionals with Stalnaker’s Thesis.

Let A→ be a Boolean algebra with an additional operator →.  Let P(A→) be the set of all probability measures on A→ such that m(a → b) = m(b|a) when defined.  Then:

Theorem. If for every non-zero element a of A→ there is a member m of P(A→) such that m(a) > 0, then P(A→) is not closed under conditionalization.

(In brief: if P(A→) had both features, the Appendix argument would make characterizations (2) and (3) equivalent for it; but (4) and (5) show that they come apart.)
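
To see the mechanism at work, run the die counterexample through the Theorem (a numeric sketch of my own; it assumes some m in P(A→) that agrees with the fair die on the Boolean part, and tracks only the probability the Thesis assigns to a → b, not the conditional proposition itself):

    from fractions import Fraction

    # Die counterexample: a = even, b = six, e = ~a v b = {1, 3, 5, 6}.
    m_e = Fraction(2, 3)          # m(e)
    m_arrow = Fraction(1, 3)      # m(a -> b) = m(b|a), by the Thesis

    # Conditionalize on e: m'(a -> b) = m((a -> b) & e)/m(e) <= m(a -> b)/m(e),
    # whatever proposition a -> b may be.
    upper_bound = m_arrow / m_e   # = 1/2

    # But if m' were in P(A->), the Thesis would require
    # m'(a -> b) = m'(b|a) = m(b | a & e) = m({6}|{6}) = 1.
    assert upper_bound < 1        # so m' cannot satisfy the Thesis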

I was surprised.  Previous examples of such lack of closure were due to special principles like Miller’s Principle and the Reflection Principle.

I do not think this result looks really bad for the Thesis, though it needs to be explored.  It does mean that, from a semantic point of view, there are in the same set-up two distinct logics of conditionals: one generated by characterization (2), the other by characterization (3).

However, it seems to look bad for the Extended Thesis (aka ‘fully resilient Adams Thesis’):

            (*) m(A → B | E) = m(B | E & A), if defined

For if we look at the conditionalization of m on a proposition X, namely the function m*(· | ··) = m(· | ·· & X), then, provided m* is well defined and m satisfies (*), we get

            m*(A → B | E) = m(A → B | E & X) = m(B | E & A & X) = m*(B | E & A)

that is, m* also satisfies the Extended Thesis.  So it appears that the Extended Thesis entails or requires closure under conditionalization for the set of admissible probability measures.  

But it can’t have it, in view of the ‘or to if’ counterexample.

Appendix.

That (1)–(3) are equivalent for a Boolean algebra (with no modal operators).

Clearly, if (a & b) = a then m(a) ≤ m(b), and hence also, if m(a) = 1 then m(b) = 1.  This includes the case a = 0.

So I need to show that if the first relation does not hold, that is, if it is not the case that a ≤ b, then neither of the other two holds.

Note: I will make use of just two features of P(A):

  • P(A) is closed under conditionalization: if m(a) > 0 then m(·|a) is defined and is also in P(A).
  • If a is a non-zero element of A then there is a measure m in P(A) such that m(a) > 0.

Lemma.  If it is not the case that (a & b) = a, then there is a measure p in P(A) such that p(a & ~b) > 0 while p(b & ~a) = 0.

For if (a & b) is not a then (a & ~b) is a non-zero element.  Hence there is a measure m such that m(a & ~b) > 0, and so also m(a) > 0.  So m(·|a) is well defined.  And then m(a & ~b|a) > 0 while m(b & ~a|a) = 0.

Ad condition (3): Suppose now that (a & b) is only part of a, and m(a & ~b) > 0.  Then m(a) > 0, so m(·|a) is well defined and in P(A). Now m(b|a) = m(a & b)/[m(a & b) + m(a & ~b)], which is < 1 because m(a & ~b) > 0, hence m(b|a) < m(a|a) = 1.

Ad condition (2): All that is left to show is that if (a & b) is not a, and a is not 0, then condition (2) does not hold either.  But that follows from what we just saw: there is then a member m of P(A) such that m(a) > m(a & b). So consider the measure m(·|a), which is also in P(A): m(b|a) < 1, while of course m(a|a) = 1.
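
To make the appendix concrete, the die again gives a witness (a small Python sketch of my own, with a = even and b = six, so that a ≤ b fails):

    from fractions import Fraction

    die = frozenset(range(1, 7))
    def m(event):                           # fair-die measure
        return Fraction(len(event & die), 6)

    a, b = frozenset({2, 4, 6}), frozenset({6})
    assert a & b != a                       # not a <= b

    def p(event):                           # p = m(.|a), again in P(A)
        return m(event & a) / m(a)

    assert p(a - b) > 0 and p(b - a) == 0   # the Lemma
    assert p(a) == 1 and p(b) < 1           # against condition (2)
    assert not p(a) <= p(b)                 # against condition (3)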
