Logic of conditional belief, neighborhood semantics

1.   Initial set-up: an agent’s belief as neighborhood

2.   About true belief

3.   Supposition: of two sorts

4.   Belief under supposition that something is the case

5.   Modeling belief under supposition that something is the case

6.   Supposition and a conditional operator

7.   Models with conditional operator

8.   Logic of simple belief

9.   Self-transparency (?)

10.  Logic of conditional belief

11.  Ubiquity of Moore’s Paradox

The logic of belief and the logic of belief change, as developed by various authors, have for the most part had the following focus:

the reaction of an agent who has opinions about the external world, and changes these opinions upon receipt of new information. 

If logical constraints on belief change are spelled out in some way, it should also be possible to consider answers to “what I would believe if” kinds of questions. Or, to put it differently, possible to entertain a conception of conditional (relative, suppositional) belief, already present in the agent’s initial resources.

NB.  I am not assuming that rational belief change must consist in ‘mobilizing’ those conditional beliefs, or that such change must be like ‘conditionalization’.  That I regard as a separate question, likely to have its own complexities.

These ‘what would I believe’ questions are of two sorts, as I will explain.  To accommodate both sorts, it seems best to go to neighborhood semantics. 

1.   Initial set-up: an agent’s belief as neighborhood

Class LB-0 of models.  A model is an n-tuple <W, F, N, …> The worlds form a set, W.  The propositions are a Boolean algebra F of subsets of W, and F includes W. The function N assigns to each world w a set of propositions N(w), called the neighborhood of w.  The dots indicate that the model may include special operators on propositions, or relations between worlds, or the like.

In the intended interpretation for the logic of belief, a world is ‘centered’ on an agent, and the agent’s full beliefs in world w form ‘neighborhood’ N(w) of propositions. 

While there are many options here for exploring more and less minimal logics, I will for now go with all of the following four conditions (from Chellas, who has different interpretations in mind):

(cm) If  p ∩ q is in N(w), so are p, q

(cc) If  p, q are in N(w), so is p ∩ q

(cn) W is in N(w)

(cd) Λ is not in N(w)

It follows that N(w) is a proper filter on the family of propositions.  A filter is a family closed under finite intersection and superset formation.  It is proper if it does not contain the empty set.[1]
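On a small finite model these four conditions can be checked by brute force. Here is a minimal Python sketch; the model, the generating proposition, and all names are invented for illustration, not taken from the text:

```python
from itertools import combinations

# Toy model: W has four worlds; F is the full powerset of W.
W = frozenset({0, 1, 2, 3})

def subsets(ws):
    """All subsets of a finite set of worlds, as frozensets."""
    ws = list(ws)
    return [frozenset(s) for r in range(len(ws) + 1)
            for s in combinations(ws, r)]

def is_proper_filter(N, W):
    """Chellas's conditions (cm), (cc), (cn), (cd) on a finite family N."""
    cn = frozenset(W) in N                       # (cn): W is in N(w)
    cd = frozenset() not in N                    # (cd): the empty set is not
    cc = all(p & q in N for p in N for q in N)   # (cc): closed under intersection
    # (cm): on a finite algebra this amounts to closure under supersets.
    cm = all(q in N for p in N for q in subsets(W) if p <= q)
    return cm and cc and cn and cd

# N(w): the proper filter generated by the single proposition {0, 1}.
N_w = {q for q in subsets(W) if frozenset({0, 1}) <= q}
```

Adding the empty set to N_w would violate (cd), and the check would fail.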

Thus the agent is assumed to be consistent (but see the qualification below, about infinity). 

I’ll assume to begin the syntax of sentential logic, and a standard way of assigning propositions to those sentences.  The syntax may include other elements not specified as yet.  If A is a sentence then [A] is the proposition that A expresses.

For propositions I use lower case letters and for sentences upper case letters. 

2.   About true belief

To what extent is this agent right about what things are like?  His world w may be outside some propositions in N(w), and inside others.  The set RIGHT(w) = {p in N(w): w in p}.  RIGHT(w) is not empty, since W is in it.

I think we can’t assume that the agent has such a superior intellect that N(w) is closed under infinite intersection, and so we can’t assume that RIGHT(w) is either. But we may observe that the infinite intersection of the members of RIGHT(w) cannot be empty, since w is a member of each member of RIGHT(w).  

However, the infinite intersection of N(w) may be empty.  The agent does not believe any one self-contradictory proposition, but the infinite totality of his beliefs might not be satisfiable.  

For example he might believe of each natural number n that there are at least n stars, and also believe that the number of stars is finite.  Finite intersections of members of N(w) will all be satisfiable, but it is not possible for all members of N(w) to be true together (in any world). In this case RIGHT(w) will clearly lack some of those trouble-making beliefs: some must be false.

3.   Supposition: of two sorts

The agent can reason under supposition, and we should make a distinction:

Q. What would I believe if I believed that A?

R. What would I believe if A were true?

To illustrate the distinction (though in connection with the logic of conditionals) Rich Thomason gave the example

[*]  [My wife Sally is so clever that] if Sally were unfaithful I would not believe it

Given this, there is clearly a difference between what Thomason would believe if Sally were unfaithful, and what he would believe if he believed that Sally was unfaithful.

Discussion of question Q typically starts with this principle:

If [A] is consistent with N(w) then the answer is {[A] ∩ p: p is in N(w)}

Difficulties to be solved then concern what the answer should be if the agent believed ~ A and now receives the information that A.  
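The discussion-starting principle is easy to state computationally. The following is a hedged Python sketch with an invented toy neighborhood; the hard case, where the agent believed ~A, is simply left open:

```python
W = frozenset({0, 1, 2, 3})

def consistent_with(a, N):
    """[A] is consistent with N(w): it overlaps every believed proposition."""
    return all(a & p for p in N)

def answer_Q(a, N):
    """The starting principle: {[A] ∩ p : p in N(w)} when [A] is consistent."""
    if not consistent_with(a, N):
        return None   # the much-discussed problem case is left open here
    return {a & p for p in N}

# A small filter base for N(w), and a supposition [A] consistent with it.
N_w = {frozenset({0, 1}), frozenset({0, 1, 2}), W}
A = frozenset({1, 2})
```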

Let’s set this much discussed issue aside, and consider question R.  

Thomason’s example is about what beliefs he would not have, under certain conditions.  It is easy enough to make up examples of the same sort, of a positive kind.  Example: when Thomas turned 18 his legal parents told him finally that he had been adopted.  He says

[**] If my parents had lied to me about my having been adopted, I would still believe that Herman is my father.

It is clearly not the case that if Thomas learned that his parents had lied to him about having been adopted, he would then still have the belief that his legal parents were his biological parents.  But [**] could be a correct expression of a current (conditional) belief.

4.   Belief under supposition that something is the case

Whether or not A is consistent with previous beliefs, the answer to “what would I believe if A” will depend on various other, relevant beliefs the agent has.  And there will be a selection involved.

For example, in Thomason’s example, if A = (Sally is deceiving me), the relevant belief he cites is that Sally is so clever.  That is what he is taking into account when framing his answer.  But he could instead have selected other beliefs, perhaps ones that entail that Sally would not be able to live with the deceit.  Then his answer would have been substantially different.

Additional selection is involved after the initial selection of (what is to the agent) the most salient belief already in place.  If asked to elaborate, Thomason could say that he would then believe that Sally had developed new interests that account for her novel behavior.  Or he might say, realizing his own limitations, that he would be just like Ford Madox Ford’s good soldier, oblivious of any tensions calling for explanation. Or he might finally take into account other aspects, and add that eventually, he would come to believe it, for example because Sally would eventually tell him.

We could introduce a relationship into the models, between worlds and worlds, or worlds and propositions, that will once and for all settle how those selections are made. That could be like the ‘nearness’ relation imposed by Stalnaker and Lewis in their logics of conditionals.  I think that approach resulted in an oversimplification that harms the subject, so will not take that direction.

So for now, all I want to say is that the neighborhood of w relative to A, or conditional on A (in the sense of question R), call it N(w, [A]), is another filter on the family of propositions.  It is the agent’s doxastic neighborhood conditional on the supposition that A is the case.

5.   Modeling belief under supposition that something is the case

Class LB-1 of models.  A model is an n-tuple <W, F, N1, N2, …> in which <W, F, N1, …> is a model in Class LB-0.  The function N1 assigns to each world w a set of propositions N1(w), called the neighborhood of w.  The function N2 assigns to each world w and each proposition p a set of propositions N2(w, p), called the neighborhood of w relative to p. Moreover: 

Constraints on the models in Class LB-1:

(c0)  N1(w) = N2(w, W)

This guarantees that what I would believe on the supposition of a tautology is just what I believe.  There are various ways in which this could be strengthened, but I’ll leave that to think about later.  

The practical effect is that it will now cause no confusion if we suppress those superscripts and just write N(w), N(w, p).

(cm, cc, cn) N(w, p) is a filter on F

But not (cd): we must allow that N(w, [A]) is not proper.  For this point I recall an example I heard from Bob Stalnaker.  “If Hitler had won the war, we would be taught German in high school” seems quite reasonable.  But, Bob said, if he were to learn, and came to believe, that Hitler had won the war, he would conclude that he was insane.  For there would be no rationalizing how he had been living in an alternative world this long.  

I think we can be less extreme, and just allow that for certain statements A, N(w, [A]) is an improper filter: it contains the proposition that is false in every world.  Below we’ll have an example where this would be clearly indicated.

For a further example, imagine that Thomas believes both that he will catch the 9am train, and that if he catches the 9am train he will get to work on time.  It must surely follow that he believes that he will get to work on time.  So we must add the constraint:

(cx) If p is in N(w) then N(w, p) ⊆ N(w)

This guarantees a sort of Modus Ponens, or detachment: if the agent believes that p, and also believes q on the supposition that p, then he believes that q.  This point concerns answers to questions of form Q, the special case.
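The detachment can be seen concretely on a tiny model. This sketch uses invented names and a hypothetical two-place neighborhood function N2; it checks (cx) and reads off the conclusion:

```python
W = frozenset({0, 1, 2})
train = frozenset({0, 1})    # [Thomas catches the 9am train]
on_time = frozenset({0})     # [Thomas gets to work on time]

# N: simple neighborhoods; N2: neighborhoods relative to a proposition.
N = {0: {train, on_time, W}}
N2 = {(0, train): {on_time, W}}

def satisfies_cx(w, p):
    """(cx): if p is in N(w), then N(w, p) is a subset of N(w)."""
    return p not in N[w] or N2[(w, p)] <= N[w]

# If the agent believes p, and believes q on the supposition that p, then
# (cx) forces q into N(w): a sort of Modus Ponens for belief.
detached = train in N[0] and on_time in N2[(0, train)] and on_time in N[0]
```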

All the constraints listed in this section are part of the definition of Class LB-1 of models.

6.   Supposition and a conditional operator

We can consider the possibility that those conditionals, such as “If Sally were unfaithful then I would (would not) believe that …” are already present, explicitly formulated, in the original set of full beliefs.

What would that look like?  To formalize a bit, let’s use the expression “>b>” homonymously for a sentential connective and the operator it expresses.

“A >b> B” is to be read as “If A, I would believe that B”

Again taking the cue from Chellas, 

            w is in [A] >b> [B] iff  [B] is in N(w, [A])

That is at odds with the intuitive idea of arrows as symbolizing “if … then” since Modus Ponens will not be a principle governing this.  For if w is in [A], it does not at all follow, as we have seen with Thomason’s example, that [A] will be in N(w,[A]).  For the supposition that A is true does not imply that A is believed under those circumstances.

What about the sentence “if A, I would not believe that B”?

That will be true in w exactly if [B] is not in N(w, [A]).  But that is just the negation of the truth condition for “if A, I would believe that B”.  So we just symbolize it as “~[A >b> B]”.  Of course that does not entail that I would believe that ~B.
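The failure of Modus Ponens for >b> can be exhibited on a two-world toy model, built loosely in the spirit of Thomason’s example (all names illustrative): in world 0 Sally is unfaithful, yet [A] is not in N(0, [A]).

```python
W = frozenset({0, 1})
unfaithful = frozenset({0})      # [Sally is unfaithful], true in world 0

# Under the supposition that Sally is unfaithful, the agent believes only
# the tautology: [A] itself is in no conditional neighborhood N(w, [A]).
N2 = {(0, unfaithful): {W}, (1, unfaithful): {W}}

def arrow(a, b):
    """[A] >b> [B] = {w : [B] is in N(w, [A])}."""
    return frozenset(w for w in W if b in N2[(w, a)])

# World 0 is in [A], but not in [A] >b> [A]: Modus Ponens fails for >b>.
```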

7.   Models with conditional operator

Class LB-2 of models.  A model is an n-tuple <W, F, N1, N2, …> that is a model in Class LB-1, in which, for all propositions p and q in F,  

            (p >b> q) = {w: q is in N(w, p)}

is in F.

In the syntax we now take it that there is a non-Boolean connective, homonymously written as “>b>”, such that A >b> B is a sentence whenever A and B are sentences.  In the interpretation, [A >b> B] = [A] >b> [B].

8.   Logic of simple belief

Given that the neighborhood is in effect the agent’s set of full beliefs in that ‘world’ (situation, set-up), we can introduce the ‘believes that’ propositional operator.

BA is true in w exactly if [A] is in N(w)

Using B equally for the connective and for the operator it stands for, that operator is defined by:  Bp = {w: p is in N(w)}

But B is definable because N(w) = N(w, W).  So:

Definition.  BA = W >b> A

Some familiar principles hold, simply because N(w) is a proper filter:

  1. If A is valid, so is BA  (for W is in N(w))
  2. If A implies C then BA implies BC
  3. BA, BC implies B(A & C)
  4. BA implies ~B~A   (because N(w) is a proper filter)
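Principles 2–4 can be checked mechanically at a single world, given a finite proper filter. A Python sketch with an invented neighborhood (principle 1 is just condition (cn)):

```python
from itertools import combinations

W = frozenset({0, 1, 2})

def subsets(ws):
    """All subsets of a finite set of worlds, as frozensets."""
    ws = list(ws)
    return [frozenset(s) for r in range(len(ws) + 1)
            for s in combinations(ws, r)]

# N(w): the proper filter generated by {0, 1}.
N_w = {q for q in subsets(W) if frozenset({0, 1}) <= q}

def principles_hold(N):
    """2: if A implies C, BA implies BC (superset closure);
    3: BA, BC imply B(A & C); 4: BA implies ~B~A."""
    p2 = all(q in N for p in N for q in subsets(W) if p <= q)
    p3 = all(p & q in N for p in N for q in N)
    p4 = all((W - p) not in N for p in N)
    return p2 and p3 and p4
```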

9.   Self-transparency (?)

Believers are wrong about much; they have many false beliefs.  But elsewhere I’ve taken up a classical (normal) modal logic of belief, for believers who are self-transparent. Whatever else they may be wrong about, they are exactly right about what beliefs they have.

This is a special case, a special class of models, a subclass of Class LB-2.  I will not limit the discussion to this subclass, but make some remarks about it.

To represent transparency of belief let’s begin with a single case:  in a specific world w, the agent believes that he believes that A if and only if he believes that A. And the same goes for his conditional beliefs: the agent believes, on the condition that A, that he believes E on condition that A, if and only if he believes E on the condition that A.

First: for all propositions p, if p is in N(w, q) then Bp is in N(w, q).  That guarantees that

V. (BA ⊃ BBA) is valid in this class of models 

In this class of models, if an agent believes something then he correctly believes that he believes that.

Secondly, if the agent believes that he believes that A, then that is correct, he does indeed believe that A.  So, if Bp is in N(w, q) then so is p.  This guarantees that 

VI. (BBA ⊃ BA) is valid in this class of models.

V. and VI. hold also for the connective “A >b>”, for any sentence A, in the place of connective “B”.
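Positive (V) and negative (VI) introspection can be checked by brute force on a finite model. An illustrative self-transparent setup (all names invented): the agent’s beliefs are the same in every world, so he is automatically right about what he believes.

```python
W = frozenset({0, 1})

# N maps each world to its neighborhood; the beliefs do not vary with the
# world, which makes the agent trivially self-transparent.
N = {0: {frozenset({0}), W}, 1: {frozenset({0}), W}}

def B(p):
    """Bp = {w : p is in N(w)}."""
    return frozenset(w for w in W if p in N[w])

def introspection_ok(p):
    """V and VI together amount to Bp = BBp."""
    return B(p) == B(B(p))
```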

10. Logic of conditional belief

That neighborhoods are proper filters immediately gives us:

VII. A >b> (B & C) implies [(A >b> B) and (A >b> C)]

VIII. [(A >b> B) and (A >b> C)] implies A >b> (B & C)

IX. If {E1, …, En} implies E then {A >b> E1, …, A >b> En} implies A >b> E

But this notion of conditionality lacks two salient features:

X*.    A >b> A is not a valid principle

XI*.     Modus Ponens, {A, A >b> B} implies B, is not a valid principle.

For example: A is true in world w, and the agent believes in w that he would believe B if it were the case that A; but B may nonetheless be false in w, and the agent may not believe that B.

Recall condition

(cx) If p is in N(w) then N(w, p) ⊆ N(w)

This guarantees that:

X. {BA, A >b> E} implies BE

For suppose BA and A >b> E are true in world w.  Then [A] is in N(w).  By condition (cx), N(w, [A]) is part of N(w).  That means that for any proposition q, if q is in N(w, [A]) then q is in N(w).  Since A >b> E is true in w, [E] is in N(w, [A]).  Hence [E] is in N(w), and therefore BE is true in w.

11. Ubiquity of Moore’s Paradox

Since Modus Ponens is not valid for >b>, the following triad is consistent:

(***) [Sally is not able to keep a secret]  If Sally were deceiving me, I would believe that. I do not believe that Sally is deceiving me.  Sally is deceiving me.

The three statements could all be true together, for the first and second just describe my beliefs, while the third describes a fact independent of anything I believe or disbelieve.

It is a virtue of having >b> in the language that we are able to represent this situation.

But of course, (***) is not something that could express a coherent agent’s belief.  Principle X guarantees that, if the third statement is replaced by “I believe that Sally is deceiving me”, the result is inconsistent.

In the class of models as set up so far, an agent’s beliefs are consistent (setting aside problems with infinity).  Therefore these agents are pretty safe from paradox.  But what if we ask about conditional belief, with the condition being a Moorean statement?

There is no problem with N(w, [A & ~BA]), since nothing guarantees that [A & ~BA] is in N(w, [A & ~BA]).  I would naturally take that neighborhood to exclude [A], for the agent in a world where [A & ~BA] is true would not know or believe that A is true, and might or might not know or believe that he does not believe that A.

But what about N(w, [BA & B~BA])? Again, not a problem with the representation in general, but this harks back to the above remarks about transparency.  

The transparency conditions imply that both [A] and [~BA] will then be in N(w, [BA & B~BA]), and [~BA] = W – [BA].  

Now, in this special case where the agent is self-transparent, [BA] will also be in there, because [A] is; and hence, since the neighborhood is closed under intersection, so will the empty set [BA] ∩ [~BA]: the filter will be improper.

So here we have a case where the agent’s question “What would I believe if …?” forces the answer, in effect, that under those conditions his beliefs would be incoherent.
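The closing argument can be replayed mechanically. In this illustrative sketch [BA] is simply identified with [A], a crude stand-in for transparency; once [A], [BA] and [~BA] are all in the conditional neighborhood, closing under intersection produces the empty set:

```python
W = frozenset({0, 1})
A = frozenset({0})
BA = A              # crude transparency stand-in: [BA] = [A] in this toy model
not_BA = W - BA     # [~BA] = W - [BA]

conditional_neighborhood = {A, BA, not_BA}

def close_under_intersections(N):
    """Smallest superset of N closed under pairwise intersection."""
    N = set(N)
    while True:
        new = {p & q for p in N for q in N} - N
        if not new:
            return N
        N |= new

closed = close_under_intersections(conditional_neighborhood)
improper = frozenset() in closed   # the filter has become improper
```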

REFERENCE

Brian F. Chellas (1980) Modal Logic: An Introduction. Cambridge University Press.


[1] If p is in N(w) then the filter {q: p ⊆ q} generated by p is part of N(w), and is proper. Those filters themselves form a proper filter as well, in the family of subsets of N(w).   
