The curious roles atomic sentences can play (2)

[A reflection on some papers by Hallden, Lemmon, Hiz, Makinson, and Segerberg, listed at the end.  Throughout I will use my own symbols for connectives, to keep the text uniform.]

Lemmon (1966) proved a theorem that I found quite startling at first sight, and I wondered what to make of it.  But then I found that Makinson (1973) used it to prove an interesting result for modal logics, one related to the role of atomic sentences, and that really kindled my interest.

It’s a story that spans several decades, and I will begin with Hallden.  But let me note beforehand that each of these authors had the same target, which they specified clearly though informally, namely the class of propositional modal logics.  

A class of logics.

Some logics, e.g. FDE, have a consequence relation but no theorems.  There are also logics that have the same theorems but different consequence relations.  So I’ll take a logic to be identified by a consequence relation or operator.  I’ll spell this out just for propositional logics.

Definition.  A syntax is sentential iff its non-logical vocabulary is a set of zero or more propositional constants and countably infinitely many propositional variables. 

The propositional constants and variables are the atomic sentences from which the complex sentences are built.

Definition. If S is a sentential syntax and SS its set of sentences then L is a logical consequence operator (or briefly, a logic) on S exactly if

  1. L is a closure operator on the subsets of SS
  2. L is invariant under substitution for propositional variables

That is, if X is a set of sentences and X = L(X), and P a propositional variable, and Y the result of substituting a specific sentence A for all occurrences of P in the sentences in X, then Y = L(Y).

So I make closure under the relevant rule of substitution part of the meaning of “logical consequence operator” or “logic”. [1]  

The set L(Λ), the closure of the empty set Λ, is the set of theorems of L, or the set of L-theorems.

Definition.  A logic L on a sentential syntax is classical iff that syntax has at least all the ‘truth-functional’ connectives (primitive or defined), the set of theorems of L contains (in some form or other) all theorems of the classical propositional calculus, and for all sentences A, B: if A and (A ⊃ B) are in X then B is in L(X). (Compare Lemmon’s formulation. [2])

For the last clause I will also use “L(X) is closed under detachment”. I will here abbreviate “classical logic on a sentential syntax” to just “classical logic”, since we will look only at the sentential logic case. [3]  A modal logic, as defined by the authors I mentioned, is a classical logic on a syntax that contains a modal propositional operator.

Hallden 

When C. I. Lewis introduced his family of modal logics they were five, but only two of them, S4 and S5, seemed easy to interpret.  Sören Halldén (1951) diagnosed the strangeness of S1, S2, and S3 as due to what we now call

Hallden incompleteness.  A  logical system L is Hallden incomplete iff there are sentences A and B which share no propositional variables and are not theorems of L while their disjunction (A v B) is a theorem of L.

The incompleteness, Hallden suggests, is this: “for S1, S2, and S3 the class of true formulas cannot coincide with the class of theorems” (Hallden 1951: 127).  His reason is straightforward:  if the class of true formulas includes a disjunction then it also includes the disjuncts.

But this looks quite beside the point: a logic is not meant to capture all truths, but all logical truths.  Hallden is not attending to the difference between a logic and a theory.  

Definition.  If L is a classical logic on syntax S, then a set X of sentences of S is an L-theory iff X contains all L-theorems and is closed under detachment.

The rule of substitution does not enter into it: we cannot deduce q from atomic sentence p.  Unlike detachment, the rule of substitution is not a rule designed to preserve truth but to preserve validity.  The set of L-theorems is an L-theory, but unlike most L-theories it is closed under substitution.  

We can do better than Hallden, I think, with the reflection that if A and B have propositional variables, but have none in common, then they could be interpreted to be about just any two different things altogether, with no connection at all – so if they are not logical truths, how could their disjunction be?  If they are not logical truths then they could be false, and if there is no connection between them they could both be false.  We’ll see arguments like this below.

 It was soon shown, after Hallden’s paper, that our familiar normal modal logics T, B, S4, S5 are all Hallden complete.

Lemmon

What Lemmon (1966) proved was this:

A classical logic L is Hallden incomplete if and only if there are classical logics L1 and L2, distinct from each other and from L, such that L = L1 ∩ L2.

Attention to closure under substitution is important for this theorem.  For example, if L is a classical logic, X is any L-theory, and P a propositional variable, then X may well be the intersection of the two least L-theories that contain (X ∪ {P}) and (X ∪ {~P}) respectively.  But the least logic L’ that contains the theorems of a logic L, and also P, has as theorems all sentences of the syntax, because of closure under substitution.  And similarly for the one with ~P, if the logic is classical, due to double negation; so those two extensions of L are then identical.

In the Appendix I will sketch Lemmon’s proof.  But we can see from the definitions that if L is Hallden incomplete, with A v B the relevant disjunction, then we can take L1 and L2 to be the least logics containing L that have A, B respectively as theorems.

The proof that in that case L1 and L2 are distinct from each other, as well as from L, is straightforward.  For suppose that B is a theorem of L1 as well.  In that case there are certain substitution instances A1, …, Am of A, which are of course not in L, such that (A1 & … & Am) ⊃ B is a theorem of L.  But L also contains all of A1 v B, …, Am v B, since they are substitution instances of its theorem A v B.  Their conjunction is tautologically equivalent to (A1 & … & Am) v B.  By Modus Ponens and disjunction elimination, it follows that B is a theorem of L, contrary to supposition.

Makinson and Segerberg

Hiz’s paper that I discussed in the previous post was called “A Warning …”.  

David Makinson also called his 1973 paper “A Warning …”.  Segerberg titled a section in his book “Makinson’s warning”.  I won’t say just now what the warning was, I’ll come to that later.  

Instead I’ll begin by presenting Makinson’s argument in a general form, somewhat generalized from his presentation, and only afterward turn to his specific conclusion.

Makinson defines the class of modal logics, his target, as follows. (I will use my terminology as set out at the beginning here.)  The sentential syntax S has all the ‘truth-functional’ connectives (whether primitive or defined, we will leave this open for now), as well as the unary connective □, and it has no propositional constants.

Definition.  A modal logic is a classical logic on S.

So modal logics will differ by including some formulas in which □ occurs (axioms for specific modal logics like K or S5).  Any intersection of modal logics is a modal logic.  The smallest is L0, the intersection of all, and its theorems are all and only instances of classical tautologies. 

But here is a variant: we introduce the notion of a propositional constant to form a slightly different family of logics.  The sentential syntax S+ is exactly like S except that it has one propositional constant, q.  

Definition.  A modal+ logic is a classical logic on S+.

The smallest modal+ logic on S+, let us say, is L1. Its theorems too are just the instances of classical tautologies. Propositional constants and propositional variables are both atomic sentences.  As a syntactic distinction it may look arbitrary, but the difference comes in the logic: the rule of substitution applies only to the propositional variables. 

Theorem. L1, the smallest modal+ logic, is Hallden incomplete.

By Lemmon’s theorem, this follows from a proof that L1 is the intersection of two logics that are distinct from each other and from L1.

The first part is interesting but easy.  The proof is by display of an example.  Let La be the least modal+ logic that contains L1 and □~q, and let Lb be the least modal+ logic that contains L1 and ~□~q.  Obviously L1 is part of both La and Lb.  Also, since the added formulas are not classical tautologies, and are contraries, La and Lb are distinct from each other and from L1.

For the converse, suppose that A is a theorem of both La and Lb.  Note that there are no propositional variables in either □~q or ~□~q.  Therefore, if formula A belongs to both La and Lb, then (□~q ⊃ A) and (~□~q ⊃ A) are theorems of L1.  But then A is a theorem of L1, by classical sentential logic.

Hence, by Lemmon’s theorem, this means that L1 is Hallden incomplete.  And the point is very general: no axioms for □ were assumed.  The result is due just to the presence of a propositional constant.  Illustrative examples of an ‘offending’ disjunction in L1 are easy to find.  (□~q v ~□~q) is a tautology, hence a theorem of L1, but its disjuncts are not.  For an example in which propositional variables are involved, let r and s be distinct propositional variables.  Then L1 has the theorem

            (r ⊃ □~q) v (s ⊃ ~□~q)

for that too is a truth-functional tautology, but the disjuncts, which have propositional variables but share none, are not theorems of L1.
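The tautology claim can be checked by brute force. A sketch (my own, not in the original): treat the modal subformula □~q as a single unanalyzed atom P; then the displayed disjunction (r ⊃ P) v (s ⊃ ~P) is a truth-functional tautology, while neither disjunct is.

```python
from itertools import product

def imp(a, b):
    """Material conditional on truth values."""
    return (not a) or b

atoms = [False, True]

# the disjunction is true under every assignment to r, s, P
assert all(imp(r, P) or imp(s, not P)
           for r, s, P in product(atoms, repeat=3))

# but neither disjunct is a tautology on its own
assert not all(imp(r, P) for r, P in product(atoms, repeat=2))
assert not all(imp(s, not P) for s, P in product(atoms, repeat=2))
print("(r > P) v (s > ~P) is a tautology; its disjuncts are not")
```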

Makinson’s result about choice of primitive operators

The full title of Makinson’s paper is “A Warning about the Choice of Primitive Operators in Modal Logic”. Makinson’s logic L0 is like my L0 (the least modal logic on S) except that Makinson considers two specific options for the syntax.  The first is that the primitive ‘truth-functional’ connectives are ⊃ and ⊥ (the falsum); the second is that they are & and ~.  Then he proves that on the first option L0 is the intersection of two logics L and L’, distinct from each other and from L0.  (And so, we may note, Hallden incomplete.)

His proof is the one of which I gave the general version above, except that “q” is replaced by “⊥”.  We can classify the falsum as a logical sign, a 0-adic operator, but syntactically it (also) plays the role of a sentence:  ~⊥ is a sentence,  ⊥&⊥ is a sentence, ⊥ is a sentence ….  The proof does not rely on any specific features of the falsum, but only on the fact that it plays the same role as any propositional constant.

On the second option L0 is not Hallden-incomplete (see Appendix for a sketch of his proof).  But the two options yield languages that are entirely inter-translatable.  As Segerberg (1982: 104) comments, the two options give us languages that “even  though [they] have the same ‘internal’ properties, they do not share all the ‘external’ ones”.  

Makinson recognizes that his result about the choice of primitives does not affect any of the more familiar modal logics.  In those, □~⊥ is a theorem.[4]  He took the result as being important for an insight into the structure of the lattice of modal logics. But as we also saw, his main argument generalizes to the presence of any propositional constant.  So it also gives an insight into the curious roles that atomic sentences can play.

APPENDIX.  Sketches of Lemmon’s and Makinson’s proofs

Outline of Lemmon’s proof that a classical logic L is Hallden incomplete if and only if there are classical logics L1 and L2, distinct from each other and from L, such that L = L1 ∩ L2.

Lemmon’s proof of the ‘only if’ part is straightforward, but made lengthy by the need to take the rule of substitution into account.  Suppose that L is Hallden incomplete.  Let A v B be an L-theorem, while A, B are not L-theorems and share no propositional variables.  Let L1 and L2 be the extensions of L made by adding A,  B respectively.  Clearly the L-theorems are all L1-theorems as well as L2-theorems. 

Suppose now that C is both an L1-theorem and an L2 theorem.  The proofs for C in L1 and in L2 must be from premises that are substitution instances A1, …, Am of A,  and substitution instances B1, …, Bn of B respectively. So the following are both L-theorems:

(A1 & …& Am) ⊃ C

(B1 & … & Bn) ⊃ C

therefore [(A1 & … & Am) v (B1 & … & Bn)] ⊃ C is an L-theorem.  But that disjunction is tautologically equivalent to the conjunction of all the disjunctions (Ai v Bj), with 1 ≤ i ≤ m and 1 ≤ j ≤ n.  Since the L-theorems include all substitution instances (Ai v Bj) of the L-theorem (A v B), it follows that C is an L-theorem.
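The distribution step relied on here can be spot-checked mechanically. A small sketch (my own) for the case m = n = 2: the disjunction of conjunctions has the same truth-table as the conjunction of all pairwise disjunctions.

```python
from itertools import product

# (A1 & A2) v (B1 & B2)  vs  the conjunction of all four (Ai v Bj)
for a1, a2, b1, b2 in product([False, True], repeat=4):
    lhs = (a1 and a2) or (b1 and b2)
    rhs = ((a1 or b1) and (a1 or b2) and
           (a2 or b1) and (a2 or b2))
    assert lhs == rhs
print("distribution equivalence verified for m = n = 2")
```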

The proof that if L is the intersection of two distinct logics L1 and L2 then it is Hallden incomplete is shorter but more interesting.  Select a theorem A of L1 that is not a theorem of L2, and a theorem B of L2 that is not a theorem of L1.  Clearly, neither is a theorem of L.  Since L1 is closed under substitution it will contain an ‘isomorphic’ substitution instance A’ of A formed by substituting propositional variables foreign to B, for the propositional variables in A.  Both L1 and L2 contain the disjunction (A’ v B).  Therefore so does their intersection L.

Outline of Makinson’s proof that on the second option, L0 is not Hallden-incomplete.

Suppose per absurdum that sentences A and B have no propositional variables in common, and that (A v B) is a theorem of L0, while A, B are not theorems.  So (A v B) is a classical tautology.  Let f assign truth-values 0, 1 to the atomic sentences in A such that f(A) = 0.  This is possible since A is not a tautology and contains neither the falsum nor any propositional constant.  Similarly let g assign truth-values to the atomic sentences in B such that g(B) = 0.  Since the domains of f and g do not overlap, we can combine them into a function h such that h(A) = h(B) = 0.  Thus h(A v B) = 0, which contradicts the supposition that (A v B) is a tautology.
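The combination step can be sketched in Python (the formula encoding and the sample formulas A and B are mine): falsifying valuations for two formulas with disjoint variables have disjoint domains, so they merge into one valuation falsifying the disjunction.

```python
from itertools import product

def variables(f):
    """Collect the propositional variables of a formula (nested tuples)."""
    if f[0] == 'var':
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def ev(f, val):
    """Evaluate a formula under a valuation (dict: variable -> bool)."""
    op = f[0]
    if op == 'var':
        return val[f[1]]
    if op == 'not':
        return not ev(f[1], val)
    if op == 'and':
        return ev(f[1], val) and ev(f[2], val)
    if op == 'or':
        return ev(f[1], val) or ev(f[2], val)

def falsifier(f):
    """Return a valuation making f false, or None if f is a tautology."""
    vs = sorted(variables(f))
    for bits in product([False, True], repeat=len(vs)):
        val = dict(zip(vs, bits))
        if not ev(f, val):
            return val
    return None

A = ('or', ('var', 'r'), ('not', ('var', 's')))   # r v ~s, not a tautology
B = ('and', ('var', 't'), ('var', 'u'))           # t & u, not a tautology

f, g = falsifier(A), falsifier(B)
h = {**f, **g}                    # disjoint domains, so no clash
assert ev(('or', A, B), h) is False
print("h falsifies (A v B):", h)
```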

REFERENCES

Sören Halldén (1951) “On the Semantic Non-Completeness of Certain Lewis Calculi”. The Journal of Symbolic Logic 16: 127-129.

E. J. Lemmon (1966) “A Note on Hallden-Incompleteness”. Notre Dame Journal of Formal Logic 7: 296-300.

David Makinson (1973) “A Warning about the Choice of Primitive Operators in Modal Logic”. Journal of Philosophical Logic 2: 193-196.

Notes


[1] I do not mean that it is a substantive constraint.  Rather, we classify logics by what is substitutable.  A propositional logic is one for which the class of substitutables is a set of sentences; a predicate logic is one where the substitutables are or include a set of primitive predicates.  If a closure operator on a syntax has no substitutables at all, however, I do not think it can count as a logic, whatever else it may be.

[2] Compare Lemmon (1966: 300) “Throughout this paper, a logical system is understood to be a propositional logic whose class of theorems is closed with respect to substitution as well as detachment, and which contains (in some form or other) the classical propositional calculus.”

[3] Since I refer to Segerberg below, I should note that this is not the same as his definition of “classical logic”, though it is not far.

[4] Makinson makes the stronger point that choice of primitive truth-functional operators will not make a difference in any congruential  modal logic. 

The curious roles atomic sentences can play (1)

[A reflection on papers by Hiz and Thomason, listed at the end.  Throughout I will use my own symbols for connectives, to keep the text uniform.]

Atomic sentences, we say, are not a special species.  They could be anything; they are just the ones we leave unanalyzed.  What we study is the structures built from them, such as truth-functional compounds.

But that innocuous looking “They could be anything” opens up some leeway.  It allows that the atomic sentences could have values or express propositions that the complex sentences cannot.  I will discuss two examples of how this leeway can be exploited for proofs of incompleteness.

The story I want to tell starts with a small error by Paul Halmos in 1956.

Halmos and Hiz

In his 1956 paper Paul Halmos wanted to display the classical propositional calculus with just & and ~ as primitive connectives.  (Looks familiar; what could be the problem?)  As a guide he took the presentation in Hilbert and Ackermann, with v and ~ as primitives.  For brevity and ease of reading they had introduced “x ⊃ y” as an abbreviation for “~x v y”.

  1. (x v x) ⊃ x
  2. x ⊃ (x v y)
  3. (x v y) ⊃ (y v x)
  4. (x v y) ⊃ (z v x . ⊃ . z v y )

Knowing how truth functions work, Halmos (1956: 368) treated “x v y” as an abbreviation of “~(~x & ~y)” and “x ⊃ y” as an abbreviation of “~(x & ~y)”, to read Hilbert and Ackermann’s axioms.  That means that his formulation, with ~ and & primitive, was this:

  1. ~[~(~x & ~y) & ~x]
  2. ~[x & ~~(~x & ~y)]
  3. ~[~(~x & ~y) & ~~(~y & ~x)]
  4. ~[~(~x & ~y) & ~[~[~(~z & ~x) & ~~(~z & ~y)]]]

But, unlike what it translates (Hilbert and Ackermann’s axiom set), this set of axioms is not complete!

Henryk Hiz (1958) showed why not.  (He mentioned that Halmos had raised the possibility himself in a conversation, and Rosser had done so as well, in a letter to Halmos.)

Let’s look for a difference in the roles of atomic sentences and of complex sentences in Halmos’ axiom set.  What springs to the eye in Axiom 2 is that there is an occurrence of x that is preceded by ~, and one that is not so preceded but ‘stands by itself’.  So we can make trouble by allowing an atomic sentence x to take values that a negated sentence ~x cannot have.

That is what Hiz does, with a three-valued truth-table in which an atomic sentence x can have value 1, 2, or 3, but ~x can only have values 1 or 3.

(He writes A and N for my & and ~.)

So if x has value 2 then ~(~x & x) has value ~ (~2 & 2) = ~(1 & 2) = ~1 = 3, which is not designated.  So there is a classical tautology, the traditional Non-Contradiction Principle, that does not receive a designated value.  

In this three-valued logic neither conjunction nor negation behaves classically, but all of Halmos’ axioms have the designated value 1.  So his formulation of classical sentential logic is sound but not complete.
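Only fragments of Hiz's tables appear above, so the following sketch encodes just those fragments (the full three-valued matrices are in his paper): ~2 = 1, ~1 = 3, 1 & 2 = 1, and 1 is the sole designated value. That is enough to reproduce the computation on the Non-Contradiction instance.

```python
# partial tables, exactly as used in the text above
NEG = {1: 3, 2: 1}
AND = {(1, 2): 1}
DESIGNATED = {1}

x = 2
value = NEG[AND[(NEG[x], x)]]       # the value of ~(~x & x) at x = 2
assert value == 3
assert value not in DESIGNATED      # Non-Contradiction gets an undesignated value
print("~(~2 & 2) =", value)
```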

Thomason

Thomason’s (2018) argument and technique, which I discussed in a previous post, were very close to Hiz’, but applied to modal logic.

In modal logic the basic K axiom can be formulated in at least these three ways:

  1. □(x ⊃ y) ⊃ (□x ⊃ □y)
  2. ◊(x v y) ≡ (◊x v ◊y)
  3. ~◊~(x ⊃ y) ⊃ (~◊~x ⊃ ~◊~y)

The third is a translation of the first with “□” translated as “~◊~”.  In the previous post (“Is Possibility-Necessity Duality Just a Definition?”, 07/17/2025) I explained Thomason’s model in which that third formulation of K is satisfied, but the Duality principle is shown to be independent.  Here I will show that satisfaction of Axiom (iii) is compatible with a violation of Axiom (ii).

Thomason presented a model with 8 values for the propositions.  I’ll use here the smaller 5-valued model which I described in the post. My presentation here, in a slightly adapted form, is sufficient for our purpose.  

This structure (matrix) is made up of the familiar 2-atom Boolean lattice B = {T, 1, ~1, ⊥} with the addition of an ‘alien’ element k.  The meet and join on B are the operators ∧ and +.  The operator ~ is the usual complement on B.  The only designated element is T.

To extend the operators to the alien element, we set ~k = ~1.  So x can take any of the five values, but ~x can only have a value in B.

What about the joins and meets of elements when one of them is alien?  They are all in B too, with these definitions:

Define.  x* = ~~x, called the Twin of x.  (Clearly x* = x except that k* = 1.)

Define.  For any elements x and y:   x & y = x* ∧ y*, and x v y = x* + y*.

Finally the possibility operator is defined by: ◊x = T iff x = 1 or T; ◊x = ⊥ otherwise.

Instances of Axiom (iii) always get the designated value (by inspection; note that every non-modal sentential part starts with ~).

But in Axiom (ii) we see the leeway, due to the fact that x can be any element.  The negation, join, or meet of anything with anything can only take values in B.  So Axiom (ii) does not always get a designated value, for if we set x = y = k, we get the result:

◊(k v k) = ◊(k* + k*) = ◊1 = T

(◊k v ◊k) = (⊥* + ⊥*) = ⊥
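This five-valued matrix is small enough to check exhaustively. A sketch in Python (representation and names are mine: B-elements as bit-pairs, with NOTONE for ~1 and BOT for ⊥), verifying Axiom (iii) over all pairs and the failure of Axiom (ii) at x = y = k:

```python
from itertools import product

# the two-atom Boolean algebra B, plus the alien element k
T, ONE, NOTONE, BOT, K = (1, 1), (1, 0), (0, 1), (0, 0), 'k'
ELEMENTS = [T, ONE, NOTONE, BOT, K]

def neg(x):
    if x == K:                      # the one stipulation: ~k = ~1
        return NOTONE
    return (1 - x[0], 1 - x[1])    # Boolean complement in B

def twin(x):                        # x* = ~~x, so k* = 1
    return neg(neg(x))

def disj(x, y):                     # x v y = x* + y*  (join in B)
    a, b = twin(x), twin(y)
    return (a[0] | b[0], a[1] | b[1])

def imp(x, y):                      # x > y taken as ~x v y
    return disj(neg(x), y)

def poss(x):                        # possibility: T on {1, T}, bot otherwise
    return T if x in (ONE, T) else BOT

def nec(x):                         # box as ~<>~, per formulation (iii)
    return neg(poss(neg(x)))

# Axiom (iii) is designated for every pair of values:
assert all(imp(nec(imp(x, y)), imp(nec(x), nec(y))) == T
           for x, y in product(ELEMENTS, repeat=2))

# Axiom (ii) fails at x = y = k: the two sides come apart.
assert poss(disj(K, K)) == T
assert disj(poss(K), poss(K)) == BOT
print("(iii) valid on all 25 pairs; (ii) refuted at x = y = k")
```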

In Thomason’s article this technique is used to show that with formulation (iii) of K, the duality ¬◊¬x = □x is independent, and needs to be added as an axiom rather than a definition.  

Axiom (ii), with the attendant rules changed mutatis mutandis, and the Duality introduced as a definition, is a complete formulation of system K (cf. Chellas 1980: 117, 122).  A formulation that has Axiom (iii) instead of Axiom (ii) is not.

Hiz’ warning was well taken.

References

Chellas, Brian F. (1980) Modal Logic: An Introduction. Cambridge.

Hiz, Henryk (1958) “A Warning about Translating Axioms”. American Mathematical Monthly 65: 613-614.

Thomason, Richmond H. (2018) “Independence of the Dual Axiom in Modal K with Primitive  ◊”.  Notre Dame Journal of Formal Logic 59: 381-385.

Is Possibility-Necessity Duality Just a Definition?

[A reflection on Richmond Thomason’s “Independence of the Dual Axiom in Modal K with Primitive ◊”]

Within modal logic we customarily take necessary to be equivalent to not possibly not.  Thomason shows that the answer to my title question is NO: if we formulate the logic with ◊ as primitive, we need to add that equivalence as an axiom.  There is an interpretation of K in which the duality fails.  The interpretation, he says, is exotic ….  It is, but also startling and provocative.  

I’ll explain his model and reasoning, and then explore his method with a smaller model (which gives a weaker result).  His method: construction of a small language that is classical (truth-functional connectives) and hyper-intensional.

In Thomason’s paper the family of propositions is represented by the union B ∪ B’ of two Boolean algebras with two atoms each: B = {V, 1, 2, Λ}, with top V, atoms 1 and 2, and bottom Λ, and B’ = {V’, 1’, 2’, Λ’} likewise.

The partial order within this union is just within each of the parts: if x belongs to B and y to B’, they are not ordered relative to each other, and they have no meet or join.  Think of this as a matrix, with only V and V’ as designated values (‘true’).

There is a sort of complement operator, which I will symbolize as ¬, that is ordinary in B but takes elements in B’ to B.  

As a result, of course, ¬¬ x is always in B, and is called Twin(x).  Specifically, ¬1 = 2,  ¬2 = 1.  But  ¬ 2’ = 1 as well, so Twin(2’) = ¬ ¬ 2’ = 2.  Similarly, ¬1’ = 2, so Twin(1’) = 1.

Then there is sort of conditional operator (x → y) = (¬Twin(x) v Twin(y)); its values are always in B.  Thus when sentences are given these propositions as their semantic values, and “not” and “if then” are assigned ¬ and →, classical propositional logic is sound.

Clearly ¬ is not an involution on B ∪ B’, but, as in Intuitionistic logic, doubly complemented propositions act classically.

So, as far as that is concerned, the strange algebraic structure of the family of propositions is hidden from sight.  But that strangeness can be utilized in the interpretation of ◊.

That interpretation is simple:  each of B and B’ is divided into ‘possible’ and ‘impossible’ regions: the ‘possible’ propositions are those inside the dotted ellipses.

The possibility operator, too, when applied to an element of B’, shifts attention to B:

◊1 = V, ◊V = V; but if x is in B’, we still refer back to B: ◊2’ = V, ◊V’ = V.  In all other cases ◊x = Λ.  Notice that for all x, ◊x is in B, so this addition cannot affect the soundness of classical sentential logic.

With this interpretation we can verify the K axiom formulated with ◊ as primitive:

¬◊¬(x → y) → [¬◊¬x → ¬◊¬y]

To check that no assignment of values to x, y yields a counterexample is straightforward.
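That check can be done mechanically. A sketch of the eight-valued model (my own encoding: B-elements as bit-pairs, primed elements tagged 'p', and the bottom written Λ, which I read the '∧' in the ◊ clause as). The complements not stated explicitly in the post, ¬V' and ¬Λ', are my assumption: ¬ sends a primed element to the Boolean complement of its unprimed shadow, which reproduces ¬2' = 1 and ¬1' = 2.

```python
from itertools import product

V, ONE, TWO, LAM = (1, 1), (1, 0), (0, 1), (0, 0)
B = [V, ONE, TWO, LAM]
Bp = [('p', x) for x in B]          # V', 1', 2', Λ'
Vp, ONEp, TWOp, LAMp = Bp

def neg(x):
    shadow = x[1] if x[0] == 'p' else x     # primed elements drop into B
    return (1 - shadow[0], 1 - shadow[1])

def twin(x):                        # Twin(x) = ¬¬x, always in B
    return neg(neg(x))

def join(x, y):                     # v via twins; values always land in B
    a, b = twin(x), twin(y)
    return (a[0] | b[0], a[1] | b[1])

def imp(x, y):                      # (x → y) = ¬Twin(x) v Twin(y)
    return join(neg(twin(x)), twin(y))

def poss(x):                        # ◊ is V on 1, V, 2', V'; Λ elsewhere
    return V if x in (ONE, V, TWOp, Vp) else LAM

def nec(x):                         # □x defined as ¬◊¬x
    return neg(poss(neg(x)))

# the ◊-primitive K axiom is designated under every assignment:
assert all(imp(nec(imp(x, y)), imp(nec(x), nec(y))) == V
           for x, y in product(B + Bp, repeat=2))

# Duality fails: first part at x = 2', second part at x = 1'
assert imp(poss(TWOp), poss(twin(TWOp))) != V
assert imp(poss(twin(ONEp)), poss(ONEp)) != V
print("K valid on all 64 pairs; Duality refuted at 2' and 1'")
```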

What has happened to duality?  If we define □ x as ~◊~x, what is the status of 

            Duality. ◊x → ~□~x,  and ~□~x   → ◊x ?

The first part is the same as ◊x → ~~◊~~x, which is the same as ◊x → ◊~~x.  If x = 2’ then this is (◊2’ → ◊~~2’), which is (V → ◊2) = (V → Λ) = Λ, hence not true.

The second part is similarly seen not to be true, using 1’ rather than 2’.

Adding Duality as an axiom will eliminate the ‘exotic’ interpretation.

PART TWO.

Thomason’s method has a general form: 

choose a structure and interpretation in such a way that all the semantic values of complex sentences belong to a Boolean algebra, and use extra structure in the interpretation of non-Boolean operators.  

That makes it possible to construct non-standard interpretations of even normal modal logics.

I thought I’d try my hand at it with a small familiar lattice that has a Boolean sublattice.  As it turns out (not surprisingly) it gets only half of Thomason’s result.

This is N5, the smallest non-modular (hence non-distributive) lattice, the ‘pentagon lattice’.  Let us define operations on it in the way Thomason did:

a sort of complement:  ┐1 = ⎯1, ┐⎯1 = 1, ┐k = ⎯1, ┐T = ⊥, ┐⊥ = T.

We define the Twin x* of x to be ┐┐x.  Note that k* = 1.

N5 has a Boolean sublattice = {T, 1, ⎯ 1, ⊥} = {x*: x in N5}

 a sort of conditional:   (x  → y) = (┐x* v y*)

Only T is designated (‘true’). Interpretation of the syntax: as above; once again all semantic values of complex sentences are located in the Boolean sublattice, so classical sentential logic theorems are all true.  

a sort of possibility operator: ◊x = T iff x = 1 or T;  ◊x = ⊥ otherwise.

Verification of the K axiom

            ¬ ◊¬ (x → y) → [¬ ◊¬x → ¬◊¬y]

Note:

¬◊¬T = T,  ¬◊¬1 = T

¬◊¬k = ¬◊(⎯1) = T,  ¬◊¬(⎯1) = ¬◊1 = ⊥

¬◊¬⊥ = ⊥

For the consequent  of the K axiom to be ⊥ means that [◊¬x v ¬◊¬y] =  ⊥, so:

 x is k or T or 1, and y is (⎯1) or ⊥

 In these cases the antecedent is ¬◊¬ followed by:

(k  → ⎯ 1) =  (¬k*  v ⎯ 1) =  ⎯ 1

(k  → ⊥) =    (¬k*  v ⊥) = ⎯ 1

(T→ ⎯ 1) =    ⎯ 1       

(T  →  ⊥ ) =      ⊥

(1 → ⎯ 1) =    ⎯ 1        

(1  →  ⊥ ) =    ⎯ 1

and the result of prefixing ¬◊¬ is in each case  ⊥.  So any attempt at a counterexample fails.

Now for the duality axiom:

Duality. ◊x → ¬□¬x,  and ¬□¬x → ◊x

The second part is the same as ¬¬◊¬¬x → ◊x, which is the same as ◊¬¬x → ◊x.  But (◊¬¬k → ◊k) = (◊1 → ◊k) = (T → ⊥) = ⊥.

However, the converse holds, so only half of Duality is refuted.

PART THREE.  How can we generalize this method?

Note that in the above, for all x, ¬x = ¬¬¬x = ¬Twin(x).  So the mapping of B’ into B does not need to be defined in terms of ¬.

So we can simply say: we have a map Twin, and we define ¬ and  → to be as usual on B, and for x, y in B’ we define   (x → y) = (Twin(x) → Twin(y)), and define ¬x = ¬Twin(x).  

For x in B, set Twin(x) = x to make it a map of the entire structure into itself.

So that is one map; then choose another map of the structure into itself, call it α, whose values are all in B.  (That is necessary to make sure that classical propositional theorems remain valid.)  It may have any other properties you like.

Now you have a model of a classical sentential calculus extended with a single unary propositional operator, which you can tailor to satisfy the axioms of your choice.

For example, thinking about the K□ axiom for α, you could specify that if x ≤ y then αx ≤ αy.  But you could do the opposite, so that if p implied q then αq would imply αp, with α acting like negation (but perhaps not just like negation).  Or you could want α to be read “It is so in the story that …”, and the story could be by Graham Priest.

Remark: hyper-intensionality

It is remarkable that by such simple means Thomason created a language that is at once classical and hyperintensional:  

Hyper-intensionality. For all x in B and in B’, x → ¬¬x and ¬¬x → x.  But it is not the case that ◊x → ◊¬¬x.  For example, ◊2’ = V but ◊¬¬2’ = ◊2 = Λ.

Note that if x is in B then so is ¬¬x.  On Thomason’s interpretation of the language, we can add that if A is any complex sentence then A will imply ◊¬¬A.  Only by using an atomic sentence (with value in B’) is there a counterexample to Duality.

In the case of the axiom ¬◊¬(A → B) → [¬◊¬A → ¬◊¬B], the ◊ operator is applied only to sentences that begin with ¬, hence are complex.  So no such way of providing counterexamples applies there.

The created language, at once classical and hyper-intensional, is intriguingly unusual! 

REFERENCE

Thomason, Richmond (2018) “Independence of the Dual Axiom in Modal K with Primitive  ◊”.  Notre Dame Journal of Formal Logic 59: 381-385.

Logic of belief, omega-inconsistency

In the past I have mainly thought of the logic of belief (if not connected with probability) as one of the family of normal modal logics.  Now I realize that there is a problem which that cannot accommodate, while it seems that neighborhood semantics can.

Think of the agent who believes, for each natural number n, that there are at least n stars,  but also believes that the number of stars is finite.  Such examples take the simple form:  something is F, 1 is not F, 2 is not F, ….  

We get a geometric example with intervals.  I dropped a point-particle on the interval [0, 1] of the real line.  My first belief is that it fell in the half-open interval [1/2, 1).  My other relevant beliefs are that it fell in each of the intervals (1/2, 1], (3/4, 1], …, (1 − 1/2^n, 1], ….  My first belief has a non-empty intersection with each of the other beliefs.  But the intersection of those other beliefs is {1}, which is excluded by my first belief.
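A small sketch of the interval example (taking the other beliefs to be the shrinking intervals (1 - 1/2^n, 1], so that their total intersection is the single point 1): any finite batch of them still overlaps the first belief [1/2, 1).

```python
from fractions import Fraction as F

def left(n):
    """Left endpoint of the n-th belief (1 - 1/2^n, 1]."""
    return 1 - F(1, 2**n)

for N in range(1, 30):
    a = max(left(n) for n in range(1, N + 1))
    # the first N other beliefs intersect to (a, 1]; since a < 1 this
    # still meets [1/2, 1) in the non-empty interval (a, 1)
    assert F(1, 2) <= a < 1

# the left endpoints tend to 1, so only the point 1 survives in the
# limit, and 1 is excluded by the first belief [1/2, 1)
print("every finite subfamily is satisfiable; the whole family is not")
```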

From any finite subset of such a family of beliefs it is not possible to deduce a contradiction, but the family as a whole is not satisfiable.  Goedel called this omega-inconsistency.

These examples are, to be reader-friendly, generated by simple recipes, and hence amenable to an argument by mathematical induction, leading to a straight contradiction.  There are examples not of that sort; I just can’t write them down.  In any case, the agent may not have mastered mathematical induction.

The Problem.  In the normal modal logic approach, the agent in world w believes that A if and only if A is true in all the worlds w’ which bear a certain relation R to w.  If propositions are sets of worlds, and [A] is the proposition that A expresses, with B the ‘the agent believes that’ connective, this amounts to: 

BA is true in w exactly if {w’: wRw’} ⊆ [A].

But in the case of the omega-inconsistent believer, {w’: wRw’} would then have to be part of every proposition in his set of beliefs.  And there is no world in which all of those are true.  Thus, in that case, {w’: wRw’} is empty.  But that is no different from a believer who believes that A & ~A.

So there is, in normal modal semantics, no way to distinguish the omega-inconsistent believer from the believer who believes a simple self-contradiction.

The Solution.  Here I will rely on my previous post, about logic of belief and neighborhood semantics.

Given simply that world w has neighborhood N(w), and that p is believed in w iff p is a member of N(w), the distinction between ‘ordinary’ and omega-inconsistency can be respected.  For N(w) is a filter on the algebra of propositions, merely closed under finite intersections and superset formation.

So suppose the agent in w believes p = (something is F), q(1) = (1 is not F), q(2) = (2 is not F), …, and those propositions generate the filter N(w).  In that case ~p is not in N(w), so ~B~p is true in w.  At the same time, there is no world in which all of N(w) is true, so the agent’s beliefs, taken altogether, imply all propositions, including ~p.
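A finite toy version of the neighborhood construction can be sketched as follows (the worlds and the particular propositions p, q are my own illustrative choices): propositions are sets of worlds, and N(w) is the filter generated by the basic beliefs, i.e. the closure under finite intersection and superset formation.

```python
from itertools import chain, combinations

W = frozenset(range(4))                 # four worlds, 0..3
p = frozenset({0, 1})                   # believed in w
q = frozenset({1, 2})                   # believed in w

def generated_filter(base, W):
    core = frozenset(W).intersection(*base)      # meet of the generators
    all_subsets = chain.from_iterable(
        combinations(sorted(W), r) for r in range(len(W) + 1))
    # the filter elements are exactly the supersets of the core
    return {frozenset(s) for s in all_subsets if core <= frozenset(s)}

N_w = generated_filter([p, q], W)
assert p in N_w and q in N_w            # B p and B q hold at w
assert (p & q) in N_w                   # closed under finite meets
assert (W - p) not in N_w               # ~p is not believed: ~B~p holds
assert frozenset() not in N_w           # condition (cd): Λ is not in N(w)
print("filter size:", len(N_w))
```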

This is an adequate representation of omega-inconsistent belief, with the empty set not in N(w).  That shows that condition

            (cd)  Λ is not in N(w)

must not be read as ‘N(w) is consistent’ but only as ‘Each finite subset of N(w) is consistent’.

The Upshot

What this means is that you can be entirely wrong about what the world is like, with ideas that are not realistic under any possible conditions, and still live a useful, productive, and happy life.  And your family and friends might never find out.