What Could Be the Most Basic Logic?

It was only in the 19th century that alternatives to Euclidean geometry appeared.  What was to be respected as the most basic geometry for the physical sciences: Euclidean, non-Euclidean with constant curvature, projective?  Frege, Poincaré, Russell, and Whitehead were, to various degrees, on the conservative side on this question.[1]  

In the 20th century, alternatives to classical logic appeared, even as it was being created in its present form.  First Intuitionistic logic, then quantum logic, and then relevant and paraconsistent logics, each with a special claim to be more basic, more general in its applicability, than classical logic.

Conservative voices were certainly heard.  John Burgess told his seminars “Heretics in logic should be hissed away!”.  David Lewis described relevant and paraconsistent logic as logic for equivocators.  The other side was not quiet.  Just as Hans Reichenbach gave a story of coherent experience in a non-Euclidean space, so Graham Priest wrote a story of characters remaining seemingly coherent throughout a self-contradictory experience.

Unlike in the case of Euclidean geometry, the alternatives offered for propositional logic have all been weaker than classical logic.  So how weak can we go?  What is weaker, but still sufficiently strong, to qualify as “the” logic, logic simpliciter?

I am very attracted to the idea that a certain subclassical logic (FDE) has a better claim than classical logic to be “the” logic, the most basic logic.  It is well studied, and would be quite easy to teach as a first logic class. Beall (2018) provides relevant arguments here – the arguments are substantial, and deserve discussion.  But I propose to reflect on what the question involves, how it is to be understood, from my own point of view, to say why I find FDE attractive, and what open questions I still have.

1.      A case for FDE

The question of what the most basic logic is sounds factual, but I cannot see how it could be.  However, a normative claim of the form

Logic L is the weakest logic to be respected in the formulation of empirical or abstract theories

seems to make good sense.  We had the historical precedent of Hilary Putnam’s claiming this for quantum logic.  I will come back to that claim below, but I see good reasons to say that FDE is a much better candidate.

2.      Starting a case for FDE

FDE has no theorems.  FDE is just the FDE consequence relation, the relation originally called tautological entailment, and FDE recognizes no tautologies.  Let us call a logic truly simple if it has no theorems.

To be clear: I take L to be a logic only if it is a closure operator on the set of sentences of a particular syntax.  The members of L(X) are the consequences of X in L, or the L-consequences of X; they are also called the sentences that X entails in L.  A sentence A is a theorem of L iff A is a member of L(X) for all X.  The reason why FDE has no theorems is that it meets the variable-sharing requirement: that is to say, B is an L-consequence of A only if there is an atomic sentence that is a component of both B and A.
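
For concreteness, here is a minimal computational sketch of the standard Belnap–Dunn four-valued semantics for FDE, on which tautological entailment is preservation of “at least true”; the encoding and function names are mine, not part of any canonical presentation.

    from itertools import product

    # The four Belnap-Dunn values, coded as subsets of {"T", "F"}:
    # told only true, told only false, told both, told neither.
    VALUES = [frozenset(), frozenset({"T"}), frozenset({"F"}), frozenset({"T", "F"})]

    def neg(v):
        return frozenset(({"T"} if "F" in v else set()) | ({"F"} if "T" in v else set()))

    def conj(v, w):
        return frozenset(({"T"} if "T" in v and "T" in w else set()) |
                         ({"F"} if "F" in v or "F" in w else set()))

    def disj(v, w):
        return frozenset(({"T"} if "T" in v or "T" in w else set()) |
                         ({"F"} if "F" in v and "F" in w else set()))

    def atoms(f):
        return {f[1]} if f[0] == "atom" else set().union(*(atoms(g) for g in f[1:]))

    def value(f, val):
        if f[0] == "atom": return val[f[1]]
        if f[0] == "not":  return neg(value(f[1], val))
        if f[0] == "and":  return conj(value(f[1], val), value(f[2], val))
        return disj(value(f[1], val), value(f[2], val))

    def designated(v):
        return "T" in v   # "at least true"

    def entails(premises, conclusion):
        """X entails A in FDE iff every assignment that designates all of X designates A."""
        props = sorted(set().union(atoms(conclusion), *(atoms(p) for p in premises)))
        for combo in product(VALUES, repeat=len(props)):
            val = dict(zip(props, combo))
            if all(designated(value(p, val)) for p in premises) and \
               not designated(value(conclusion, val)):
                return False
        return True

    p, q = ("atom", "p"), ("atom", "q")
    print(entails([p, ("not", p)], q))          # False: contradictions do not explode
    print(entails([], ("or", p, ("not", p))))   # False: no theorems, not even excluded middle
    print(entails([p], ("or", p, q)))           # True
    print(entails([p], q))                      # False: variable sharing at work

The second line of output makes the point vivid: with the “neither” value assigned to p, even p v ~p fails to be designated, which is why FDE recognizes no tautologies.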

So the initial case for FDE can be this: it is truly simple, as it must be, because

logic does not bring us truths, it is the neutral arbiter for reasoning and argumentation, and supplies no answers of its own. 

To assess this case we need a clear notion of what counts as a logic (beyond its being a closure operator), and what counts as supplying answers.  If I answered someone’s question with “Maybe so and maybe not”, she might well say that I have not told her anything.  But is that literally true?  A. N. Prior once made a little joke, “What’s all the fuss about Excluded Middle?  Either it is true or it is not!”.  We would have laughed less if there had been no Intuitionistic logic.

3.      Allowance for pluralism

My colleague Mark Johnston likes to say that the big lesson of 20th century philosophy was that nothing reduces to anything else.  In philosophy of science, pluralism, the denial that every scientific theory reduces to physics, has had a good deal of play.

As I mentioned, FDE’s notable feature is the variable-sharing condition for entailment.  If A and B have no atomic sentences in common, then A does not entail B in FDE.  So to formulate two theories that are logically entirely independent, choose two disjoint subsets of the atomic sentences of the language.  Within FDE, theories which are formulated in the resulting disjoint sublanguages will lack any connection whatsoever.    

4.      Could FDE be a little too weak?

The most conservative extension, it seems to me, would be to add the falsum, ⊥.  It is a common impression that adding this as a logical sign, with the stipulation that all sentences are consequences of ⊥, is costless.  

But if we added it to FDE semantics with the stipulation that ⊥ is false and never true, on all interpretations, then we get a tautology after all: ~⊥.  The corresponding logic, call it FDE+, then has ~⊥ as a theorem.  So FDE+ is not truly simple; it fails the above criterion for being “the” logic.  Despite that common impression, it is stronger than FDE, although the addition looks at once minimal and important.  Is FDE missing out on too much?
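
To make the point concrete, the four-valued sketch from section 2 can be extended (reusing neg, conj, disj, and designated); the only addition is the constant falsum, stipulated, as above, to be false and never true on every interpretation.

    FALSUM = ("falsum",)

    def value_plus(f, val):
        if f[0] == "falsum": return frozenset({"F"})   # false only, on every interpretation
        if f[0] == "atom":   return val[f[1]]
        if f[0] == "not":    return neg(value_plus(f[1], val))
        if f[0] == "and":    return conj(value_plus(f[1], val), value_plus(f[2], val))
        return disj(value_plus(f[1], val), value_plus(f[2], val))

    # ~falsum contains no atomic sentences, so there is nothing to vary over:
    # it is designated on every interpretation, i.e. FDE+ has a theorem.
    print(designated(value_plus(("not", FALSUM), {})))   # True

And the stipulation that every sentence is a consequence of ⊥ also holds on this semantics, vacuously, since no interpretation designates ⊥.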

How should we think of FDE+?  

Option one is to say that ⊥, a propositional constant, is a substantive statement, that adding it is like adding “Snow is white”, so its addition is simply the creation of a theory of FDE.

Option two is to say that FDE+ is a mixed logic, not a pure logic.  The criterion I would propose for this option is this:

A logic L defined on a syntax X is pure if and only if every syntactic category except that of the syncategoremata (the logical and punctuation signs) is subject to the rule of substitution.

So for example, in FDE the only relevant category is the sentences, and if a set of premises X entails A in FDE, then any systematic substitution of sentences for atomic sentences in X entails the corresponding substitution in A.  

But in FDE+ substitution for atomic sentence ⊥ does not preserve entailment in general.  Hence FDE is a pure logic, and FDE+ is not.
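
The contrast can be illustrated with the sketch from section 2 (reusing its entails function); the particular substitution used is just one arbitrary example.

    def substitute(f, sub):
        """Systematically replace atomic sentences according to the map sub."""
        if f[0] == "atom": return sub.get(f[1], f)
        if f[0] == "not":  return ("not", substitute(f[1], sub))
        return (f[0], substitute(f[1], sub), substitute(f[2], sub))

    p, q, r = ("atom", "p"), ("atom", "q"), ("atom", "r")
    sub = {"p": ("and", q, r)}                       # substitute (q & r) for p, systematically
    print(entails([("and", p, q)], p))               # True
    print(entails([substitute(("and", p, q), sub)],
                  substitute(p, sub)))               # still True: FDE entailment is preserved

In FDE+, by contrast, ⊥ entails any sentence q (no interpretation designates ⊥), while the result of substituting an atomic sentence r for ⊥ does not entail q; that is the failure of substitution just noted.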

The two options are not exclusive.  By the usual definition, a theory of logic L is a set of sentences closed under entailment in L.  So the set of theorems of FDE+ is a theory of FDE.  However, it is a theory of a very special sort, not like the sort of theory that takes the third atomic sentence (which happens to be “Snow is white”) as its axiom.  

Open question: how could we spell out this difference between these two sorts of theories?  

5.      Might FDE be too strong?

FDE is weak compared to classical logic, but not very weak.  What about challenges to FDE as too strong?  

It seems to me that any response to such a challenge would have to argue that a notion of consequence weaker than FDE would be at best a closure operator, not one of logical interest.  But the distinction cannot be empty or a matter of fiat.

Distributivity

The first challenge to classical logic that is also a challenge to FDE came from Birkhoff and von Neumann, and was to distributivity.  They introduced quantum logic, and at one point Hilary Putnam championed that as a candidate for “the” logic.  Putnam’s arguments did not fare well.[2]  

But there are simpler examples that mimic quantum logic in the relevant respect.

Logic of approximate value-attributions  

Let the propositions (which sentences can take as semantic content) be the couples [m, E], with E  an interval of real numbers – to be read as “the quantity in question (m) has a value in E”.

The empty set 𝜙 is counted as an interval.  The operations on these propositions are defined:

[m, E]  ∧ [m, F] = [m, E ∩ F]

[m, E]  v [m, F]  =  [m, E Θ F], 

where E Θ F is the least interval that contains E ∪ F.

Then if E, F, G are the disjoint intervals  (0.3, 0.7), [0, 0.3], and [0.7, 1],  

[m, E] ∧ ([m, F] v [m, G]) = [m, E] ∧ [m, [0, 1]] = [m, E]

([m, E] ∧ [m, F]) v ([m, E] ∧ [m, G]) = [m, 𝜙]

which violates distributivity.
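
For concreteness, here is a small computational sketch of this little language, taking ∧ as intersection and v as the hull operation Θ; the interval encoding (endpoints plus openness flags) is mine.

    # An interval is (lo, lo_open, hi, hi_open); None stands for the empty interval.
    def meet(E, F):
        """Intersection of two intervals."""
        if E is None or F is None:
            return None
        a, ao, b, bo = E
        c, co, d, do = F
        if a > c:   lo, lo_open = a, ao
        elif c > a: lo, lo_open = c, co
        else:       lo, lo_open = a, ao or co
        if b < d:   hi, hi_open = b, bo
        elif d < b: hi, hi_open = d, do
        else:       hi, hi_open = b, bo or do
        if lo > hi or (lo == hi and (lo_open or hi_open)):
            return None
        return (lo, lo_open, hi, hi_open)

    def join(E, F):
        """The least interval containing the union (the operation Θ)."""
        if E is None: return F
        if F is None: return E
        a, ao, b, bo = E
        c, co, d, do = F
        lo, lo_open = (a, ao) if a < c else (c, co) if c < a else (a, ao and co)
        hi, hi_open = (b, bo) if b > d else (d, do) if d > b else (b, bo and do)
        return (lo, lo_open, hi, hi_open)

    E = (0.3, True, 0.7, True)      # the open interval (0.3, 0.7)
    F = (0.0, False, 0.3, False)    # the closed interval [0, 0.3]
    G = (0.7, False, 1.0, False)    # the closed interval [0.7, 1]

    print(meet(E, join(F, G)))              # (0.3, True, 0.7, True): just E
    print(join(meet(E, F), meet(E, G)))     # None: the empty proposition

The failure comes from the hull: F Θ G fills in the gap (0.3, 0.7), while neither conjunct on the right-hand side does.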

This looks like a good challenge to distributivity if the little language I described is a good part of our natural language, and if it can be said to have a logic of its own.

The open question:  

if we can isolate any identifiable fragment of natural language  and show that taken in and by itself, it has a logical structure that violates a certain principle, must “the” logic, the basic logic, then lack that principle?

Closure and conflict

We get a different, more radical, challenge from deontic logic.  In certain deontic logics there is allowance for conflicting obligations.  Suppose an agent is obliged to do X and also obliged to refrain from doing X, for reasons that cannot be reconciled.  By what logical principles do these obligations imply further obligations?  At first blush, if doing X requires doing something else, then he is obliged to do that as well, and similarly for what ~X requires.  But he cannot be obliged to both do and refrain from doing X: ought implies can.

Accordingly, Ali Farjami introduced the Up operator.  It is defined parasitically on classical logic: a set X is closed under Up exactly if X contains the classical logical consequences of each of its members.  For such an agent, caught up in moral conflict, the set of obligations he has is Up-closed, but not classical-logic closed.

If we took Up to be a logic, then it would be a logic in which premises A, B do not entail (A & B). Thus FDE has a principle which is violated in this context.

To head off this challenge one riposte might be that in deontic logic this sort of logical closure applies within the scope of a prefix.  The analogy to draw on may be with prefixes like “In Greek mythology …”, “In Heinlein’s All You Zombies …”.  

Another riposte can be that FDE offers its own response to the person in irresolvable moral conflict.  He could accept that the set of statements A such that he is obliged to see to it that A, is an FDE theory, not a classical theory.  Then he could say: “I am obliged to see to it that A, and also that ~A, and also that (A & ~A).  But that does not mean that anything goes; I have landed in a moral conflict, but not in a moral black hole.”

Deontic logic and motivation from ethical dilemmas only provide the origin for the challenge, and may be disputed.  Those aside, we still have a challenge to meet.

We have here another departure from both classical logic and FDE in an identifiable fragment of natural language.  So we have to consider the challenge abstractly as well.  And it can be applied directly to FDE.

Up is a closure operator on sets of sentences, just as is any logic.  Indeed, if C is any closure operator on sets of sentences then the operator

Cu:   Cu(X) = ∪{C({A}): A in X}

is also a closure operator thereon.  (See Appendix.)

So we can also ask about FDEu.  Is it a better candidate to be “the” logic?  

FDEu is weaker than FDE, and it is both pure and truly simple.  But it sounds outrageous, that logic should lack the rule of conjunction introduction!
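
That failure can be checked directly, reusing the entails function from the sketch in section 2; on the definition of Cu above, a sentence follows from X in FDEu exactly if it follows in FDE from some single member of X.

    def entails_u(premises, conclusion):
        """FDEu: the operator Cu applied to FDE consequence."""
        return any(entails([p], conclusion) for p in premises)

    p, q = ("atom", "p"), ("atom", "q")
    print(entails_u([p, q], ("and", p, q)))   # False: conjunction introduction fails
    print(entails_u([("and", p, q)], p))      # True: conjunction elimination survives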

6.      Coda

We could give up and just say: for any language game that could be played there is a logic – that is all.

But a normative claim of form

Logic L is the weakest logic to be respected in the formulation of empirical or abstract theories

refers to things of real life importance.  We are not talking about just any language game.  

Last open question:  if we focus on the general concept of empirical and abstract theories, can we find constraints on how strong that weakest logic has to be?

FDE is both pure and truly simple. Among the well-worked out, well studied, and widely applicable logics that we already have, it is the only one that is both pure and truly simple.  That is the best case I can make for it so far.

7.      APPENDIX

An operator C on a set S is a closure operator iff it maps subsets of S to subsets of S such that, for all subsets X and Y of S:

  1. X ⊆ C(X)
  2. CC(X) = C(X)
  3. If X ⊆ Y then C(X) ⊆ C(Y)

Definition.  Cu(X) = ∪{C({A}): A in X}.  

Proof that Cu is a closure operator:

  •  X ⊆ Cu(X).  For if A is in X, then A is in C({A}), hence in Cu(X).
  •  CuCu(X) = Cu(X).  Right to left follows from the preceding.  Suppose A is in CuCu(X).  Then there is a member B of Cu(X) such that A is in C({B}), and a member E of X such that B is in C({E}).  Since {B} ⊆ C({E}), monotonicity gives C({B}) ⊆ CC({E}), so A is in CC({E}).  But CC({E}) = C({E}), so A is in Cu(X).  
  • If X ⊆ Y then Cu(X) ⊆ Cu(Y).  For suppose X ⊆ Y. Then {C({A}): A in X} ⊆ {C({A}): A in Y}, so Cu(X) ⊆  Cu(Y).

8.      REFERENCES

Beall, Jc. (2018) “The Simple Argument for Subclassical Logic”. Philosophical Issues.

Cook, Roy T.  (2018) “Logic, Counterexamples, and Translation”.  Pp. 17- 43 in Geoffrey Hellman and Roy T. Cook (Eds.) (2018) Hilary Putnam on Logic and Mathematics.  Springer.

Hellman, Geoffrey (1980). “Quantum logic and meaning”. Proceedings of the Philosophy of Science Association 2: 493–511.

Putnam, Hilary (1968) “Is Logic Empirical?” Pp. 216-241 in Cohen, R. and Wartofsky, M. (Eds.), Boston Studies in the Philosophy of Science (Vol. 5). Dordrecht.  Reprinted as “The logic of quantum mechanics”. Pp. 174-197 in Putnam, H. (1975). Mathematics, Matter, and Method: Philosophical Papers (Vol. I). Cambridge.

Russell, Bertrand (1897) An Essay on the Foundations of Geometry. Cambridge.

NOTES


[1] For example, Russell concluded that the choice between Euclidean and non-Euclidean geometries is empirical, but spaces that lack constant curvature “we found logically unsound and impossible to know, and therefore to be condemned a priori” (Russell 1897: 118).

[2] See Hellman (1980) and Cook (2018) especially for critical examination of Putnam’s argument.

Boolean Aspects of De Morgan Lattices

  1. Trivial answers
  2. Important answer
  3. Example of a non-trivial Boolean center
  4. Generalization of this answer
  5. Non-trivial Boolean families
  6. Analysis, and generalization
  7. Non-minimal augmentation of Boolean lattices
  8. Discussion: what about logic?

Appendix and Bibliographical Note

The question re classical vis-a-vis subclassical logic

After the initial astonishment that self-contradictions need not be logical black holes, there is a big question:  how can classical reasoning find a place in a subclassical logic, such as the minimal subclassical logic FDE?

Classical propositional logic is, in  a fairly straightforward sense, the theory of Boolean algebras.  In the same sense we can say that the logic of tautological entailment, aka FDE, is the theory of De Morgan algebras.  

In previous posts I have also explored the use of De Morgan algebras for truthmakers of imperatives and for the logic of intension and comprehension.  So FDE’s algebraic counterpart has some application beyond FDE. 

How, and to what extent, can the sub-classical logic FDE accommodate classical logic, or classical theories, as a special case?  A corresponding question for algebraic logic is how, or to what extent, Boolean algebras are to be found inside De Morgan algebras.  

Terminology. Unless otherwise indicated I will restrict the discussion to bounded De Morgan algebras, that is, ones with top (T) and bottom (⊥); these are distinct elements and ¬T = ⊥.  If L is a De Morgan lattice, an element e of L is normal iff e ≠ ¬e, and L is normal iff all its elements are normal. Both sorts of algebras are examples of distributive lattices.  From here on I will use “lattice” rather than “algebra”.

1 Trivial answers

There are some simple, trivial answers first of all, and then two answers that look more important.

First, a Boolean lattice is a De Morgan lattice in which, for each element e, (e v ¬e ) = T (the top), or equivalently, (e  ∧¬e) = ⊥ (the bottom).

Secondly, in a De Morgan lattice, the set {T, ⊥} is a Boolean lattice.  

Thirdly, if L is a De Morgan lattice and its element e is normal, then the quadruple {(e v ¬e), e, ¬e, (e ∧ ¬e)} is a Boolean sublattice of L.

2 Important answer

More important is this:  If L is a De Morgan lattice then B(L) = {x in L: (x v ¬ x) = T} is closed under  ∧, v, and ¬ and is therefore a sub-lattice of L.  It is a Boolean lattice: the Boolean Center of L.

3 Example of non-trivial Boolean center

Mighty Mo:

Figure 1 The eight-element De Morgan lattice Mo

The Boolean Center B(Mo) = {+3, +0, -0, -3}.

4 Generalization of this answer

My aim here is to display Boolean lattices that ‘live’ inside De Morgan lattices. My general term for them will be Boolean families. They will not all be of the same type, and I hope that their variety will itself offer us some insight.

The fact that a De Morgan lattice has a Boolean center can be generalized: 

 Suppose element e is such that (e v ¬e) is normal, and define B(e) = {x in L: (x v ¬ x) = (e v ¬e)}.  Then (see Appendix for proof) B(e) is a Boolean family, with top = (e v ¬e) and bottom = (e  ∧ ¬e).

5    Non-trivial Boolean families

The big question:  are there examples of non-trivial Boolean families distinct from the Boolean center?

We can construct some examples by adding ‘alien’ points to a Boolean lattice. For example this, which I will just call L1.

Figure 2  L1, augmented B3

This lattice L1 is made up from the three-atom Boolean lattice B3 by adding an extra top and bottom.  This sort of addition to a lattice I will call augmentation, and I will call L1 augmented B3.  For the involution we keep the Boolean complement in B3, and extend this operation by adding that T = ¬ ⊥ , and ⊥ = ¬T.  

L1 is distributive, hence a De Morgan lattice (proof in Appendix).  The clue to the proof is that for all elements e of L1, T  ∧ e = e and T v e = T.

The Boolean center B(L1) = {T, ⊥} is trivial, and the sublattice B3 is a non-trivial Boolean family.
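
Both claims can be checked mechanically; here is a small sketch, assuming subsets of {1, 2, 3} as the elements of B3 and two sentinel objects for the added ‘alien’ top and bottom (the encoding is mine).

    from itertools import combinations

    ATOMS = frozenset({1, 2, 3})
    TOP, BOT = "T*", "B*"                 # the two alien points of the augmentation

    B3 = [frozenset(c) for r in range(4) for c in combinations(sorted(ATOMS), r)]
    L1 = B3 + [TOP, BOT]                  # augmented B3

    def join(x, y):
        if TOP in (x, y): return TOP
        if x == BOT: return y
        if y == BOT: return x
        return x | y

    def neg(x):
        if x == TOP: return BOT
        if x == BOT: return TOP
        return ATOMS - x                  # keep the Boolean complement inside B3

    # The Boolean center: elements x with x v ¬x = TOP.
    print([x for x in L1 if join(x, neg(x)) == TOP])    # only the two alien points

    # The Boolean family B(e) of section 4, for an old B3 element e:
    def B(e):
        t = join(e, neg(e))
        return [x for x in L1 if join(x, neg(x)) == t]

    print(len(B(frozenset({1}))))         # 8: the whole copy of B3 is a Boolean family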

6     Analysis of this example, and generalization

In the above reasoning nothing hinged on the character of B3, taken as an example.  Augmenting any Boolean lattice B in this way will result in a De Morgan lattice with trivial Boolean center and B as a Boolean sublattice.  But this still does not go very far.  For the concept of Boolean families in De Morgan lattices to be significant, there must at least be a large variety of examples that are not trivial or nearly trivial.

To have a large class of examples with more than a single such central Boolean sublattice, we have to look for a construction to produce them. And this we can do by ‘multiplying’ lattices.  I will illustrate this with B3, and then generalize.

B3 as a product lattice

The Boolean lattice B3 is the product B1 x B2 of the one-atom and two-atom Boolean lattices.  The product of lattices L and L’ is defined to be the lattice whose elements are the pairs <x,y> with x in L and y in L’, and with operations defined pointwise.   That is:

<x,y> v <z,w>   =  <x v z, y v w>

<x,y>  ∧ <z,w>=  <x  ∧ z, y  ∧ w>

¬<x, y> = < ¬x, ¬y>

<x,y> ≤ <z,w>   iff   <x,y> ∧ <z,w> = <x,y>,   that is, iff   x ≤ z and y ≤ w

Any such product of Boolean lattices is a Boolean lattice.

Figure 3. B3 as a product algebra.

It looks a bit like ordinary multiplication:  B1 has 2 elements, B2 has 4 elements, 2 x 4 = 8, the number of elements of their product B3.
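
Here is a minimal sketch of the pointwise construction, with B1 and B2 modelled as powerset algebras; the packaging of a lattice as (elements, join, meet, negation) is only for illustration.

    from itertools import combinations

    def boolean(atoms):
        """The Boolean lattice of all subsets of atoms, with set-theoretic operations."""
        atoms = frozenset(atoms)
        elems = [frozenset(c) for r in range(len(atoms) + 1)
                 for c in combinations(sorted(atoms), r)]
        return (elems,
                lambda x, y: x | y,       # join
                lambda x, y: x & y,       # meet
                lambda x: atoms - x)      # complement, i.e. the involution

    def product(L, M):
        """The product lattice: pairs, with all operations defined pointwise."""
        (Le, Lj, Lm, Ln), (Me, Mj, Mm, Mn) = L, M
        return ([(x, y) for x in Le for y in Me],
                lambda a, b: (Lj(a[0], b[0]), Mj(a[1], b[1])),
                lambda a, b: (Lm(a[0], b[0]), Mm(a[1], b[1])),
                lambda a: (Ln(a[0]), Mn(a[1])))

    B1, B2 = boolean({1}), boolean({"a", "b"})
    elems, pj, pm, pn = product(B1, B2)
    print(len(elems))                     # 2 x 4 = 8, the size of B3
    # One De Morgan law on the product, checked by brute force:
    print(all(pn(pj(a, b)) == pm(pn(a), pn(b)) for a in elems for b in elems))   # True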

Inspecting the diagram, and momentarily ignoring the involution, we can see that B3 has two sublattices that are each isomorphic to B2.  (The definition of ‘sublattice’ refers only to the lattice operations ∧ and v.)  That is to say, the components of the product construction show up as copies in the product.  And that is also the case once we take the involution into account, given a careful understanding of this ‘copy’ relation.  

The way we find these sublattices:  choose one element in B1 to keep fixed and let the second element vary over B2:

sublattice B3(1) has elements T2, T1, T⎯1, T0

sublattice B3(2) has elements ⊥2, ⊥1, ⊥⎯1, ⊥0

Sublattices so selected will be disjoint, for in one the elements have T as first element and in the other the elements have ⊥ as first element.

These sublattices are intervals in B3, e.g. B3(1) = {x in B3: T0 ≤ x ≤ T2}.  

What about the involution? The restriction of the operator ¬ on B3 to the interval B3(1) is not well-defined, for ¬T2 is not in B3(1): in B3, ¬T2 = ⊥0, which is in B3(2). 

However, there is a unique extension to a Boolean complement on B3(1): start with what we have from B3, namely that ¬T1 = T ⎯1 and  ¬ (T ⎯1) = T1, then add that ¬T2 = T0 and ¬T0 = T2  (“relative complement”; cf. remark about Theorem 10 on Birkhoff page 16, about relative complements in distributive lattices).  It is this relatively complemented interval that is the exact copy of B2, which is a different example of a Boolean family.

(Looking back to section 5, we can now see that the example there was a simple one, where the restriction of the lattice’s involution to the relevant sublattice was well-defined on that sublattice.)

Thus if we have a product of many Boolean algebras, that product will contain many Boolean families:  

If L1, L2, …, LN are Boolean lattices and L = L1 x L2 x … x LN, then L has disjoint Boolean families isomorphic to L1, L2, …, LN.

For example, if e1, e2, …, eN are elements of L1, L2, …, LN respectively, then the set of elements S(k) = {<e1, e2, …, ek-1, x, ek+1, …, eN>: x in Lk} forms a sublattice of L that is (with the relative complement on S(k) defined as above) isomorphic to Lk. (See Halmos, page 116, about the projection of L onto Lk, for precision.)

And if we then augment that product L, in the way we formed L1, we arrive at a non-Boolean De Morgan lattice, augmented L.  The result contains many Boolean families, but (e ∧ ¬e) is in general not the bottom, so it lends itself to adventures in subclassical logic.

But we need to turn now to a less trivial form of augmentation.

7 Non-minimal augmentation of Boolean lattices

A product of distributive lattices is distributive (by a part of the argument that a product of Boolean lattices is Boolean).

The product of De Morgan lattices is a De Morgan lattice.  To establish that, we need now only to check that the point-wise defined operation ¬ on the product is an involution (see Appendix for the proof).

So suppose we have, as above, a Boolean lattice product B, that has many Boolean families, and we form its product with some other, non-trivial non-Boolean De Morgan lattice, of any complexity.

The result is then a non-trivial non-Boolean De Morgan lattice with many Boolean families.

8 Discussion:  what about logic?

The basic sub-classical logic FDE has as logical signs only ¬, v, and ∧.  That is not enough to have Boolean aspects of De Morgan lattices reflected in the syntax.  

For example, the equation  (a v ¬ a) = (b v ¬ b) defines an equivalence relation between the propositions (a, b).  But the definition involves identity of propositions, which for sentences  corresponds to a necessary equivalence.  To express this, a modal connective, say <=>, could be introduced, in order to identify a fragment of the language suitable for formulating classical theories.

There is much to speculate.

APPENDIX

[1]  Define, for any De Morgan lattice L,  B(e) = {x in L: (x v ¬ x) = (e v ¬e)}.  

Theorem. If L is a De Morgan lattice and its element (e v ¬e) is normal, then B(e) is a Boolean sublattice of L.

First, all elements of B(e) are normal. For (e v ¬e) is normal, while if x is not normal then (x v ¬x) = x is not normal, and so cannot equal (e v ¬e).

For B(e) to be a sublattice with involution of L it suffices that B(e) is closed under the operations ∧, v, and ¬ on L. 

Define t = (e v ¬e).  If d and f are in B(e) then 

  • ¬d is also in B(e), for ¬d v ¬ ¬d = d v ¬d = t
  • (d v f) is also in B(e) because 

[(d v f) v ¬(d v f)]      = [(¬d ∧ ¬f) v (d v f)]

                                    = [(¬d v d v f) ∧ (¬f v d v f)]

                                    = t ∧ t = t

(d  ∧ f) is also in B(e) because 

[(d ∧ f) v ¬(d ∧ f)]      = [(d ∧ f) v (¬d v ¬f)]

                                    = [(d v ¬d v ¬f) ∧ (f v ¬d v ¬f)]

                                    = t ∧ t = t

So B(e) is a sublattice of L, and hence distributive.  It has involution ¬, its top is t and bottom ¬t.  So B(e) is a bounded De Morgan lattice.  B(e) is Boolean, because all its elements x are such that (x v ¬ x) = t.

B(T) is the Boolean center of the lattice.

[2] Theorem.  The lattice L1, the augmented lattice B3, is a De Morgan lattice.

(a) The operation ¬ extended from B3 to L1 by adding that T = ¬ ⊥ , and ⊥ = ¬T, is an involution.  The addition cannot yield exceptions:  each element e of L1 is such that ⊥ ≤ e ≤ T, which is equivalent to, for all elements e of L1,   ¬ T ≤  ¬ e  ≤   ¬⊥.

(b) To prove that L1 is distributive, we note that, for all elements e of L1,

             T  ∧ e = e  and  T v e = T.

            ⊥ ∧ e = ⊥  and ⊥ v e = e.

To prove: If x, y, z are elements of L1 then x  ∧ (y v z) = (x  ∧ y) v (x  ∧ z).

  • clearly, that is so when x, y, z all belong to B3
  • T ∧ (y v z) = (y v z) and (T  ∧ y) v (T ∧ z) = y v z
  • x  ∧ (T v z) = (x  ∧ T) = x and  (x  ∧ T) v (x  ∧ z) = x v (x  ∧ z) = x 
  • ⊥ ∧ (y v z) = (⊥ ∧ y) v (⊥ ∧ z) = ⊥
  • x  ∧ (⊥  v z) = (x  ∧ ⊥) v (x  ∧ z) = ⊥  v (x  ∧ z) = x  ∧ z

The remaining cases are similar.

[3] Theorem.  A product of De Morgan lattices is a De Morgan lattice.

The product of any distributive lattices is a distributive lattice (Birkhoff 1967: 12)

To establish that the product of De Morgan lattices is a De Morgan lattice, we need then only to check that the point-wise defined operation ¬ on the product is an involution.

Let L1 and L2 be De Morgan lattices and L3 = L1 x L2. Define ¬<x, y> = <¬x, ¬y>.

  • ¬¬<x,y> = ¬<¬x, ¬y> =  <¬ ¬x, ¬ ¬y> = <x, y>
  • suppose <x, y>  ≤  <z, w>.  Then, x  ≤ z and y  ≤ w, and therefore ¬ z  ≤  ¬ x and ¬ w  ≤ ¬ y.  So ¬<z, w>  ≤ ¬<x, y>

Bibliographical note.

For the relation between FDE and De Morgan lattices see section 18 (by Michael Dunn) of Anderson, A. R. and N. D. Belnap (1975) Entailment: The Logic of Relevance and Necessity. Princeton.

For distributive lattices in general and relative complementation see Birkhoff, Garrett (1967)  Lattice Theory.  (3rd ed.).  American Mathematical Society and Grätzer, George (2009/1971) Lattice Theory: First Concepts and Distributive Lattices.  Dover.

For products of Boolean lattices see section 26 of Halmos, P. R. (2018/1963) Lectures on Boolean Algebras. Dover. 


Extension, Intension, Comprehension – Revisited

There are traditional examples that move us quickly away from the idea that our language is just extensional.  And there are some that put into doubt that our language is only intensional, with no distinctions between any concepts that are necessarily co-extensional.  These examples suggest that predicates have, or may have, three distinct semantic values: extension, intension, and comprehension.[1]

What the examples leave largely open is the character of relations between these three levels or modes of meaning.  Seemingly best understood is the relation between extension and intension.  I shall explore that first, and then explore how the same sort of relationship may obtain between intension and comprehension.  There will be a connection with paraconsistent logic.

1.      Traditional examples

From Medieval philosophy we have a notable theory of distinctions.  There is a distinction de re between featherless biped and rational animal: Diogenes the Cynic displayed a plucked chicken as a real instance of the one and not the other.  Today we say: these two concepts do not have the same extension.  But there is only a distinction of reason between woman and daughter: there are no real entities instantiating the one but not the other, though there could be, such as are represented in pictures, stories, and myths.  Today we say:  these concepts have the same extension but not the same intension.  

Are there examples that push us still further?  We can symbolize has a property, as x̂ (∃F)Fx, and is identical with something, as x̂(∃y)(y = x).  Let’s refer to these concepts as being and existence.  Then we can puzzle over them:  there surely could not be, or even be fantasized to be, an instance of one that would not be an instance of the other.[2] As the Medievals would say, not even God could create something that has one but not the other.  If there is a distinction nevertheless, it is what Duns Scotus called a formal distinction. Today we would say, if anything, that the two concepts do not have the same comprehension, and that if so, our language is hyperintensional.

2.      How extension is related to intension

Terminology. I will use “property” only for what can be an intension of a predicate in a given language.  Different terms are to be used for extensions and comprehensions.[3]  

There are well-known ways in which properties (intensions of predicates) are represented in semantic analyses of modal logics.  

Let’s abstract from the details. Properties, in this sense, form an algebra AI.  As a working hypothesis I will take this to be a Boolean algebra.  It has a top, T, and a bottom, f. In a given world, our world say, any property x has an extension |x|, which is a set of entities in the world.  Necessarily, T‘s extension includes everything and f‘s extension is empty. A distinction of reason is then a distinction between properties that have the same extension.

The function | | is a homomorphism:  if x, y are properties and x ≤ y then |x| ⊆ |y|; moreover ⎯|x| = |⎯x|, and |x ∧ y| = |x| ∩ |y|. As a result the sets which are the extensions of properties form a Boolean algebra, AE.    

To formulate the distinction of reason we can define an equivalence relation, extensional equivalence, on the properties:  (x ≡ y) iff |x| = |y|.  Then AE is isomorphic to the quotient algebra AI modulo ≡ : the elements of this are the equivalence classes of elements of AI, [x] = {y: x ≡ y}, with [x] ≤ [y] iff x ≤ y,  ⎯[x] = [⎯x],  and [x] ∧ [y] = [x ∧ y]. 

But there is another nice way of thinking about this relation between properties and their extensions.  Think about the properties that | | maps into the empty set.  Let us call these the ignorable properties from an extensionalist point of view. 

properties x and y are co-extensional  iff they differ only by an extensionally ignorable part, that is, there is some extensionally ignorable property z such that x v z = y v z.

The distinction of reason pertains then to exceptions to co-extensionality.  For example, woman = (woman who has parents) v (woman who has no parents), and that is the join of daughter with an extensionally ignorable property.[4]

Now the ignorable properties form an ideal in the Boolean algebra AI of properties: if x ≤ y and y is ignorable then so is x, and moreover, (x v y) is ignorable iff both x and y are ignorable.  This is not coincidental: for any congruence relation E on a Boolean algebra (that is, any equivalence relation that respects the operations) there is an ideal J such that x E y iff for some z in J, x v z = y v z.

Are co-extensional and extensionally equivalent the same relation?  In other words, if x and y have the same extension must there then be an ignorable property z such that x v z = y v z?

The answer is yes:  z is the symmetric difference between x and y, that is, the join of (x ∧ ⎯y) and (y ∧ ⎯x).  If either of those had a non-empty extension then |x| and |y| would not be the same.
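
That answer can be checked by brute force in a toy model; here properties are subsets of {1, ..., 5}, and the ignorable ones are, by stipulation, the subsets of a fixed ‘null’ part {4, 5} (the encoding and the choice of null part are mine).

    from itertools import combinations

    U = frozenset(range(1, 6))
    NULL = frozenset({4, 5})                   # the stipulated ignorable region

    PROPS = [frozenset(c) for r in range(len(U) + 1) for c in combinations(sorted(U), r)]
    IDEAL = [z for z in PROPS if z <= NULL]    # the ideal of extensionally ignorable properties

    def ext(x):
        """The extension map | |: forget the ignorable part."""
        return x - NULL

    for x in PROPS:
        for y in PROPS:
            same_ext = ext(x) == ext(y)
            coext = any(x | z == y | z for z in IDEAL)
            assert same_ext == coext           # extensionally equivalent iff co-extensional
            if same_ext:
                z = (x - y) | (y - x)          # the symmetric difference is a witness
                assert z in IDEAL and x | z == y | z
    print("checked on all pairs of properties")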

I would like to explore this idea of ‘ignorables’ to get at the relationship between comprehension and intension.

3.      Positing the same relationship for intension to comprehension

The idea is that the above abstract form of the relationship between extension and intension obtains also for the relationship between intension and comprehension.  That is, the comprehensions of concepts form an algebra AC, and AI is (isomorphic to) the quotient algebra formed by reducing AC by an appropriate equivalence relation.

In view of the above, the way to identify that appropriate equivalence relation is to specify an ideal in AC: the ideal of intensionally ignorable comprehensions.  

If x is in AC then it has an intension ||x|| in AI, and x is intensionally ignorable exactly if ||x|| = the absurd property f.  Define x and y to be intensionally equivalent (x ⇔ y) exactly when x v z = y v z for some ignorable comprehension z.  And I submit that AI is (isomorphic to) AC modulo ⇔.

Let’s see how this plays out with the example of being and existence.  A quick check shows that

(∃y)(y = x)   v  [(∃y)(y = x) & ~(∃F)Fx]  v  [(∃F)Fx & ~(∃y)(y = x)]

 is logically equivalent to 

[(∃F)Fx]  v  [(∃y)(y = x) & ~(∃F)Fx]  v  [(∃F)Fx & ~(∃y)(y = x)]:

that is 

existence or [existence and nonbeing] or [being and nonexistence]

being or [existence and non-being] or [being and non-existence]

Therefore to complete the example we must declare that [existence and non-being] as well as [being and non-existence] are intensionally ignorable comprehensions.  The mapping || || sends these into the absurd property f.  

In this case the three comprehensions being, existence, and being cum existence have the same intension.  Necessarily, any real things that have being exist, and any that exist  have being.   So the intension is the summum genus among properties, T.

4.      Pertinence of paraconsistent logic

The example of being and existence already shows that the logic pertaining to comprehension cannot be classical, where the definitions of those properties are tautologically equivalent.  

Medieval discussions of such concepts as being (‘transcendentals’) included also examples such as finite or infinite.  While distinct from being, that property is clearly not distinguishable from being by any real or possible instances.  From a classical logic point of view, finite or infinite is just an ‘excluded middle’, hence tautological, and there is no logical leeway.

To do justice to the formal distinction between being and ‘excluded middles’, therefore, the logic pertaining to comprehension must allow for ‘excluded middles’ that do not imply each other.   

The most modest logic of this sort is FDE, which corresponds in algebra to De Morgan lattices: distributive lattices equipped with an involution.  The involution ⎯ is like a Boolean complement:

x = ⎯ ⎯ x

if x ≤ y then ⎯y ≤ ⎯x, 

and from these two, given distributivity, the De Morgan laws follow:

⎯(x v y) = (⎯x  ∧  ⎯y)

⎯(x  ∧ y) = (⎯x v ⎯y)

But (x v ⎯x) may not be the top, (x  ∧ ⎯x) may not be the bottom, and it can happen that x = ⎯x. 

As models for the logic of comprehension I propose the De Morgan lattices with top (Θ), bottom  ⊥ , and such that ⎯⊥= Θ. The function || || assigns an intension to each comprehension; ||  ⊥ || = f, and ||Θ|| = T.

Boolean algebras are De Morgan lattices, with the characteristic that their involution ⎯  is such that for each element x, (x v ⎯ x) = the top element.  Happily the connection between ideals, homomorphisms, and equivalence relations holds as well for De Morgan lattices.[5]

To explore how comprehensions fare in this landscape, let us take a simple example of a De Morgan lattice for a model.

The 8-element De Morgan lattice DM8 (aka Mighty Mo) looks quite like the three atom Boolean algebra B3, but the involution is different:

Suppose we take as our ideal of intensionally ignorable elements the set of elements marked with ⎯, which consists of ⎯ 0 and everything below that.  To represent the two classical tautologous ‘excluded middles’ of finite or infinite and round or not round let us choose +2  for finite  and +1 for round.  Then we see that:

finite or infinite = (+2 v ⎯2) = +2

round or not round = (+1 v ⎯1) = +1

+2 v ⎯0 = +1 v ⎯0    ( = +3), 

so the two ‘excluded middles’ differ by the ignorable element ⎯0, hence are intensionally equivalent though distinct.  And yes, they are also equivalent to the top +0 and +3; their intension is the property T, the summum genus.

5.      Identifying the intensionally ignorable comprehensions

There is a first-blush troubling question about the insistence that AC must be a De Morgan lattice, and not Boolean. Above I had advanced as working hypothesis that AI is Boolean.  But now we have AI as a quotient algebra, namely AC modulo ⇔, which implies that AI is a De Morgan lattice as well.   That does not rule out that AI is Boolean.  But is it?

I will take this question as the clue to how to identify the intensionally ignorable comprehensions.  

First, to have a clear example, let’s suppose again that AC is Mighty Mo, DM8.  We chose an ideal of ignorables, namely the ideal generated by ⎯0.  Then we already saw that with that choice +0, +1, +2, +3 are intensionally equivalent.  A quick check shows that ⎯1 v ⎯0 = ⎯2 v ⎯0 = ⎯0 v ⎯0 = ⎯3 v ⎯ 0 = ⎯3.  So ⎯0, ⎯1, ⎯2, and  ⎯3 are all equivalent as well.  Therefore in this case AI has just two elements, a top and a bottom (‘the True’ and ‘the False’).  That is the two element Boolean algebra. 

Can this be the case in general? What candidates do we have for intensionally ignorable comprehensions?  Clearly the elements (x  ∧ ⎯x).  They are mapped to the absurd property: ||x  ∧ ⎯x|| =  f.  So let us choose for the ideal of ignorables an ideal that includes {z: for some x in AC, z = x  ∧ ⎯x}.[6]

But then in AI, whose members are exactly the properties ||x|| with x in AC

||x  ∧ ⎯x|| =  f =   ||x||  ∧ ⎯||x||  

hence by De Morgan’s laws, ⎯||x|| v || x|| =   ⎯  f = T, the summum genus.     

Therefore AI is a Boolean algebra.

6.      Coda

I have not so far mentioned comprehensions that are not ignorable although they strike us at once  as self-contradictory.   At first blush 

            brother of someone with no siblings, 

sister of someone with no siblings

are different concepts, despite their apparent self-contradictoriness.  For one is a concept of a male and the other is a concept of a female.  

What we can say about it is only this, I think:  they do not have the same intension, and are not intensionally equivalent, but each is the meet of something with the intensionally ignorable comprehension being a sibling of someone who has no siblings.

I doubt that this is the end of the matter.  The logic of comprehension must be, with respect to self-contradictions, at least as liberal as FDE … yes, but who knows what else lurks in these deep-black logical waters, yet to be appreciated?

INDEX TO SYMBOLS 

≤ , v,  ∧ ,  ⎯ : partial order, join, meet, involution in an algebra

the absurd property:  f,   the summum genus (top property): T

equivalence class of property x: [x]

assignment of extensions to properties:  | |

assignment of intensions (properties) to comprehensions:  ||  ||

extensional equivalence of properties:   ≡

intensional equivalence of comprehensions:   ⇔

top of a De Morgan lattice: Θ

bottom of a De Morgan lattice: ⊥

NOTES


[1] Terminology varies.  Alonzo Church’s review of C. I. Lewis’ The Modes of Meaning begins with “As different modes, or kinds, of meaning of terms the author distinguishes the denotation of a term, the comprehension, the connotation or intension, the signification, the analytic meaning.”  (JSL 9 (1944): 28-29).  Lewis’ terminology was not standard, and as Church shows, not clear; though influential, his work did not standardize usage in this respect.  

[2] More familiar is the example of a distinction between triangle and trilateral, discussed by Leibniz among others.  His oft quoted passage on the matter: “[T]hings that are conceptually distinct, that is, things that are formally but not really distinct, are distinguished solely by the mind. Thus, in the plane, Triangle and Trilateral do not differ in fact but only in concept, and therefore in reality they are the same, but not formally. Trilateral as such mentions sides; Triangle, angles.”

[3] I am tempted to adopt “concept” for the comprehension of a predicate.  But its associations to the mental might become a constant worry.

[4] To help my intuition and imagination I keep in mind reduction modulo sets of measure 0 as paradigm example.

[5] Theorem 5 on p. 27 of Birkhoff (1967) Lattice Theory, 3rd edition.

[6] Minimally, the ideal generated by that set.  But we may want additions to the ignorables, like the meet of being and non-existence.