What I called the scheme is a general pattern, with many instances. I’ll call a particular instance a propositional scheme.
A propositional scheme therefore has four ingredients: a set of states, a set of propositions, and two different relations, namely » (from states to propositions) and ≤ (subordination, from states to sets of states, and derivatively from states to states). We can conveniently represent it by a four-tuple S = <K, P, », ≤ >, with K and P non-empty sets and with » and ≤ having the properties displayed in the previous post.
In the examples we looked at, propositions are either arbitrary or special sets of states. Therefore ≤, a relation of states to sets of states, is also a relation that states can bear to propositions. We also defined it to be, derivatively, a relation between states: x is subordinate to y, x ≤ y, means that x ≤ {y}. Note, though, that {y} need not itself be a proposition, unless all sets of states are propositions.
In this post I want to spell out something that is appropriate for all the examples I gave, though not for all examples that there might be. It is still very general, and identifies an underlying logical structure that will be quite familiar in form.
It will almost never be natural for propositions to be just arbitrary sets of states. Intuitively we can think about it like this:
Suppose p is a proposition and x is one of its members, that is, p is true in x. Let y be some state subordinate to x, which means that all the propositions that are true in x are true in y. Then obviously p is true in y as well. So p is closed under subordination: if any state x is in p then all states subordinate to x are in p as well.
This suggests the idea that propositions are sets of states closed under subordination.
Given any set W of states, let us call the closure of W under subordination the span of W, and denote it [W]. That is, [W] = {x: x ≤ W}. This operation [ ] is a closure operator, which means a function that has the following properties:
- W ⊆ [W]
- if U ⊆ W then [U] ⊆ [W]
- [[W]] = [W]
These properties follow from the properties of subordination that were listed in the previous post. They don’t depend on any special assumptions.
We call W closed exactly if W = [W].
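To make the span operation concrete, here is a minimal sketch over a toy finite scheme. The four states, the particular subordination relation, and the convention that x is subordinate to a set W exactly when x is subordinate to some member of W are all illustrative assumptions of mine, not part of the general definition from the previous post.

```python
# Toy finite scheme: four states, with subordination given as a set of pairs.
# Assumption for this sketch: x <= W iff x <= y for some y in W.
K = {0, 1, 2, 3}
# 1 and 2 are subordinate to 0; 3 is subordinate to 2, hence also to 0;
# every state is subordinate to itself.
leq = {(x, x) for x in K} | {(1, 0), (2, 0), (3, 0), (3, 2)}

def span(W):
    """[W] = {x in K : x is subordinate to W}."""
    return {x for x in K if any((x, y) in leq for y in W)}

def is_closed(W):
    return span(set(W)) == set(W)

# The three closure-operator properties, checked on this example:
assert all(set(W) <= span(W) for W in [{0}, {2}, {1, 3}])   # W is part of [W]
assert span({2}) <= span({0, 2})                            # monotone
assert span(span({2})) == span({2})                         # idempotent
```

Here span({2}) is {2, 3}, a closed set, while {0} spans the whole of K.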
Theorem 1. If U and W are both closed sets of states then U ⊆ W if and only if all states in U are subordinate to W.
For assume that U and W are both closed sets of states. Suppose first that U ⊆ W. Since W ⊆ [W], it follows that U ⊆ [W], so all states in U are subordinate to W. Suppose secondly that all states in U are subordinate to W, that is, U ⊆ [W]. But W = [W], so all states subordinate to W are in W; hence U is part of W.
So for the closed sets, subordination goes hand in hand with the usual partial ordering of set inclusion. The following is an instance of a standard theorem about closure operations.
Theorem 2. The closed sets of states in a propositional scheme form a complete lattice, partially ordered by set-inclusion; the meet of a set of closed sets is their intersection, while their join is the closure of their union.
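Theorem 2 can be spot-checked by brute force on a small example. The following sketch reuses the toy scheme above (again, the finite state set and the reading of subordination-to-a-set as subordination-to-some-member are my illustrative assumptions): it enumerates the closed sets and confirms that intersections and spans of unions stay closed.

```python
from itertools import chain, combinations

# Same toy scheme as before.
K = {0, 1, 2, 3}
leq = {(x, x) for x in K} | {(1, 0), (2, 0), (3, 0), (3, 2)}

def span(W):
    return {x for x in K if any((x, y) in leq for y in W)}

def closed_sets():
    """All subsets W of K with span(W) == W."""
    subsets = chain.from_iterable(
        combinations(sorted(K), r) for r in range(len(K) + 1))
    return [set(s) for s in subsets if span(set(s)) == set(s)]

# Meet of closed sets is their intersection; join is the span of their union.
for U in closed_sets():
    for W in closed_sets():
        assert span(U & W) == U & W              # the meet is again closed
        assert span(span(U | W)) == span(U | W)  # so is the join
```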
Given these points, I take it that, quite clearly, one natural way to think of propositions, if they are to be sets of states at all, is to take them to be the closed sets of states.
Definition. A propositional scheme S = <K, P, », ≤ > is a propositional closure scheme if and only if P is the set of all closed subsets of K.
Theorem 2 then gives us an immediate grasp of the basic underlying logic, if we interpret a language by taking its sentences to stand for the propositions in a propositional closure scheme.
The range of the propositional closure schemes is wide, and they can also provide bases on which other interesting structures can be mounted, for example, by introducing an operation that corresponds to negation or a logical conditional.
A minimal logic
We introduce a sentential syntax Synt: it has atomic sentences, and the connectives v and &, and the usual formation rules.
An interpretation of Synt in propositional closure scheme S is a function | | that maps sentences into propositions such that |A & B| = |A| ∩ |B| and |A v B| = [ |A| ∪ |B| ].
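As a sketch of how such an interpretation works, here is the toy finite scheme used earlier with sentences represented as nested tuples; the atomic valuation and the tuple encoding are of course illustrative choices of mine, not part of the definition.

```python
# Same toy scheme as before.
K = {0, 1, 2, 3}
leq = {(x, x) for x in K} | {(1, 0), (2, 0), (3, 0), (3, 2)}

def span(W):
    return {x for x in K if any((x, y) in leq for y in W)}

# Atomic sentences get closed sets; the clauses for & and v follow the
# definition: |A & B| = |A| ∩ |B|, |A v B| = [ |A| ∪ |B| ].
atoms = {"p": {1}, "q": {2, 3}}

def value(s):
    """Sentences are atoms ('p') or tuples ('&', A, B) / ('v', A, B)."""
    if isinstance(s, str):
        return atoms[s]
    op, A, B = s
    if op == "&":
        return value(A) & value(B)
    return span(value(A) | value(B))
```

In this particular toy model the span of a union of closed sets is just the union itself, so the v clause looks degenerate; in richer schemes, where subordination to a set is not reducible to subordination to a member, the span can be strictly larger than the union.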
The following definition is an independent item in the story, a choice separate from all the above, but again, in view of a long and venerable tradition, a natural choice.
Definition. A set of sentences X semantically entails sentence A (in symbols: X ╞ A) if and only if, for every interpretation | | of Synt in every propositional closure scheme, the intersection of the propositions {|B|: B is in X} is part of |A|.
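The definition quantifies over all interpretations in all propositional closure schemes, so a finite computation can only refute an entailment claim, never establish it. With that caveat, here is a sketch of the check in one fixed scheme and interpretation (the same illustrative toy model as above):

```python
# Same toy scheme and interpretation as before.
K = {0, 1, 2, 3}
leq = {(x, x) for x in K} | {(1, 0), (2, 0), (3, 0), (3, 2)}

def span(W):
    return {x for x in K if any((x, y) in leq for y in W)}

atoms = {"p": {1}, "q": {2, 3}}

def value(s):
    if isinstance(s, str):
        return atoms[s]
    op, A, B = s
    return value(A) & value(B) if op == "&" else span(value(A) | value(B))

def entails(X, A):
    """Check that the intersection of {|B| : B in X} is part of |A| --
    in this single interpretation only, so a True result is merely
    consistent with X ╞ A, while a False result refutes it."""
    common = set(K)
    for B in X:
        common &= value(B)
    return common <= value(A)
```

For instance, p ╞ p v q holds here, while p ╞ q fails: state 1 is in |p| but not in |q|, so this one interpretation already refutes that entailment.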
Theorem 3. The semantic entailment relation of this language is entirely characterized by the following:
1. If A is in X then X ╞ A
2. If X ╞ A then X ∪ Y ╞ A
3. If X ╞ A for all members A of Y, and Y ╞ B, then X ╞ B
4. (A & B) ╞ A; (A & B) ╞ B
5. If C ╞ A and C ╞ B, then C ╞ (A & B)
6. A ╞ (A v B); B ╞ (A v B)
7. If A ╞ C and B ╞ C then (A v B) ╞ C
The first three correspond to the Structural Rules and the other four characterize the inferential roles of the connectives.
Theorem 3 follows directly from Theorems 1 and 2, for properties 4.–7. of ╞ are just read off from the definition of a lattice.
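The connective rules can also be spot-checked mechanically in the toy model used above. As before, a finite check is only evidence in one interpretation, not a proof, and the scheme and valuation are my own illustrative assumptions.

```python
# Same toy scheme, interpretation, and entailment check as before.
K = {0, 1, 2, 3}
leq = {(x, x) for x in K} | {(1, 0), (2, 0), (3, 0), (3, 2)}

def span(W):
    return {x for x in K if any((x, y) in leq for y in W)}

atoms = {"p": {1}, "q": {2, 3}}

def value(s):
    if isinstance(s, str):
        return atoms[s]
    op, A, B = s
    return value(A) & value(B) if op == "&" else span(value(A) | value(B))

def entails(X, A):
    common = set(K)
    for B in X:
        common &= value(B)
    return common <= value(A)

p, q = "p", "q"
# (A & B) ╞ A and (A & B) ╞ B
assert entails([("&", p, q)], p) and entails([("&", p, q)], q)
# if C ╞ A and C ╞ B then C ╞ (A & B): instance with C = A = B = p
assert entails([p], ("&", p, p))
# A ╞ (A v B) and B ╞ (A v B)
assert entails([p], ("v", p, q)) and entails([q], ("v", p, q))
# if A ╞ C and B ╞ C then (A v B) ╞ C: instance with C = (p v q)
assert entails([("v", p, q)], ("v", p, q))
```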
The Minimal Logic displayed here is part of both classical and intuitionistic logic, and actually of quantum logic as well.
A note about the Structural Rules, and a look outside minimal logic
There are so-called sub-structural logics in which not all the Structural Rules hold.
These Structural Rules hold simply because of the form of the definition of semantic entailment that I adopted here (the usual one, of course), regardless of what sorts of sets the propositions are.
So some other sorts of logic can still be related to closure algebras and lattices in the same way, so that 4.–7. hold while some Structural Rule does not. The definition of the entailment relation then has to be different.