Truthmakers and propositions

This is a reflection on Kit Fine’s (2017) survey of truthmaker semantics.  I will describe the basic set-up presented by Fine, aiming to clarify the logical relations between exact and inexact truthmakers.

1 Exact truthmakers

2 Sentences and truthmaking

3 Propositions and logic: an isomorphism

1.        Exact truthmakers

Exact truthmakers are entities of some sort (‘states’, ‘facts’, ‘events’) which combine in just one way (‘fusion’; I’ll use the symbol +).  Kit Fine gives as examples wind, rain, and their fusion wind and rain.  It is assumed that fusion is associative:

  (1) x + (y + z) = (x + y) + z               associativity

So we can just write e.g. wind and rain and snow without ambiguity.  In mathematical terms, this property makes the exact truthmakers a semigroup.  But fusion has two additional properties:

  (2) x + x = x                           idempotency
  (3) x + y = y + x                    commutativity

So the exact truthmakers form an idempotent, commutative semigroup.  That structure is at the same time a semilattice by the following:

Definition.  x ≤ y iff x + y = y, and x ≥ y iff x + y = x

So wind is part of wind and rain, for the fusion of wind and rain is just wind and rain.  That the exact truthmakers thus form a semilattice means first of all that ≤, and its converse ≥, are partial orders.

I find it convenient to focus attention on the converse order, for that will make it easier to see the relation between exact and inexact truthmakers when we discuss them below.  That ≥ is a partial order means:

  (4) x ≥ x                                              reflexivity
  (5) if x ≥ y and y ≥ x then x = y          anti-symmetry
  (6) if x ≥ y and y ≥ z then x ≥ z          transitivity

In addition, a fusion is the least upper bound of its parts in the ≤ ordering (as Fine has it).  That is the same as the greatest lower bound in the ≥ ordering:

  (7) x + y ≥ x   and   x + y ≥ y
  (8) if z ≥ x and z ≥ y then z ≥ x + y

By anti-symmetry it follows quickly that x + y is the unique element for which (7) and (8) hold in general, hence 

Lemma 1. {z: z ≥ x + y} = {z: z ≥ x} ∩ {z : z ≥ y} 

For by (7) and transitivity, if z ≥ x + y then z ≥ x and z ≥ y.  And by (8), if z ≥ x and z ≥ y then z ≥ x + y.

Idempotent commutative semigroup and semilattice are really one and the same.  For starting with the semilattice and defining x + y to be the least upper bound of x and y in the ≤ ordering, we arrive at properties (1)-(3) above for the defined notion.  
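
To spell that out (a verification sketch of my own, writing lub for least upper bound in the ≤ ordering):

\begin{align*}
x + x &= \mathrm{lub}\{x, x\} = \mathrm{lub}\{x\} = x
  &&\text{idempotency (2)}\\
x + y &= \mathrm{lub}\{x, y\} = \mathrm{lub}\{y, x\} = y + x
  &&\text{commutativity (3)}\\
x + (y + z) &= \mathrm{lub}\{x, y, z\} = (x + y) + z
  &&\text{associativity (1), both sides being lubs of } \{x, y, z\}
\end{align*}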

I have followed Fine here in defining the ≤ order so that wind and rain looks in the mathematical representation like a disjunction rather than like a conjunction.  So this semilattice <S, ≤> is a join semilattice, while equivalently <S, ≥> is a meet semilattice, in which the operator selects the greatest lower bound.  (Ignore the visual connotation of the symbol “≥” in the latter case.)

Kit Fine added to his definition of a state space that the family of exact truthmakers with the fusion operation forms a complete semilattice. That is, each set of exact truthmakers (and not just pairs or finite sets) has a least upper bound.  That is audacious, and may introduce difficulties if probabilities are eventually introduced into this framework.  So I will just stay with the finitary operation here.  

I will make the definition redundant, so as to display the two equivalent ways of looking at this structure.

Definition.  A state space is a triple S = (S, +, ≥), where S is a non-empty set (the exact truthmakers), + is a binary operator on S, and ≥ is a binary relation on S, such that (S, +) is an idempotent commutative semigroup, (S, ≥) is a semilattice, and for each x, y in S, x + y is the greatest lower bound of x and y in that semilattice.

Although more elaborate, so as to make some details explicit, this definition is equivalent to Fine’s definition of a state space (S, ≤): a semilattice in which the fusion of x and y is the least upper bound of x and y in the ≤ ordering.
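
To make the definition concrete, here is a minimal Python sketch (my own toy model, not part of Fine's presentation): exact truthmakers are represented as frozensets of atomic labels, fusion as set union, and x ≥ y as x being a superset of y.  The clauses of the definition are then checked by brute force on a small state space.

from itertools import product

# Toy exact truthmakers: frozensets of atomic labels (representation mine).
wind, rain, snow = frozenset({"wind"}), frozenset({"rain"}), frozenset({"snow"})

def fuse(x, y):
    return x | y                      # fusion of two states

def geq(x, y):
    return fuse(x, y) == x            # x >= y  iff  x + y = x

# Close the generators under fusion to obtain a finite state space S.
S = {wind, rain, snow}
changed = True
while changed:
    changed = False
    for x, y in product(list(S), repeat=2):
        if fuse(x, y) not in S:
            S.add(fuse(x, y))
            changed = True

# Check the semigroup laws (1)-(3) and the bound clauses (7)-(8).
for x, y, z in product(S, repeat=3):
    assert fuse(x, fuse(y, z)) == fuse(fuse(x, y), z)    # (1) associativity
    assert fuse(x, x) == x                               # (2) idempotency
    assert fuse(x, y) == fuse(y, x)                      # (3) commutativity
    assert geq(fuse(x, y), x) and geq(fuse(x, y), y)     # (7)
    if geq(z, x) and geq(z, y):
        assert geq(z, fuse(x, y))                        # (8)

print("state space of", len(S), "states satisfies the definition")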

2.        Sentences and truthmaking

Let language L have sentence connectors &, v, to be read as ‘and’ and ‘or’.  An interpretation of L in a state space is a function |.|+ which assigns to each sentence A the set |A|+ (the exact truthmakers that verify A, that make A true), subject to the conditions

  • |A & B|+  = {t + u : t is in |A|+ and u is in |B|+}
  • |A v B|+  = |A|+ ∪ |B|+
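
As an illustration of these two clauses, here is a small Python sketch in the same toy frozenset model used above (function names and example sentences are mine):

# Exact verifier sets are sets of frozenset-states, fusion is set union.
def verifiers_and(A_plus, B_plus):
    """|A & B|+ = {t + u : t in |A|+ and u in |B|+}"""
    return {t | u for t in A_plus for u in B_plus}

def verifiers_or(A_plus, B_plus):
    """|A v B|+ = the union of |A|+ and |B|+"""
    return set(A_plus) | set(B_plus)

rain, wind = frozenset({"rain"}), frozenset({"wind"})
raining_plus = {rain}     # exact verifiers of "It is raining"
windy_plus = {wind}       # exact verifiers of "It is windy"

print(verifiers_and(raining_plus, windy_plus))  # the single fused state rain + wind
print(verifiers_or(raining_plus, windy_plus))   # the two singleton states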

Kit Fine defines (page 565) the notion of an inexact truthmaker for sentence A so that e.g. wind and rain inexactly verifies “It is raining” just because rain is an exact truthmaker of that sentence.  That is, if s exactly verifies A then s + t inexactly verifies A.

Definition.  s inexactly verifies A if and only if there is some element t of |A|+ such that t ≤ s.

I will use the notation || . ||+ for the set of inexact truthmakers of a sentence:

            || A ||+ = {s:  s ≥ t for some t in |A|+}

that is, the set of inexact truthmakers of A is the upward closure of its set of exact truthmakers in the ≤ ordering.

This notion is so far defined only in terms of the relation to language. But there is obviously a corresponding language-independent notion, and the two go well together.  For each exact truthmaker engenders a specific set of inexact truthmakers:

Definition. 𝜙(t) = {s: s ≥ t}

Clearly u ≥ t exactly if 𝜙(u) ⊆ 𝜙(t), and so if t is in |A|+ then 𝜙(t) is part of || A ||+.  In fact,

|| A ||+ = ∪{ 𝜙(t): t is in |A|+}

This is an upward closed set in the ≤ ordering.  
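
Here is how the upward closure and the map 𝜙 look in the toy frozenset model (again my own representation, purely for illustration; S is a small finite space of candidate states):

from itertools import combinations

# A small state space: all non-empty fusions of three atomic states.
atoms = [frozenset({a}) for a in ("wind", "rain", "snow")]
S = {frozenset().union(*c) for r in range(1, 4) for c in combinations(atoms, r)}

def phi(t):
    """phi(t) = {s : s >= t}: the inexact truthmakers engendered by t."""
    return frozenset(s for s in S if s >= t)     # s >= t is superset-hood here

def inexact(A_plus):
    """||A||+ = union of phi(t) for t in |A|+ (the upward closure of |A|+)."""
    return frozenset().union(*[phi(t) for t in A_plus])

raining_plus = {frozenset({"rain"})}             # exact verifiers of "It is raining"
for s in sorted(inexact(raining_plus), key=lambda s: (len(s), sorted(s))):
    print(sorted(s))
# ['rain'], ['rain', 'snow'], ['rain', 'wind'], ['rain', 'snow', 'wind']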

3.        Propositions and logic: an isomorphism

What should we take to be the proposition expressed by sentence A: |A|+ or || A ||+?   We can just say there are two, the ‘exact proposition’ and the ‘inexact proposition’.

Just how different are the families { |A|+: A a sentence of L} and {|| A ||+: A a sentence of L}?  They are certainly different; for example, |A & B|+ is not in general part of |A|+, while || A & B ||+ ⊆ || A ||+.  So as far as exact truthmakers are concerned, A & B does not entail A, while for the inexact truthmakers A & B does entail A.

But the relationship that we saw just above shows that from a structural point of view there is an underlying identity.  

Lemma 2.  For s, t in S, 𝜙(s) ∩ 𝜙(t) = 𝜙(s + t)

This is just Lemma 1 transcribed for function 𝜙.

Lemma 3.  The system  Φ(S) = < {𝜙(t): t in S} , ∩, ⊆ > is a (meet) semilattice.

The Lemma follows from Lemma 2 and the informal discussion at the end of section 2.  In fact, by our earlier definition, this system is also a state space.

Theorem.  The function 𝜙 is an isomorphism between state space  S = <S, +, ≥ > and state space Φ(S) = < {𝜙(t): t in S} , ∩, ⊆ >

So the inexact truthmakers provide a set-theoretic representation of the semilattice of exact truthmakers.  The proof is standard textbook fare for semilattices. 

Sketch of the proof.  

To begin, the ordering in S is mirrored in the range of 𝜙, because ≥ is a partial ordering:

  • if x  ≥ y then by transitivity, if z  is in  𝜙(x) then z is in 𝜙(y); hence   𝜙(x) ⊆ 𝜙(y).
  • x is in 𝜙(x) so if  𝜙(x) ⊆ 𝜙(y) then x  ≥ y
  • 𝜙 is one-to-one by the anti-symmetry of  ≥

This establishes an order isomorphism between S and the range of 𝜙.  In addition:

𝜙(x + y) = {z: z ≥ x + y} = {z: z ≥ x and z ≥ y} = 𝜙(x)  ∩  𝜙(y)

by Lemma 2.
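
For what it is worth, the theorem can also be checked mechanically on the toy frozenset model; the following Python sketch (mine, not part of Fine's presentation) verifies injectivity, the order correspondence, and Lemma 2 by brute force:

from itertools import combinations, product

atoms = [frozenset({a}) for a in ("wind", "rain", "snow")]
S = {frozenset().union(*c) for r in range(1, 4) for c in combinations(atoms, r)}

def phi(t):
    return frozenset(s for s in S if s >= t)     # {s : s >= t}

for x, y in product(S, repeat=2):
    assert (x >= y) == (phi(x) <= phi(y))        # >= in S corresponds to inclusion in Phi(S)
    assert phi(x | y) == phi(x) & phi(y)         # Lemma 2: fusion goes to intersection

assert len({phi(x) for x in S}) == len(S)        # phi is one-to-one
print("phi is an isomorphism on this", len(S), "element state space")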

How much should we conclude from this?

Perhaps not much.  As Fine shows, there are many distinctions to be made in terms of exact truthmakers that can be exploited in different ways, and might be obscured by turning to the inexact truthmakers alone.  And many examples can be illuminated by focusing just on the exact truthmakers of atomic sentences, with propositions, in either sense, built up from there.  In addition, we have only been discussing the most basic set-up, and there are many interesting complications when negation and modal operators are introduced, as Kit Fine shows.

But at the same time,  it may help to reflect that the state space of exact truthmakers does have a ‘classical’ structure, with a  set-theoretical representation.

REFERENCES

Kit Fine (2017) “Truthmaker Semantics”.  Pp. 556-577 in A Companion to the Philosophy of Language (eds B. Hale, C. Wright and A. Miller)

Tautological entailment (1)

(A note beforehand: after an intuitive introduction, this is going to follow the general outline of the scheme proposed in the posts called “An oblique look at propositions”.)

When is an argument a good argument? The traditional answer takes two forms. The first is syntactic, the definition of a derivation: a finite sequence of sentences each of which either is a logical truth, or a premise, or follows from preceding sentences by a logical rule. The second is semantic, the definition of a valid argument: a set of premises and a conclusion which are such that, necessarily, if all the premises are true then the conclusion is true.

I will call these the traditional criteria. By either criterion, the Structural Rules hold, and especially, the one that is generally called Weakening:

If ‘X therefore A’ is a good argument and X ⊆ Y then ‘Y therefore A’ is a good argument.

Relevant logic (also called relevance logic) was born from the conviction that this is not good enough, because a good argument shouldn’t have any irrelevant premises or an irrelevant conclusion. Relevance, unfortunately, is a subtle and complex relationship, itself a lively topic in philosophy of science for example, but it casts only a faint shadow in any general study of language and logic. Still, there are some glaring examples of irrelevancy taught in every elementary logic class:

if p, q have nothing to do with each other then both [p & ~p, therefore q] and [p, therefore q v ~q] are valid arguments by traditional criteria, but they commit fallacies of irrelevance.

This was the message of Anderson and Belnap’s article “Tautological entailments” in 1962. Their title was the name they gave to arguments in sentential logic that are genuinely good arguments, involving no irrelevancies.

I won’t spell out the entire logic of tautological entailment right now, but it is easy to state its first principle, which picks out the simple cases. Let us use the term atom for any sentence that is either an atomic sentence or the negation of an atomic sentence:

If A is a conjunction of atoms, and B a disjunction of atoms, then A tautologically entails B if and only if at least one conjunct in A is a disjunct in B.

So p tautologically entails (p v ~p), but q does not; (p & ~p) tautologically entails p, but (q &~q) does not.
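
The first principle is simple enough to state as a one-line test.  In the following Python sketch (the representation is mine: an atom is a string such as 'p' or '~p', and a conjunction or disjunction of atoms is a set of such strings), the examples just given come out as expected:

def taut_entails(conjuncts, disjuncts):
    """A conjunction of atoms tautologically entails a disjunction of atoms
    iff at least one conjunct is a disjunct."""
    return bool(set(conjuncts) & set(disjuncts))

print(taut_entails({"p"}, {"p", "~p"}))       # True:  p entails p v ~p
print(taut_entails({"q"}, {"p", "~p"}))       # False: q does not
print(taut_entails({"p", "~p"}, {"p"}))       # True:  p & ~p entails p
print(taut_entails({"q", "~q"}, {"p"}))       # False: q & ~q does not
print(taut_entails({"p", "q"}, {"p"}))        # True:  p & q entails p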

However, (p & q) tautologically entails p; why isn’t that a fallacy of relevance? After all, the inference from the two premises p, q to conclusion p has an irrelevant premise, and together don’t they have the same logical force as that conjunction? The answer is no. Any one sentence can be a relevant premise, however complex it may be, if it bears on the conclusion in the relevant way.

Important note: the concept of tautological entailment applies only to arguments with one premise and one conclusion. (Anderson and Belnap and their students and collaborators developed an entire species of relevant logics that were much more ambitious, but I want to discuss only this one, which is the most basic part.)

In a preceding post (“An oblique look at propositions”) I related propositional closure schemes to a Minimal Structural Logic, via the traditional criteria for a good argument. Now I will do something similar for Tautological Entailment, though in the way that is proper to the relevance way of thinking, not accepting those traditional criteria.

I am exploiting a connection with the notion of fact as it appeared in the Tractatus and in Russell’s The Philosophy of Logical Atomism. The idea is this: the fact that it is raining makes true the sentence “It is raining or not raining”, just as would the fact that it is not raining. But the fact that it is sunny has just nothing to do with that, and so does not make either disjunct true, and a fortiori, does not make the disjunction true.

Just now we are thinking in terms of facts making sentences true. We can think of the atomic sentences as standing for atomic facts, and their negations also: that the apple is red is one atomic fact and that the pear is non-red is another atomic fact.

A complex fact is a whole that consists of many atomic facts: something like (b.c.d.e…n). You may think of a complex fact as a set, or as a mereological sum, or a pile of bricks, as long as you think of it as consisting entirely of atomic facts, its components, with no importance attaching to any order they might be in. So (b.c) = (c.b) and (b.b) = b. A complex fact will make true any sentence that any of its parts make true.

Let’s shift from sentences to propositions. A proposition Q is a set of facts, and is made true (relation ») by any of its members. But if b is one of its members, we have just indicated that the complex fact (b.c) will make true whatever b makes true, so (b.c) will also make Q true. This shows us what the relation ≤ of subordination is, among facts: for any facts b, c, whether they are atomic or complex, (b.c) ≤ b. And we have just given the reason to say: a proposition Q contains all the facts that are subordinate to any of its members. Define the closure operation [ ]:

[X] = {y: y is a fact, and y ≤ x for some fact x in X}

Thus the propositions are the closed sets of facts, and so they form a complete lattice; the argument for this is just the same as in that earlier post. But the situation is simpler, for the only subordination relation we have is the one between facts, not something more complicated.
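
Here is a small Python sketch of this set-up (representation mine: complex facts as frozensets of atomic facts, so that (b.c) = (c.b) and (b.b) = b come for free, and subordination is the superset relation on components):

from itertools import combinations

def subordinate(y, x):
    """y <= x: y is x with possibly more atomic facts piled on."""
    return y >= x                                # superset of x's components

def closure(X, facts):
    """[X] = {y in facts : y <= x for some x in X}"""
    return frozenset(y for y in facts if any(subordinate(y, x) for x in X))

# All complex facts built from three atomic facts b, c, d.
atoms = ["b", "c", "d"]
facts = {frozenset(c) for r in range(1, 4) for c in combinations(atoms, r)}

Q = closure({frozenset({"b"})}, facts)           # the proposition generated by b
for y in sorted(Q, key=lambda y: (len(y), sorted(y))):
    print(".".join(sorted(y)))                   # b, b.c, b.d, b.c.d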

Distributivity

We have a good idea about conjunction and disjunction, when the propositions form a lattice. In general, if each sentence A expresses a proposition, |A|, then

|A & B| = |A| ∩ |B|, and |A v B| = [|A| ∪ |B|]

There is something special about this case: for the soundness of the logic of tautological entailment, this particular lattice of propositions must be distributive. That does not follow from the propositions being the closed sets, or at least not from that alone. In the next post we’ll see that this has to do with the especially simple form of subordination in this case, and the consequences thereof for how conjunctions and disjunctions are made true.
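
To see the distributivity claim in action on a tiny example, here is a brute-force Python check over all closed sets of facts built from two atomic facts (my own sketch; it illustrates the claim for this one small model, it does not prove it in general):

from itertools import chain, combinations, product

atoms = ["b", "c"]
facts = {frozenset(c) for r in range(1, 3) for c in combinations(atoms, r)}

def closure(X):
    return frozenset(y for y in facts if any(y >= x for x in X))

def meet(P, Q):
    return P & Q                                  # |A & B| = |A| ∩ |B|

def join(P, Q):
    return closure(P | Q)                         # |A v B| = [|A| ∪ |B|]

# All propositions (closed sets) of this model.
subsets = chain.from_iterable(combinations(list(facts), r) for r in range(len(facts) + 1))
props = {closure(frozenset(s)) for s in subsets}

for P, Q, R in product(props, repeat=3):
    assert meet(P, join(Q, R)) == join(meet(P, Q), meet(P, R))
    assert join(P, meet(Q, R)) == meet(join(P, Q), join(P, R))
print(len(props), "propositions; both distributive laws hold")

(In this toy model the closure step in the join in fact adds nothing, since a union of closed sets is already closed here; meets and joins then reduce to plain intersections and unions, which is one way the simplicity of subordination shows itself.)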

Complementation

One special feature, one that has nothing to do with subordination, has already appeared: an atomic fact like that the pear is red has an evil twin that is also an atomic fact, namely that the pear is non-red. Let’s assume that all atomic facts have evil twins; of course the evil twin of the fact that the pear is non-red is the fact that the pear is red.

But that still leaves us with a lot of questions about how to think about negation, or its analogues, in general.

It is time to end the intuitive introduction, and to put definitions and proofs where our intuitions were giving tongue. I will start that in the next post.