Thesis: all our most familiar logics, both sentential and quantificational, are Hilbert logics.
In Henkin’s paper (cited in the preceding post) his main target was not sentential but predicate logic. Given that the proof of the Deduction Theorem hinges on Modus Ponens being the sole rule of inference, there is an obvious question. Isn’t
Universal Generalization. from ⊢Fa to infer ⊢(x)Fx
an indispensable rule as well?
Moreover, Henkin indicates that his topic is not just classical and intuitionistic quantification theory but extends to quantified modal logic. At that time the concept of a normal modal logic had not yet been introduced. The main modal logics were Lewis and Langford’s S1 through S5, and in retrospect we can say that of these only S4 and S5 were normal. But as typically formulated now, the normal modal logics have a rule that is similar to Universal Generalization, namely
Necessitation. from ⊢A to infer ⊢□A
It is important, and easy to see, that these rules are quite different from Modus Ponens, which says
from A and A ⊃ B to infer B
or, using the deducibility symbol,
A, A ⊃ B ⊢ B
Universal Generalization and Necessitation do not provide inferences from premises to conclusions. Instead, what they do, in effect, is to add lots to the stock of axioms, that is, to the lines in a deduction that do not need to be inferred from previous lines.
It is in fact possible in both cases to have a formulation that is a Hilbert logic, and here I would like to explore one way in which that can be shown.
Quantifier logic à la Quine
For quantificational logic this was made beautifully clear by Quine in his Mathematical Logic (revised edition 1951). His system of quantifier logic with identity, called ML, does not have the rule of Universal Generalization; Modus Ponens is its only rule of inference. Instead, he introduced a more nuanced notion of axiom schemata.
First he defines the notion of closure: if A is a well-formed formula which has in it free variables x, y, z, … then the closure of A is the statement which results when universal quantifiers are prefixed to A to bind those variables. So the closure of A looks like this: (x)(y)(z) …A.
Each axiom schema then defines a large class of statements, namely the closures of instances of the displayed form. For example, for the class he calls C, he displays the form
A ⊃ (x)A, where A is a formula in which x is not a free variable,
and C is the class of all closures of well-formed formulas that have this form. Examples would be (y)(Fy ⊃ (x)Fy), (z)(y)(Rzy ⊃ (x)Rzy), and so forth.
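To make the bookkeeping concrete, here is a small Python sketch using a toy encoding of my own (none of this is Quine’s machinery, and the names free_vars, closure, and schema_C_axiom are mine): it computes free variables, forms closures, and generates members of the class C.

```python
# A toy encoding (not Quine's): ('P', 'F', ['y']) stands for Fy,
# ('imp', A, B) for A ⊃ B, and ('all', 'x', A) for (x)A.

def free_vars(f):
    """Return the set of free variables of a formula."""
    kind = f[0]
    if kind == 'P':                        # atomic: ('P', predicate, [variables])
        return set(f[2])
    if kind == 'imp':                      # conditional
        return free_vars(f[1]) | free_vars(f[2])
    if kind == 'all':                      # universal quantification
        return free_vars(f[2]) - {f[1]}
    raise ValueError(f'unknown formula kind: {kind}')

def closure(f):
    """Prefix universal quantifiers binding every free variable of f."""
    for v in sorted(free_vars(f)):
        f = ('all', v, f)
    return f

def schema_C_axiom(a, x):
    """The closure of a ⊃ (x)a, legitimate only when x is not free in a."""
    assert x not in free_vars(a), 'schema C requires that x not be free in A'
    return closure(('imp', a, ('all', x, a)))

# Example: from Fy and the variable x we get (y)(Fy ⊃ (x)Fy).
print(schema_C_axiom(('P', 'F', ['y']), 'x'))
```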
It is clear then that no axiom, and hence no theorem, has free variables in it. Quine’s language did not have individual constants. So it is not even possible to formulate a non-trivial version of Universal Generalization for ML itself. But more to the point, if individual constants are added, we have a choice, and for each choice we arrive at a complete quantificational logic without adding the Universal Generalization rule. If we add the axiom schema
(Ey)(y = a)
the result is the familiar classical predicate logic. Intuitively, each individual constant is the name of a real thing. If we do not add anything, then the result is free logic. In that logic, again intuitively, the only existence assumption built in (as it was for Quine) is that there is something rather than nothing. But the individual constants can be non-referring terms, like our “Pegasus”.
In both cases it is provable as a meta-theorem that if ⊢Fa then ⊢(x)Fx. The need for the rule of Universal Generalization has been neatly by-passed by Quine’s tactic.
Modal logic
So how does it stand with the modal logics? Ruth Barcan Marcus wrote the seminal papers on quantified modal logic in the years 1946-1953. She proved the usual Deduction Theorem for material implication for S4 and S5, as formulated by Lewis and Langford, and showed that it could not be proved for S1 or S2.
In our present context, where the family of normal modal logics has a standard formulation, the rule Necessitation plays a salient role. Can Quine’s tactic be implemented here, mutatis mutandis?
The weakest normal logic is system K, formulated as follows with one axiom schema and three rules:
R0. If p is a theorem of classical propositional logic then ⊢ p
R1. p, p ⊃ q ⊢ q
R2. If ⊢ p then ⊢ □p
A1. ⊢ □(p ⊃ q) ⊃ (□p ⊃ □q)
Only R1, modus ponens, is an ordinary rule of inference. Rule R0 can be replaced by the axiom schemata of a Hilbert logic formulation of sentential logic. But R2 is Necessitation.
Let’s adapt Quine’s tactic as follows. If A is a formula, then call the necessitations of A all formulas that consist of A preceded by zero or more occurrences of □. And introduce the more nuanced use of schemata as follows: each axiom schema introduces a large class of axioms, namely all the necessitations of instances of the displayed form. So, for example, the schemata that can replace R0 include the form (p ⊃ p), and this means that the axioms include all sentences of the forms (p ⊃ p), □(p ⊃ p), □□(p ⊃ p), and so forth.
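For illustration, here is a minimal Python sketch of this new reading, using a string encoding of formulas that is my own convention (the box written literally as □): one function lists the necessitations of a formula, and another checks whether a string is a necessitation of an instance of the sample schema (p ⊃ p).

```python
# Formulas as plain strings, with '□' written literally and '⊃' spaced as ' ⊃ '.

def necessitations(a, up_to=3):
    """A, □A, □□A, ...  (truncated at up_to boxes, just for display)."""
    return ['□' * n + a for n in range(up_to + 1)]

def is_self_implication_axiom(s):
    """Is s a necessitation of an instance of the schema (p ⊃ p)?
    That is, after stripping leading boxes, is it of the form (X ⊃ X)?"""
    s = s.lstrip('□')
    if not (s.startswith('(') and s.endswith(')')):
        return False
    body = s[1:-1]
    # try every occurrence of ' ⊃ ' as the main connective and compare the halves
    return any(body[:i] == body[i + 3:]
               for i in range(len(body)) if body[i:i + 3] == ' ⊃ ')

print(necessitations('(p ⊃ p)'))
# ['(p ⊃ p)', '□(p ⊃ p)', '□□(p ⊃ p)', '□□□(p ⊃ p)']
print(is_self_implication_axiom('□□((q ⊃ r) ⊃ (q ⊃ r))'))   # True
print(is_self_implication_axiom('□(p ⊃ q)'))                # False
```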
Let’s say that K* is the modal logic that is like K except that it lacks rule R2, with the axiom schemata understood in this new way. Then it is easy to prove that K* and K have the same theorems (see the Appendix). And K* is a Hilbert logic, that is, a Hilbert logic formulation of K.
The same can be done, in the same way, for the other normal modal logics. So Henkin was quite right, if we read him anachronistically as referring to the normal modal logics: the Deduction Theorem is provable for all of them.
Conclusion. Both the normal modal logics and the quantificational logics (classical, quantified normal modal, and intuitionistic) are Hilbert logics.
Historical note. Ruth Barcan Marcus, who pioneered quantified modal logic, proved the Deduction Theorem for S4 and S5, and showed that it could not be proved for S1 or S2 (Journal of Symbolic Logic 18 (1953): 234-236). For the completeness of ML with names added as a free logic, see van Fraassen, Zeitschr. Math. Logik und Grundl. der Math. 12 (1966): 219-234.
APPENDIX
K* and K have precisely the same theorems.
In one direction this is immediate: every axiom of K* can be obtained in K from an instance of the corresponding schema by repeated applications of R2, and modus ponens is a rule of K. So every theorem of K* is a theorem of K.
For the converse, note that every axiom of K is also an axiom of K* (a necessitation with zero boxes prefixed). So any proof in K is a proof from the K* axioms by the rules R1 and R2, and it suffices to show, by induction on the number of applications of R2 in such a proof, that its last line A is a theorem of K*.
If R2 is never applied, the proof is already a proof in K*, and we are done. Otherwise, let S(m) be the first line justified by R2, say S(m) is □S(l) with l < m. The initial segment S(1), …, S(l) is a proof of S(l) in which R2 is not applied, so by the Claim below there is a proof of □S(l), that is, of S(m), in K*. Replacing the single line S(m) by that proof yields a proof of A from the K* axioms with one fewer application of R2, and the hypothesis of induction does the rest.
Claim: If B has a proof from the K* axioms by modus ponens alone, then there is a proof in K* of □B.
Proof. Suppose S = S(1), …, S(n) is such a proof, with B as its last line. Change it into the sequence T = T(1), …, T(k), …, T(n) by moving the axioms that appear in S to the beginning, kept in the same order they had in S: these are the lines T(1), …, T(k). The lines T(k+1), …, T(n) are the remaining lines of S, in the same order in which they appeared in S. T is also a proof of B, for the axioms need no justification, and if a line of S followed from preceding lines by modus ponens, that same line in T follows from lines that precede it in T by modus ponens as well.
Each □T(i), for i = 1, …, k, is an axiom of K*: T(i) is a necessitation of an instance of one of the axiom schemata, and therefore so is □T(i).
Hypothesis of induction: for k < m ≤ n, and for all j < m, there is a proof of □T(j) in K*. Now T(m) came from two preceding lines T(a) and T(b) by modus ponens, where T(b) is [T(a) ⊃ T(m)]. By hypothesis there are proofs in K* of □T(a) and □[T(a) ⊃ T(m)]. From these, together with the instance □[T(a) ⊃ T(m)] ⊃ (□T(a) ⊃ □T(m)) of axiom schema A1 and two applications of modus ponens, □T(m) follows.
Therefore, by induction, there is a proof in K* of □T(i) for 1 ≤ i ≤ n; hence of □T(n), which is □B.
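The construction in the proof of the Claim is mechanical, and the following Python sketch carries it out on a toy string encoding of my own (the step format and function names are mine, not part of K or K*): given an R2-free proof, listed as axiom lines and modus ponens steps, it produces the corresponding proof of the boxed conclusion, inserting the A1 instances and the extra modus ponens steps.

```python
# Toy encoding: formulas are strings; an R2-free proof is a list of steps, each
# either ('axiom', formula) or ('mp', a, b, formula), where lines a and b come
# earlier and line b is the conditional [line_a ⊃ formula].

def imp(a, b):
    return f'({a} ⊃ {b})'

def box(a):
    return f'□{a}'

def box_proof(proof):
    """Turn an R2-free proof of its last line B into a proof of □B in K*.

    Axiom lines are simply boxed (boxing a K* axiom yields another K* axiom);
    each modus ponens step is replaced by the relevant A1 instance followed
    by two further modus ponens steps."""
    new_proof = []      # list of (formula, justification) pairs
    boxed_at = []       # position in new_proof of the boxed version of each old line
    for step in proof:
        if step[0] == 'axiom':
            new_proof.append((box(step[1]), 'axiom of K*'))
        else:
            _, a, b, concl = step
            boxed_prem = new_proof[boxed_at[a]][0]              # □S(a)
            boxed_cond = new_proof[boxed_at[b]][0]              # □[S(a) ⊃ S(c)]
            a1 = imp(boxed_cond, imp(boxed_prem, box(concl)))   # instance of A1
            new_proof.append((a1, 'axiom A1'))
            new_proof.append((imp(boxed_prem, box(concl)),
                              f'MP, lines {boxed_at[b]} and {len(new_proof) - 1}'))
            new_proof.append((box(concl),
                              f'MP, lines {boxed_at[a]} and {len(new_proof) - 1}'))
        boxed_at.append(len(new_proof) - 1)
    return new_proof

# Tiny example.  The second 'axiom' is fictitious, included only so that there
# is a modus ponens step to transform; the output is a proof of □q.
demo = [('axiom', '(p ⊃ p)'),
        ('axiom', '((p ⊃ p) ⊃ q)'),
        ('mp', 0, 1, 'q')]
for n, (formula, why) in enumerate(box_proof(demo)):
    print(n, formula, '--', why)
```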