Preface. Many things that can be proved about classical logic are provable on the basis of just a few important characteristics, and so generalize to large classes of logics. I am interested at the moment especially in characteristics that play a role in discussions of the Curry Paradox. While I will not discuss that paradox here, it is the reason why I will focus just on features that pertain to the conditional.
Hilbert logics
While it is customary today to teach logic in a natural deduction formulation, that is a relatively recent development. Looking back to Principia Mathematica and other sources in the first half of the twentieth century, what we find instead are axiom systems. These can in general be read as consisting of a series of axiom schemata, with modus ponens as the single rule of inference. That sort of formulation was initiated by Hilbert, and of course it did not disappear. For example, Kleene’s seminal Introduction to Metamathematics (1952) presents classical logic in this form (Ch. IV, sect. 19).
A Hilbert style deductive system is one in which a finite sequence of formulas is (a) a proof if and only if every line is either an axiom or follows from preceding lines by a rule of inference, and (b) a deduction (also, derivation) from certain premises if and only if every line is either a premise or an axiom or follows from preceding lines by a rule of inference. The last line of a proof is a theorem, and the last line of a deduction is the conclusion, said to be deducible from those premises.
Note well that this definition of the notion of a derivation from premises is correct for a deductive system in which modus ponens is the only rule of inference; it is not correct if certain other kinds of rules of inference (rules which preserve only theorem-hood) are present. (The rule of necessitation in modal logic, which licenses the inference from A to □A, is a standard example: it preserves theorem-hood, but applying it to a mere premise does not in general preserve truth from premises to conclusion.) In such cases we have to ask whether the logic has an equivalent formulation (in the sense that it has the same theorems) which does not have those other kinds of rules.
In subsequent literature the term Hilbert algebras came to refer to algebraic models of a fragment of classical (as well as of intuitionistic) logic, which Henkin studied in 1950 as the system of Positive Implicational Logic. Its only axiom schemata are:
H1. A ⊃ (B ⊃ A)
H2. [A ⊃ (B ⊃ C)] ⊃ [(A ⊃ B) ⊃ (A ⊃ C)]
and its only rule of inference is modus ponens:
from A and A ⊃ B to infer B.
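To make these definitions concrete, here is a minimal sketch of a checker for Hilbert-style deductions in this system; the encoding (formulas as nested tuples) and all names are my own illustration, not anything from Henkin. It is followed by the standard five-line proof of A ⊃ A from H1 and H2:

```python
def imp(p, q):
    return ('imp', p, q)

def is_imp(f):
    return isinstance(f, tuple) and f[0] == 'imp'

def is_h1(f):
    # An instance of A ⊃ (B ⊃ A)?
    return is_imp(f) and is_imp(f[2]) and f[1] == f[2][2]

def is_h2(f):
    # An instance of [A ⊃ (B ⊃ C)] ⊃ [(A ⊃ B) ⊃ (A ⊃ C)]?
    if not (is_imp(f) and is_imp(f[1]) and is_imp(f[1][2])
            and is_imp(f[2]) and is_imp(f[2][1]) and is_imp(f[2][2])):
        return False
    a, b, c = f[1][1], f[1][2][1], f[1][2][2]
    return f[2][1] == imp(a, b) and f[2][2] == imp(a, c)

def check_deduction(lines, premises=()):
    """True iff every line is a premise, an axiom (H1 or H2), or
    follows from two preceding lines by modus ponens."""
    for i, f in enumerate(lines):
        if f in premises or is_h1(f) or is_h2(f):
            continue
        if any(lines[j] == imp(lines[k], f)
               for j in range(i) for k in range(i)):
            continue
        return False
    return True

# The five-line proof of A ⊃ A (a theorem: no premises needed):
A = 'A'
proof = [
    imp(A, imp(imp(A, A), A)),                       # H1
    imp(imp(A, imp(imp(A, A), A)),
        imp(imp(A, imp(A, A)), imp(A, A))),          # H2
    imp(imp(A, imp(A, A)), imp(A, A)),               # MP, lines 1 and 2
    imp(A, imp(A, A)),                               # H1
    imp(A, A),                                       # MP, lines 3 and 4
]
assert check_deduction(proof)
```

The same checker handles deductions from premises: check_deduction([imp('A', 'B'), 'A', 'B'], premises=(imp('A', 'B'), 'A')) accepts the two-premise modus ponens step itself.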
Henkin then noted that the Deduction Theorem (Church’s term, now in common use) can be proved for any system that includes Positive Implicational Logic. The Deduction Theorem states, in effect, that the now familiar rule of Conditional Proof is an admissible rule for Positive Implicational Logic:
if B is derivable from a set of premises X together with A, then A ⊃ B is derivable from X alone:
if X, A ⊢ B then X ⊢ A ⊃ B.
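The proof of the Deduction Theorem is constructive: it rewrites a deduction of B from X together with A, line by line, into a deduction of A ⊃ B from X alone, using only H1, H2 and modus ponens. The following is my own sketch of that rewriting (the encoding and the justification labels are mine, not Church's or Henkin's):

```python
def imp(p, q):
    return ('imp', p, q)

def self_imp(a):
    # The standard five-line proof of a ⊃ a from H1 and H2 alone.
    return [
        imp(a, imp(imp(a, a), a)),                       # H1
        imp(imp(a, imp(imp(a, a), a)),
            imp(imp(a, imp(a, a)), imp(a, a))),          # H2
        imp(imp(a, imp(a, a)), imp(a, a)),               # MP
        imp(a, imp(a, a)),                               # H1
        imp(a, a),                                       # MP
    ]

def deduction_theorem(deduction, a):
    """deduction: a list of (formula, justification) pairs, where the
    justification is 'hyp' (the premise a being discharged), 'given'
    (an axiom or a premise in X), or ('mp', j, k): modus ponens from
    earlier lines j and k, line k being line j ⊃ this line.
    Returns a deduction from X alone whose last line is a ⊃ B."""
    out = []
    for b, just in deduction:
        if just == 'hyp':                      # b is a itself: prove a ⊃ a
            out.extend(self_imp(a))
        elif just == 'given':                  # b is available without a
            out.append(b)
            out.append(imp(b, imp(a, b)))      # H1
            out.append(imp(a, b))              # MP
        else:                                  # b came by MP from b_j, b_j ⊃ b
            _, j, k = just
            bj = deduction[j][0]
            out.append(imp(imp(a, imp(bj, b)),
                           imp(imp(a, bj), imp(a, b))))  # H2
            out.append(imp(imp(a, bj), imp(a, b)))       # MP with a ⊃ (b_j ⊃ b)
            out.append(imp(a, b))                        # MP with a ⊃ b_j
    return out

# Hypothetical syllogism: from X = {A ⊃ B, B ⊃ C} plus premise A we can
# deduce C; the rewriting yields a deduction of A ⊃ C from X alone.
A, B, C = 'A', 'B', 'C'
d = [(A, 'hyp'),
     (imp(A, B), 'given'),
     (B, ('mp', 0, 1)),
     (imp(B, C), 'given'),
     (C, ('mp', 2, 3))]
result = deduction_theorem(d, A)
assert result[-1] == imp(A, C)
```

Note that the rewriting only ever invokes H1, H2 and modus ponens, which is why the theorem holds for any system that includes Positive Implicational Logic, and why added axioms are harmless while added rules are not.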
We are therefore staying close to the historical concepts and terminology with the following definition:
A Hilbert logic is a logic which is a Hilbert style deductive system, with modus ponens as sole inference rule, of which Positive Implicational Logic is a fragment.
Classical logic and intuitionistic logic both have formulations which are Hilbert logics. (Henkin offers other examples; I will postpone those to the next post).
We often refer to those logics as understood in a formulation-independent way. It will not be a great abuse of nomenclature then to say simply that they are Hilbert logics, meaning that they have equivalent formulations that are Hilbert logics.
Does the Deduction Theorem carry over to logics of which Positive Implicational Logic is a fragment? Not always; it depends on what that larger logic is like. We can at least say this:
If L is a Hilbert logic and is a fragment of a logic L* then: (a) if L* is a Hilbert logic then the Deduction Theorem holds for L* as well; (b) if L* is not a Hilbert logic then the Deduction Theorem will hold for those deductions in L* in which modus ponens is the only inference rule used.
The addition of other axioms will not affect the proof of the Deduction Theorem, but the addition of other inference rules can certainly change the game.
Hilbert languages
Classical and intuitionistic logic have something else in common: for each there is a strong completeness proof, showing that it captures the valid inferences (the semantic entailment relation) in a language with the pertinent syntax.
So let us have a corresponding definition for that as well:
A Hilbert language is a language with formally defined syntax and semantics, in which the semantic entailment relation is finitary and there exists a Hilbert logic whose deducibility relation coincides with that semantic entailment relation.
So the generalization of classical logic and the classical sentential language that I am interested in exploring is to Hilbert logics and Hilbert languages.
NOTE. Specifically, my previous posts about the Curry Paradox showed how it was possible to build a supervaluational language on top of a classical language in which Curry’s desideratum of expressive completeness was honored. The salient question for me, at the moment, is whether my construction there and its accompanying argument go through for Hilbert languages and not just for the language of classical logic.
APPENDIX.
L. Henkin, “An Algebraic Characterization of Quantifiers”, Fundamenta Mathematicae, vol. 37, 1950, pp. 63-74. The name “Positive Implicational Logic” Henkin took from Hilbert and Bernays. Alonzo Church’s proof of the Deduction Theorem, which Henkin cites, is in Church’s Introduction to Mathematical Logic (1944).
NB: Hilbert’s definition of what counts as a deduction is such that the normal structural rules hold for the deducibility (consequence) relation. So-called “sub-structural” logics are therefore not Hilbert logics.
Finitude. The restriction of proofs to finite sequences is not arbitrary. Imagine a logic whose only rule of inference (schema) is “(A or A), therefore A”, which is surely something anyone would accept. But now suppose that its theorems are the last lines of proofs whether finite or infinite. Then think of a sequence whose lines are numbered with the negative integers, so that the last line has number (-1). Let that last line be “God exists”, and if line (-k) is B, let line (-(k+1)) be (B or B). It looks like this toward the end:
(-3) …………………………………………………….
(-2) God exists or God exists
(-1) God exists
Thus every line was either an axiom (a clause that holds only vacuously here, since no line is an axiom) or followed from a preceding line by the rule of inference.
So here we have it, a proof that God exists — since “God exists” is the last line of a proof. (I think I heard this example from Rich Thomason.) This is one paradox that is now universally blocked by the limitation of proofs to finite length, without ever even being mentioned.