Truthmaker semantics for the logic of imperatives

Seminal text:  Nicholas Rescher, The Logic of Commands.  London: 1966

  1. Imperatives: the three-fold pattern
  2. Denoting imperatives
  3. Identifying imperatives through their truthmakers
  4. Entailment and logical combinations of imperatives
  5. Starting truthmaker semantics: the events
  6. Event structures
  7. The language, imperatives, and truthmakers
  8. Logic of imperatives
    APPENDIX. Definitions and proofs

In deontic logic there was a sea change when imperatives were construed as default rules (Horty, Reasons as Defaults: 2012).  The agent is conceived as situated in a factual situation but subject to a number of ‘imperatives’ or ‘commands’.  

Imperatives can be expressed in many ways.  Exclamation marks, as in “Don’t eat with your fingers!”, may do, but are not required.  Adapting one of Horty’s examples, we find in a book of etiquette:

  • One does not eat food with one’s fingers
  • Asparagus is eaten with one’s fingers 

These are declarative sentences.  But in this context they encode defeasible commands, default rules.  Reading the book of etiquette, the context in question, we understand the conditions in which the indicated actions are mandated, and the relevant alternatives that would constitute non-compliance. 

In this form of deontic logic, what ought to be the case in a situation is then based on the facts there plus the satisfiable combinations of commands in force.[1]  

1.   Imperatives: the three-fold pattern

Imperatives have a three-fold pattern for achievement or lack thereof:

  • Success: required action carried out properly
  • Failure:  required action not carried out properly or not at all
  • Moot:    condition for required action is absent 

In the first example above, the case will be ‘moot’ if there is no food, or if you are not eating.  Success occurs if there is food and it is not eaten with the fingers, Failure if there is food and it is eaten with the fingers.

Whenever this pattern applies, we can think of that task as having to be carried out in response to the corresponding imperative.  There are many examples that can be placed in this form.  For example, suppose you buy a conditional bet on Spectacular Bid to win in the Kentucky Derby. Doing so imposes an imperative on the bookie.  He is obligated to pay off if Spectacular Bid wins, allowed to keep the money if she loses, and must give the money back if she does not run.
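The three-fold pattern can be set out as a minimal sketch (the function name and encoding are my own, for illustration):

```python
# A minimal sketch of the three-fold pattern for an imperative of the form
# "When A is the case, see to it properly that B".
def outcome(condition_holds: bool, action_done: bool) -> str:
    """Classify a situation as Success, Failure, or Moot."""
    if not condition_holds:
        return "Moot"        # the condition for the required action is absent
    return "Success" if action_done else "Failure"

# The etiquette rule: A = food is being eaten, B = fingers are not used.
print(outcome(condition_holds=False, action_done=False))  # Moot: no eating going on
```

For the bookie's imperative, the condition is that Spectacular Bid runs, and the required action is paying off if she wins.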

2.  Denoting imperatives

An imperative may be identified in the form ‘When A is the case, see to it properly that B’.  This way of identifying the imperative specifies just two elements of the three-fold pattern, Success and (the opposite of) Moot.  

But the opposite of Moot is just the disjunction of the two contraries in which the condition is present.  Therefore it is equally apt to represent the imperative by a pair of contraries, marking Success and Failure.  Doing so gives us a better perspective on the structure of imperatives and their relation to ‘ought’ statements.

So I propose to identify an imperative with an ordered pair of propositions <X, Y>, in which X and Y are contraries.  Intuitively they correspond respectively to Success (and not Moot), and Failure (and not Moot).  

3.  Identifying imperatives through their truthmakers

Our examples point quite clearly to a view of imperatives that goes beyond truth conditions of the identifying propositions.  What makes for success or failure, what makes for the truth of the statement that the imperative has been met or not met, are specific events.

That Spectacular Bid wins, or that you close the door when I asked you to, are specific facts or events which spell success.  That I eat the asparagus with a fork is a distinct event which spells a failure of table etiquette.

Consider the command 

(*)   If A see to it that B!

as identified by its two contraries, Success and Failure.  For each there is a class of (possible) events which ‘terminate’ the command, one way or the other.  

The statement “Spectacular Bid wins” states that a certain event occurs, and encodes a success for the bookie’s client.  The statement that encodes Failure is not “Spectacular Bid does not win”. Rather it is “Spectacular Bid runs and does not win”, which is, for this particular imperative the relevant contrary.  

To symbolize this identification of imperatives let us denote as <X| the set of events that make X true, and as |X> the set of events that make the relevant contrary (Failure) true.[2]  The imperative in question is then identified by an ordered pair of sets of events, namely (<X|, |X>).  I will abbreviate that to <X>.

In (*), <X> is the imperative to do B if A is the case, so X = the statement that (A and it is seen to that B), which is made true by all and only the events in set <X|.  Its relevant contrary in this particular imperative is the statement that (A but it is not seen to that B), and that relevant contrary is whatever it is that is made true by all and only the events in set |X>.

4. Entailment and logical combinations of imperatives

There is an obvious sense in which E, “Close the door and open the window!” entails F, “Close the door!”  Success for E entails success for F.  But that is not all.  Failure for F entails failure for E.   The latter does not follow automatically from the former, if there is a substantial Moot condition: not winning the Derby does not, as such, imply losing.

So entailment between imperatives involves two ‘logical’ implications, going in opposite directions, and we can define:

Definition.  Imperative A entails imperative B exactly if <A| ⊆ <B| and |B> ⊆ |A>.
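In a finite toy model, this two-inclusion definition can be checked directly.  The event names below are my hypothetical placeholders, not drawn from the text:

```python
# Sketch: an imperative as a pair (S, F), with S = <A| (Success truthmakers)
# and F = |A> (Failure truthmakers).
def entails(A, B):
    """A entails B exactly if <A| ⊆ <B| and |B> ⊆ |A>."""
    return A[0] <= B[0] and B[1] <= A[1]

# E: "Close the door and open the window!"   F: "Close the door!"
E = ({"closed_and_opened"}, {"door_open", "window_closed"})
F = ({"closed_and_opened", "door_closed"}, {"door_open"})
assert entails(E, F) and not entails(F, E)
```

Note that the two inclusions run in opposite directions, exactly as in the text: every Success for E is a Success for F, and every Failure for F is a Failure for E.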

“Open the door!” is a ‘strong’ contrary to “Close the door!”.  There is a weaker contrary imperative:  if someone looks like he is about to close the door, you may command “Do not close the door!”.  

Negation.  In the logic of statements, the contradictory is precisely the logically weakest contrary.  For example, yellow is contrary to red and so is blue, but to be simply not red is to be either yellow or blue or … and so forth.

So I propose as the analogue to negation that we introduce

<┐A>:             <┐A| = |A>  and |┐A> = <A|

Whatever makes ┐A true is what makes A false, and vice versa.  Here the symbol “┐” does not stand for the usual negation of a statement, because imperatives generally have significant, substantial conditions.  The relevant contrary to Success is not its logical contradictory (that would be: either Failure or Moot) but Failure (which implies not-Moot), and that is whatever counts as Failure for the particular imperative in question.

Conjunction.  “Close the door and open the window” we can surely symbolize as <A & B>.  Success means success for both.  In addition, failure means failure for one or the other or both.  So there is no great distance between conjunction of Success statements and the ‘meet’ operation on imperatives:

<A & B>:             <A & B| = <A| ∩ <B|,  |A & B> = |A> ∪ |B>.

Disjunction.  Similarly, dually, for disjunction and the ‘join’ operation:

<A v B>:             <A v B| = <A| ∪ <B|,  |A v B> = |A> ∩ |B>
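The 'meet' and 'join' operations are just the displayed set operations on the pairs; a sketch (event names are hypothetical placeholders):

```python
# Meet and join on imperatives modelled as pairs (success_set, failure_set).
def meet(A, B):   # <A & B>
    return (A[0] & B[0], A[1] | B[1])

def join(A, B):   # <A v B>
    return (A[0] | B[0], A[1] & B[1])

A = ({"ab", "a"}, {"fa"})
B = ({"ab", "b"}, {"fb"})
C = meet(A, B)
# [1]: <A & B> entails <A>, since <A & B| ⊆ <A| and |A> ⊆ |A & B>
assert C[0] <= A[0] and A[1] <= C[1]
D = join(A, B)
# ...and <A> entails <A v B>
assert A[0] <= D[0] and D[1] <= A[1]
```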

We can already see that some familiar logical relations are making an appearance.  

[1]  <A & B> entails <A>, while <A> entails <A v B>.

For example, the first holds because <A & B| = <A| ∩ <B| ⊆ <A| and |A> ⊆ |A> ∪ |B> = |A & B>.

(All proofs will be provided in the Appendix.)

We could go a bit further with this.  But answers to the really interesting questions will depend on the underlying structure of events or facts, that is, of the truthmakers.

5. Starting truthmaker semantics:  the events.

Events combine into larger events, with an analogy to conjunction of statements.  So the events form a ‘meet’ semilattice.  Especially important are the simple events.

Postulate:  Each event is a unique finite combination of simple events.  

Is it reasonable to postulate this unique decomposability into simple events?  

At least, it is not egregious.  Think of how we specify a sample space for probability functions:  each measurable event is a subset of the space.  The points of the space may have weights that sum up to the measure of the event of which they are the members.  Two events are identical exactly if they have the same members.  

In any case, the idea of truthmakers is precisely to have extra structure not available in possible worlds semantics.

Combination we can conceive of as a ‘meet’ operation.  Besides combining, we need an operation to identify contraries among events, in order to specify Success and Failure of imperatives.

Definition.  An event structure is a quadruple E = <E, E0, ., °>, where E is a non-empty set, . is a binary operation on E, E0 is a non-empty subset of E, and ° is a unary operation on E0, such that:

  • ° is an involution: if a is in E0 then a° ≠ a and a°° = a
  • . is associative, commutative, idempotent (a ‘meet’ operator)
  • If e and e’ are elements of E then there are elements a1, …, ak, b1, …, bm of E0 such that e = a1…ak and e’ = b1…bm, and e = e’ if and only if {a1, …, ak} = {b1, …, bm}

This last clause implies along the way that if e is an element of E then there is a set a1, …, an of elements of E0 such that e = a1 … an. That is part, but only part, of what the Postulate demands, and would not by itself imply unique decomposability. 
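A small finite model makes the definition concrete.  The encoding below is my own, for illustration: a simple event is a string, a° toggles a trailing "*", and an event is the frozenset of its simple components, so that combination is set union and the Postulate's unique decomposition holds automatically.

```python
def inv(a: str) -> str:
    """The involution ° on E0: a° ≠ a and a°° = a."""
    return a[:-1] if a.endswith("*") else a + "*"

def meet(e: frozenset, f: frozenset) -> frozenset:
    """Combination of events: associative, commutative, idempotent."""
    return e | f

a, b = frozenset({"a"}), frozenset({"b"})
assert inv("a") != "a" and inv(inv("a")) == "a"        # involution clause
assert meet(a, b) == meet(b, a) and meet(a, a) == a    # meet laws
assert meet(a, b) == frozenset({"a", "b"})             # decomposition is the set itself
```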

The involution operates solely on simple events.  A particular imperative could have a simple event b to identify Success; in that case simple event b° will identify its Failure.

6.  Event structures

The following definitions and remarks refer to such an event structure E.

Definition.  e ≤ e’ if and only if there is an event f such that e’.f = e. 

Analogy: a conjunction implies its conjuncts, and if A implies B then A is logically equivalent to (A & C) for some sentence C.  

The definition is not the standard one, so we need to verify that it does give us a partial order, fitting with the meet operator.

[2]  The relation ≤ is a partial ordering, and e.f is the glb of e and f.

That is, we have the familiar semilattice laws:  e.g. if  e ≤ e’ and f is any other event then f.e ≤ e’.

So <E, ., ≤ > is a meet semilattice.  Note also that if a and b are simple events then a ≤ b only if a = b.  For if b.f = a, the Postulate implies that b = f = a.
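In the frozenset sketch above (a hypothetical encoding: events as frozensets of simple-event names, meet as union), the defined order comes out as reverse inclusion, and the glb property of [2] can be checked:

```python
def meet(e, f):
    return e | f                # combination of events as set union

def leq(e, f):
    """e ≤ f iff there is g with f.g = e; with meet as union, that is f ⊆ e."""
    return f <= e

e, f = frozenset({"a", "b"}), frozenset({"b", "c"})
g = meet(e, f)
assert leq(g, e) and leq(g, f)                    # e.f is a lower bound of e and f
h = frozenset({"a", "b", "c", "d"})               # some other lower bound
assert leq(h, e) and leq(h, f) and leq(h, g)      # ...and it lies below e.f
```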

So far we have a relation of contrariety for simple events only.  For events in general we need to define a general contrariness relationship.

Definition. Event e is contrary to event e’ if and only if there is an event a in E0 such that e ≤ a and e’ ≤ a° .

Contrariety is symmetric because a°°  = a.  

At this point we can see that the logic we are after will not be classical.  For contrariety is not irreflexive.  

That is because (a.a°) ≤ a and (a.a°) ≤ a°, so (a.a°) is contrary to itself.  But (a.a°) is not the bottom of the semilattice.  If a, a°, and b are distinct simple events then it is not the case that (a.a°) ≤ b.  For if b.f = a.a°  and f = a1 … an then the Postulate requires {b, a1, …, an} = {a, a°} so either b = a or b = a° .
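In the same frozenset sketch the self-contrary impossible event a.a° is easy to exhibit (encoding hypothetical, as before):

```python
def inv(a):
    return a[:-1] if a.endswith("*") else a + "*"

def leq(e, f):                  # e ≤ f iff f ⊆ e in the frozenset model
    return f <= e

def contrary(e, f):
    """e is contrary to f iff some simple a has e ≤ a and f ≤ a°."""
    return any(leq(e, frozenset({x})) and leq(f, frozenset({inv(x)})) for x in e)

impossible = frozenset({"a", "a*"})               # the event a.a°
assert contrary(impossible, impossible)           # contrariety is not irreflexive
assert not leq(impossible, frozenset({"b"}))      # yet a.a° is not the bottom
```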

It is tempting to get rid of this non-classical feature.  Just reducing modulo some equivalence may erase the distinction between those impossible events, a.a°  and b.b° .  Such events can never occur anyway.  

But there are two reasons not to do so.  The first is that the history of deontic logic has run on puzzles and paradoxes that involve apparent self-contradictions.  The second is more general.  We don’t know what new puzzles may appear, whether about imperatives or related topics, but we hope to have resources to represent whatever puzzling situation we encounter. Erasing distinctions reduces our resources, and why should we do that?

7. The language, imperatives, and truthmakers

More formally now, let us introduce a language, and call it LIMP.  Its syntax is just the usual sentential logic syntax (atomic sentences, &, v, ┐).  The atomic sentences will in a specific application include sentences in natural language, such as “One does not eat with one’s fingers”.  The interpretations will treat those sentences not as statements of fact but as encoding imperatives.  In each case, the interpretation will supply what a context (such as a book of etiquette) supplies to set up the coding.

An interpretation of language LIMP in event structure E = <E, E0, ., °> begins with a function f that assigns a simple event to each atomic sentence.  Then there are two functions, < | and | >, which assign sets of truth-makers to each sentence:

  • If A is atomic and a = f(A) then <A| = {e in E:  e ≤ a} and |A> = {e in E:  e ≤ a°}.
  • <┐A| = |A> and |┐A> = <A|
  • <A & B| = <A| ∩ <B|,  |A & B> = |A> ∪ |B>
  • <A v B| = <A| ∪ <B|,  |A v B> = |A> ∩ |B>
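These four clauses can be computed by brute force over a small finite event structure.  The sentence encoding and event names below are my own, for illustration:

```python
from itertools import combinations

def inv(x):
    return x[:-1] if x.endswith("*") else x + "*"

E0 = ["a", "a*", "b", "b*"]
E = [frozenset(c) for r in range(1, len(E0) + 1)
     for c in combinations(E0, r)]        # every event, as a set of simple events

def truthmakers(A, f):
    """Return the pair (<A|, |A>) for a sentence A and atomic assignment f."""
    op = A[0]
    if op == "atom":
        a = f[A[1]]
        return ({e for e in E if a in e}, {e for e in E if inv(a) in e})
    if op == "not":
        S, F = truthmakers(A[1], f)
        return (F, S)
    SB, FB = truthmakers(A[1], f)
    SC, FC = truthmakers(A[2], f)
    return (SB & SC, FB | FC) if op == "and" else (SB | SC, FB & FC)

f = {"p": "a", "q": "b"}
S, F = truthmakers(("and", ("atom", "p"), ("atom", "q")), f)
assert frozenset({"a", "b"}) in S and frozenset({"a", "b*"}) in F
```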

Definition.  A set X of events is downward closed iff  for all e, e’ in E, if e ≤ e’ and e’ is in X then e is in X.

[3]  For all sentences A, <A| and |A> are downward closed sets.

Now we can also show that our connector ┐, introduced to identify the weakest contrary to a given imperative, corresponds (as it should) to a definable operation on sets of events.

Definition.  If X ⊆ E then X⊥ = {e: e is contrary to all elements of X}.

I will call X⊥ the contrast (or contrast class) of X.

Lemma.  X⊥ is downward closed.

That is so even if X itself is not downward closed.  For suppose that f is in X⊥.   Then for all members e of X there is a simple event a such that f ≤ a and e ≤ a°.  Thus for any event g, also g.f ≤ a while e ≤ a°.  Therefore g.f is also in X⊥.

[4]  For all sentences A, <┐A| = |A> = <A|⊥  and |┐A> = <A| = |A>⊥.

The proof depends on De Morgan’s laws for downward closed sets of events:

Lemma.  If X and Y are downward closed sets of events then 

(X ∩ Y) ⊥  = X ⊥ ∪ Y ⊥   and (X ∪ Y) ⊥ = X ⊥ ∩  Y ⊥.
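The two laws can be checked by brute force in the finite frozenset model (hypothetical encoding as before):

```python
from itertools import combinations

def inv(x):
    return x[:-1] if x.endswith("*") else x + "*"

E0 = ["a", "a*", "b", "b*"]
E = [frozenset(c) for r in range(1, 5) for c in combinations(E0, r)]

def contrary(e, f):
    """Some simple x with e ≤ x and f ≤ x°, i.e. x in e and x° in f."""
    return any(inv(x) in f for x in e)

def contrast(X):
    """X⊥: all events contrary to every element of X."""
    return {e for e in E if all(contrary(e, x) for x in X)}

def down(a):
    """The downward closed set of truthmakers {e : e ≤ a}."""
    return {e for e in E if a in e}

X, Y = down("a"), down("b")
assert contrast(X & Y) == contrast(X) | contrast(Y)
assert contrast(X | Y) == contrast(X) & contrast(Y)
```

Note that X and Y here are downward closed, as the Lemma requires; the first equation can fail for arbitrary sets.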

In view of [4], there is therefore an operator on closed sets of events that corresponds to negation of imperatives:

Definition.  If A is any sentence then  <A> ⊥  = (<A| ⊥ , |A> ⊥ ).

[5]   <A> ⊥ =  <┐A>

This follows at once from [4] by this definition of the ⊥ operator on imperatives.

8. Logic of imperatives

We will concentrate here, not on the connections between sentences A, but on connections between their semantic values <A>.  These are the imperatives, imperative propositions if you like, and they form an algebra.  

Recall the definition of entailment for imperatives.  It will be convenient to have a symbol for this relationship:

Definition.   <A> ⇒ <B> exactly if <A| ⊆ <B| and |B> ⊆ |A>. 

 The following theorems introduce the logical principles that govern reasoning with imperatives.

[6]  Entailment is transitive.

To have the remaining results in reader-friendly fashion, let’s just summarize them.

[7] – [11] 

  • Meet.
    • <A & B> ⇒ <A>, 
    • <A & B> ⇒ <B>
    • If <X> ⇒  <A> and <X> ⇒  <B> then <X> ⇒ <A & B> 
  • Join.
    • <A> ⇒ <A v B>
    • <B> ⇒ <A v B>
    • If <A> ⇒ <X> and <B> ⇒ <X> then <A v B> ⇒ <X>
  • Distribution:  <A &(B v C)> ⇒ <(A & B) v (A & C)>.
  • Double Negation. <A> ⇒ < ┐ ┐ A>  and < ┐ ┐ A>  ⇒ <A>.
  • Involution.  If <A> ⇒ <B> then <┐B> ⇒ <┐A>.
  • De Morgan.
    • < ┐ (A & B)> ⇒ < ┐A v ┐B> and vice versa
    • < ┐ (A v B)> ⇒ < ┐A & ┐B> and vice versa.

COMMENTS.   In order for these results to make proper sense, each of the connectors ┐, &, v needs to correspond to an operator on imperatives, modeled as couples of downward closed sets of events. This was shown in the previous section.

The logic of imperatives is not quite classical.  We can sum up the above as follows: 

The logic of imperatives mirrors FDE (logic of first degree entailment); the imperatives form a De Morgan algebra, that is, a distributive lattice with De Morgan negation. 
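These FDE-style laws can be spot-checked mechanically in the finite frozenset model used in the sketches above (encodings hypothetical):

```python
from itertools import combinations

def inv(x): return x[:-1] if x.endswith("*") else x + "*"

E0 = ["a", "a*", "b", "b*"]
E = [frozenset(c) for r in range(1, 5) for c in combinations(E0, r)]

def tm(A, f):
    """The semantic value <A> = (<A|, |A>) as a pair of frozensets."""
    op = A[0]
    if op == "atom":
        a = f[A[1]]
        return (frozenset(e for e in E if a in e),
                frozenset(e for e in E if inv(a) in e))
    if op == "not":
        S, F = tm(A[1], f)
        return (F, S)
    SB, FB = tm(A[1], f)
    SC, FC = tm(A[2], f)
    return (SB & SC, FB | FC) if op == "and" else (SB | SC, FB & FC)

def entails(A, B):
    return A[0] <= B[0] and B[1] <= A[1]

f = {"p": "a", "q": "b"}
p, q = ("atom", "p"), ("atom", "q")
assert tm(("not", ("not", p)), f) == tm(p, f)                                  # Double Negation
assert tm(("not", ("and", p, q)), f) == tm(("or", ("not", p), ("not", q)), f)  # De Morgan
assert entails(tm(("and", p, ("or", q, p)), f),
               tm(("or", ("and", p, q), ("and", p, p)), f))                    # Distribution
```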

APPENDIX.  Definitions and proofs

Definition.  Imperative A entails imperative B exactly if <A| ⊆ <B| and |B> ⊆ |A>.

[1]  <A & B> entails <A>, and <A> entails <A v B>.

For <A & B| = <A| ∩ <B| ⊆ <A| while |A> ⊆ |A> ∪ |B> = |A & B>.  Similarly for the dual.

Postulate:  Each event is a unique finite combination of simple events.  

Definition.  An event structure is a quadruple E = <E, E0, ., °>, where E is a non-empty set, . is a binary operation on E, E0 is a non-empty subset of E, and ° is a unary operation on E0, such that:

  • ° is an involution: if a is in E0 then a° ≠ a and a°° = a
  • . is associative, commutative, idempotent (a ‘meet’ operation)
  • If e and e’ are elements of E then there are elements a1, …, ak, b1, …, bm of E0 such that e = a1…ak and e’ = b1…bm, and e = e’ if and only if {a1, …, ak} = {b1, …, bm}

[2]  The relation ≤ is a partial ordering, and the meet e.f of e and f is the glb of e and f.

For e ≤ e because e.e = e (reflexive); if e = e’.f and e’ = e”.g then e = e”.g.f (transitive); and if e = e’.f while e’ = e.g then unique decomposition gives e and e’ the same simple components, so e = e’ (antisymmetric).

(Perhaps clearer:  For if e = e’.f  then e.g = e’.f.g, so if e ≤ e’ then e.g ≤ e’, for all events g.)

            Concerning the glb: 

First, e.f ≤ e because there is an element g such that e.g = e.f, namely g = f.  

Secondly suppose e’ ≤ e and e’ ≤ f.  Then there are g and h such that e.g = e’ and f.h = e’.  In that case e’ = e’.e’ = (e.f).(g.h), and therefore e’ ≤ e.f. 

Definition. Event e is contrary to event e’ if and only if there is an event a in E0 such that e ≤ a and e’ ≤ a° .

Contrariety is symmetric because a°° = a.  But it is not irreflexive, for (a.a°) ≤ a and (a.a°) ≤ a°, so (a.a°) is contrary to itself.   

Lemma 1. If a and b are simple events then a ≤ b only if a = b.  

That is because decomposition into simple events is unique.  For suppose that a.f = b.  Then there are simple events c1, …, ck such that f = c1…ck and a.f = a.c1…ck = b, which implies that a = c1 = … = ck = b.

An interpretation of the imperatives expressed in language LIMP, in event structure E = <E, E0, ., °>, is given relative to a function f from atomic sentences to simple events.  There are two functions, < | and | >, which assign sets of truth-makers to each sentence:  

  • If A is atomic and a = f(A) then <A| = {e in E:  e ≤ a} and |A> = {e in E:  e ≤ a° }.
  • <┐A| = |A> and |┐A> = <A|
  • <A & B| = <A| ∩ <B|,  |A & B> = |A> ∪ |B>
  • <A v B| = <A| ∪ <B|,  |A v B> = |A> ∩ |B>

Definition.  A set X of events is downward closed iff  for all e, e’ in E, if e ≤ e’ and e’ is in X then e is in X.

[3]  For all sentences A, <A| and |A> are downward closed sets.

Hypothesis of induction: this is so for all sentences of length less than that of A.

Cases.

  1. A is atomic.  This follows from the first of the truth-maker clauses.
  2. A has form ┐B.  Then <B| and |B> are downward closed, and these are respectively |A> and <A|.
  3. A has the form (B & C) or (B v C).  Here it follows from the fact that intersections and unions of downward closed sets are downward closed.

Definition.  If X ⊆ E then X⊥ = {e: e is contrary to all elements of X}

Lemma 2.  X⊥ is downward closed.

Suppose that e is in X⊥.  Then for all e’ in X, there is a simple event a such that e ≤ a and e’ ≤ a°.  This implies, for any event f, that f.e ≤ a while e’ ≤ a°.  Hence f.e is also in X⊥.

[4]  For all sentences A, <┐A| = |A> = <A|⊥  and |┐A> = <A| = |A>⊥.

Hypothesis of induction: If B is a sentence of length less than that of A then <┐B| = |B> = <B|⊥  and |┐B> = <B| = |B>⊥.

Cases.

  1. A is atomic, and f(A) = a.  Then by the first truth-maker clause, all elements of |A> are contrary to all of <A|.  Suppose next that e is contrary to all of <A|, so e is contrary to a, hence there is a simple event b such that a ≤ b and e ≤ b° .  But then a = b, so e ≤ a° , hence e is in |A>. Similarly all elements of <A| are contrary to all elements of |A>, and the remaining argument is similar.
  2. A has form ┐B.  Then by hypothesis <┐B| = |B> = <B|⊥.  And <┐┐B| = |┐B> by the truthmaker conditions, and |┐B> = <B|, and the hypothesis applies similarly to this.   
  3. A has form (B & C)

We prove first that <┐A| = |A> = <A| ⊥

<A| = <B| ∩ <C|,  while <┐A| = |B & C> = |B>  ∪ |C>.  If e is in <┐A| then it is in  |B>  ∪ |C> so by hypothesis e is contrary either to all of <B| or to all of <C|, and hence to their intersection. 

Suppose next that e is in <A|⊥ = (<B| ∩ <C|)⊥.  To prove that e is in <┐A| = <┐(B & C)| = |B & C> = |B> ∪ |C> = <B|⊥ ∪ <C|⊥  it is required, and suffices, to prove the analogues of De Morgan’s Laws for downward closed sets.  See Lemma 3 below.

We prove secondly that  |┐A> = <A| = |A> ⊥ .  The argument is similar, with appeal to the same Lemma below.

  4. A has form (B v C).  The argument is similar to case 3, with appeal to the same Lemma below.

Lemma 3.  De Morgan’s Laws for event structures:   If X and Y are downward closed sets of events then  (X ∩ Y) ⊥  = X ⊥ ∪ Y ⊥   and (X ∪ Y) ⊥ = X ⊥ ∩  Y ⊥.

Suppose e is in X ⊥.  Then e is contrary to all of  X, hence to all of X ∩ Y, hence is in (X ∩ Y) ⊥. Similarly for e in Y ⊥.  Therefore (X ⊥ ∪ Y ⊥ ) ⊆ (X ∩ Y) ⊥.

Suppose on the other hand that e is in (X ∩ Y) ⊥.  Suppose additionally that e is not in X.  We need to prove that e is in Y ⊥.  

Let e’ be in X and not contrary to e.  Then if e’’ is any member of Y, it follows that e’.e’’ is in X ∩ Y, since X and Y are both downward closed.  Therefore e is contrary to e’.e’’.  We need to prove that e is contrary to e”.

Let b be a simple event such that e ≤ b and e’.e” ≤ b°.  By our postulate, e’ and e’’ have a unique decomposition into finite meets of simple events.  So let e’ = a1…ak and e’’ = c1…cm, so that e’.e” = a1…ak.c1…cm.  Since e’.e” ≤ b°, there is an event g such that a1…ak.c1…cm = e’.e’’ = g.b°.  The decomposition is unique, so b° is one of the simple events a1, …, ak, c1, …, cm.  Since e is not contrary to e’, it follows that none of a1, …, ak is b°.  Therefore, for some j in {1, …, m}, cj = b°, and therefore there is an event h such that e” = h.b°, in other words, e” ≤ b°.  Therefore e is contrary to e”.

So if e is not in X ⊥ then it is in Y ⊥, and hence in X ⊥ ∪ Y ⊥.

The argument for the dual equation is similar.

In view of the above, there is an operator on closed sets of events that corresponds to negation of imperatives:

Definition.  If A is any sentence then  <A> ⊥  = (<A| ⊥ , |A> ⊥ ).

[5]   <A> ⊥ =  <┐A>

(<A| ⊥ , |A> ⊥ ) =  (<┐A|, |┐A>), in view of [4].

Definition.   <A> ⇒ <B> exactly if <A| ⊆ <B| and |B> ⊆ |A>. 

 The following theorems introduce the logical principles that govern reasoning with imperatives.

[6]  Entailment of imperatives is transitive.

Suppose <A> ⇒ <B> and <B> ⇒ <C>.  Then <A| ⊆ <B| and <B| ⊆ <C|,  hence <A| ⊆ <C|.  Similarly, |C> ⊆|A>.

[7]  <A & B> ⇒ <A>, and if <X> ⇒ <A> and <X> ⇒ <B> then <X> ⇒ <A & B>.  Also <A> ⇒ <A v B>, and if <A> ⇒ <X> and <B> ⇒ <X> then <A v B> ⇒ <X>.

First, <A| ∩ <B| ⊆ <A| and |A> ⊆ |A> ∪ |B>, hence  <A & B> ⇒ <A>.  

Secondly, suppose that X is such that <X| ⊆ <A| and <X| ⊆ <B| while |A> ⊆ |X> and |B> ⊆ |X>.  Then <X| ⊆<A| ∩ <B| = <A& B| while |A & B> = |A> ∪ |B> ⊆ |X>.  Hence <X> ⇒ <A & B>.

The dual result for disjunction follows by a similar argument.

[8]  Distribution:  <A &(B v C)> ⇒ <(A & B) v (A & C)>.

<A &(B v C)| = <A| ∩ <B v C| = <A| ∩ (<B| ∪ <C|) = [<A| ∩ <B|] ∪ [<A| ∩ <C|] = <(A & B) v (A & C)|.  Similarly for the other part.

[9] Double Negation:  <A> ⇒ < ┐ ┐ A>  and < ┐ ┐ A>  ⇒ <A>.

< ┐ ┐ A| = |┐ A> = <A|  and |┐ ┐ A> = <┐ A| = |A>

[10]  Involution.  If <A> ⇒ <B> then <┐B> ⇒ <┐A>.

<┐B> ⇒ <┐A> exactly if <┐B| ⊆  <┐A|, i.e.  |B> ⊆  |A>,   and  |┐A> ⊆  |┐B>, i.e. <A| ⊆ <B|.  But that is exactly the case iff <A> ⇒ <B>    

[11]  De Morgan.  < ┐ (A & B)> ⇒ < ┐A v ┐B> and vice versa, while < ┐ (A v B)> ⇒ < ┐A & ┐B> and vice versa.

< ┐ (A & B)| = |A & B> = |A> ∪ |B> = < ┐A| ∪ < ┐B| = < ┐A v ┐B|.  Similarly for |┐(A & B)>.  Therefore < ┐ (A & B)> = < ┐A v ┐B>.

Similarly for the dual.

REFERENCES

Curry, Haskell B. (1963) Foundations of Mathematical Logic. New York: McGraw-Hill.

Lokhorst, Gert-Jan C. (1999) “Ernst Mally’s Deontik”. Notre Dame Journal of Formal Logic 40: 273–282.

Mally, Ernst (1926) Grundgesetze des Sollens: Elemente der Logik des Willens. Graz: Leuschner und Lubensky.

Rescher, Nicholas (1966) The Logic of Commands. London: Routledge and Kegan Paul.


NOTES

[1] Rescher traces this analysis of ‘ought’ statements to Ernst Mally (1926), who coined the name Deontik for his ‘logic of willing’. Since the logic of imperatives we arrive at here is non-classical, note that Lokhorst (1999) argues that Mally’s system is best formalized in relevant logic.

[2] We can use Dirac’s names for them, “bra” and “ket”, with no reference to their original use.

Deontic logic: Horty’s gambles (2)

In the second part of his 2019 paper Horty argues that there is a need to integrate epistemic logic with deontic logic, for “ought” statements often have a sense in which their truth-value depends in part on the agent’s state of knowledge.

I agree entirely with his conclusion. But is the focus on knowledge not too strict? Subjectively it is hard to distinguish knowledge from certainty — and apart from that, when we don’t have certainty, we are still subject to the same norms. So I would like to suggest that rational opinion, in the form of the agent’s actual subjective probability, is what matters.

Here I will examine Horty’s additional examples of gambling situations with that in mind. I realize that this is not sufficient to demonstrate my contention, but it will show clearly how the intuitive examples look different through the eyes of this less traditional epistemology.

Horty’s figure 4 depicts the following situation: I pay 5 units to be offered one of two gambles X1, X2 on a coin toss. My options will be to bet Heads, to bet Tails, or Not To Gamble. But I will not know which gamble it is! You, the bookmaker, will independently flip a coin to determine that, and not tell me the outcome. In the diagram shown here, X1 is the gamble on the left and X2 the gamble on the right.

On Horty’s initial analysis, if in actual fact I am offered X1 then I should bet Heads, since that has the best outcome. But as he says, rightly, I could not be faulted for not doing that, since I did not know whether I was being offered X1 or X2.

Even if the conclusion is the same, the situation looks different if the agent acts on the basis of the expectation values of the options available to him. The alternatives depicted in the diagram are equi-probable (we assume the coins are fair). So for the agent, who has paid 5 units, his net expectation value for betting Heads (in this situation where it is equally probable that he is betting in X1 or in X2) is the average of gaining 5 and losing 5. The expectation value is 0. Similarly for the option of betting Tails, and similarly for the option of Not Gambling: each has net expectation value 0. So in this situation it just is not true that the agent ought to take up any of these options — it is indifferent what he does.

Horty offers a second example, where the correct judgment is that I ought not to gamble, to show that his initial analysis failed to entail that. Here is the diagram, to be interpreted in the same way as above — the difference is in the value of the separate possible outcomes.

Reasoning by expectation value, the agent concludes that indeed she ought not to gamble. For by not gambling the payoff is 5 with certainty, while the expectation value of Betting Heads, or of Betting Tails, is 2.5.
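The expectation-value reasoning for both examples can be set out as a sketch (payoffs as described in the text, with the 5-unit stake netted out in the first example):

```python
def expectation(option):
    """Expected net value of an option, given as (probability, payoff) pairs."""
    return sum(p * v for p, v in option)

# First example: betting Heads while X1/X2 are equiprobable averages +5 and -5.
assert expectation([(0.5, 5), (0.5, -5)]) == 0       # same for Tails and Not Gamble
# Second example: not gambling pays 5 for sure; either bet averages 5 and 0.
assert expectation([(1.0, 5)]) > expectation([(0.5, 5), (0.5, 0)])   # 5 > 2.5
```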

So on this analysis as well we reach the right conclusion: the agent ought not to gamble.

Entirely in agreement with Horty is the conclusion that these situations are adequately represented only if we bring epistemology into play. What the agent ought to do is not to be equated with what it would objectively, in a God’s eye, be best for her to do. It is rather what she ought to do, given her cognitive/epistemic/doxastic situation in the world. But she cannot make rational gambling decisions in general if her knowledge (or certainty) is all she is allowed to take into account.

It would be instructive to think also about the case in which it is known that the coin has a bias, say that on each toss (including the hidden first toss) it will be three times as likely as not to land heads up. Knowledge will not be different, but betting behavior should be.

Deontic logic: Horty’s gambles (1)

In “Epistemic oughts in Stit Semantics” Horty’s main argument is that an epistemic logic must be integrated into a satisfactory deontic logic. This is needed in order to account for a sense in which what an agent ought to do hinges on a role for knowledge (“epistemic oughts”).

That argument occupies the second part of his paper, and I hope to explore it in a later post. But the first part of the paper, which focuses on a general concept of what an agent ought to do (ought to see to) is interesting in itself, and crucial for what follows. I will limit myself here to that part.

I agree with a main conclusion reached there, which is that the required value ordering is not of the possible outcomes of action but of the choices open to the agent.

However, I have a problem with the specific ordering of choices that Horty defines, which it seems to me faces intuitive counterexamples. I will propose an alternative ordering principle.

At a given instant t an agent has a variety V(h, t) of possible futures in history h. I call V(h, t) the future cone of h at t. But certain choices {K, K’, …} are open to the agent there, and by means of a given choice K the agent may see to it that the possible futures will be constrained to be in a certain subset V(K, h, t) of V(h, t).

The different choices are represented by these subsets of V(h, t), which form a partition. Hence the following is well defined for histories in V(h, t): the choice made in history h at t is the set V(K, h, t) to which h belongs; call it CA(h, t), thinking of “CA” as standing for “actual choice”.

In the diagram K1 is the set of possible histories h1 and h2, and so CA(h1, t) = K1 = CA(h2, t). (Note well: I speak in terms of instants t of time, rather than Horty’s moments.)

And the statement that the agent sees to it that A is true in h at t exactly if A is true in all the possible futures of h at t that belong to the choice made in history h at t. Briefly put: CA(h, t) ⊆ A.

The Chisholm/Meinong analysis identifies what ought to be with what it is maximally good to be the case. Thus, at a given time, it ought to be that A if A is the case in all the possible futures whose value is maximal among them. So applied to a statement about action, that means: it ought to be that the agent sees to it that A is true in h at t exactly if all the histories in the choice made in history h at t are of maximal value. That is, if h is in CA(h, t) and h’ is in V(h, t) but outside CA(h, t), then h’ is no more valuable than h.

But this analysis is not correct, as Horty shows with two examples of gambles. In each case the target proposition is G: the agent gambles, identified with the set of possible histories in which the agent takes the offered gamble. This is identified with: the agent sees to it that G. Hence the two choices, K1 and K2, open to the agent in h at t are represented by the intersection of V(h, t) with G and with ~G respectively.

In the first example the point made is that according to the above analysis, it is generally the case that the agent ought to gamble, since the best possible outcome is to win the gamble, and that is possible only if you gamble. That is implausible on the face of it — and in that first example, we see that the gambler could make sure that he gets 5 units by not gambling, which looks like a better option than the gamble, which may end with a gain of 10 or nothing at all. While someone who values gambling for the risk itself might agree, we can’t think that this is what he ought to do. The second example is the same except that winning the gamble would only bring 5 units, with a risk of getting 0, while not gambling brings 5 for sure. In this case we think that he definitely ought not to gamble, but on the above analysis it is not true either that he ought to gamble or ought not to gamble.

Horty’s conclusion, surely correct, is that what is needed is a value ordering of the choices rather than of the possible outcomes, though there may (perhaps should) be a connection between the two.

Fine, but Horty defines that ordering as follows: choice K’ (weakly) dominates choice K if none of the possible histories in K are better than any of those in K’. (See NOTES below, about this.) The analysis of ‘ought’ is then that the agent ought to see to it that A exactly if all his optimal choices make A definitely true.

Suppose the choice is between two lotteries, each of which sells a million tickets, and has a first prize of a million dollars, and a second prize of a thousand dollars. But only the second lottery has many consolation prizes worth a hundred dollars each. Of course there are also many outcomes of getting no prize at all. There is no domination to tell us which gamble to choose, but in fact, it seems clear that the choice should be the second gamble. That is because the expectation value of the second gamble is the greater.

This brings in the agent’s opinion, his subjective probability, to calculate the expectation value. It leads in this case to the right solution. And it does so too in the two examples above that Horty gave, if we think that the individual outcomes were in each case equally likely. For then in the first example the expectation value is 5 in either case, so there is no forthcoming ought. In the second example, the expectation value of gambling is 2.5, smaller than that of not gambling which is 5, so the agent ought not to gamble.
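The expectation-value reasoning in the last paragraph can be checked with a few lines of code. This is a minimal sketch, assuming (as the text does) that the outcomes of each choice are equally likely; the function name `expectation` is mine, not Horty's.

```python
def expectation(outcomes):
    """Mean value of a list of equally likely outcome values."""
    return sum(outcomes) / len(outcomes)

# Example 1: gambling yields 10 or 0; not gambling yields 5 for sure.
ev_gamble_1 = expectation([10, 0])   # 5.0
ev_refrain_1 = expectation([5])      # 5.0
# Equal expectation values: no 'ought' is forthcoming either way.

# Example 2: gambling yields 5 or 0; not gambling yields 5 for sure.
ev_gamble_2 = expectation([5, 0])    # 2.5
ev_refrain_2 = expectation([5])      # 5.0
# Refraining has the greater expectation: the agent ought not to gamble.
```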

So, tentatively, here is my conclusion. Horty is right on three counts. The first is that the Chisholm/Meinong analysis, with its role for the value ordering of the possible outcomes, is faulty. The second is that the improvement needed is that we rely, in the analysis of ought statements, on a value ordering of the agent’s choices. And the third is that an integration with epistemic logic is needed, ….

…. but — I submit — with a logic of opinion rather than of knowledge.

NOTES

John Horty “Epistemic Oughts in Stit Semantics”. Ergo 6 (2019): 71-120

Horty’s definition of dominance is this:

K ≤ K’ (K’ weakly dominates K) if and only if Value(h) ≤ Value(h’) for each h in K and h’ in K’; and K < K’ (K’ strongly dominates K) if and only if K ≤ K’ and it is not the case that K’ ≤ K.

This ordering gives the right result for Horty’s second example (ought not to gamble), while in the first example neither choice dominates the other. But the demand that every possible outcome of choice K’ be at least as good as every outcome in K seems to me too strong for a feasible notion of dominance. For example, if the values of outcomes in one choice are 100 and 4, while in the other they are 5 and 4, this definition does not imply that the first choice weakly dominates the second, since 5 (in the second) is larger than 4 (in the first); yet intuitively, surely, the first choice is the one to prefer.
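The point can be made concrete in code. Here a choice is represented simply by the list of values of its possible outcome histories; this is a sketch of Horty's weak-dominance definition as quoted above, with the example values from the previous paragraph.

```python
def weakly_dominates(k_prime, k):
    """K' weakly dominates K iff Value(h) <= Value(h') for each
    h in K and each h' in K'."""
    return all(v <= v_prime for v in k for v_prime in k_prime)

first, second = [100, 4], [5, 4]
# Neither choice weakly dominates the other: 5 (in the second choice)
# exceeds 4 (in the first), and 100 (in the first) exceeds 5 (in the second).
assert not weakly_dominates(first, second)
assert not weakly_dominates(second, first)

# By contrast, a choice with outcomes {5, 4} does weakly dominate {4, 3}.
assert weakly_dominates([5, 4], [4, 3])
```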

A temporal framework, plus

Motivation: I have been reading John Horty’s (2019) paper integrating deontic and epistemic logic with a framework of branching time. As a preliminary to exploring his examples and problem cases I want to outline one way to understand indeterminism and time, and a simple way in which such a framework can be given ‘attachments’ to accommodate modalities. Like Horty, I follow the main ideas introduced by Thomason (1970), and developed by Belnap et al. (2001).

The terms ‘branching time’ and ‘indeterminist time’ are not apt: it is the world, not time, that is indeterministic, and the branching tree diagram depicts possible histories of the world. I call a proposition historical if its truth or falsity in a world depends solely on the history of that world. At present I will focus solely on historical propositions, and so worlds will not be separately represented in the framework I will display here.

We distinguish what will actually happen from what is settled now about what will happen. To cite Aristotle’s example: on a certain day it is unsettled whether or not there will be a sea-battle tomorrow. However, what is settled does not rule out that there will be a sea-battle, and this too can be expressed in the language: some things may or can happen and others cannot.

Point of view: The world is indeterministic, in this view, with the past entirely settled (at any given moment) but the future largely unsettled. Whatever constraints there are on how things may come to be must derive from what has been the case so far, and similarly for whatever basis there is for our knowledge and opinion about the future. Therefore (?), our possible futures are the future histories of worlds whose history agrees with ours up to and through now.

Among the possible futures we have one that is actual: it is what will actually happen. This has been a subject of controversy; how could the following be true:

there will actually be a sea battle tomorrow, but it is possible that there will not be a sea battle tomorrow?

It can be true if ‘possible’ means ‘not yet settled that not’. (See Appendix for connection with Medieval puzzles about God’s fore-knowledge.)

Representation: A temporal framework is a triple T = <H, R, W>, where H is a non-empty set (the state-space), R is a set of real numbers (the calendar), and W is a set of functions that map R into H (the trajectories, or histories). Elements of H are called the states, elements of R the times.

(Note: this framework can be amended, for example by restrictions on what R must be like, or having the set of attributes restricted to a privileged set of subsets of H, forming a lattice or algebra of sets, and so forth.)

Here is a typical picture to help the imagination. Note, though, that it may give the wrong impression. In an indeterministic world, possible futures may intersect or overlap.

If h is in W and t in R then h(t) is the state of h at time t. Since many histories may intersect at time t, it is convenient to use an auxiliary notion: a moment is a pair <h, t> such that h(t) is the state of h at t.

An attribute is a subset of H, a proposition is a subset of W. For tense logic, what is more interesting is tensed propositions, which is to say, proposition-valued functions of time.

Basic propositions: if R is a region in the state-space H, the proposition R^(t) = {k in W: k(t) is in R} is true in history h at time t exactly if h(t) is in R. It is natural to read R^(t) as “it is R now”. If R is the attribute of being rainy then R^(t) would thus be read as “It is raining”.

I will let ‘A(t)’ stand for any proposition-valued function of time; the above example in which R is a region in H, is a special case. For any particular value of t, of course, A(t) is just a proposition, it is the function A(…) that is the tensed proposition. The family of basic propositions can be extended in many ways; first of all by allowing the Boolean set operations: A.B(t) = A(t) ∩ B(t), and so forth. We will look at more ways as we go.

Definitions:

  • histories h and k agree through t (briefly h =/t k) exactly if h(t’) = k(t’) for all t’ ≤ t.
  • H(h, t) = {k in W: h =/t k} is the t-cone of h, or the future cone of h at t, or the future cone of the moment <h, t>.
  • SA(t) = {h in W: H(h, t) ⊆ A(t)}, the proposition that it is settled at t that A(t)

The term “future cone” is not quite apt since H(h, t) includes the entire past of h, which is common to all members of H(h, t). But the cone-like part of the diagram is the set of possible futures for h at t.

Thus S, “it is settled that”, is an operator on tensed propositions. For example, if R is a region in the state-space then SR^(t) is true in h at t exactly if R has in it all histories in the t-cone of h. Logically, S is a sort of tensed S5-necessity operator. In Aristotle’s sea-battle example, nothing is settled on a certain evening, but early the next morning, as the fleets approach each other, it is settled that there will be a sea-battle.

There are two important notions related to settled-ness: a tensed proposition A(t) is backward-looking iff membership in A(t) depends solely on the world’s history up to and including t. That is equivalent to: A(t) is part of SA(t), and hence that A(t) = SA(t). If A is a region in H then A^(t) is backward-looking iff each future cone is either entirely inside A, or else entirely disjoint from A.

Similarly, A is sedate if h being in A(t) guarantees that h is in A(t’) for all t’ later than t (that world has, so to say, settled down into being such that A is true). Note well that a backward-looking proposition may be “about the future”, because in some respects the future may be determined by the past. Examples of sentences expressing such propositions:

“it has rained” is both backward-looking and sedate, “it will have rained” is sedate but not backward looking, and “it will rain” is neither.

Tense-modal operators can be introduced in the familiar way: “it will be A”, “it was A”, and so forth express obvious tensed propositions, e.g. FA(t) = {h in W: H(h, t’) ⊆ A(t’) for some t’ > t}. More precise reckoning can also be introduced. For example, if the numbers in the calendar represent days, then “it will be A tomorrow” expresses the tensed proposition TomA(t) = {h in W: h(t+1) is in A}.
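The definitions above can be sketched in a small discrete model. This is an illustrative toy, not part of the formal framework: the calendar is {0, 1, 2}, histories are tuples of states indexed by time, and the state names for the sea-battle example are mine.

```python
# Three possible histories of the world around a possible sea-battle.
W = [('calm', 'battle', 'battle'),   # the fleets engage on day 1
     ('calm', 'calm',   'battle'),   # they engage on day 2
     ('calm', 'calm',   'calm')]     # they never engage

def agree_through(h, k, t):
    """h =/t k: h and k agree at every time up to and including t."""
    return all(h[u] == k[u] for u in range(t + 1))

def cone(h, t):
    """H(h, t): the t-cone of h, all histories agreeing with h through t."""
    return [k for k in W if agree_through(h, k, t)]

def settled(A, h, t):
    """S A(t): every history in the t-cone of h is in A(t)."""
    return all(k in A(t) for k in cone(h, t))

# "There will be a battle" as a tensed proposition.
will_battle = lambda t: [h for h in W
                         if any(h[u] == 'battle' for u in range(t + 1, 3))]

h = W[0]
assert not settled(will_battle, h, 0)  # on the eve, nothing is settled
assert settled(will_battle, h, 1)      # the next day the battle is settled
```

Note how settledness does the work here: at t = 0 all three histories are still in the cone, and one of them is battle-free, so "there will be a battle" is not yet settled.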

Attachments

If T is a temporal framework then an attachment to T is any function that assigns new elements to any entities definable as belonging to T. The examples will make this clear.

Normal modal logic

Let T = <H, R, W> be a temporal framework and REL a binary relation on W. Define:

◊A^(t) = {h in W: for some k in W such that REL(h, k), k(t) is in A}

Read as the familiar ‘relative possibility’ relation in standard possible world semantics, a sentence expressing ◊A^(t) would be of the form “it is possible that it is raining”.

But such a modal logic has various instances. In addition to alethic modal logic, there is for example a basic epistemic logic where the models take this form. There, possibility is compatibility with the agent’s knowledge, ‘possible for all I know’. In that case a reading of ◊A^(t) would be “It is possible for all I know that it is raining”, or “I do not know that it is not raining”.
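A minimal sketch of this attachment, under illustrative assumptions: histories are tuples of states, REL is given as a set of pairs of history names, and the epistemic reading is the one just described.

```python
# Histories (names and states are illustrative).
W = {'h1': ('dry', 'rain'), 'h2': ('dry', 'dry')}
# REL as a set of accessible pairs, e.g. epistemic accessibility.
REL = {('h1', 'h1'), ('h1', 'h2'), ('h2', 'h2')}

def diamond(A, h, t):
    """◊A^(t): some REL-accessible history k has k(t) in the region A."""
    return any((h, k) in REL and W[k][t] in A for k in W)

rainy = {'rain'}
assert diamond(rainy, 'h1', 1)      # for all h1 'knows', it may rain at t=1
assert not diamond(rainy, 'h2', 1)  # h2 accesses only itself, where it is dry
```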

Deontic logic

While deontic logic began as a normal modal logic, it has now a number of forms. An important development occurred when Horty introduced the idea of reasons and imperatives as default rules in non-monotonic logic. There is still, however, a basic form that is common, which we can here attach to a temporal framework.

To each moment we attach a situation in which an agent is facing choices. What ought to be the case, or to be done, depends on what it is best for this agent to do. Horty has examples to show that this is not determined simply by an ordering of the possible outcomes, it has to be based on what is best among the choices. (The better-than ordering of the choices can be defined from a better-than ordering of the possible outcomes, as Horty does. But that is not the only option; it could be based for example on expectation values.)

Let T = <H, R, W> be a temporal framework and SIT a function that assigns to each moment m = <h, t> a situation, represented by a family Δ of disjoint subsets of the future cone of m, plus an ordering of the members of Δ. The cells of Δ are called choices: if X is in Δ then X represents the choice to see to it that the actual future will be in X. The included ordering ≤ of sets of histories may be constrained or generated in various ways, or made to depend on specific factors such as h or t. Call X in Δ optimal iff for all Y in Δ, if X ≤ Y then Y ≤ X. Then one way to explicate ‘Ought’ is this:

OA(t) = {h in W: for some optimal member X of Δ in SIT(<h, t>), X ⊆ A(t)}

This particular formulation allows for ‘moral dilemmas’, that is cases in which more than one cell of Δ is optimal and each induces an undefeated obligation. That is, there may be mutually disjoint tensed propositions A(t) and B(t) such that a given history h is both in OA(t) and in OB(t), presenting a moral dilemma.
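The 'ought' attachment, including the possibility of moral dilemmas, can be sketched as follows. The choice cells, history names, and numeric values are illustrative assumptions; the optimality condition is the one just defined (X is optimal iff no Y is strictly better).

```python
def optimal(choices, value):
    """Cells X of the partition such that no cell Y is strictly better."""
    return [x for x in choices
            if not any(value(x) < value(y) for y in choices)]

def ought(A, choices, value):
    """OA: some optimal choice is contained in the proposition A."""
    return any(set(x) <= set(A) for x in optimal(choices, value))

# Two equally valuable, disjoint choices induce a 'moral dilemma':
delta = [['h1', 'h2'], ['h3', 'h4']]
equal = lambda x: 1                  # both cells optimal
A, B = ['h1', 'h2'], ['h3', 'h4']    # mutually disjoint propositions
assert ought(A, delta, equal) and ought(B, delta, equal)

# With a strict ranking the dilemma disappears:
ranked = lambda x: 2 if x == ['h1', 'h2'] else 1
assert ought(A, delta, ranked) and not ought(B, delta, ranked)
```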

An alternative formulation could base what ought to be only on the choice that is uniquely the best, and ensure that there is always such a choice that is ‘best, all considered’.

Subjective probability

We imagine again an agent in a situation at each moment <h, t>, this time with opinion, represented by a probability function P<h,t> defined on the propositions. (If the state-space is ‘big’, the attributes must be restricted to a Boolean algebra (field) of subsets of the state-space, and the family of propositions similarly restricted.)

This induces an assignment of probabilities to tensed propositions: thus if R is a region in H, P(R^(t)) = r is true in h at t exactly if P<h, t>({k in W: k(t) is in R}) = r. Similarly, the probability that FR^(t) is true, evaluated in h at t, is P<h,t>({k in W: k(t’) is in R for some t’ > t}). So if R stands for the rainy region of possible states, this is the agent’s opinion, in moment <h, t>, that it will rain.

In view of the above remarks about the dependency of future on the past, the subjective probabilities will tend to be severely constrained. One natural constraint is that if h =/t h’ then P<h,t> = P<h’,t>.

In Horty’s (2019) examples (which I would like to discuss in a sequel) it is clear that the agent knows (or is certain about) which futures are possible. In that case, at each moment, the future cone of that moment has probability 1. For any proposition A(t), its probability at <h, t> equals the probability of A(t) ∩ H(h, t).
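The last point — the cone gets probability 1, so every proposition's probability equals that of its overlap with the cone — can be sketched numerically. The history names and prior weights here are illustrative assumptions.

```python
from fractions import Fraction

cone = {'h1', 'h2'}                   # possible futures at the moment <h, t>
weight = {'h1': 2, 'h2': 1, 'h3': 5}  # unnormalized opinion over histories

def P(A):
    """Probability at <h, t>: conditionalized on the cone, so that
    P(A) = P(A ∩ H(h, t)) and the cone itself gets probability 1."""
    total = sum(weight[k] for k in cone)
    return Fraction(sum(weight[k] for k in set(A) & cone), total)

assert P(cone) == 1
assert P({'h1', 'h3'}) == Fraction(2, 3)  # 'h3', outside the cone, is ignored
```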

APPENDIX

I am not unsympathetic to the view that only what is settled is true. But the contrary is also reasonable, and simpler to represent. However, we face the puzzle that I noted above, about whether it makes sense to say that we have different possible futures, though one is actual, and future tense statements are true or false depending on what the actual future is.

In the Middle Ages this came up as the question of the compatibility between God’s foreknowledge and free will. If God, being omniscient, knew already at Creation that Eve would eat the apple, and that Judas would betray Jesus, then it was already true then that they would do that. Doesn’t that imply that it wasn’t up to them, that they had no choice, that nothing they could do would alter the fact that they were going to do that?

No, it does not imply that. God knew that they would freely decide on what they would do, and also knew what they would do. If that is not clearly consistent to you — as I suppose it shouldn’t be! — I would prefer to refer you to the literature, e.g. Zagzebski 2017.

REFERENCES

(I adapted the diagrams from this website)

Belnap, Nuel; Michael Perloff, and Ming Xu (2001) Facing the Future; Agents and Choices in our Indeterministic World. New York: Oxford University Press.

Horty, John (2019) “Epistemic Oughts in Stit Semantics”. Ergo 6: 71-120.

Müller T. (2014) “Introduction: The Many Branches of Belnap’s Logic”. In: Müller T. (eds) Nuel Belnap on Indeterminism and Free Action. Outstanding Contributions to Logic, vol 2. Springer, Cham. https://doi.org/10.1007/978-3-319-01754-9_1

Thomason, R. H. (1970) “Indeterminist Time and Truth Value Gaps,” Theoria 36: 264-281. 

Zagzebski, Linda (2017) “Foreknowledge and Free Will“, The Stanford Encyclopedia of Philosophy (Summer 2017 Edition), Edward N. Zalta (ed.).

Deontic logic: Horty’s new examples

Although there was much historical precedent, contemporary deontic logic began with Alan Anderson’s explication of “it is forbidden that A” as “if A then there is a sanction”, taking the sanction to be a proposition h (“all hell breaks loose”, as he liked to say). The immediate problem, which resisted all efforts at solution, was to find a conditional “if … then” that would fit this idea.

The early problems, and the ‘classical’ response

When deontic logic was then popularly couched as a normal modal logic, with “it ought to be that” taken to be a non-factive ‘necessity’ operator, there were once again many problems about the conditional. For example, “You ought to make restitution if you have stolen” should be consistent with, but not logically implied by, “You ought not to steal”. The generally accepted new turn consisted in reading the “if” in such examples not as a conditional connective but on the model of conditionalization in probability theory. Roughly speaking, OA is true if A is better (in some value scheme applying to propositions) than ~A, and O(A given B) is true if, similarly, (A & B) is better than (~A & B).

Let us call this sort of account, in terms of possible worlds, propositions equated to sets of worlds, and some form of evaluation of propositions, the classical semantics of deontic logic. It faced just one challenge: the submission that irresolvable moral conflicts are possible (whether absolutely, or under certain factual circumstances). After a slow increase in discussion of this putative problem, over a number of years, the truly new development was John Horty’s recasting of deontic logic as a non-monotonic logic, with moral imperatives represented as default rules (Reasons as Defaults, 2012).

Horty’s critique

In a later paper (2014) Horty shows that the classical semantics, as developed in greater generality by Angelika Kratzer, could handle much of what can be done in non-monotonic logic. But Horty offered two quite simple examples showing that there were still problems about conditionality, which motivate the switch from the classical semantics to default theories.

What I want to do here is to describe Horty’s new examples, and then see whether they can be handled satisfactorily in the ‘hybrid’ semantics that I outlined in the previous post “Deontic logic: two paradoxes”.

Example 1. Etiquette requires me to follow the two imperatives ‘Don’t eat with your fingers’ and ‘If you are served cold asparagus, eat it with your fingers.’  Of these two, the first is defeasible (its priority is lower than that of the second).

(Note: in Horty’s form of deontic logic the imperatives have a priority ranking, which represents in this case something conveyed in the book of etiquette, in addition to the set of imperatives it presents.)

Example 2. Like Example 1, except that there is also a third imperative ‘Put your napkin on your lap’.

The original form of classical semantics has a problem with Example 1 because O(~F) and O(if A then F) together imply, on any ordinary reading of the conditional, O(~A). Surely etiquette does not forbid eating asparagus! The later form, which takes conditionalization seriously, does not have a difficulty with this. We can see this if we take the evaluation of propositions to be an additive function like probability or expectation value: ~F is better than F, but (F & A) is better than (~F & A). (Analogy: death from Covid 19 is not likely, but it is likely if you are an octogenarian.)

But in Example 2. we would like to infer that even if you eat asparagus, and do it with your fingers, you should put your napkin on your lap. After all, there is nothing in the situation to defeat or conflict with the napkin imperative. Now that later form of the classical semantics does badly, for the following inference is not valid:

~F is better than F; (F & A) is better than (~F & A); N is better than ~N; therefore (N & F & A) is better than (N & ~F & A).
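The invalidity of this inference can be exhibited with a small countermodel, reading 'better than' as 'more probable than' under an additive valuation. The worlds are truth-assignments to F (fingers), A (asparagus), N (napkin); the particular weights are illustrative assumptions chosen to make the premises true and the conclusion false.

```python
# weight[(F, A, N)] = unnormalized probability of that world
weight = {
    (True,  True,  True):   1,  (True,  True,  False):  10,
    (False, True,  True):   5,  (False, True,  False):   1,
    (True,  False, True):   5,  (True,  False, False):   5,
    (False, False, True):  50,  (False, False, False):  10,
}

def P(pred):
    """Total weight of the worlds satisfying the predicate."""
    return sum(w for world, w in weight.items() if pred(*world))

# The premises all hold:
assert P(lambda f, a, n: not f) > P(lambda f, a, n: f)              # ~F > F
assert P(lambda f, a, n: f and a) > P(lambda f, a, n: not f and a)  # F&A > ~F&A
assert P(lambda f, a, n: n) > P(lambda f, a, n: not n)              # N > ~N
# ... but the conclusion fails:
assert not (P(lambda f, a, n: n and f and a) >
            P(lambda f, a, n: n and not f and a))        # N&F&A not > N&~F&A
```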

This could be dealt with in a particular model by adjusting the evaluation of the propositions so as to make it come out OK. Miss Manners’ etiquette book does not say explicitly that even if you are eating asparagus you should put your napkin on your lap. This could be added (a footnote?) but then we are ignoring the fact that Miss Manners doesn’t have to say so: her imperatives should not be defeated if there isn’t any conflicting imperative in force.

Horty ends by showing clearly that the account in terms of default theories allows the correct handling of this situation.

The proposed hybrid deontic logic, with imperatives and values

So, how does it stand with the ‘hybrid’ version I proposed in the previous post? Here I will outline this version with the formality and details required for this discussion. Then I will reconstruct the examples in that framework.

We begin with a universe (the ‘worlds’, or as I shall say, more appropriately, the situations) and identify propositions with subsets of this universe. Each situation is to be understood as a situation in which an agent finds himself. A situation is identified with (represented by) an ordered pair Δ = <W, D> where W is a proposition (the agent’s knowledge) and D a set of imperatives (default rules).

(A note: in principle we allow a situation to have much more to it than this. It would be more accurate, but for present purposes not useful, to include in addition to W and D a number of parameters. These could represent such other characteristics as mass, hair color, surrounding flora, or whatever else the agent has.)

An imperative δ = <A, B> is an ordered pair of propositions; A is its antecedent and B its consequent. The truth conditions will be given relative to situations.

An imperative is in force in Δ exactly if W implies its antecedent, and the intersection of W, the antecedent, and the consequent of this imperative is not empty: the agent knows that this imperative applies in his situation, and his knowledge does not imply that satisfying it is literally impossible. The satisfaction region S(δ) of δ in Δ is the proposition W ∩ A ∩ B, and we say that δ is satisfied by all and only the situations in S(δ).

Imperatives are compatible if and only if their satisfaction regions have a non-empty overlap. A scenario in Δ is a set of imperatives, a subset of D; its satisfaction region is the intersection of all the satisfaction regions of its members.

A feasible scenario for this situation Δ is a set of mutually compatible imperatives, all of which are in force. A proper scenario is a maximal feasible scenario. The satisfaction regions of the proper scenarios I will call ideal propositions for Δ.

The proposition OA is true in the situation Δ if and only if there is a proper scenario whose corresponding ideal proposition implies (is a subset of) A. OA can be read as “It is a primary obligation that A”. Primary obligations can conflict with each other, because there may be more than one proper scenario.

(A note: in general a proposition is true in a situation if and only if that situation is one of its members. So the above should be understood as meaning that OA is the set of situations Δ such that … etcetera. But speaking of truth and satisfaction is a bit more user-friendly.)
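The definitions so far can be sketched as a small program. This is a minimal model, not the official formulation: propositions are frozensets of worlds, imperatives are pairs of frozensets, and I take mutual compatibility of a scenario as joint satisfiability (a simplifying assumption); the four-world example is illustrative.

```python
from itertools import chain, combinations

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def in_force(d, W):
    """W implies the antecedent, and W ∩ A ∩ B is non-empty."""
    A, B = d
    return W <= A and bool(W & A & B)

def sat_region(S, W):
    """Satisfaction region of the scenario S: intersection of its members'
    satisfaction regions (W itself for the empty scenario)."""
    reg = W
    for (A, B) in S:
        reg = reg & A & B
    return reg

def proper_scenarios(W, D):
    """Maximal sets of in-force imperatives with a non-empty joint region."""
    live = [d for d in D if in_force(d, W)]
    feasible = [set(S) for S in powerset(live) if sat_region(S, W)]
    return [S for S in feasible if not any(S < T for T in feasible)]

def O(A, W, D):
    """OA: some proper scenario's satisfaction region is a subset of A."""
    return any(sat_region(S, W) <= A for S in proper_scenarios(W, D))

# Two conflicting imperatives over a four-world universe:
worlds = frozenset({1, 2, 3, 4})
d1 = (worlds, frozenset({1, 2}))   # 'see to it that {1, 2}'
d2 = (worlds, frozenset({3, 4}))   # 'see to it that {3, 4}'
# Disjoint regions give two proper scenarios, hence conflicting
# primary obligations:
assert O(frozenset({1, 2}), worlds, [d1, d2])
assert O(frozenset({3, 4}), worlds, [d1, d2])
```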

The connector ⊙, on the other hand, can be read as “It ought to be, all things considered, that A”. To introduce it we have to add something to the model (as described so far), namely a value-ranking of the propositions. The source for such a ranking may be other factors, not explicitly listed here, but a typical example could be expectation value (thinking of the ideal propositions as outcomes of actions determined by decisions that obey or honor some primary obligation). The value-ranking defines a “better than” relation on the propositions. An ideal proposition E will be called value ideal for Δ exactly if, among the ideal propositions for Δ, none are better than E.

The proposition ⊙A is true in situation Δ if and only if there is a proper scenario in Δ whose corresponding ideal proposition is value ideal and implies A.

Both O and ⊙ can be conditionalized in the way Horty does. The situation Δ[A] = <W ∩ A, D>, and O(B|A) is true in Δ exactly if OB is true in the situation Δ[A]. Similarly for ⊙(B|A).

Reconstructing Horty’s examples

‘Don’t eat with your fingers’ δ1 = <T, ~F> where T is the tautology

‘If you are served cold asparagus, eat it with your fingers.’ δ2 = <A, F>

Δ = <W, D> with W = T, the tautology, and D = {δ1, δ2}. We do not have priority rankings on imperatives.

The satisfaction regions of δ1 and δ2 do not overlap. In Δ itself only δ1 is in force, so O(~F) and ⊙(~F) are both true.

We choose as value ranking one by which the satisfaction region of δ2 is better than that of δ1. The justification for this is precisely the same content or aspect of the book of etiquette that was the source for the priority ranking in Horty’s approach to the example.

In the situation Δ[A] both imperatives are in force, and the satisfaction regions of both δ1 and δ2 are ideal propositions for Δ[A]. Hence O(~F|T) and O(F|A) are both true in Δ. This is a case of conflicting primary obligations.

However, only ⊙(F|A), and not ⊙(~F|T), is true in Δ, since only the satisfaction region of δ2 is value ideal for Δ[A].

Conclusion: ⊙(~F) and ⊙(F|A) are both true in the situation Δ.

Now we add to D the imperative δ3 = <T, N>: “Place your napkin on your lap”. We require that the set of situations in which you place your napkin on your lap is large: it has a non-empty intersection with both ~F and (A ∩ F). We also require that the intersection with N does not change the value ranking: (A ∩ F ∩ N) is better than (A ∩ ~F ∩ N).

(A note: in Horty’s discussion of the examples the place of δ3 in the priority ranking is not specified. It is tacitly understood that there is no conflict, so it does not need to be specified. That there is no conflict between N, F, and A, however, is important.)

This gives us the situation Δ* = <W, {δ1, δ2, δ3}>. In Δ*, as before, only δ1 and δ3 are in force, so the only proper scenario is {δ1, δ3}. So we still have ⊙(~F), but we also have ⊙N as well as ⊙(~F ∩ N).

In Δ*[A] now all three imperatives are in force, so we have two ideal propositions to consider, (A ∩ ~F ∩ N) and (A ∩ F ∩ N). Of these only the latter is value ideal. So in Δ*[A] it is ⊙(F ∩ N) that is the case, and it is not the case that ⊙(~F). Consequently, in Δ* it is the case that ⊙(F|A), as well as ⊙(N|A) and ⊙(F ∩ N|A).

Conclusion: in the situation Δ*, ⊙(~F), ⊙N, and ⊙(N ∩ F | A) are all true.

And this is how it should be.
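As a sanity check, the whole reconstruction can be carried out in a small program. This is a sketch under illustrative assumptions: worlds are truth-assignments to F, A, N; mutual compatibility of a scenario is taken as joint satisfiability; and the numerical weights implementing the value ranking are mine, chosen only to make δ2's region better than δ1's as required above.

```python
from itertools import chain, combinations, product

# Worlds are truth-assignments (F, A, N): fingers, asparagus, napkin.
worlds = frozenset(product([True, False], repeat=3))
T   = worlds
F   = frozenset(w for w in worlds if w[0])
Asp = frozenset(w for w in worlds if w[1])
N   = frozenset(w for w in worlds if w[2])
neg = lambda P: worlds - P

d1 = (T, neg(F))    # 'Don't eat with your fingers'
d2 = (Asp, F)       # 'If you are served cold asparagus, eat it with your fingers'
d3 = (T, N)         # 'Put your napkin on your lap'

def in_force(d, Wk):
    A, B = d
    return Wk <= A and bool(Wk & A & B)

def sat_region(S, Wk):
    reg = Wk
    for (A, B) in S:
        reg = reg & A & B
    return reg

def proper(Wk, D):
    """Maximal sets of in-force imperatives with a non-empty joint region."""
    live = [d for d in D if in_force(d, Wk)]
    subsets = chain.from_iterable(
        combinations(live, r) for r in range(len(live) + 1))
    feas = [set(S) for S in subsets if sat_region(S, Wk)]
    return [S for S in feas if not any(S < S2 for S2 in feas)]

# Value ranking via illustrative weights: δ2's region beats δ1's.
weight = {w: 10 if w == (True, True, True)
          else 5 if w == (False, True, True) else 1 for w in worlds}
value = lambda P: sum(weight[w] for w in P)

def circle(A, Wk, D):
    """⊙A: some value-ideal proper scenario's region is contained in A."""
    ideals = [sat_region(S, Wk) for S in proper(Wk, D)]
    best = [E for E in ideals if not any(value(E2) > value(E) for E2 in ideals)]
    return any(E <= A for E in best)

def circle_given(B, A, Wk, D):
    """⊙(B|A): ⊙B in the conditionalized situation <W ∩ A, D>."""
    return circle(B, Wk & A, D)

D = [d1, d2, d3]                               # the situation Δ* = <T, D>
assert circle(neg(F), T, D) and circle(N, T, D)   # ⊙(~F) and ⊙N
assert circle_given(F, Asp, T, D)                 # ⊙(F|A)
assert circle_given(N & F, Asp, T, D)             # ⊙(N ∩ F | A)
assert not circle_given(neg(F), Asp, T, D)        # but not ⊙(~F|A)
```

The asserts reproduce the conclusions of the two examples: the unconditional napkin and no-fingers obligations stand, while conditional on asparagus the fingers-and-napkin combination is what ought to be.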

NOTES

Thanks to Ali Farjami for alerting me to an omission in an earlier version of this post. I had mistakenly allowed for imperatives that were literally impossible to satisfy. While an agent may have conflicting obligations, which cannot be jointly satisfied, there cannot be an obligation which it is impossible to satisfy.

I said that in Alan Anderson’s deontic logic, the conditional resisted all efforts at explication. In the classical framework, with all the machinery of possible world modeling also for various sorts of conditionals, things looked better for the conditional. There is recent work on this, for example:

Catherine Saint-Croix and Richmond H. Thomason (2020) “Chisholm’s Paradox and Conditional Oughts”. http://web.eecs.umich.edu/~rthomaso/documents/deontic-logic/ctd.pdf

The new examples are in Horty, John (2014) “Deontic modals: why abandon the classical semantics?” Pacific Philosophical Quarterly 95: 424-460.