A brief note on the logic of subjective probability

In a previous post I mentioned a principle that holds in familiar, ordinary reasoning about probability, an analogue of what is called the Deduction Theorem in standard logic. Expressed as a rule, it is:

that P(A) = 1 implies that P(B) = y

hence: P(B | A) = y.

I argued that this rule is not valid once opinions about our own opinion are admitted to the language.

Here I want to explain why that rule does hold for the more basic ‘ordinary’ language for subjective probability. For this I must of course specify what is or is not ‘ordinary’.

The crucial point made in connection with Moore’s Paradox is that there are statements that we must admit as possibly true, but which we cannot coherently believe.

The analogue once belief is replaced by subjective probability is this: there are statements to which we can assign probability greater than 0, but to which we cannot (coherently) assign probability 1.

The cases in which there are such statements (so far encountered only when we allow the formulation of opinions about our own opinion) I call ‘un-ordinary’. And cases in which there are no such ‘Moore anomalies’ I call ‘ordinary’.

So let us look into the logic of subjective probability, with only ‘ordinary’ states of opinion.

For any interpretation of subjective probability as a representation of opinion we must start with an agent, who has a possibility space (space of possible states of affairs) S, and a state of opinion, which is typically not numerically precise. So this opinion can be represented by a set C of probability functions, its representor. Equivalently it can be represented by a set of probability judgements (the ones jointly satisfied precisely by all and only members of C).
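To make the idea of a representor concrete, here is a minimal sketch in Python, under the simplifying assumption of a finite possibility space and a finite set C. The example opinion ("the probability of heads lies between 0.4 and 0.6") and all names are illustrative, not taken from the post:

```python
# A toy representor: an imprecise opinion about a coin, represented as a
# set C of probability functions on the space {heads, tails}. In this
# (finite) illustration, every member of C satisfies the judgement
# "P(heads) is between 0.4 and 0.6".

C = [{"heads": p, "tails": 1 - p} for p in (0.4, 0.5, 0.6)]

# The probability judgements the opinion licenses can be read off as
# lower and upper bounds over the members of C.
lower = min(P["heads"] for P in C)  # 0.4
upper = max(P["heads"] for P in C)  # 0.6
```

A full representor would typically be an infinite set (here, all functions with P(heads) in the interval); the finite version is only meant to show the two equivalent representations, as a set of functions and as a set of judgements.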

What must C be like? It is interesting to explore various conditions that we may think should be imposed, such as convexity or closure under Jeffrey conditionalization. (There are discussions in the literature.)

However, I want to keep the characterization of ‘ordinary’ here as minimal as I can. So the condition is just this:

If P is a member of C and P(A) > 0 then

the conditionalization of P on A is also a member of C.

This is defined as usual: the conditionalization of P on A is the probability function on the same possibility space which assigns to any proposition Y the probability P(Y | A); and it exists only if that ratio is well-defined, that is, only if P(A) > 0.
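For a finite possibility space this definition can be sketched directly, with a probability function represented as a dict of state weights and a proposition as a set of states (the function and variable names are my own illustrative choices):

```python
def prob(P, E):
    """P(E): total probability the function P assigns to the event E."""
    return sum(p for s, p in P.items() if s in E)

def conditionalize(P, A):
    """Return the conditionalization of P on A.

    Defined only when P(A) > 0; otherwise the ratio P(Y | A) is
    undefined, and we raise an error.
    """
    pA = prob(P, A)
    if pA == 0:
        raise ValueError("P(A) = 0: conditionalization is undefined")
    return {s: (p / pA if s in A else 0.0) for s, p in P.items()}

# Example: a fair die, conditionalized on "even".
P = {s: 1 / 6 for s in range(1, 7)}
even = {2, 4, 6}
P_even = conditionalize(P, even)
# P_even assigns 1/3 to each of 2, 4, 6 and 0 to the odd outcomes.
```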

And it is the appropriate condition, for it means that an agent who is not excluded from having a positive probability for A is also not excluded from being, or becoming, sure that A.

Let me call this an ordinary model.

What is valid in the basic logic of ‘ordinary’ subjective probability is just what holds in all these ordinary models.

The principle I mentioned above, the analogue of the Deduction Theorem, holds. To show this, we must establish the following:

If it is the case that

(i) for all members P of C, if P(A) = 1 then P(B) = y

then

(ii) for all members P of C, P(B | A) = y

To prove this, suppose that some member P of C is such that P(B | A) = z, which does not equal y. Since the conditional probability P(B | A) is well-defined, it follows that P(A) > 0. So, by the condition on ordinary models, the conditionalization of P on A (call it P# for now) also belongs to C. And P#(A) = 1 but P#(B) = P(B | A) = z, which does not equal y. In other words, if (ii) does not hold, neither does (i); contrapositively, (i) implies (ii).
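The contrapositive argument can be checked on a small numerical example. Below, the space, the propositions A and B, and the particular function P are all illustrative assumptions of mine; the point is only that the conditionalized function P# witnesses the failure of (i) whenever some P in C has P(B | A) different from y:

```python
def prob(P, E):
    """P(E) for a probability function P given as a dict of state weights."""
    return sum(p for s, p in P.items() if s in E)

def conditionalize(P, A):
    """The conditionalization of P on A; requires P(A) > 0."""
    pA = prob(P, A)
    assert pA > 0, "conditionalization undefined when P(A) = 0"
    return {s: (p / pA if s in A else 0.0) for s, p in P.items()}

# An illustrative member P of an ordinary model's representor C.
A = {1, 2}
B = {1, 3}
P = {1: 0.1, 2: 0.3, 3: 0.3, 4: 0.3}

# z = P(B | A); here z = 0.1 / 0.4 = 0.25.
z = prob(P, A & B) / prob(P, A)

# By the closure condition, P# = conditionalization of P on A is in C.
P_sharp = conditionalize(P, A)
# P#(A) = 1 while P#(B) = z, so clause (i) fails for any y that is not z.
```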