We tend to have wrong beliefs about many things. The criteria for having a belief do not stop at introspection, and so we may also be wrong about which beliefs we have. We are not fully self-transparent, and so it may not be right to blame us for such mistakes.
But it is still appropriate to point out debilitating forms of error, just as we would for a distracted or forgetful accountant. After all, the success of our practical projects may depend on the beliefs we had to begin with.
A Criterion for Minimal Consistency
As a most minimal norm, beyond mere logical consistency, I would propose this:
our belief content should not include any avowal of something we have definitely disavowed.
We can avow just by asserting, but to disavow we need to use a word that signifies belief in some way. For example, to the question: “Is it raining?”, you can just say yes. But if you do want to demur, without giving information that you may not have, the least you must do is to say “I don’t believe that it is raining”.
Definition. The content of someone’s beliefs is B-inconsistent if it includes some proposition p and also the proposition that one does not believe that p.
B-consistency is just its opposite.
I am modeling belief content as a set of propositions, and minimally consistent belief contents are B-consistent sets of propositions. I will also take it that belief can be represented as a modal operator on propositions: Bp is, for a given agent, the proposition that s/he believes that p.
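As an aside, here is a minimal sketch of this representation in Python; the encoding and the names (B, Not, b_consistent) are purely illustrative. Propositions are nested tuples, and a belief content is B-consistent just in case it never contains both p and ~Bp.

def B(p): return ("B", p)      # Bp: the proposition that the agent believes that p
def Not(q): return ("not", q)  # the negation ~q

def b_consistent(content):
    # B-inconsistent: the content contains some p together with ~Bp
    return not any(Not(B(p)) in content for p in content)

rain = "it is raining"
print(b_consistent({rain, B(rain)}))       # True
print(b_consistent({rain, Not(B(rain))}))  # False: rain is avowed and also disavowed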
Normal Updating
Now the study of belief systems has often focused on problems of consistency for updating policies. Whatever you currently believe, it may happen that you learn, or have warrant to add, or just an impulse to add, a new belief. That would be a proposition that you have not believed theretofore. The updating problem is to do so without landing in some inconsistency. That is not necessarily easy, since the reason that you did not believe it theretofore may be that you had contrary beliefs. So there is much thought and literature about when such a new belief can just be added, when not, and if not, what to do.
However, responses to the updating problem generally begin by mapping out a safe ground, where the new belief can just be added. Under what conditions is that unproblematic?
A typical first move is just to require consistency: that is, if a new proposition p is consistent with (consistent) belief content B, then adding p to B yields (perhaps part of) a (consistent) belief content. I think we had better be more conservative, and so we should require that the prior beliefs include an explicit disavowal both of belief in p and of belief in its contraries.
So here is a modest proposal for when a new belief can just be added without courting inconsistency of any sort:
Thesis. If a belief system meets all required criteria of consistency, and it includes disavowal of both p and not-p, then the result of adding p, while removing its disavowal, does not violate those criteria of consistency.
We might think of the Thesis as articulating the condition for a system of belief to be updatable in the normal way under the best of circumstances.
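In the toy encoding above, the update the Thesis speaks of might be sketched like this (again purely illustrative; normal_update is just my label for the operation, not part of the Thesis itself):

def B(p): return ("B", p)
def Not(q): return ("not", q)

def normal_update(content, p):
    # Precondition of the Thesis: the prior content disavows belief in both p and not-p
    assert Not(B(p)) in content and Not(B(Not(p))) in content
    # Add p, and remove its disavowal ~Bp
    return (set(content) - {Not(B(p))}) | {p}

rain = "it is raining in Peking"
prior = {Not(B(rain)), Not(B(Not(rain)))}
print(normal_update(prior, rain))  # contains rain itself, plus the remaining disavowal ~B~rain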
A pertinent example then goes like this:
I have no idea whether or not it is now raining in Peking. I do not have the belief that it is so, nor the belief that it is not so. For all I know or believe, it is raining there or it is not raining there; I have no idea.
The Thesis then implies that if I were to add that it is raining in Peking to my beliefs (whether with or without warrant) the result would not be to make me inconsistent in any pertinent sense.
The Dilemma
But now we have a problem. In that example, I have expressed my belief that I do not believe that it is raining in Peking – that part is definite. But whether it is raining in Peking, about that I have no beliefs. Let’s let p be the proposition that it is raining in Peking. In that case it is clear that I neither believe nor disbelieve the following conjunction:
p & ~Bp
So according to the Thesis I can add this to my beliefs, while removing its disavowal, and remain consistent.
But it will take me only one step to see that I have landed myself in B-inconsistency. For surely I believe this conjunction only if I believe both conjuncts. So in adding it I come to avow p, while my belief content still contains ~Bp, the explicit disavowal of p: I will be avowing something that I explicitly disavow.
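The one step can even be made mechanical. Continuing the illustrative sketch, with And for conjunction and a closure step encoding “whoever believes a conjunction believes both conjuncts”, the Thesis-style update with this conjunction lands in B-inconsistency:

def B(p): return ("B", p)
def Not(q): return ("not", q)
def And(p, q): return ("and", p, q)

def close_conjunctions(content):
    # If a conjunction is believed, so are both conjuncts
    closed, changed = set(content), True
    while changed:
        changed = False
        for prop in list(closed):
            if isinstance(prop, tuple) and prop[0] == "and":
                for conjunct in prop[1:]:
                    if conjunct not in closed:
                        closed.add(conjunct)
                        changed = True
    return closed

def b_consistent(content):
    return not any(Not(B(p)) in content for p in content)

p = "it is raining in Peking"
moore = And(p, Not(B(p)))                          # p & ~Bp
content = {Not(B(p)), Not(B(Not(p))),              # no belief either way about p
           Not(B(moore)), Not(B(Not(moore)))}      # nor about the conjunction

updated = (content - {Not(B(moore))}) | {moore}    # Thesis-style update with the conjunction
print(b_consistent(close_conjunctions(updated)))   # False: p and ~Bp are both in the content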
Dilemma: should we accept B-consistency as a minimal consistency criterion for belief, or should we insist that a good system of beliefs must be one that is updatable in the normal way, when it includes nothing contrary, and even disavows anything contrary, to the new information to be added?
(It may not need mentioning, but this dilemma appears when we take into account instances of Moore’s Paradox.)
Parallel for Subjective Probability Conditionalization
If we represent our opinion by means of a subjective probability function, then (full) belief corresponds to probability 1. Lack of both full belief and full disbelief corresponds to a probability strictly between 0 and 1.
Normal updating of a prior probability function P, when new information E is obtained, consists in conditionalizing P on E. That is to say, the posterior probability function will be
P’: P’(A) = P(A|E) = P(A & E)/P(E), provided P(E) > 0.
So this is always the normal update, whenever one has no full belief either way about E.
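A toy numerical illustration (my own, not from the literature): start with a prior spread evenly over four possibilities, then conditionalize on the evidence that it is raining.

# Prior over four possibilities, each with probability 0.25
prior = {"rain & cold": 0.25, "rain & warm": 0.25,
         "dry & cold": 0.25, "dry & warm": 0.25}

def conditionalize(P, E):
    # Posterior P'(w) = P(w)/P(E) on E, zero off E; defined only when P(E) > 0
    p_e = sum(P[w] for w in E)
    assert p_e > 0
    return {w: (P[w] / p_e if w in E else 0.0) for w in P}

posterior = conditionalize(prior, {"rain & cold", "rain & warm"})  # learn: rain
print(posterior)  # the two rain-possibilities move to 0.5 each, the dry ones to 0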
In a passage famous in certain quarters David Lewis wrote about the “class of all those probability functions that represent possible systems of beliefs” that:
This class, we may reasonably assume, is closed under conditionalizing. (1976, 302)
In previous posts I have argued that probabilistic versions of Moore’s Paradox raise the same problem for this thesis: that a class of subjective probability functions represents possible systems of belief only if it is closed under conditionalization.
(“Stalnaker’s Thesis –> Moore’s Paradox”, 04/20/2023; “Objective Chance –> Moore’s Paradox”, 02/17/2024.)
Lewis, David K. (1976) “Probabilities of Conditionals and Conditional Probabilities”. The Philosophical Review 85 (3): 297-315.