A new scientific (or quasi-scientific) theory often begins with a flamboyant, controversial new postulate. Just think of Copernicus’ theory, which starts with the postulate that the Sun is stationary and the earth moves. Or Dalton’s, that all substances are composed of atoms, which combine into molecules in remarkable ways. Or von Däniken’s, that the earth has had extra-terrestrial visitors.
The first reaction is usually that this sort of speculation can’t even be tested. But the theory is developed, with many new additions, and eventually a testable consequence appears. When that is tested, and the result is positive, the theory is said to be confirmed.
I will take it here that “confirm” has a very specific meaning: that information confirms a theory if and only if it makes that theory more likely to be true. And in addition, I will take the “likely” to be a subjective probability: my own, but it could be yours, or the community’s. So, using the symbolism I introduced in the previous post (“Moore’s Paradox and Subjective Probability”) the relation is this:
Information E confirms theory T if and only if P(T | E) > P(T)
Now, the question I want to raise is this:
In this sort of scenario, does the confirmation of the theory also raise the probability that the initial flamboyant postulate is true?
I will argue now that in general, the answer to this question must be NO. The reason is that from the prior point of view, what is eventually tested is not relevant to that initial postulate — though of course it is relevant to that postulate relative to the developed theory.
The answer NO must, I think, be surprising at first blush. But I will blame that precisely on a failure to distinguish prior relevance from relevance relative to the theory.
I will present the argument in two forms — the first quick and easy, the second a bit more finicky (relegated to the Appendix).
For my first argument I will represent the impact of the positive test as a Jeffrey Conditionalization. The testable consequence of the theory is a proposition (or if you prefer the terminology, an event) B, in a probability space S.
The prior probability function I will call P as usual, the posterior probability function P*. Let q = P(B). Then, for any event Y in S,
P(Y) = qP(Y|B) + (1 – q)P(Y| ~B)
Now when the test is performed, the impact on our subjective probability is that the probability of B is raised from q to r. Jeffrey’s recipe for the posterior probability P* is simple: all probability ratios ‘inside’ B or ‘inside’ ~B are to be kept the same as they were. Hence:
for all events Y in S, P*(Y) = rP(Y|B) + (1 – r)P(Y| ~B)
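For concreteness, here is a minimal Python sketch of that recipe on a finite probability space (the names prob, jeffrey_shift, and the sample numbers are mine, purely for illustration):

```python
# A minimal sketch of Jeffrey conditionalization on a finite space.
# A probability measure is a dict mapping atoms to probabilities;
# an event is a set of atoms. Illustrative names only.

def prob(p, event):
    """Probability of an event (a set of atoms) under the measure p."""
    return sum(p[w] for w in event)

def jeffrey_shift(p, B, r):
    """Posterior that gives B probability r while keeping all probability
    ratios inside B, and inside ~B, the same as they are in p."""
    q = prob(p, B)                      # prior probability of B
    return {w: p[w] * r / q if w in B
               else p[w] * (1 - r) / (1 - q)
            for w in p}

# Example: raise P(B) from 0.5 to 0.75 for B = {"b1", "b2"}.
p0 = {"b1": 0.2, "b2": 0.3, "c1": 0.1, "c2": 0.4}
p1 = jeffrey_shift(p0, {"b1", "b2"}, 0.75)
# p1 == {"b1": 0.3, "b2": 0.45, "c1": 0.05, "c2": 0.2}
```

By construction, for any event Y the new measure assigns r P(Y|B) + (1 – r) P(Y| ~B), which is just the formula above.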
In general there can be quite a large redistribution of probabilities due to such a Jeffrey shift. However, something remains the same. Both the above formulas, for P and for P*, assign to each event Y a number that is a convex combination of two end points, namely P(Y|B) and P(Y| ~B).
What is characteristic of a convex combination is that it will be a number between the two end points.
So in the case in which Y and B are mutually irrelevant, from a prior point of view, those two endpoints are the same:
P(Y|B) = P(Y| ~B) = P(Y)
hence any convex combination of those two is also just precisely that number: P*(Y) = P(Y).
Application: Suppose A is the initial flamboyant postulate of the theory. Typically, from the prior point of view, there is no relevance between A and B, the eventually tested consequence of the entire theory. So the prior probability P is such that P(A|B) = P(A|~B) = P(A). Therefore, when the positive evidence comes in (and the probability of the entire theory rises!) the probability of that initial flamboyant postulate stays the same.
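To see the application in miniature, here is a toy numerical sketch in Python. The numbers are invented purely for illustration; all that is built into them is that A and B are independent in the prior, while the conjunction A & H makes B much more likely than it is on the prior alone.

```python
# Toy illustration (invented numbers): A = flamboyant postulate,
# H = added hypothesis, B = eventual evidence.
# Atoms are truth-value triples (A, H, B); the probabilities sum to 1.
prior = {
    (True,  True,  True):  0.04, (True,  True,  False): 0.01,
    (True,  False, True):  0.02, (True,  False, False): 0.13,
    (False, True,  True):  0.04, (False, True,  False): 0.06,
    (False, False, True):  0.20, (False, False, False): 0.50,
}

def P(pred):
    """Prior probability of the event picked out by the predicate."""
    return sum(pr for w, pr in prior.items() if pred(w))

def is_A(w):  return w[0]              # the initial postulate
def is_AH(w): return w[0] and w[1]     # the whole theory A & H
def is_B(w):  return w[2]              # the tested consequence

# Built-in features of this prior:
# P(A & B) = 0.06 = P(A) * P(B), so A and B are independent;
# P(B | A & H) = 0.04 / 0.05 = 0.8, far above P(B) = 0.3.

q, r = P(is_B), 0.9    # the positive test raises the probability of B from 0.3 to 0.9

def P_star(pred):
    """Posterior by the Jeffrey shift on B: r P(Y|B) + (1 - r) P(Y|~B)."""
    in_B  = P(lambda w: pred(w) and is_B(w))
    out_B = P(lambda w: pred(w) and not is_B(w))
    return r * in_B / q + (1 - r) * out_B / (1 - q)

print(P(is_A),  P_star(is_A))    # 0.20 -> 0.20    the postulate A does not budge
print(P(is_AH), P_star(is_AH))   # 0.05 -> ~0.121  the whole theory A & H is confirmed
```

So the whole theory gets a boost, from 0.05 to about 0.12, while the probability of A stays put at 0.2.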
For example, in Dalton’s time, 1810, when he introduced the atomic hypothesis into chemistry, the prior probabilities were such that any facts about Brownian motion were irrelevant to that hypothesis. (Everyone involved was ignorant of Lucretius’ argument about the movement of dust particles, and although the irregular movement of coal dust particles had been described by the Dutch physiologist Jan Ingen Housz in 1785, the phenomenon was not given serious attention until Brown discussed it in 1827.)
So when, after many additions and elaborations, the atomic theory had become a theory with a testable consequence in data about Brownian motion (1905), the full theory was confirmed in everyone’s eyes; but the initial hypothesis about unobservable atomic structure did not become any more likely than it was in 1810.
Right?
And notice this: the entire theory is in effect a conjunction of the initial postulate with much else. But a conjunction is never more likely to be true than any of its conjuncts. So the atomic theory is still no more likely to be true than the atomic hypothesis was in Dalton’s time.
Confirmation of empirical consequences raises the probability of the theory as a whole, but it is an increase in a very low probability, one that stays below the probability of its initial postulate and never rises above it.
My Take On This
The confirmation of empirical consequences, most particularly when they are the results of experiments designed on the basis of the theory itself, provides evidential support for the theory.
But that has been confusedly misunderstood as confirmation of the theory as a whole, in a way that raises its probability above its initial, very low plausibility. What are confirmed are certain empirical consequences, and we are right to rely ever more on the theory, in our decisions and empirical predictions, as this support increases.
The name of the game is not confirmation but credentialing and empirical grounding.
APPENDIX
It is regrettable that discussions of confirmation so often give the impression of faith in the Freakonomics slogan that A RISING TIDE LIFTS ALL BOATS.
It just isn’t so.
Confirmation is more familiarly presented as due to conditionalization on new evidence, so let’s recast the argument in that form. The following diagram will illustrate this, with the same conclusion that the probability of the initial postulate does not change when the new evidence achieves relevance only because of the other parts of the theory.

Explanation: Proposition A is the initial postulate, and proposition B is what will eventually be cited as evidence. However, A by itself is still too uninformative to be testable at all.
The theory is extended by adding hypothesis H to A, and the more informative theory does allow for the design of a test. The test result is that proposition B is true.
The function q is the prior probability function. The sizes of the areas labeled A, B, and H in the diagram represent their prior probabilities — notice that A and B are independent as far as the prior probability is concerned.
The function Q is the posterior probability, which is q conditionalized on the new evidence B.
The increase in the probability of the conjunction (A & H) shows that the evidence confirms the theory taken as a whole. But the probability of A does not increase: Q(A) = q(A). The theory as a whole was confirmed only because its empirical consequence B was borne out, and this ‘rising tide’ did not ‘lift the boat’ of the initial flamboyant postulate that gave the theory its name.
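For a worked instance of this, here is a short Python sketch with invented numbers that have the structure just described (A and B independent in the prior, while A & H makes B likely); Q is q conditionalized strictly on B, the special case r = 1 of the Jeffrey shift above.

```python
# Invented prior with the structure described above: A and B independent,
# while the conjunction A & H makes B likely. Atoms are triples (A, H, B).
q_prior = {
    (True,  True,  True):  0.04, (True,  True,  False): 0.01,
    (True,  False, True):  0.02, (True,  False, False): 0.13,
    (False, True,  True):  0.04, (False, True,  False): 0.06,
    (False, False, True):  0.20, (False, False, False): 0.50,
}

def q(pred):
    """Prior probability of the event picked out by the predicate."""
    return sum(p for w, p in q_prior.items() if pred(w))

def Q(pred):
    """Posterior: the prior conditionalized on the evidence B (third coordinate)."""
    return q(lambda w: pred(w) and w[2]) / q(lambda w: w[2])

def is_A(w):  return w[0]
def is_AH(w): return w[0] and w[1]

print(q(is_A),  Q(is_A))    # 0.20 -> 0.20   : Q(A) = q(A)
print(q(is_AH), Q(is_AH))   # 0.05 -> ~0.133 : the conjunction A & H is confirmed
```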
Perhaps it is a bit silly to reply to my own post, but I want to mention that all the technical points about measures of confirmation that relate to this are made in a paper by David Atkinson, Jeanne Peijnenburg, and Theo Kuipers, in Philosophy of Science 2009. The point for me is rather to challenge scientific realists who take it that basic postulates introducing unobservable entities become more likely to be true as the empirical evidence for the theory as a whole accumulates.