Confidence and Probability Part 4: Theories of Confidence

In the last post, we took a bit of a detour into 'confidence intervals', so as to avoid potential confusion later on. In this post, we will present a list of interpretations of analytical 'confidence' that have been advanced by organisations or analysts, harvested from personal experience, official guidance, and responses to a survey conducted last year by Aleph Insights.

Some of these theories contradict each other, but others are mutually consistent. Some (it will turn out) are difficult to make internally coherent by themselves. But all have at least an intuitive appeal, and we will consider each in turn over the next few posts to ask:

  1. Is the interpretation internally coherent? Can the thing it purports to describe be made meaningful, and how?

  2. Does the interpretation accord with what, according to the survey and other evidence, analysts and their customers actually mean when they use confidence language?

  3. Is the interpretation decision-relevant? In other words, does or should the information affect optimal decision-making, and how? Is it therefore something we should be communicating to decision-makers, and if so, how?

The competing theories, then, are these:

1: The 'Uncertain Probability' Theory


"Doctors say he's got a fifty-fifty chance of living,
though there's only a ten percent chance of that." This theory suggests that 'confidence' relates to how certain we are about the probability of the hypothesis. If we have lots of evidence to support our estimate of the probability, this theory goes, we will have high confidence in it. Where we are just guessing, we will have low confidence in it. This theory presupposes that it is meaningful to talk about being uncertain about a probability. This is an important idea at which we will look closely in the next post.
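
To make the idea of 'uncertainty about a probability' a little more concrete before then, here is a minimal Python sketch. It treats an analyst's belief about a probability as a Beta distribution; the analysts and the numbers are invented purely for illustration.

```python
# A minimal sketch of 'uncertainty about a probability', using a Beta
# distribution as a stand-in for an analyst's belief about p.
# The analysts and the Beta parameters are illustrative only.
from scipy.stats import beta

# Analyst A: roughly 50 'hits' in 100 relevant observations -> a tight belief around 0.5.
analyst_a = beta(50, 50)
# Analyst B: essentially no evidence -> a flat belief over [0, 1].
analyst_b = beta(1, 1)

for name, dist in [("A", analyst_a), ("B", analyst_b)]:
    lo, hi = dist.interval(0.9)  # central 90% credible interval for p
    print(f"Analyst {name}: mean p = {dist.mean():.2f}, "
          f"90% interval for p = [{lo:.2f}, {hi:.2f}]")

# Both analysts' best estimate of p is 0.5, but Analyst B's interval is far
# wider: on this theory, B would report much lower confidence.
```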

2: The 'Information' Theory


The 'information' theory of confidence suggests that it measures the amount of information you have access to. The more information you have, the higher confidence you will have in your judgement. This idea is premised on the concept that 'information quantity' is meaningfully separable from probability - whether this is possible is something we'll examine.

3: The 'Ignorance' Theory


The 'ignorance' theory is a counterpart to the 'information' theory, and posits that confidence is instead related to how much information you reasonably think you don't have. In other words, if you think you've seen everything useful relating to a hypothesis, you will report higher confidence than if you think there is still information out there that will have a bearing on it. Like the 'information' theory, this idea presupposes that 'quantity of information' can be separated from probability, and will prove problematic for the same reason.

4: The 'Quality' Theory


This idea proposes that 'confidence' is related to a basket of qualitative indicators relating to the credibility, reliability and so on of the information used to form the judgement in question. This notion hinges on the idea that there are qualitative differences between types of evidence which are important to convey, but that are somehow not reflected in the probabilities of the hypotheses they support.

5: The 'Prior Weight' Theory


This somewhat more technical suggestion proposes that 'confidence' captures the degree to which a judgement is formed using prior probabilities - informally, 'background knowledge' - rather than the likelihood ratio (or 'diagnosticity') of subject-specific evidence. If this theory is true, low-confidence probabilities will be little changed from statistical priors, while high-confidence probabilities will diverge considerably from them as a result of highly diagnostic evidence.
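
As a rough illustration of the distinction, here is a small Python sketch using the odds form of Bayes' theorem; the prior and the likelihood ratios are invented for illustration only.

```python
# A minimal sketch of the odds form of Bayes' theorem, to illustrate the
# 'prior weight' idea. All the numbers are purely illustrative.

def posterior_prob(prior_prob: float, likelihood_ratio: float) -> float:
    """Update a prior probability with a likelihood ratio via the odds form of Bayes."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior = 0.10  # background base rate for the hypothesis

# Weakly diagnostic evidence (likelihood ratio close to 1): the posterior
# barely moves from the prior - on this theory, a 'low confidence' judgement.
print(posterior_prob(prior, likelihood_ratio=1.2))   # ~0.12

# Highly diagnostic evidence (large likelihood ratio): the posterior diverges
# sharply from the prior - on this theory, a 'high confidence' judgement.
print(posterior_prob(prior, likelihood_ratio=20.0))  # ~0.69
```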

6: The 'Expected Value' Theory


The 'expected value' theory is that confidence relates to the anticipated economic value of new items of information. This theory suggests that, all else being equal, if we face a high-cost or high-risk decision, such that new information would be likely to add more expected value to our decision, we will have lower confidence in our judgement. Conversely, if our decision is of little consequence, we will treat the judgement more confidently. This proposal is similar to the 'ignorance' theory, but considers not the 'amount' of missing evidence so much as its value in terms of the fundamental characteristics of the decision being made.
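
Here, for illustration only, is a toy Python sketch of an expected-value-of-perfect-information calculation; the decision options, payoffs and probability are entirely made up.

```python
# A toy expected-value-of-information calculation, illustrating the
# 'expected value' theory. The decision, payoffs and probability are invented.

p_hypothesis = 0.6  # current judgement: P(hypothesis is true)

# Payoffs of each decision option under each state of the world.
payoffs = {
    "act":        {"true": 100.0, "false": -80.0},
    "do_nothing": {"true":   0.0, "false":   0.0},
}

def expected_payoff(option: str, p_true: float) -> float:
    return p_true * payoffs[option]["true"] + (1 - p_true) * payoffs[option]["false"]

# The best we can do acting on the current judgement alone.
value_now = max(expected_payoff(opt, p_hypothesis) for opt in payoffs)

# The best we could do if new information resolved the hypothesis perfectly:
# pick the best option in each state, weighted by how likely each state is.
value_with_perfect_info = (
    p_hypothesis * max(payoffs[opt]["true"] for opt in payoffs)
    + (1 - p_hypothesis) * max(payoffs[opt]["false"] for opt in payoffs)
)

evpi = value_with_perfect_info - value_now
print(f"Expected value of perfect information: {evpi:.1f}")
# A large figure means new information is worth a lot to this decision,
# which on this theory corresponds to lower confidence in the judgement.
```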

7: The 'Expected Cost' Theory


The final theory, that confidence measures the expected cost of further information, suggests that we will have higher confidence when new information is harder to acquire, and lower confidence when it is easy to acquire. This also adds a new dimension to the 'ignorance' theory by considering the expected cost of new information, rather than how much new information we expect there to be.

In the next post, we'll look at the first of these proposals: the 'Uncertain Probability' theory. This will involve a dive into the intriguing and subtle distinction between frequency and probability that lies behind a philosophical and statistical debate that has been smouldering for centuries.