The Difficulty of Decision-making with Potentially Catastrophic Outcomes

The UK government faces an archetypal policy challenge in considering whether or not to introduce Ebola screening at borders. In one sense this is a fairly standard decision problem. If we decide not to screen, the possible outcomes range from no deaths, or only a handful, through to a full-scale epidemic. The probabilities of these outcomes are presumably estimable using standard epidemiological techniques applied to the specific pathology and aetiology of Ebola. If we decide to screen, we will (one presumes) reduce these probabilities across the board, at the price of an additional cost in time and resources.

The optimal decision will of course depend on the expected outcomes under the 'screen' and 'no screen' options. The cost of screening will be fairly easy to estimate. However, the uncertainty associated with epidemic outcomes makes optimal decision-making distinctly problematic. This is particularly so when the probability distribution of the outcomes has a long tail that reaches to catastrophic dimensions. Under these circumstances, small changes in probabilities can have an extremely large effect on expected outcomes. If, perhaps as the result of new clinical data, the probability of a million deaths rises from 0.1% to 0.2%, that outcome's contribution to the expected death toll rises by 1000 (from 0.001 × 1,000,000 = 1,000 to 0.002 × 1,000,000 = 2,000) - quite possibly enough to change a policymaker's mind, and perhaps rightly so.
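The arithmetic behind that sensitivity is easy to check. A minimal sketch in Python (the single-outcome framing and the names are mine, purely for illustration):

```python
# Contribution of one catastrophic outcome to the expectation:
# probability of the outcome times its death toll.
CATASTROPHE_DEATHS = 1_000_000

def expected_deaths(p_catastrophe: float) -> float:
    """Expected deaths contributed by the catastrophic tail alone."""
    return p_catastrophe * CATASTROPHE_DEATHS

before = expected_deaths(0.001)  # 0.1% chance -> ~1,000 expected deaths
after = expected_deaths(0.002)   # 0.2% chance -> ~2,000 expected deaths
print(after - before)            # ~1,000 additional expected deaths
```

Doubling a tail probability doubles that outcome's contribution to the expectation, however small the probability itself remains.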

In extreme cases - and perhaps only in theory - expected impacts could in fact be undefined. Suppose we have a 90% chance of an impact of 1 unit, a 9% chance of an impact of 10 units, a 0.9% chance of an impact of 100 units, a 0.09% chance of an impact of 1000 units and so on. In this case, the expected impact is 0.9 + 0.9 + 0.9 + ...; in other words, it is apparently limitless. This has an impact on optimal decision-making that is still the subject of debate after nearly 300 years.
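The divergence of this series can be checked directly: the k-th outcome has probability 0.9 × 0.1ᵏ and impact 10ᵏ units, so every term contributes 0.9, and the partial sum after n terms is 0.9n. A quick sketch (function name mine):

```python
def partial_expectation(n_terms: int) -> float:
    """Sum the first n_terms terms of the expected-impact series."""
    total = 0.0
    for k in range(n_terms):
        probability = 0.9 * 0.1 ** k   # 90%, 9%, 0.9%, ...
        impact = 10 ** k               # 1, 10, 100, ... units
        total += probability * impact  # each term is (almost exactly) 0.9
    return total

print(partial_expectation(10))   # ~9.0
print(partial_expectation(100))  # ~90.0 - and so on, without bound
```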

The 'St Petersburg Paradox' was posed by Nicolas Bernoulli in 1713 and given its classic treatment by his cousin Daniel Bernoulli in 1738. It originally concerned the following bet: I toss a coin, and if it's heads you win £1. If it's tails, the pot doubles to £2 and we play again. If it's heads this time, you win the £2, but if it's tails, the pot doubles to £4, and so on. This bet also has an undefined expectation - you win £1 with probability 1/2, £2 with probability 1/4, £4 with probability 1/8, and so on, so each possible outcome adds 50p to the expectation - and in theory you should therefore be willing to pay any amount of money to play. This is almost impossible to believe, yet if you simulate this game repeatedly, the average winnings per game do, indeed, inexorably continue to rise.
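The simulation is straightforward to run for yourself. A minimal sketch of the bet as described (function names mine; a seeded generator is used so runs are repeatable):

```python
import random

def play_st_petersburg(rng: random.Random) -> int:
    """Play one game: the pot starts at £1 and doubles on every tail."""
    pot = 1
    while rng.random() < 0.5:  # tails: pot doubles and we play again
        pot *= 2
    return pot                 # heads: you win the current pot

def average_winnings(n_games: int, seed: int = 0) -> float:
    """Average payout per game over n_games simulated games."""
    rng = random.Random(seed)
    return sum(play_st_petersburg(rng) for _ in range(n_games)) / n_games
```

Every individual payout is a power of two, and the occasional enormous pot keeps dragging the running average upwards as the number of games grows.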

Ebola is not like this. For one thing, UK deaths can't get much higher than the population of around 64 million - not that this outcome would be particularly palatable. But the policy dilemma has features similar to those presented by the St Petersburg Paradox: the huge sensitivity of expectations to our epidemiological assumptions means it is very difficult to put a reasonable value on mitigation measures.