The Channel 4 documentary 'What British Muslims Really Think', screened last Thursday, presented the results of a set of structured face-to-face interviews with 1,000 British Muslims. The ensuing controversy - inevitable, one imagines, whatever the results of the survey might have been - presents an unusually topical opportunity to look at some interesting aspects of statistical methodology.

Some criticisms focused on the sample size (e.g. "How can it be possible that the views of 1,000 odd people can prove something about an entire community?", or "You can't get a thousand people and just ask them questions and make that a representation of [all] British Muslims"). The counterintuitive thing about sample size is that, by and large, the *percentage* of the population surveyed is less important than the *absolute number*. A survey of 1,000 Muslims would be *roughly* as informative for most purposes if there were 100,000 Muslims in Britain as if there were 10,000,000, assuming of course the sampling was genuinely random.
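To see why, consider the standard error of a sample proportion, with the finite-population correction applied. A quick sketch (the 50% proportion is just the illustrative worst case, not a figure from the survey):

```python
import math

def standard_error(p, n, N):
    """Standard error of a sample proportion of n people drawn at random
    from a population of N, with the finite-population correction."""
    fpc = math.sqrt((N - n) / (N - 1))
    return math.sqrt(p * (1 - p) / n) * fpc

# The same 1,000-person sample, two wildly different population sizes:
se_small_pop = standard_error(0.5, 1000, 100_000)     # ~0.0157
se_large_pop = standard_error(0.5, 1000, 10_000_000)  # ~0.0158
```

The two standard errors differ only in the fourth decimal place: a hundredfold increase in population size barely changes how much the sample tells us.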

Of course, as ever, what constitutes an 'adequate' sample size is not determined just by the things we're interested in, but equally by the decision we need to make *based* on the information the sample gives us. If the decision we want to make is very risky - if there are sizeable differences in the outcomes of getting it right or wrong - then we need a larger sample size. If not, sample size matters less.
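One way to make this concrete is the textbook formula for the sample size needed to pin a proportion down to a given margin of error - a rough sketch, again using the worst-case 50% proportion:

```python
import math

def required_sample_size(margin, p=0.5, z=1.96):
    """Sample size needed to estimate a proportion to within +/- margin
    at roughly 95% confidence (z = 1.96), assuming simple random sampling."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# A high-stakes decision demanding a +/-1% margin needs far more data
# than a rough +/-5% estimate - and neither depends on population size.
n_strict = required_sample_size(0.01)  # 9604
n_rough = required_sample_size(0.05)   # 385
```

Note that halving the margin of error quadruples the required sample, which is why 'adequate' is always relative to the stakes.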

If there's *no* decision to make, however, then the concept of an 'adequate' sample size simply doesn't apply. It's a bit like the Type I / Type II error distinction - it can only be made meaningful in the context of a decision. Information from a sample *is just information* and should affect our beliefs about the world to an extent determined by *how informative it is* - there isn't a cut-off below which we should simply ignore it. A small sample of a large population won't give us *much* information, but it will give us *some*. As much as classical statistics tries to suppress this notion, we all understand this intuitively and would be incapable of functioning if it were not true: if we had to sample 1000 olives before deciding we didn't like them - just in case we'd been unlucky - we'd spend a lot of time being miserable.
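The olive example can be put in Bayesian terms: even a tiny sample shifts our beliefs, it just shifts them less than a big one would. A sketch using the standard Beta-Binomial update (the uniform prior is an illustrative choice, not a claim about anyone's actual beliefs):

```python
def posterior_mean(liked, disliked, a=1, b=1):
    """Mean of the Beta posterior for 'probability I like an olive',
    starting from a Beta(a, b) prior (uniform by default)."""
    return (a + liked) / (a + b + liked + disliked)

# Disliking just 3 olives already drags the estimate well below 50%...
after_three = posterior_mean(liked=0, disliked=3)       # 0.2
# ...while disliking 1,000 mostly just sharpens the same conclusion.
after_thousand = posterior_mean(liked=0, disliked=1000)  # ~0.001
```

There is no threshold at which the first few olives start to count: every observation moves the estimate, just by a diminishing amount.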

If you're looking for a line-of-attack against a survey with whose conclusions you disagree, hunting for sampling bias is normally a better bet ("It is clear that the areas surveyed are disproportionately poorer and more conservative [and so] may potentially have different views than richer areas"). The great thing about sampling bias is that there is no 'diagnostic test' for it - in other words, you can't pick it up from any feature of the data. Instead, you have to look at the mechanics whereby the data were generated to see if it's likely to produce correlation with the thing you're trying to measure. Among other things, this has the benefit of moving the debate away from discussion of statistical rules-of-thumb and widely-misunderstood concepts like confidence intervals, which can't by themselves ever force us to 'reject' a survey entirely.
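A toy simulation makes the 'no diagnostic test' point vivid. Here is a hypothetical population split into two sub-groups with different true rates of agreement (the group sizes and rates are invented purely for illustration); one sampler over-weights a group, the other doesn't, yet both produce samples that look identical in form:

```python
import random

random.seed(0)

# Hypothetical population: two sub-groups with different true rates of agreement.
group_a = [1 if random.random() < 0.3 else 0 for _ in range(50_000)]  # agrees ~30%
group_b = [1 if random.random() < 0.6 else 0 for _ in range(50_000)]  # agrees ~60%

# A biased sampler that over-represents group B...
biased = random.sample(group_a, 200) + random.sample(group_b, 800)
# ...and an unbiased one drawing uniformly from the whole population.
fair = random.sample(group_a + group_b, 1000)

biased_rate = sum(biased) / len(biased)  # pulled towards group B's 60%
fair_rate = sum(fair) / len(fair)        # near the true overall ~45%
```

Both samples are just lists of 1,000 zeros and ones; nothing *in* the biased sample flags the problem. Only knowledge of how it was drawn reveals the bias - which is exactly why you have to interrogate the sampling mechanism rather than the data.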

When do you give up? All animals deal with the 'sampling' problem all the time - it's fundamental to decision-making in a dynamic world. We continually have to decide whether we've got enough to make a call (to keep eating the fruit, attack the castle, put in an offer on that house, etc.), or whether we need to collect more information first. Thinking statistically can certainly help (and is perhaps necessary) when we are trying to measure the power of information, but any claim that a survey should be *ignored entirely* based on its failure to meet some statistical threshold is almost certainly mistaken.
