Intelligence and Hypotheses

Dan Lewis reports on an interesting side-effect of LED traffic lights - they don't melt the snow that lands in front of them, leading to obscured signals, accidents and so on. Engineers are apparently looking into installing snow detectors coupled to heaters or other such systems. A design change has revealed a hitherto-unnoticed benefit of filament bulbs - their waste heat - which will now have to be provided deliberately by a control system.

Overflows are simple but effective control systems (Photo: Myles Smith)
Simple control systems are everywhere. Overflows regulating water levels, governors on steam engines, crossguards on swords preventing the hand from slipping onto the blade, thermostats switching heating on and off, and fuses burning out in the event of a power surge are all simple mechanical control systems. They fulfil design objectives by causing a difference in system behaviour when certain thresholds are met. Thanks to cheap processors, electronic control systems have become ubiquitous (do people even use the term 'smart phone' or 'smart TV' any more?), but this feels like cheating: the ingenuity of the mechanical systems is so much more striking.
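
As a concrete illustration, here is a minimal sketch (in Python, with invented temperatures and set points) of the kind of threshold logic a thermostat embodies: behaviour changes when a threshold is crossed, and nothing resembling a belief is ever formed.

```python
# A thermostat as a threshold-based (bang-bang) controller: purely reactive,
# with a little hysteresis so the heater doesn't chatter around the set point.
# The temperatures and set points are illustrative, not from any real device.

def thermostat_step(temperature_c, heater_on, low=19.0, high=21.0):
    """Return the new heater state given the current temperature."""
    if temperature_c < low:
        return True          # too cold: switch heating on
    if temperature_c > high:
        return False         # too warm: switch heating off
    return heater_on         # within the band: leave things as they are

heater = False
for reading in [18.2, 18.9, 19.5, 20.4, 21.3, 20.8]:
    heater = thermostat_step(reading, heater)
    print(f"{reading:4.1f} °C -> heater {'on' if heater else 'off'}")
```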

Living organisms have - or perhaps are - abundant control systems. From plant photoreceptors through to the human brain, evolution has found a wealth of solutions to the problem of getting an organism to reproduce, via the acquisition of energy. But from a user perspective, the brain doesn't feel like a mechanical control system. Overflow pipes don't form beliefs about the water level, thermostats don't gather evidence about the temperature, fuses don't make a decision when they burn out, and the traffic lights certainly didn't notice that they weren't melting snow any more. Our goal-seeking behaviour, in contrast, seems to be mediated through hypotheses about the world - we entertain cognitive models, attach probabilities to them, run simulations with them, become attached to them, try to bring them about or avert them.
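
The contrast can be made concrete. Here is an equally toy sketch, assuming a deliberately crude two-hypothesis model of a room's temperature: unlike the thermostat above, this controller maintains a probability over hypotheses ('cold' versus 'warm'), weighs noisy evidence with Bayes' rule, and only then acts on whatever it currently believes.

```python
# A toy hypothesis-driven controller: it maintains a probability over two
# hypotheses about the world ("cold" vs "warm"), updates that belief with
# Bayes' rule as noisy readings arrive, and acts on the belief rather than
# on the raw reading. The likelihoods and readings are purely illustrative.

def likelihood(reading_c, hypothesis):
    """How probable is this reading under each hypothesis? (toy model)"""
    if hypothesis == "cold":
        return 0.8 if reading_c < 20.0 else 0.2
    return 0.2 if reading_c < 20.0 else 0.8

belief = {"cold": 0.5, "warm": 0.5}   # prior: no idea which hypothesis is true

for reading in [18.2, 19.1, 20.6, 21.3]:
    # Bayes' rule: posterior is proportional to likelihood * prior
    unnormalised = {h: likelihood(reading, h) * p for h, p in belief.items()}
    total = sum(unnormalised.values())
    belief = {h: p / total for h, p in unnormalised.items()}
    action = "heat" if belief["cold"] > 0.5 else "idle"
    print(f"{reading:4.1f} °C  P(cold)={belief['cold']:.2f}  -> {action}")
```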

Are we just a more sophisticated version of Dr Nim? There are several possibilities to consider here. One is that the difference is only one of degree: perhaps hypotheses are simply epiphenomena of some underlying, basically-mechanical method of producing decisions that is not really 'about' anything. At the other extreme is the possibility that the hypothesis-driven approach to decision-making is in some way a fully-general end-of-the-line. Perhaps, once you have the ability to construct and evaluate hypotheses, 'better' is simply a matter of the speed and volume with which you do it, not a matter of greater generality. If mechanical control systems work, this theory goes, it's because they do what a hypothesis-driven decision-maker would do under the same circumstances, if it were prepared to waste processing power on problems as simple as preventing a bath from overflowing.

The most interesting possibility, however, is that while hypotheses do represent a genuinely distinct cognitive technology from purely-mechanical systems, they are not fully general, and that there are ways to design an all-purpose, all-environments decision-maker that don't involve anything like hypotheses. Perhaps our present cognitive limitations simply prevent us from imagining it?

A lot depends on which of these possibilities is true. At present, we can understand why artificially-intelligent systems work, even if not exactly how. It is not fallacious to describe (for example) face-recognition algorithms as essentially constructing and testing hypotheses about face-possibilities within images. If artificial intelligences greater than ours operate on the hypothesis paradigm, then we will still be able to understand why they work, impressed though we might be by their speed of operation and the complexity of their hypotheses. But if the hypothesis-based approach is itself surpassed by a better cognitive technology, we might find ourselves at much more of a loss.
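
To illustrate that description (and not any particular library's actual algorithm), a sliding-window detector can be read as generating candidate 'face hypotheses' and testing each against the image evidence; the scoring function below is a hypothetical stand-in for whatever learned model a real detector would use.

```python
# A caricature of hypothesis-testing face detection: enumerate candidate
# regions of the image ("there is a face at (top, left) of this size"),
# score each hypothesis against the pixel evidence, and keep those that
# pass a threshold. `score_face_likelihood` is an assumed stand-in for
# a real detector's learned model; `image` is a 2D list of pixel values.

def detect_faces(image, score_face_likelihood, window=64, stride=32, threshold=0.9):
    height, width = len(image), len(image[0])
    accepted = []
    for top in range(0, height - window + 1, stride):
        for left in range(0, width - window + 1, stride):
            hypothesis = (top, left, window, window)      # a candidate face location
            score = score_face_likelihood(image, hypothesis)
            if score >= threshold:                        # the hypothesis survives testing
                accepted.append((hypothesis, score))
    return accepted
```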