Is Unsupervised Learning Actually Learning?

Read any introduction to machine learning (e.g. this, or this, or this) and you'll quickly encounter an apparently significant distinction: between supervised and unsupervised learning. The general impression given is that the distinction is a simple one: if you have the 'right answers', you can do supervised learning; if you don't, you'll be doing unsupervised learning. Unsupervised learning, so the story goes, is about 'finding hidden structures in the data', and is perhaps 'how a child learns'; common examples of its utility include finding multidimensional clusters in data features or finding the most information-rich elements of a dataset in order to prune away the fat. This story gives the unmistakable impression that unsupervised learning is basically like supervised learning, but better, because you don't need the right answers to begin with. An unsupervised learner can apparently toddle off by itself and 'get on with it'.
Reading these kinds of explanations, someone new to machine learning might wonder why we bother with supervised learning at all.
The answer is that, despite its name, unsupervised learning isn't in general actually 'learning' at all. This claim doesn't hinge on a philosophical nicety: it follows quite straightforwardly from the definition of 'learning' that you'll find in the same kinds of articles alluded to above. Tom Mitchell's 1997 definition of 'machine learning' states that: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E."
The key here is the 'performance measure', P. All learning, and indeed all machine intelligence, requires some kind of objective that is being optimised against: a task that needs to be done, and which the machine is designed to improve at performing. An image classifier's objective is to correctly predict the 'right' answers to, say, whether there is a cat in a photo. An entity extraction algorithm's objective might be to correctly predict whether or not a text string refers to a person. A house price regression's objective is to get (by some measure or other) 'closer' to the actual price of a house sold based on its features. A bipedal robot's objective (or rather, that of its motor control software) might be to get as far up a set of steps as possible.
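To make the idea of a performance measure P concrete, here is a minimal sketch (illustrative only; a real pipeline would use a library such as scikit-learn) of two of the measures mentioned above: accuracy for the cat-photo classifier, and mean squared error for the house-price regression.

```python
def accuracy(predicted, actual):
    """Fraction of correct predictions - a P for the image classifier."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

def mean_squared_error(predicted, actual):
    """Average squared error - a P for the house-price regression."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

print(accuracy(["cat", "no cat", "cat"], ["cat", "cat", "cat"]))   # 2 of 3 correct
print(mean_squared_error([250_000, 310_000], [240_000, 300_000]))  # both off by 10,000
```

In both cases P only exists because the 'right answers' (the `actual` arguments) exist: without them, there is nothing to compute.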
A learner's objectives aren't necessarily simple or linear: a false positive, for example, might be more costly than a false negative, and the cost of wrongness in a numerical estimate could take any functional form. But the point is that there is some performance measure, which must ultimately depend on the 'right answers' (the classification of a photo, the meaning of a text string, the price of a house, proximity to the top of the stairs), and 'learning' just means a positive relationship between this measure and the experience available to the algorithm. (An example of a non-learning artificial intelligence might be the pathfinding algorithm in a typical satnav: although it might improve with successive software upgrades, it is not designed to get better over time, say by experimenting with and learning from different routes.)
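The point about non-linear objectives can be sketched as a tiny asymmetric cost function, in which a false positive is (hypothetically) five times as costly as a false negative; the weights here are made up for illustration.

```python
FP_COST, FN_COST = 5.0, 1.0  # illustrative weights: false positives cost 5x more

def asymmetric_cost(predicted, actual):
    """Total cost of a set of boolean predictions under asymmetric error weights."""
    cost = 0.0
    for p, a in zip(predicted, actual):
        if p and not a:
            cost += FP_COST  # false positive: predicted True, actually False
        elif a and not p:
            cost += FN_COST  # false negative: predicted False, actually True
    return cost

# The same number of errors can carry very different costs:
print(asymmetric_cost([True, True], [False, False]))    # two false positives -> 10.0
print(asymmetric_cost([False, False], [True, True]))    # two false negatives -> 2.0
```

Whatever its shape, the cost function still needs the `actual` values: the 'right answers' remain indispensable.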
So - what is the 'performance measure' for an unsupervised learner? There are a number of possible answers, but all of them suggest that either the machine isn't learning, or that it's actually doing supervised learning.
[Image: Is he really unsupervised? (Source: Wikipedia)]

First, what if there isn't a performance measure? In this case, we can straightforwardly rule out the idea that the machine is learning, simply by the definition given above. What can it be said to be getting better at, if there is no 'better'? Typically, what is really going on in the absence of a performance measure is simply data processing - finding, for example, the most likely co-ordinates of clusters in the data features, or finding functions that succinctly relate the values of one data feature to those of another. But although we might be learning from the algorithm's output, the algorithm itself isn't. By the same token, we might learn about the weight of some rice using kitchen scales, but the scales aren't going to get any more accurate as a result.
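Clustering is a good illustration of this 'data processing' point. The minimal one-dimensional k-means sketch below (illustrative, not production code) produces useful output - cluster centroids - by optimising an internal objective, but with no external 'right answers' there is nothing it can be said to be getting better *at* from one run to the next.

```python
def kmeans_1d(points, k=2, iterations=10):
    """Naive 1-D k-means: alternate assigning points to the nearest
    centroid and moving each centroid to its cluster's mean."""
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for x in points:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            clusters[nearest].append(x)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 10.0]))  # roughly [1.0, 9.5]
```

The output may well teach *us* something about the data; the algorithm itself ends the run exactly as capable as it began.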
What if there is a performance measure, but the machine doesn't have access to it because it's not represented in the data? What if, for example, we are able to say that the clusters identified are clearly appropriate or inappropriate, or that the residuals for a regression are nice and messy, or too neat and thus indicative of a mis-specified functional form? Again, however, this isn't the algorithm 'learning': it's us learning, using the algorithm; any improvements made are us learning about the quality of the algorithm, not the algorithm learning about the behaviour of the data.
What if the performance measure is provided by the 'real world'? For example, what if we designed a satnav pathfinder that would measure the actual time taken by its suggested routes, compare it to its estimates, and thereby improve its ability to estimate and minimise time taken? Well, this is supervised learning by the back door: we aren't giving it the right answers (the 'training data') directly, but we are giving it the means to find its own right answers. In terms of the algorithm, it doesn't make any real difference whether the right answers are provided by a human programmer or by remote sensors.
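A sketch of this 'supervised learning by the back door' (a hypothetical example, not a real satnav's method): the estimator predicts a travel time, a sensor reports the actual time, and the actual time plays exactly the role a training label would.

```python
class RouteTimeEstimator:
    """Learns a seconds-per-km rate from observed journey times."""

    def __init__(self, seconds_per_km=60.0, learning_rate=0.1):
        self.seconds_per_km = seconds_per_km
        self.learning_rate = learning_rate

    def predict(self, distance_km):
        return self.seconds_per_km * distance_km

    def observe(self, distance_km, actual_seconds):
        # The 'real world' supplies the right answer; updating on the
        # prediction error is ordinary supervised learning.
        error = actual_seconds - self.predict(distance_km)
        self.seconds_per_km += self.learning_rate * error / distance_km

est = RouteTimeEstimator()
for _ in range(50):
    est.observe(10.0, 900.0)  # 10 km routes keep taking 900 seconds
print(est.seconds_per_km)     # converges towards 90.0 seconds per km
```

Nothing in the update step cares whether `actual_seconds` came from a human-labelled training set or a clock on the dashboard: the labels are labels either way.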
In summary, 'unsupervised learning' is a misleading term. If there's no performance measure, or there's a 'secret' performance measure that the machine is unaware of, it won't be learning. If there is a performance measure, and it's in the data somewhere, then it's not unsupervised.