Paul Krugman has spent a lot of time recently criticizing economists associated with the Chicago School because he believes they are making horrible public policy recommendations and, importantly, misusing mathematical models, e.g., the oft-criticized dynamic stochastic general equilibrium (DSGE) models. But Krugman has written a lot that is equally interesting, and less political.
Today I’d like to draw your attention to an article he wrote in 1994 entitled “The Fall and Rise of Development Economics.” It’s absolutely worth reading the whole thing, but I’ll summarize and excerpt a huge chunk from the middle of it for those of you who are really pressed for time. Krugman tells us that rigorous modeling is very important for economics, but he also argues that the new (1950s and 1960s) emphasis on modeling led economists to forget or ignore insights about development that had been known for years, until someone figured out how to model them.
After the break I’m pasting the middle section of the essay, essential reading on metaphors and models:
…I have just acknowledged that the tendency of economists to emphasize what they know how to model formally can create blind spots; yet I have also claimed that the insistence on modeling is basically right. What I want to do now is call a time out and discuss more broadly the role of models in social science.
It is said that those who can, do, while those who cannot, discuss methodology. So the very fact that I raise the issue of methodology in this paper tells you something about the state of economics. Yet in some ways the problems of economics and of social science in general are part of a broader methodological problem that afflicts many fields: how to deal with complex systems.
It is in a way unfortunate that for many of us the image of a successful field of scientific endeavor is basic physics. The objective of the most basic physics is a complete description of what happens. In principle and apparently in practice, quantum mechanics gives a complete account of what goes on inside, say, a hydrogen atom. But most things we want to analyze, even in physical science, cannot be dealt with at that level of completeness. The only exact model of the global weather system is that system itself. Any model of that system is therefore to some degree a falsification: it leaves out some (many) aspects of reality.
How, then, does the meteorological researcher decide what to put into his model? And how does he decide whether his model is a good one? The answer to the first question is that the choice of model represents a mixture of judgement and compromise. The model must be something you know how to make — that is, you are constrained by your modeling techniques. And the model must be something you can construct given your resources — time, money, and patience are not unlimited. There may be a wide variety of models possible given those constraints; which one or ones you choose actually to build depends on educated guessing.
And how do you know that the model is good? It will never be right in the way that quantum electrodynamics is right. At a certain point you may be good enough at predicting that your results can be put to repeated practical use, like the giant weather-forecasting models that run on today’s supercomputers; in that case predictive success can be measured in terms of dollars and cents, and the improvement of models becomes a quantifiable matter. In the early stages of a complex science, however, the criterion for a good model is more subjective: it is a good model if it succeeds in explaining or rationalizing some of what you see in the world in a way that you might not have expected.
Notice that I have not specified exactly what I mean by a model. You may think that I must mean a mathematical model, perhaps a computer simulation. And indeed that’s mostly what we have to work with in economics. But a model can equally well be a physical one, and I’d like to describe briefly an example from the pre-computer era of meteorological research: Fultz’s dish-pan.
Dave Fultz was a meteorological theorist at the University of Chicago, who asked the following question: what factors are essential to generating the complexity of actual weather? Is it a process that depends on the full complexity of the world — the interaction of ocean currents and the atmosphere, the locations of mountain ranges, the alternation of the seasons, and so on — or does the basic pattern of weather, for all its complexity, have simple roots?
He was able to show the essential simplicity of the weather’s causes with a “model” that consisted of a dish-pan filled with water, placed on a slowly rotating turntable, with an electric heating element bent around the outside of the pan. Aluminum flakes were suspended in the water, so that a camera perched overhead and rotating with the pan could take pictures of the pattern of flow.
The setup was designed to reproduce two features of the global weather pattern: the temperature differential between the poles and the equator, and the Coriolis force that results from the Earth’s spin. Everything else — all the rich detail of the actual planet — was suppressed. And yet the dish-pan exhibited an unmistakable resemblance to actual weather patterns: a steady flow near the rim evidently corresponding to the trade winds, constantly shifting eddies reminiscent of temperate-zone storm systems, even a rapidly moving ribbon of water that looked like the recently discovered jet stream.
What did one learn from the dish-pan? It was not telling an entirely true story: the Earth is not flat, air is not water, the real world has oceans and mountain ranges and for that matter two hemispheres. The unrealism of Fultz’s model world was dictated by what he was able to or could be bothered to build — in effect, by the limitations of his modeling technique. Nonetheless, the model did convey a powerful insight into why the weather system behaves the way it does.
The important point is that any kind of model of a complex system — a physical model, a computer simulation, or a pencil-and-paper mathematical representation — amounts to pretty much the same kind of procedure. You make a set of clearly untrue simplifications to get the system down to something you can handle; those simplifications are dictated partly by guesses about what is important, partly by the modeling techniques available. And the end result, if the model is a good one, is an improved insight into why the vastly more complex real system behaves the way it does.
When it comes to physical science, few people have problems with this idea. When we turn to social science, however, the whole issue of modeling begins to raise people’s hackles. Suddenly the idea of representing the relevant system through a set of simplifications that are dictated at least in part by the available techniques becomes highly objectionable. Everyone accepts that it was reasonable for Fultz to represent the Earth, at least for a first pass, with a flat dish, because that was what was practical. But what do you think about the decision of most economists between 1820 and 1970 to represent the economy as a set of perfectly competitive markets, because a model of perfect competition was what they knew how to build? It’s essentially the same thing, but it raises howls of indignation.
Why is our attitude so different when we come to social science? There are some discreditable reasons: like Victorians offended by the suggestion that they were descended from apes, some humanists imagine that their dignity is threatened when human society is represented as the moral equivalent of a dish on a turntable. Also, the most vociferous critics of economic models are often politically motivated. They have very strong ideas about what they want to believe; their convictions are essentially driven by values rather than analysis, but when an analysis threatens those beliefs they prefer to attack its assumptions rather than examine the basis for their own beliefs.
Still, there are highly intelligent and objective thinkers who are repelled by simplistic models for a much better reason: they are very aware that the act of building a model involves loss as well as gain. Africa isn’t empty, but the act of making accurate maps can get you into the habit of imagining that it is. Model-building, especially in its early stages, involves the evolution of ignorance as well as knowledge; and someone with powerful intuition, with a deep sense of the complexities of reality, may well feel that from his point of view more is lost than is gained. It is in this honorable camp that I would put Albert Hirschman and his rejection of mainstream economics.
The cycle of knowledge lost before it can be regained seems to be an inevitable part of formal model-building. Here’s another story from meteorology. Folk wisdom has always said that you can predict future weather from the aspect of the sky, and has claimed that certain kinds of clouds presage storms. As meteorology developed in the 19th and early 20th centuries, however — as it made such fundamental discoveries, completely unknown to folk wisdom, as the fact that the winds in a storm blow in a circular path — it basically stopped paying attention to how the sky looked. Serious students of the weather studied wind direction and barometric pressure, not the pretty patterns made by condensing water vapor.
It was not until 1919 that a group of Norwegian scientists realized that the folk wisdom had been right all along — that one could identify the onset and development of a cyclonic storm quite accurately by looking at the shapes and altitude of the cloud cover.
The point is not that a century of research into the weather had only reaffirmed what everyone knew from the beginning. The meteorology of 1919 had learned many things of which folklore was unaware, and dispelled many myths. Nor is the point that meteorologists somehow sinned by not looking at clouds for so long. What happened was simply inevitable: during the process of model-building, there is a narrowing of vision imposed by the limitations of one’s framework and tools, a narrowing that can only be ended definitively by making those tools good enough to transcend those limitations.
But that initial narrowing is very hard for broad minds to accept. And so they look for an alternative.
The problem is that there is no alternative to models. We all think in simplified models, all the time. The sophisticated thing to do is not to pretend to stop, but to be self-conscious — to be aware that your models are maps rather than reality.
There are many intelligent writers on economics who are able to convince themselves — and sometimes large numbers of other people as well — that they have found a way to transcend the narrowing effect of model-building. Invariably they are fooling themselves. If you look at the writing of anyone who claims to be able to write about social issues without stooping to restrictive modeling, you will find that his insights are based essentially on the use of metaphor. And metaphor is, of course, a kind of heuristic modeling technique.
In fact, we are all builders and purveyors of unrealistic simplifications. Some of us are self-aware: we use our models as metaphors. Others, including people who are indisputably brilliant and seemingly sophisticated, are sleepwalkers: they unconsciously use metaphors as models.