This past weekend I found myself listening to This American Life, a quirky show that tells a variety of stories about the American experience. The most recent show included a discussion of the potential and pitfalls of economic forecasting. And, as it turns out, predictive models of the national economy aren’t very good- their margins of error are wide enough to straddle the range from “sluggish with rising unemployment” to “robust with decreasing unemployment.” That’s a little bit like going to your doctor and being told that, given your test results, you’re either going to live for another thirty years or be dead in six months. Most of us would probably not find such a prognosis terribly useful. Yet what emerged on the show was not only an acknowledgement that economic forecasting is chancy at best- there was actually some discussion that predictions should be made along the lines of “growth will be two-ish percent”- but a wry commentary on the degree of precision in those estimates. Indeed, while the margin of error is so wide as to encompass both boom and bust, the predictions themselves regularly include two or more decimal places. It is as though the doctor had said that you will live for either thirty years and one hundred days, or six months and eight hours. By the time the range is that large, including the extra bits seems a tad silly. Yet, pointless or not, the precision is in the estimates and, more to the point, is actually demanded by the consumers. Even though the people who use these forecasts are aware of how inaccurate they can be, they nevertheless seem to want all those extraneous decimal places.
The commentators on This American Life tried to tackle the question of why, but they didn’t get very far. I won’t get very far either. But it seems to me that the desire for those decimal places stems from a belief that measuring and analyzing something using mathematics necessarily makes it more accurate, reliable, or even useful. And if you believe that, then obviously using math with more decimal places must work even better, right? Well, no, and that’s the problem. Measuring something using mathematics doesn’t make it more accurate or more reliable; it just makes it numeric. And it can be easy to lose sight of that.
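To make the point concrete, here is a small sketch- with invented numbers, not real forecasts- of how little those extra decimal places buy once the margin of error is taken seriously:

```python
# Illustrative only: the forecast and margin below are invented,
# not drawn from any real economic model.
forecast = 2.37   # headline growth forecast, in percent
margin = 2.0      # margin of error, in percentage points

low, high = forecast - margin, forecast + margin
print(f"Reported:  {forecast:.2f}%")
print(f"Interval:  {low:.2f}% to {high:.2f}%")

# The interval runs from near-stagnation to a boom. At that width,
# "two-ish percent" carries the same information as 2.37 does.
```

The two decimal places in the headline number are real digits, but they describe the point estimate, not our knowledge; the interval is what tells us how much we actually know.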
When we measure something, we’re making certain assumptions. We’re assuming, for example, that a good indicator of economic health is your salary from primary employment, or the value of your home. When we analyze something mathematically, we assume that the thing itself behaves in a certain way- that it has a linear effect on the dependent variable, for example, or that it follows an appropriate distribution. Sometimes we can confirm these assumptions, sometimes we can’t, but they’re always there and they always influence what our results actually mean. Mathematics can be an enormous benefit to the research process- and I am a firm believer in its use- but the elegance of our models will never relieve us of the burden of clarifying our ideas.
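The linearity assumption is a good example of how the math will happily answer the question you asked rather than the one you meant. A minimal sketch, using made-up data where the true relationship is quadratic:

```python
# Illustrative sketch with invented data: fit a straight line to a
# relationship that is actually quadratic. The least-squares arithmetic
# runs without complaint and returns precise-looking coefficients.
xs = list(range(-5, 6))
ys = [x**2 for x in xs]        # the true relationship is y = x^2

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(f"slope = {slope:.4f}, intercept = {intercept:.4f}")
# On this symmetric data the fitted slope is exactly zero: the linear
# model reports "no effect" for a variable that completely determines
# the outcome. The numbers are precise; the assumption was wrong.
```

Nothing in the output flags the problem- the precision is all on the surface, and the error lives in the assumption we made before any arithmetic happened.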
And this is something that we would do well to remember and, particularly, to remind our grad students of. Often grad students (and I include myself in this, once upon a time) are seduced by the apparent power of modern mathematical methods. On first exposure to multiple regression or formal models, it may seem as though they have been given the keys to heaven, and, like the sorcerer’s apprentice, they may try to use them with reckless abandon. But math isn’t magic, and its answers are no better than the questions put to it.
Good quantitative analysis isn’t just about pushing the right buttons and running the right programs; it’s about having the awareness to really think about what you’re doing and what it means.