I discovered an example of Stigler’s Law

May 30, 2010

from Wikipedia:

Stigler’s Law of Eponymy is a process proposed by University of Chicago statistics professor Stephen Stigler in his 1980 publication “Stigler’s law of eponymy”.[1] In its simplest and strongest form it says: “No scientific discovery is named after its original discoverer.” Stigler attributes its discovery to sociologist Robert K. Merton (which makes the law self-referencing).

So what is the example I discovered?*  I’ve often heard someone mention one, but never more than one, of the following: Campbell’s Law (1976), Goodhart’s Law (1975), and the Lucas Critique (1976).  Just recently I came across a reference to “Steve Kerr’s classic 1975 article, [On the Folly of Rewarding A While Hoping for B]”.

OK, so they aren’t identical (the Lucas critique probably has the most distinctive content), but they are all closely related.  I looked over each original article (except the one for Goodhart’s Law, which doesn’t appear to be online) and none of them cites the others.  I’m sure that with some research one would find that any one of these alone would be an example of Stigler’s Law; that is, we could find very similar insights from someone like Adam Smith or J.S. Mill.

A couple questions:

1. Had the authors been exposed to the others’ similar ideas at the time they published theirs?  Did they make a conscious decision not to cite?  (Note that while the publications have dates, we have no idea when each author first had their idea.)

2. Let’s assume that none of them got their idea directly from one of the others.  What does it say that they each published their idea in 1975-76?  Is it unusual that multiple people had similar ideas, later recognized to be important, at the same time?  Or is it, as I suspect, merely unusual that all four different people are recognized?  I wish I could be chatting with Robert Merton about this.

* While I’m a little proud of the fact that I noticed these similarities, and that I edited Wikipedia to link the first three ideas, I hope it’s obvious that my claim to having “discovered” this example is intentionally ironic.


Babcock replies on College Slackers

May 25, 2010

Philip Babcock was kind enough to reply to my previous post about his research.  This is the second time a scholar I don’t know personally has responded to a blog post I wrote.*  How excellent!  Let me take this occasion to say explicitly something I was thinking, and should have emphasized, when I initially wrote the post.**  I believe Babcock’s and Marks’s central finding, that college students spend much less time studying than they did in the past, is an important discovery.  Sure, some scholars of education must have had an idea that study time has been declining, but when one considers how many numbers have been crunched and how much ink has been spilled in the name of understanding education, it is shocking to realize that a question as fundamental as the amount of time students spend studying has been paid so little attention.  The authors deserve a great deal of credit for tracking down multiple datasets in an attempt to answer an important question.  Important follow-up questions include: why? and, how should we feel about it?  See the old post for a little discussion of those issues.

*Should I email someone every time I discuss their work?  I tried that for one of the posts on this blog and got no reply.

**I think it is enormously important to criticize and attach qualifications to other people’s research, in fact, I think social science suffers from too little good criticism.  But too little appreciation may be an equally big problem.


The Credibility Revolution in Econometrics

May 13, 2010

Angrist and Pischke are on a tear.  They’re bringing econometrics to the masses with their new book, and the editors of the Journal of Economic Perspectives have seen fit to publish a debate around their article assessing the state of econometrics.  A&P claim, and I more or less agree, that microeconometrics has undergone an inspiring “credibility revolution.”

The best summary I’ve found of their article is by Austin Frakt, here.  Arnold Kling comments here.  Andrew Gelman reviewed their textbook positively and constructively here.

Angrist’s website gave ungated links to most of the comments on his paper:

Michael Keane, Edward Leamer, Aviv Nevo and Michael Whinston, Christopher Sims, and James Stock

Added 6/3:

Austin Frakt reviews Mostly Harmless Econometrics.

Mostly Harmless Econometrics has a blog!


The Abuse of Language

May 11, 2010

I cannot rigorously define “the abuse of language,” but I can offer one example:

Arnold Kling recently asked his blog readers whether they belong to the Church of Unlimited Government.  Sounds pretty bad to me; I don’t think I want any part of that.  But wait: though he never defines it carefully, it seems Kling would put you in the Church of Unlimited Government unless you value limited government for its own sake.

In other words, you could favor school vouchers, privatizing the post office, and cutting the military budget in half, but if you favored those proposals because (and only because) you thought they’d have good consequences (e.g. better schools, lower taxes, better foreign relations, etc.) then you could still be accused of belonging to the Church of Unlimited Government.

It is understandable, and unavoidable, that people will frame issues to make their views sound appealing, but hopefully social scientists can enforce a norm of using more mutually acceptable language.  It is a part of debating charitably.

Anyone want to offer another example of the abuse of language?


Social Network Packages Poll

May 6, 2010

Gabriel Rossman is running it here.


Declining Standards in Higher Education

May 4, 2010

In a paper entitled “Leisure College, USA,” Philip Babcock and Mindy Marks have documented dramatic declines in study effort since 1961, from 24 down to 14 hours per week.  This decline occurred at all different sorts of colleges and is not a result of students working for pay.

At the same time, colleges are handing out better grades.  In other work, Babcock presents strongly suggestive evidence that the two phenomena are related.  That is, lower grading standards lead to less studying.  They also lead students to give better course evaluations.

To me this looks like evidence of big problems in higher education, though I’d love someone to convince me otherwise.

Andrew Perrin has been a leader in developing an institutional response to concerns about grading.  See his original scatterplot post on the topic, “grades: inflation, compression, and systematic inequalities,” as well as the more recent scatterplot discussion.

ADDED 5/4:

Fabio at Orgtheory considers four possible explanations.  I’ll quote him:

  1. Student body composition – there are more colleges than before and even the most elite ones have larger class sizes.
  2. Technology – the Internet + word processing makes assignments much easier to do.
  3. Vocationalism – If the only reason you are in college is for a job, and this has been true for the modal freshman for decades now, you do the minimum.
  4. Grade inflation – ’nuff said.

To address them in reverse order: Fabio thinks he can rule out grade inflation because even students in hard majors report studying less.  I gather he’s arguing that if students in disciplines with really tough (uninflated?) grading are studying less, then it seems arbitrary to posit one unnamed cause in those disciplines and a separate cause (grade inflation) in the other disciplines.  I’m not sure that argument, with that data, is strong enough to convince me.  I’m not saying that grade inflation explains 100% of the change.  My guess is that it explains some of it, but that both phenomena have common and distinct causes.

Fabio’s favored explanations are vocationalism and technology.  I don’t really like either of them.  First, I don’t know that it’s true that those seeking more career-oriented education do the minimum.  Second, as Fabio mentioned, the authors claim the drop-off is similar across courses of study (though I’m not sure how fine-grained that data is).  As for the idea that technology makes studying more efficient, most of the decline in studying had already occurred by the mid-eighties, before email and the web.

A priori I would have predicted the effect was mostly explained by change in the composition of colleges and college students, but the authors claim that the trend was similar among highly competitive colleges.

Any other theories?

ADDED 5/5:

I should have mentioned this before.  The authors are analyzing different surveys with somewhat different methodologies and then attempting to make them comparable.  They lean pretty heavily on the 1961 Project Talent survey.  If that is, for whatever reason, an overestimate, the decline might be far less dramatic.  Ungated version of the paper here.

ADDED 5/6:

After a closer look at the paper, I don’t think the data is fine-grained enough to show that today’s students who are similar to those who attended in 1961 (i.e. privileged students at top schools) are studying less, or at least not much less.  Therefore one cannot rule out the theory that much or most of the decline is due to compositional change.  I wish the authors had made their agreement or disagreement with my assessment clearer, because I think it is of fundamental importance in interpreting the trend.

Philip Babcock & Mindy Marks (2010). “The Falling Time Cost of College: Evidence from Half a Century of Time Use Data.” NBER Working Paper No. 15954 (April).