Professor Quality and Professor Evaluation

June 11, 2010

If you wanted to be more objective about student and professor evaluation, you would have standardized measures of student performance across professors.  In the rare case in which this is done, we learn all sorts of fascinating things, including things which raise questions about the unintended consequences of our evaluation systems.

Tyler Cowen points me to a paper in the Journal of Political Economy, by Scott E. Carrell and James E. West [ungated version].

In the U.S. Air Force Academy, students are randomly assigned to professors but all take the same final exam.  What makes the data really interesting is that there are mandatory follow-up courses, so you can see the relationship between which Calculus I professor you had and your performance in Calculus II!  Here’s the summary sentence that Tyler quotes:

The overall pattern of the results shows that students of less experienced and less qualified professors perform significantly better in the contemporaneous course being taught.  In contrast, the students of more experienced and more highly qualified introductory professors perform significantly better in the follow-on courses.
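
To get a feel for what this design lets you do, here is a minimal sketch in Python of the kind of comparison involved, with entirely hypothetical column names (prof_intro, score_calc1, score_calc2); it is not the authors’ actual specification, which includes controls and clustered standard errors.

```python
# Minimal sketch, NOT Carrell and West's specification: random assignment is
# what lets us read the professor dummies as causal effects here.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("grades.csv")  # hypothetical file: one row per student

# Each intro professor's effect on the contemporaneous Calculus I exam.
fe_calc1 = smf.ols("score_calc1 ~ C(prof_intro)", data=df).fit().params

# The same professors' effects on the follow-on Calculus II exam.
fe_calc2 = smf.ols("score_calc2 ~ C(prof_intro)", data=df).fit().params

# Compare the two sets of professor effects; the paper's finding corresponds
# to a negative correlation between them.
profs = [p for p in fe_calc1.index if p.startswith("C(prof_intro)")]
effects = pd.concat([fe_calc1[profs], fe_calc2[profs]], axis=1,
                    keys=["calc1_effect", "calc2_effect"])
print(effects.corr())
```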

The paper has a nice graph illustrating this pattern.

Student evaluations, unsurprisingly, laud the professors who raise performance in the initial course.  The surprising thing is that this is negatively correlated with later performance.  In my post on Babcock’s and Marks’s research, I touched on the possible unintended consequences of student evaluations of professors.  This paper gives new reasons for concern (not to mention much additional evidence, e.g. that physical attractiveness strongly boosts student evaluations).

That said, the scary thing is that even with random assignment, rich data, and careful analysis there are multiple, quite different, explanations.

The obvious first possibility is that inexperienced professors (perhaps under pressure to get good teaching evaluations) focus strictly on teaching students what they need to know for good grades.  More experienced professors teach a broader curriculum, the benefits of which you might have to take on faith but needn’t, because their students do better in the follow-up course!

But the authors mention a couple other possibilities:

For example, introductory professors who “teach to the test” may induce students to exert less study effort in follow-on related courses.  This may occur due to a false signal of one’s own ability or from an erroneous expectation of how follow-on courses will be taught by other professors.  A final, more cynical, explanation could also relate to student effort.  Students of low value added professors in the introductory course may increase effort in follow-on courses to help “erase” their lower than expected grade in the introductory course.

Indeed, I think there is a broader phenomenon.  Professors who are “good” by almost any objective measure will have induced their students to put more time and effort into their course.  How much this takes away from students’ efforts in other courses is an essential question I have never seen addressed.  Perhaps additional analysis of the data could shed some light on this.
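
If one had grades in students’ other concurrent courses, a crude version of that check might look like the following sketch (hypothetical variable names, not an analysis from the paper): regress performance in other courses on the estimated contemporaneous value added of the student’s Calculus I professor and look for a negative coefficient.

```python
# Hypothetical crowd-out check; intro_prof_va and other_gpa are assumed
# variables, not ones used by Carrell and West.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("concurrent_grades.csv")  # one row per student

# other_gpa: average grade in concurrent non-calculus courses
# intro_prof_va: first-stage estimate of the Calculus I professor's
#                contemporaneous value added
fit = smf.ols("other_gpa ~ intro_prof_va", data=df).fit()
print(fit.params["intro_prof_va"])  # negative would suggest effort crowd-out
```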

Carrell, S., & West, J. (2010). Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors. Journal of Political Economy, 118(3), 409-432. DOI: 10.1086/653808

Added: Jeff Ely has an interesting take: In Defense of Teacher Evaluations.

Added 6/17: Another interesting take from Forest Hinton.


Babcock replies on College Slackers

May 25, 2010

Philip Babcock was kind enough to reply to my previous post about his research.  This is the second time a scholar I don’t know personally has responded to a blog post I wrote.*  How excellent!  Let me take this occasion to say explicitly something I was thinking, and should have emphasized, when I initially wrote the post.**  I believe Babcock’s and Marks’s central finding, that college students spend much less time studying than they did in the past, is an important discovery.  Sure, some scholars of education must have had an idea that study time has been declining, but when one considers how many numbers have been crunched and how much ink has been spilled in the name of understanding education, it is shocking to realize that a question as fundamental as how much time students spend studying has received so little attention.  The authors deserve a great deal of credit for tracking down multiple datasets in an attempt to answer an important question.  Important follow-up questions include: why? and, how should we feel about it?  See the old post for a little discussion of those issues.

*Should I email someone every time I discuss their work?  I tried that for one of the posts on this blog and got no reply.

**I think it is enormously important to criticize and attach qualifications to other people’s research; in fact, I think social science suffers from too little good criticism.  But too little appreciation may be an equally big problem.


Declining Standards in Higher Education

May 4, 2010

In a paper entitled “Leisure College, USA,” Philip Babcock and Mindy Marks have documented a dramatic decline in study effort since 1961, from 24 down to 14 hours per week.  This decline occurred at all different sorts of colleges and is not a result of students working for pay.

At the same time, colleges are handing out better grades.  In other work, Babcock presents strongly suggestive evidence that the two phenomena are related.  That is, lower grading standards lead to less studying.  They also lead students to give better course evaluations.

To me this looks like evidence of big problems in higher education, though I’d love someone to convince me otherwise.

Andrew Perrin has been a leader in developing an institutional response to concerns about grading.  See his original scatterplot post on the topic, “grades: inflation, compression, and systematic inequalities,” as well as the more recent scatterplot discussion.

ADDED 5/4:

Fabio at Orgtheory considers four possible explanations.  I’ll quote him:

  1. Student body composition – there are more colleges than before and even the most elite ones have larger class sizes.
  2. Technology – the Internet + word processing makes assignments much easier to do.
  3. Vocationalism – If the only reason you are in college is for a job, and this has been true for the modal freshman for decades now, you do the minimum.
  4. Grade inflation – ’nuff said.

To address them in reverse order: Fabio thinks he can rule out grade inflation because even students in hard majors report studying less.  I gather he’s arguing that if students in disciplines with really tough (uninflated?) grading are studying less, then it seems arbitrary to posit one unnamed cause in those disciplines and a separate cause (grade inflation) in the other disciplines.  I’m not sure that argument and those data are strong enough to convince me.  I’m not saying that grade inflation explains 100% of the change.  My guess is that it explains some of it, but that the two phenomena have both common and distinct causes.

Fabio’s favored explanations are vocationalism and technology.  I don’t really like either of them.  First, I don’t know that it’s true that those seeking a more career-oriented education do the minimum.  Second, as Fabio mentioned, the authors claim the dropoff is similar across courses of study (though I’m not sure how fine-grained that data is).  As for the idea that technology makes studying more efficient, most of the decline in studying had already occurred by the mid-eighties, before email and the web.

A priori I would have predicted the effect was mostly explained by change in the composition of colleges and college students, but the authors claim that the trend was similar among highly competitive colleges.

Any other theories?

ADDED 5/5:

I should have mentioned this before.  The authors are analyzing different surveys with somewhat different methodologies and then attempting to make them comparable.  They lean pretty heavily on the 1961 Project Talent survey.  If that is, for whatever reason, an overestimate, the decline might be far less dramatic.  Ungated version of the paper here.

ADDED 5/6:

After a closer look at the paper, I don’t think the data is fine-grained enough to show that today’s students who are similar to those who attended in 1961 (i.e., privileged students at top schools) are studying less, or at least not much less.  Therefore one cannot rule out the theory that much or most of the decline is due to compositional change.  I wish the authors had made their agreement or disagreement with my assessment clearer, because I think it is of fundamental importance in interpreting the trend.
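
To illustrate what the compositional story would require (with made-up numbers, purely for the arithmetic, not estimates from the paper): hold the 1961 mix of student types fixed, apply today’s study hours for each type, and see how much of the 24-to-14-hour drop remains.

```python
# Back-of-the-envelope reweighting with invented shares and hours,
# only to show the logic of the compositional-change argument.
import pandas as pd

types = pd.DataFrame({
    "type":        ["1961-style (selective, full-time)", "other"],
    "share_1961":  [0.6, 0.4],    # invented composition in 1961
    "share_today": [0.3, 0.7],    # invented composition today
    "hours_today": [18.0, 12.0],  # invented study hours today, by type
})

actual_today = (types["share_today"] * types["hours_today"]).sum()
reweighted   = (types["share_1961"] * types["hours_today"]).sum()
print(actual_today, reweighted)  # compare each to the 24 hours reported for 1961
```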

Philip Babcock & Mindy Marks (2010). The Falling Time Cost of College: Evidence from Half a Century of Time Use Data. NBER Working Paper No. 15954 (April).


an aside on asides

January 4, 2010

For as long as i’ve read books (which admittedly isn’t nearly as long as most people in this line of work have), i’ve had a strong preference in the whole endnotes versus footnotes debate.* I actually read them. So, i find it incredibly annoying when they’re all tucked away in the back of the book somewhere, nowhere near the content to which they actually apply. I’m sure there are folks out there who prefer endnotes; if that’s you, can you share why? I’m genuinely curious.


standards in publishing

November 24, 2009

In the comments on a previous post, dadakim raises a pertinent question about a publishing practice that hasn’t (yet?) been adopted in sociology (other than by SMR*, as far as i know). I re-raise it here, in case you missed it, because i’d be interested in reader reactions to the idea.

But the motivation for this post was actually an unrelated publishing issue that has been bugging me for a while. Why is it that news articles that mention scientific research don’t have to detail their sources? This is one practice i’ve never understood. I get elated when i see articles that actually go ahead and source the original materials, which is sad, since i think it should be SOP.

*See the last paragraph of the guidelines.