OK, in my research methods class, we are hitting an overview of statistics in the closing weeks of the semester. As such, I would prefer to include some empirical examples to visualize the things we’re going to talk about that are fun / outside my typical wheelhouse. So, do you have any favorite (read: typical, atypical, surprising, bizarre, differentially distributed, etc.) examples of univariate distributions and/or bivariate associations that may “stick” in their memories when they see them presented visually? I have plenty of “standard” examples I could draw from, but they’re likely bored with the ones I think of first by this point in the term. So, what are yours? It’s fine if you just have the numbers, I can convert them to visualizations, but if you have visual pointers, all the better.
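One example that reliably sticks is Anscombe’s quartet: four small bivariate datasets with nearly identical means, variances, and correlations that look completely different once plotted (a clean line, a curve, an outlier-tilted line, and a vertical stack with one leverage point). Here is a quick Python sketch using Anscombe’s published 1973 values; it verifies the near-identical correlations before you plot the four panels:

```python
# Anscombe's quartet: four x/y datasets with nearly identical summary
# statistics but wildly different shapes when plotted.
# Values are Anscombe's published data (Anscombe, 1973).
from math import sqrt

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]   # shared x for sets I-III
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]          # x for set IV
ys = {
    "I":   [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
    "II":  [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
    "III": [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
    "IV":  [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
}
xs = {"I": x123, "II": x123, "III": x123, "IV": x4}

def pearson(x, y):
    """Pearson correlation, computed by hand to keep this stdlib-only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

correlations = {k: round(pearson(xs[k], ys[k]), 3) for k in ys}
print(correlations)  # all four r's come out ~0.816
```

Plotting the four panels side by side (e.g., with matplotlib) usually makes the “always look at your data before trusting summary statistics” point stick better than any lecture slide.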
We’ve had a number of posts here before about teaching. Here’s a question I’ve seen debated a fair bit: does HBO’s The Wire have educational value? In particular, can it be a productive part of a college course in sociology?
I’ve just begun watching the series myself, and yes, I am enjoying it. But I didn’t need to watch it, or like it, to know that it could help teach. In fact, it strikes me as curious that there would be public debate about whether The Wire could be curriculum.
Ishmael Reed thinks that the show reinforces stereotypes about blacks. I’m not sure about that yet, and one could argue exactly the opposite, but even if it is true, what better way to address the stereotypes prevalent in popular culture than to critique them in a class that also requires reading rigorous social science?
Some point out that one can learn a lot more facts in an hour of reading about urban social problems than by watching one episode of a fictional television program. I certainly agree, but there are a couple of obvious responses. First, popular and critically acclaimed television may have an emotional impact that is a valuable part of education and that in some ways cannot be matched by other content. Second, watching The Wire need not replace reading peer-reviewed studies or other academic approaches. I think it is quite plausible that one could expect students to do a typical reading load and have them watch some episodes of The Wire on top of that. So for me there is no question about whether The Wire can be used in education; the only issue, but a very real one, is how it should be used.
Here’s what I find odd about this whole debate… how many unimportant and/or poorly taught classes are being taught this semester in colleges all across the country? Answer: lots. How many articles do you see about important issues in higher education published in outlets like The Huffington Post, The Boston Globe, and The Washington Post? Answer: few. The reason this debate is prominent is that people want to read about The Wire while pretending to read about important issues in education. OK, that shouldn’t be a revelation for most people. But it is worthwhile to have pointy-headed types pointing out that that is what’s going on.
If, like me and millions of others, you can’t help but be interested in a somewhat silly debate about The Wire, William Julius Wilson and Anmol Chaddha defend their class in an op-ed for The Washington Post.
If you wanted to be more objective about student and professor evaluation, you would have standardized measures of student performance across professors. In the rare case in which this is done, we learn all sorts of fascinating things, including things which raise questions about the unintended consequences of our evaluation systems.
At the U.S. Air Force Academy, students are randomly assigned to professors but all take the same final exam. What makes the data really interesting is that there are mandatory follow-up courses, so you can see the relationship between which Calculus I professor you had and your performance in Calculus II! Here’s the summary sentence that Tyler quotes:
The overall pattern of the results shows that students of less experienced and less qualified professors perform significantly better in the contemporaneous course being taught. In contrast, the students of more experienced and more highly qualified introductory professors perform significantly better in the follow-on courses.
Here’s a nice graph from the paper:
Student evaluations, unsurprisingly, laud the professors who raise performance in the initial course. The surprising thing is that this is negatively correlated with later performance. In my post on Babcock’s and Marks’ research, I touched on the possible unintended consequences of student evaluations of professors. This paper gives new reasons for concern (not to mention much additional evidence, e.g. that physical attractiveness strongly boosts student evaluations).
That said, the scary thing is that even with random assignment, rich data, and careful analysis there are multiple, quite different, explanations.
The obvious first possibility is that inexperienced professors (perhaps under pressure to get good teaching evaluations) focus strictly on teaching students what they need to know for good grades. More experienced professors teach a broader curriculum, the benefits of which you might take on faith but needn’t, because their students do better in the follow-up course!
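To make that mechanism concrete, here is a purely hypothetical toy simulation in Python; none of the numbers or functional forms come from Carrell and West’s data. Each simulated professor has a general skill level and a “teach-to-the-test” intensity that, by assumption, raises scores in their own course but lowers preparation for the follow-on course:

```python
# Purely illustrative simulation (NOT the paper's data or model):
# professors vary in general skill and in how much they "teach to
# the test" (t). By construction, t boosts contemporaneous scores
# and hurts follow-on scores; skill helps both.
import random

random.seed(0)

results = []
for _ in range(200):                      # 200 simulated professors
    skill = random.gauss(0, 1)            # general teaching quality
    t = random.gauss(0, 1)                # teach-to-the-test intensity
    contemporaneous = 0.5 * skill + t + random.gauss(0, 0.3)
    follow_on = 0.5 * skill - t + random.gauss(0, 0.3)
    results.append((contemporaneous, follow_on))

def corr(pairs):
    """Pearson correlation of (x, y) pairs."""
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = corr(results)
print(r)  # negative: looking "good" now predicts worse results later
```

Under these assumed weights the correlation between contemporaneous and follow-on performance comes out negative, mirroring the qualitative pattern the paper reports; the point is only that such a pattern falls out naturally once teaching to the test trades off against deeper preparation.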
But the authors mention a couple other possibilities:
For example, introductory professors who “teach to the test” may induce students to exert less study effort in follow-on related courses. This may occur due to a false signal of one’s own ability or from an erroneous expectation of how follow-on courses will be taught by other professors. A final, more cynical, explanation could also relate to student effort. Students of low value added professors in the introductory course may increase effort in follow-on courses to help “erase” their lower than expected grade in the introductory course.
Indeed, I think there is a broader phenomenon. Professors who are “good” by almost any objective measure will have induced their students to put more time and effort into their course. How much this takes away from students’ efforts in other courses is an essential question I have never seen addressed. Perhaps additional analysis of the data could shed some light on this.
Carrell, S., & West, J. (2010). Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors. Journal of Political Economy, 118 (3), 409-432. DOI: 10.1086/653808
Added: Jeff Ely has an interesting take: In Defense of Teacher Evaluations.
Added 6/17: Another interesting take from Forest Hinton.
In a paper entitled “Leisure College, USA,” Philip Babcock and Mindy Marks have documented dramatic declines in study effort since 1961, from 24 down to 14 hours per week. This decline occurred at all sorts of colleges and is not a result of students working for pay.
At the same time, colleges are handing out better grades. In other work, Babcock presents strongly suggestive evidence that the two phenomena are related. That is, lower grading standards lead to less studying. They also lead students to give better course evaluations.
To me this looks like evidence of big problems in higher education, though I’d love someone to convince me otherwise.
Andrew Perrin has been a leader in developing an institutional response to concerns about grading. See his original scatterplot post on the topic, “grades: inflation, compression, and systematic inequalities,” as well as the more recent scatterplot discussion.
Fabio at Orgtheory considers four possible explanations. I’ll quote him:
- Student body composition – there are more colleges than before and even the most elite ones have larger class sizes.
- Technology – the Internet + word processing makes assignments much easier to do.
- Vocationalism – If the only reason you are in college is for a job, and this has been true for the modal freshman for decades now, you do the minimum.
- Grade inflation – ’nuff said.
To address them in reverse order: Fabio thinks he can rule out grade inflation because even students in hard majors report studying less. I gather he’s arguing that if students in disciplines with really tough (uninflated?) grading are studying less, then it seems arbitrary to posit one unnamed cause in those disciplines and a separate cause (grade inflation) in the other disciplines. I’m not sure that argument, with that data, is strong enough to convince me. I’m not saying that grade inflation explains 100% of the change. My guess is that it explains some of it, but that both phenomena have common and distinct causes.
Fabio’s favored explanations are vocationalism and technology. I don’t really like either of them. First, I don’t know that it’s true that those seeking a more career-oriented education do the minimum. Second, as Fabio mentioned, the authors claim the dropoff is similar across courses of study (though I’m not sure how fine-grained that data is). As for the idea that technology makes studying more efficient, most of the decline in studying had already occurred by the mid-eighties, before email and the web.
A priori I would have predicted the effect was mostly explained by change in the composition of colleges and college students, but the authors claim that the trend was similar among highly competitive colleges.
Any other theories?
I should have mentioned this before. The authors are analyzing different surveys with somewhat different methodologies and then attempting to make them comparable. They lean pretty heavily on the 1961 Project Talent survey. If that is, for whatever reason, an overestimate, the decline might be far less dramatic. Ungated version of the paper here.
After a closer look at the paper, I don’t think the data is fine-grained enough to show that today’s students who are similar to those who attended in 1961 (i.e., privileged students at top schools) are studying less, or at least not much less. Therefore one cannot rule out the theory that much or most of the decline is due to compositional change. I wish the authors had been clearer about whether they agree with my assessment, because I think it is of fundamental importance in interpreting the trend.
Philip Babcock & Mindy Marks (2010). The Falling Time Cost of College: Evidence from Half a Century of Time Use Data. NBER Working Paper No. 15954 (April).
David Easley and Jon Kleinberg have a textbook coming out called Networks, Crowds, and Markets: Reasoning About a Highly Connected World. It looks great, and for now you can download a preprint of the whole thing for free. Cornell, home of founding co-blogger Matthew Brashears, seems like a great place to do work on networks.
Robert Hanneman (with coauthors Riddle and Izquierdo) also has a free textbook or three for you to download. I won’t try to summarize any of these books since you are just a click away from viewing them, but I will point out that they aren’t competitors… they each have a lot of unique material.
See more discussion of social network curriculum/pedagogy at Jimi’s post here.
If you’re a regular reader or contributor to this blog, you probably agree that mathematics has an indispensable role in the social sciences. Lately, however, I’ve been thinking a lot about something: what kinds of mathematical tools will sociologists of the future require?
The reason I’ve been thinking about this is a talented undergraduate who is strongly considering going to grad school in sociology. Interestingly, part of what has moved him in that direction seems to have been two classes he’s taken with me. The first was the required undergraduate statistics class and the second a substantive class that includes a lot of network analysis, structural theory, and associated material. As it turns out, it’s entirely possible to teach Mayhew & Levinger to undergraduates. Who knew? In any case, in this second class he’s gotten a strong sense that sociology involves a lot of math, and this excites him. I’m all for it, since this student is very smart and we need more mathematically gifted grad students. Yesterday during a conversation, though, he asked me what sorts of math he should be thinking about taking as he prepares for graduate school. I gave him my answers, and a regression analysis textbook to work through over winter break, but I wonder what others think.
If you could somehow start over again in the field, what areas of mathematics would you make sure you learned right from the start?