December 1, 2009
Appearing in the same issue of the ASR as the Condron article I previously discussed, Robert Crosnoe presents evidence that lower-income students suffer some negative academic and psychosocial consequences from attending higher-income schools. He uses propensity score weighting (no silver bullet, but probably the best methodology you could ask for with these data) in an attempt to reduce possible confounding due to different students selecting into different schools. Putting that issue aside, my question is: how are students’ academic and psychosocial outcomes changing over time?
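To make the idea concrete, here is a minimal sketch of inverse-propensity weighting on simulated data (this is purely illustrative; the variable names and data-generating process are my own assumptions, not Crosnoe's Add Health analysis). A logistic model estimates each student's probability of "treatment" given a confounder, and reweighting by those propensities removes the selection bias that a naive comparison of means suffers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data (illustrative only -- not the data Crosnoe uses).
x = rng.normal(size=n)                      # confounder, e.g. family background
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-x)))   # selection into "treatment"
y = 2.0 * t + 1.5 * x + rng.normal(size=n)      # outcome; true effect is 2.0

# Fit a logistic propensity model e(x) = P(T=1 | x) by gradient ascent.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(2000):
    e = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (t - e) / n
e = 1.0 / (1.0 + np.exp(-X @ beta))

# Naive difference in means is confounded by x.
naive = y[t == 1].mean() - y[t == 0].mean()

# Normalized (Hajek) inverse-propensity-weighted estimate.
w1, w0 = t / e, (1 - t) / (1 - e)
ipw = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()

print(f"naive: {naive:.2f}, IPW: {ipw:.2f} (truth: 2.00)")
```

The naive estimate overshoots the true effect because students with higher x select into treatment; the weighted estimate recovers it. Of course, this only works when all confounders are observed, which is exactly the assumption at issue.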
Most of the outcomes Crosnoe uses (GPA, negative self-image, social isolation, depression) are measured more than once. He is predicting the later measure, which is appropriate, but why not run models with the lagged dependent variable?
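A lagged-dependent-variable specification is easy to sketch. The simulation below is hypothetical (the variable names and coefficients are mine, not Crosnoe's): the later GPA measure is regressed on the treatment indicator plus the earlier GPA measure, so the treatment coefficient is read as an effect on the outcome net of where students started:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Illustrative simulation with hypothetical names, not Crosnoe's data.
gpa_t1 = rng.normal(3.0, 0.5, size=n)              # earlier GPA measure
treat = rng.binomial(1, 0.4, size=n)               # e.g. school context
gpa_t2 = 1.2 + 0.6 * gpa_t1 - 0.25 * treat + rng.normal(0, 0.3, size=n)

# OLS of the later measure on treatment plus the lagged dependent variable.
X = np.column_stack([np.ones(n), treat, gpa_t1])
coef, *_ = np.linalg.lstsq(X, gpa_t2, rcond=None)
intercept, treat_effect, lag_coef = coef
print(f"treatment: {treat_effect:.2f}, lagged DV: {lag_coef:.2f}")
```

In this toy setup the regression recovers both the treatment effect and the persistence of the earlier measure; with real observational data, of course, the lagged DV is a control, not a cure for selection.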
Tagged: causality, education, methodology, propensity, psychosocial, statistics
Posted by Michael Bishop
October 30, 2009
Statistician and political scientist Andrew Gelman recently offered some thoughts on how to talk about associations that could be causal. In my opinion, even when we limit ourselves to high-quality scholarship, some work offers far more evidence of causality than other work. The evidence for this claim, and the consequences that follow from it, should be the topic of much future research (and blog posts).

In our research, many of us want to make claims like, “on average, an hour of studying improves final exam scores by 5%,” which we might describe as “a strong effect of studying on test scores.” When is this causal language justified? First of all, I think every paper needs to address potential threats to causal interpretation. Randomized controlled trials and natural experiments have the best claim to demonstrating causal relationships; they clearly justify the causal language above. But with appropriate qualifications, I think a paper using propensity score matching or stratification, and in many contexts plain old regression techniques (especially, e.g., difference-in-differences), can justify the use of causal language. The truth is, the devil is in the details.

In general, I think we sociologists could be a little more careful in our use of causal language. Of course, causality isn’t everything. How to weigh the importance of demonstrating causality against other important goals in our research is a very difficult question.
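The difference-in-differences logic can be shown in a few lines. This is a generic two-group, two-period sketch on simulated data (the numbers and setup are my own assumptions): the groups differ at baseline and share a common time trend, so subtracting each group's own pre-period mean nets out both, while a naive post-period comparison conflates the effect with the baseline gap:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Hypothetical two-group, two-period setup with a common trend.
group = rng.binomial(1, 0.5, size=n)      # 1 = eventually treated
base = 10.0 + 3.0 * group                 # groups differ at baseline
trend = 1.5                               # time trend shared by both groups
effect = 5.0                              # true treatment effect

y_pre = base + rng.normal(size=n)
y_post = base + trend + effect * group + rng.normal(size=n)

# DiD: (treated post - treated pre) - (control post - control pre)
did = (y_post[group == 1].mean() - y_pre[group == 1].mean()) \
    - (y_post[group == 0].mean() - y_pre[group == 0].mean())

# Naive post-period comparison absorbs the baseline gap between groups.
naive = y_post[group == 1].mean() - y_post[group == 0].mean()
print(f"DiD: {did:.2f}, naive: {naive:.2f} (truth: 5.00)")
```

The identifying assumption, parallel trends, is baked into the simulation; whether it holds in any real application is exactly the kind of detail the devil lives in.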
Tagged: causality, language, methodology, nerd humor, statistics
Posted by Michael Bishop