
Statistician and political scientist Andrew Gelman recently offered some thoughts on how to talk about associations that could be causal. In my opinion, even when we limit ourselves to high-quality scholarship, some work offers far more evidence of causality than other work. The evidence for this claim, and the consequences that follow from it, should be the topic of much future research (and blog posts).

In our research, many of us want to make claims like "on average, an hour of studying improves final exam scores by 5%," which we might describe as "a strong effect of studying on test scores." When is this causal language justified? First of all, I think every paper needs to address potential threats to a causal interpretation. Randomized controlled trials and natural experiments have the strongest claim to establishing causal relationships; they clearly justify the causal language above. But with appropriate qualifications, I think a paper using propensity score matching or stratification, and in many contexts plain old regression techniques (difference-in-differences, for example; see the sketch at the end of this post), can justify the use of causal language. The truth is, the devil is in the details.

In general, I think we sociologists could be a little more careful in our use of causal language. Of course, causality isn't everything. How to weigh the importance of demonstrating causality against other important goals in our research is a very difficult question.
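
To make the difference-in-differences case concrete, here is a minimal sketch in Python using statsmodels. The data are entirely simulated and the setup (a hypothetical tutoring program, exam scores observed before and after adoption) is just an illustration echoing the studying example above, not anyone's actual study.

```python
# A minimal difference-in-differences sketch with simulated data.
# All numbers and variable names are hypothetical; this only illustrates
# the kind of regression that, with the right qualifications, might
# support causal language.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Half the students attend schools that adopt a tutoring program ("treated"),
# and we observe exam scores before and after adoption ("post").
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)

# Simulate scores: a baseline group difference, a common time trend, and a
# 5-point effect that appears only for treated students after adoption.
score = (
    60
    + 3 * treated           # baseline difference between groups
    + 2 * post              # common time trend
    + 5 * treated * post    # the causal effect we hope to recover
    + rng.normal(0, 10, n)  # noise
)

df = pd.DataFrame({"score": score, "treated": treated, "post": post})

# The coefficient on treated:post is the diff-in-diffs estimate.
model = smf.ols("score ~ treated * post", data=df).fit()
print(model.summary().tables[1])
```

The point of the sketch is that the causal claim rests entirely on an identifying assumption (here, parallel trends between the two groups absent the program), which is exactly the kind of threat to a causal interpretation I think every paper needs to address explicitly.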