An interesting idea came up in an article I was reading for a class, and I thought I'd post about the big idea here to see whether this is a more standard approach than I'd realized. The article in question is a meta-analysis of the effects of smoking cessation on mental health outcomes, published in the BMJ. In one part of the article the authors acknowledge the potential for publication bias to shape the results in the published literature, so they came up with a plan:
“In some studies, data on mental health were presented incidentally and the aim was to report on other data. In others, the aim of the report was to present data on change in mental health, therefore the decision to publish might have been contingent on the results. We compared effect estimates between studies in which mental health was the primary outcome and those in which it was not to assess if there was evidence of publication bias.” (Taylor et al. 2014, p. 3)
This seems an intriguing way to deal with publication bias, but it's not one I've seen before. So my question is a relatively simple one: is this a common approach? And does it have well-evaluated strengths and weaknesses?
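For concreteness, the comparison the authors describe can be sketched as a subgroup test: pool the effects separately for studies where mental health was the primary outcome and for studies where it was reported incidentally, then test whether the two pooled estimates differ. This is a minimal illustration using a fixed-effect (inverse-variance) model and made-up effect sizes, not the actual BMJ data or the authors' exact method:

```python
import math

def pool(effects, ses):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1 / se**2 for se in ses]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return est, math.sqrt(1 / sum(weights))

# Hypothetical standardized mean differences (negative = improved mental health)
primary = ([-0.30, -0.25, -0.40], [0.10, 0.12, 0.15])    # mental health was the primary outcome
incidental = ([-0.10, -0.05, -0.15], [0.11, 0.14, 0.13])  # mental health reported incidentally

est_p, se_p = pool(*primary)
est_i, se_i = pool(*incidental)

# z-test for a difference between the two subgroup estimates;
# a large |z| suggests the published primary-outcome studies report
# systematically different effects than the incidental ones.
z = (est_p - est_i) / math.sqrt(se_p**2 + se_i**2)
print(f"primary: {est_p:.3f}, incidental: {est_i:.3f}, z = {z:.2f}")
```

(A real analysis would more likely use a random-effects model and a moderator test, but the logic of the comparison is the same.)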
I agree it is a neat idea, but the same kind of selective filter can operate on secondary outcomes just as it does on the main outcome. In this example, for instance (as a hypothetical), a researcher may be reluctant to report that smoking cessation increases depression when examining other outcomes, as it may come across as conflicting with the (obvious) argument that smoking is bad for the health outcomes of interest. For similar reasons, they may add a note that smoking cessation does not increase depression when that is congenial with the other main findings of the paper.
(I am not aware of whether it is a common approach – I have never seen it before, but I wouldn't claim expertise in meta-analysis.)
To the extent that secondary outcomes are "standard" in their reporting, it might be less of an issue. In many practical circumstances it won't be feasible anyway: only 6 of the 26 studies in the BMJ meta-analysis met this criterion.
I agree it's a neat approach, but I don't think it can be used in most circumstances. It's rare that the ideal (or actual) analysis done to shed light on one phenomenon is also the ideal (or even close to ideal) analysis for a different phenomenon, even when it involves some of the same variables.