Publication bias, pharmaceutical efficacy and the peer-review process

Reading this MetaFilter discussion on publication bias in favor of positive findings for studies of antipsychotics and antidepressants has me thinking about how research gets reviewed and published. The current model involves a researcher (or a team of researchers) having an idea (presumably based on theory), developing it into a set of hypotheses, designing a study to test them, securing funding to actually do the study, writing up the results, submitting the paper to a journal, getting peer-reviewed, and ultimately getting either rejected or published. Publication bias occurs at the peer-review stage — studies that find no relationships between the things they examine are less likely to be published than those that do. This is particularly problematic for pharmaceutical research, because it means the body of pharmaceutical literature describes drugs as more effective than they actually are. This is likely to bias both drug application and subsequent research.

There are reasons why this model persists, but it doesn’t make a lot of sense in the era of unlimited online page-space. When we could only publish a certain number of studies per issue, it made sense to filter them based on not only theoretical foundation and study design quality, but also on the impact of their contributions to the literature. Most studies with null findings aren’t going to make much of a splash, of course. But now, especially in medical research, why should we restrict publication on these criteria?

A better approach could be to submit proposals to the peer-review process, with approval carrying a commitment to publish. Right now, grants go through this process, but they’re not tied to publication — journal referees see finished products, with results. But if we really believe in building on theory and hypothesis testing, a literature review, hypothesis and study design should be enough to know if the research is worthwhile. A well-designed study that has null findings may not be a big attention-getter, but it is something that should be available to the rest of the research community, especially at the aggregate level. That is, if 40 null studies don’t get published, while 10 studies with positive findings are published, researchers ought to have a way to find out that null findings actually dominate this area of research.


    • Candy Goulette

      Tremendous idea! I believe this could apply to all healthcare-related disciplines, not just pharmaceutical. Nursing journals typically develop an editorial calendar a year out, then slot accepted studies based on themes. If a submission doesn’t fit a planned theme, or if the theme has already been filled, the research is rejected. Most scholarly nursing journals won’t accept submissions that have been sent to other journals, so the Catch-22 is not getting published at all. 

    • Guest

It’s a little mind-boggling that the scientific community allows itself to operate in this way.

• I don’t know what it’s like in the hard sciences, but in our field, part of the issue is that online journals are still generally seen as lesser outlets. There are some exceptions — Journal of Computer-Mediated Communication has become one of our top journals, though it also had a rough perception until it built up a big impact factor. But generally speaking, print still has prestige, which perpetuates the resource limitations of the medium. It also doesn’t help that there’s big money (for database publishers like Elsevier, etc.) in keeping access scarce and output limited. There’s a movement out there to get all research funded by public money (i.e., NIH grants, NSF grants, etc.) released for free to the public, which would probably go a long way toward making the Internet the default venue for all research.