Reading this MetaFilter discussion on publication bias in favor of positive findings for studies of antipsychotics and antidepressants has me thinking about how research gets reviewed and published. The current model involves a researcher (or a team of researchers) having an idea (presumably based on theory), developing it into a set of hypotheses, designing a study to test them, securing funding to actually do the study, writing up the results, submitting the paper to a journal, getting peer-reviewed, and ultimately getting either rejected or published. Publication bias occurs at the peer-review stage — studies that find no relationships between the things they examine are less likely to be published than those that do. This is particularly problematic for pharmaceutical research, because it means that the body of pharmaceutical literature describes drugs as more effective than they actually are. That, in turn, is likely to bias both the clinical use of those drugs and subsequent research.
There are reasons why this model persists, but it doesn't make a lot of sense in the era of unlimited online page-space. When we could only publish a certain number of studies per issue, it made sense to filter them based not only on theoretical foundation and study design quality, but also on the impact of their contributions to the literature. Most studies with null findings aren't going to make much of a splash, of course. But now, especially in medical research, why should we restrict publication on these criteria?
A better approach could be to submit proposals to the peer-review process, with approval carrying a commitment to publish. Right now, grants go through this process, but they're not tied to publication — journal referees see finished products, with results. But if we really believe in building on theory and hypothesis testing, a literature review, hypotheses, and study design should be enough to know whether the research is worthwhile. A well-designed study that has null findings may not be a big attention-getter, but it is something that should be available to the rest of the research community, especially at the aggregate level. That is, if 40 null studies don't get published, while 10 studies with positive findings are published, researchers ought to have a way to find out that null findings actually dominate this area of research.
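The distortion described above is easy to see in a quick simulation. This is a minimal sketch with entirely hypothetical numbers (the true effect size, noise level, and publication threshold are all assumptions for illustration, not values from any real study): when only studies with large observed effects get published, the published literature overstates the effect, and the count of null results is invisible unless every approved proposal is on the record.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1  # hypothetical modest true drug effect
N_STUDIES = 50     # hypothetical number of studies run in this area
THRESHOLD = 0.5    # observed effect needed to count as a "positive" finding

# Each study observes the true effect plus sampling noise.
observed = [TRUE_EFFECT + random.gauss(0, 0.4) for _ in range(N_STUDIES)]

# Positive findings get published; null findings sit in file drawers.
published = [e for e in observed if e > THRESHOLD]
unpublished = [e for e in observed if e <= THRESHOLD]

print(f"published: {len(published)} studies, "
      f"mean observed effect {statistics.mean(published):.2f}")
print(f"unpublished: {len(unpublished)} studies")
print(f"mean effect across ALL {N_STUDIES} studies: "
      f"{statistics.mean(observed):.2f}")
```

Because the published subset is selected from the upper tail, its mean effect is necessarily larger than the mean across all studies — and a reader of the journals alone has no way to learn how many null studies were run. A registry of peer-reviewed proposals would make the full denominator visible.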
Filed: Science Is Real || 17:55, April 3
I’m in my tenth year as an academic, which means that my view of how things worked in the world of research pre-2002 is based purely on output — that is, the papers and books I’ve read that were published before my time. As a result, I really don’t know whether the phenomenon I’ve been noticing lately is new, but man, there are unpublished papers everywhere. Conference papers, working papers, executive reports, etc., are posted all over scholars’ web sites, their academia.edu profiles, and departmental sites, and then frequently indexed by Google Scholar and/or touted in news releases. They may or may not make an academic splash, though a number of recent conference papers in political science and mass comm have certainly used the web to gain a lot of notice. But news organizations and commentators don’t operate with the same filters as academics do. This University of Washington study (including one author who is a former colleague of mine at Wisconsin) is a good example. It was linked by Talking Points Memo in a way that seems fairly typical of how unpublished work is disseminated through blogs and other online political commentary. Its conclusion makes the paper appealing, as it seems to inject some empirical evidence into the debate over whether “Twitter revolutions” really have anything to do with Twitter, a debate that has already gone back and forth in the press.
At the same time, it’s a study that hasn’t gone through the peer review process. Maybe it applies theory in an unconventional way, or maybe there’s something odd about the data, or maybe it’s exactly right (and I should note I’m only highlighting this study because it’s the most recent example I’ve come across — there are dozens more). But the widespread availability of unreviewed research presents a twist on the science news cycle model, which counts on carefully considered, peer-reviewed conclusions as the anchor that eventually reins the hype back in. If we keep inserting research from earlier stages of the process into our discussions of public policy, current events, etc., we could see major challenges to the peer review model.