Monday, 1 February 2016

"Nature" echoes the problem of unpublished, biased research

The British epidemiologist Matthew Hankins points us to an article in Nature, but before going there, it is worth commenting on the graph shown in his tweet. Note that, out of almost half a million studies published in Nature, the number of results without statistical significance (p > 0.05) is almost negligible.

If you click the link in Dr. Hankins's tweet you will find an article, "Social sciences suffer from severe publication bias", which explains that, according to researchers at Stanford University, only 20% of sociological studies with null results are published. From this article, which is specific to the social sciences, a further link leads to a Nature blog post on the biomedical sector, "Ethical failure leaves one-quarter of all clinical trials unpublished". It reports that a 2009 review of the US government's clinical trials registry found that 29% of studies had not only failed to produce the desired result but, because they were never published, were lost to the scientific community's analysis, meaning that the trials run on the 299,763 participating patients had been in vain.

So far, many of us who do not work directly in research might consider it common practice for the major journals to choose their publications from among those that achieved the proposed results. Now, however, critics warn that access to all results, good and bad, is needed, both to better measure the impact of what has been shown to work and to avoid repeating research that does not produce good results.

Jordi Varela
