"Social sciences suffer from severe publication bias" says Nature http://t.co/jA6gmQbxSP pic.twitter.com/MT8WeUlPpl— Matthew Hankins (@mc_hankins) 28 agost, 2014
British epidemiologist Matthew Hankins points us to an article in Nature, but before going there, it's worth commenting on the graph shown in his tweet. Note that, out of almost half a million published studies, the number of results without statistical significance (p > 0.05) is almost negligible.
The article, titled "Ethical failure leaves one-quarter of all clinical trials unpublished", mentions that, after a review of the clinical trials registered with the US government in 2009, it was observed that 29% of the studies had gone unpublished. Not only had they failed to give the desired result, but, because they were never published, the effort was lost to the scientific community's analysis, which meant that the trials conducted on the 299,763 participating patients had been in vain.
Until now, many of us who don't work directly in research might have considered it common practice for the big journals to choose their publications from among the studies that achieved the proposed results. Now, however, critics warn that there is a need for access to all results, good and bad, both to better measure the impact of what has been shown to work and to avoid repeating research that doesn't produce good results.