Nature has echoed the professional debate about the intrinsic quality of scientists’ work, in a dynamic of self-criticism comparable to what is taking place, in similar terms, in the clinical world. Scientists are also fallible, the article’s author argues, and should therefore strengthen their mechanisms of self-criticism rather than slip into self-deception.
John Ioannidis, of the Meta-Research Innovation Center at Stanford, says scientists should work harder to understand the biases behind their human fallibility if they want to overcome the crisis of confidence generated by the poor reproducibility of research results. To illustrate the point, Ioannidis offers three examples: a) from a selection of one hundred psychology studies, the results of only just over a third could be replicated; b) a group of Amgen researchers succeeded in reproducing the results of only 6 of 53 reference studies in oncology and haematology; and c) Ioannidis’s own team was able to fully replicate only 2 of 18 gene-expression studies based on microarrays (DNA chips).
The author of the Nature article, Regina Nuzzo, analyzes four trends that lead scientists into self-deception and, to compensate, proposes four ideas to reduce the phenomenon.
Four trends in self-deception
Hypothesis myopia. Researchers often conduct biased reviews of the evidence in order to support the hypothesis of a future project, ignoring the evidence that points the other way and not listening hard enough to opinions critical of their original idea.
The adoption of flawed models. There is a Texan fable about a clumsy shooter who, out hunting, got lucky and hit the target, leading bettors who thought they had discovered a talented marksman to lose a lot of money. According to Nuzzo, the same tendency is seen among researchers: they are dazzled by the patterns that inspire them while failing to notice that those patterns may not hold up in any consistent way.
Asymmetric attention. When results go in the desired direction, there is an inclination to eagerly accept them as sound, whereas results that go the opposite way are revised and double-checked to excess.
The rounding of the story. Once the results of a study are in, researchers tend to craft a well-rounded narrative around them. Matthew Hankins, a statistician at King's College London, has collected more than 500 creative phrases that researchers use in scientific articles to convince readers that their results, while not statistically significant, are still valuable.
Four proposals to reduce self-deception
Promote the practice of devil's advocate. Regina Nuzzo suggests devoting time to formulating hypotheses alternative to the original one and checking how much support the literature review offers for each of these alternatives.
Signing agreements prior to publication. This means publishing the data-handling and analysis plan before the work begins. Richard Horton, editor of The Lancet, and Richard Smith, former editor of the British Medical Journal, have been proposing this for some time, and more than 20 scientific journals now offer agreements to publish project protocols, with a clear commitment to publish the results whether or not they go in the desired direction.
Inviting opponents to share their opinion. Scientists who hold positions contrary to the hypotheses are the best placed to detect self-deception: hypothesis myopia, flawed models, asymmetric attention, well-rounded stories, and so on.
Blind data analysis. The idea here is for scientists to analyze data without knowing whether it is the real dataset: only the computer knows whether the data under analysis corresponds to the actual results. This serves as a safeguard against unconscious bias in the analysis phase, given the pressure scientists are under to obtain certain results.
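One common way to implement this idea is label blinding: the analysis pipeline is developed and frozen on a version of the data whose group labels have been shuffled, and only then run on the real labels. The sketch below is a minimal illustration under that assumption; the dataset, function names, and difference-of-means analysis are invented for the example and are not from the Nature article.

```python
import random

def blind_labels(records, seed=42):
    """Return a copy of the records with the group labels shuffled,
    so the analyst cannot tell which group a value really belongs to."""
    rng = random.Random(seed)
    labels = [r["group"] for r in records]
    rng.shuffle(labels)
    return [{**r, "group": g} for r, g in zip(records, labels)]

def mean_difference(records):
    """The analysis pipeline: difference of means, treatment minus control."""
    treat = [r["value"] for r in records if r["group"] == "treatment"]
    ctrl = [r["value"] for r in records if r["group"] == "control"]
    return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)

# Hypothetical two-group dataset.
data = [{"group": "treatment", "value": v} for v in (5.1, 4.8, 5.5)] + \
       [{"group": "control", "value": v} for v in (4.2, 4.0, 4.4)]

# The pipeline is developed, debugged, and frozen on blinded data,
# so choices cannot be steered toward a desired effect.
blinded = blind_labels(data)
blinded_effect = mean_difference(blinded)

# Only once the pipeline is frozen is it run on the real labels.
real_effect = mean_difference(data)
```

The point of the design is that every analytic choice is made while the labels are meaningless, so the unblinded run is a single, pre-committed computation.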
Large investments in research, and the expectations that come with them, have driven competitiveness among professionals and institutions to extremes that have triggered alarms, leading many scientific leaders and editors to believe that the time has come to watch for biases and to improve the reproducibility of results.