Daniel Kahneman
Noise, understood as the variability in decisions made in the face of the same facts, is, along with cognitive biases (which I have discussed in previous posts), one of the fundamental reasons why human beings make decision errors in all kinds of fields, starting with medicine, but also in law, economics, police work, food safety and personnel selection. Since it is a statistical concept, it cannot be "heard" directly; it has to be observed through data. While cognitive biases produce deviations in decisions that always go in the same direction, noise means that those deviations occur in any direction, without a systematic pattern. The book's authors use the metaphor of a target: the shots of a shooter with a cognitive bias would always drift away from the centre towards the same spot, whereas in the presence of noise the misses would be scattered all over the target, in unpredictable patterns.
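To make the target metaphor concrete, here is a minimal simulation of my own (not taken from the book): one team of shooters is systematically biased, the other is unbiased but noisy. Bias shows up as how far the average shot lands from the centre; noise is the scatter of the shots around their own average.

```python
import random

random.seed(0)

def shots(n, bias=(0.0, 0.0), spread=1.0):
    """Simulate n shots at a target centred at (0, 0)."""
    return [(random.gauss(bias[0], spread), random.gauss(bias[1], spread))
            for _ in range(n)]

def summarize(points):
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    # Bias: distance of the average shot from the centre of the target.
    bias = (mean_x**2 + mean_y**2) ** 0.5
    # Noise: root-mean-square scatter of the shots around their own average.
    noise = (sum((x - mean_x)**2 + (y - mean_y)**2 for x, y in points) / n) ** 0.5
    return round(bias, 2), round(noise, 2)

print("biased team  (bias, noise):", summarize(shots(1000, bias=(2.0, 0.0), spread=0.3)))
print("noisy team   (bias, noise):", summarize(shots(1000, bias=(0.0, 0.0), spread=2.0)))
```

The biased team misses in a consistent direction, so its error could in principle be corrected with a fixed adjustment; the noisy team is centred on the bullseye on average, yet any individual shot may land far from it.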
The key effect of noise is not that a particular shot misses, but the lack of consistency in decision making. The authors equate consistency with fairness. If cognitive biases are removed, the "correct" solution can be reached; a lack of consistency, by contrast, produces erroneous results that undermine the credibility of the systems that issue those judgments. Given that we live in a time of growing polarization and mistrust in institutions, eradicating the noise that leads to random and unfair decisions can help restore trust between human beings and, in our context, improve the credibility of our healthcare system.
Biases are difficult to detect if you do not know where the centre is, but they are easy to correct once you know in which direction the deviation tends to occur. Noise, on the other hand, is very easy to observe: it is enough to look for variability in decisions. It is, however, harder to correct, because there is no pattern of deviation to address.
Noise in healthcare decisions undermines doctor-patient trust
In the health field, faced with the same patient with the same symptoms, different doctors can diagnose completely different diseases and therefore prescribe different treatments and interventions that may not only fail to improve the underlying disease but may even aggravate the problem. That makes it very difficult to establish an appropriate level of trust between doctor and patient, since trusting the wrong doctor can be just as bad as asking for second and third opinions from doctors who offer incompatible diagnoses. Noise in medicine appears especially in areas that lend themselves to greater subjectivity, such as psychiatry, but even in more objective areas such as radiology, doctors do not interpret the same X-ray unambiguously. Nor is the diagnosis necessarily the same when the same doctor looks at the same case at two different times. For example, as recounted in the book, a study in which 22 physicians examined the same 13 angiograms twice, at different times, found that each physician disagreed with his or her own previous reading between 63% and 92% of the time.
Decision hygiene: how can noise be reduced?
A common thread running through the examples in the book is that, in all of them, attempts have been made to create instruments that reduce the presence of noise. In medicine, for example, there are clinical practice guidelines; in law there is a debate about the standardization of sentencing; and in human resources the recommendation is to avoid interviews and to turn personnel selection into objectively scored tests of ability and personality. Although these measures can reduce noise, there is also understandable reluctance, in some cases of a corporate nature, because reducing noise also means losing the richness that discretion and experience can provide. To put it humorously, one of Groucho Marx's most remembered lines is: "If everyone agrees with me, I know I'm wrong."
What is the optimal level of noise? How can we identify when noise is purely harmful? And, in those cases, how can we reduce it? These are the questions the book addresses. The authors devote the first part to distinguishing noise from cognitive biases and relating the two. They then focus on a type of noise of special importance, the variability in predictions of the same phenomenon, and on how to attack it through rules, formulas and algorithms that, although in many cases they do not add knowledge about the phenomenon being predicted, do help reduce noise. A third part, perhaps the most interesting, deals with the psychological causes of noise. Among these, the authors highlight the cognitive and personality differences between human beings, the different weights that individuals give to the factors behind a phenomenon, and the different use we make of the measurement scales that characterize it. The final part of the book focuses on practical considerations about how decision-making, both individual and collective, can be improved and error prevented; they give this part the striking name of "decision hygiene". Finally, the authors offer a general decision-making protocol intended to help evaluate the different options, incorporating practices that structure the decision process, such as replacing the most difficult evaluations in absolute terms with comparative ones. To take a practical example: when a teacher grades exams, a more objective mark would be obtained if, before grading, the exams were first ordered in a ranking through systematic comparisons; is this exam better than that one?
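As a rough illustration of that comparative step (a sketch of my own, not a procedure from the book), the ranking can be produced by an ordinary sort driven purely by pairwise judgments. The is_better function and the toy quality field below are hypothetical stand-ins for the teacher's own comparative judgment.

```python
from functools import cmp_to_key

# Hypothetical pairwise judgment: does exam_a read better than exam_b?
# In practice this would be the teacher's (or a panel's) comparative call,
# not a stored number; "quality" here is only a placeholder for the example.
def is_better(exam_a, exam_b):
    return exam_a["quality"] > exam_b["quality"]

def rank_exams(exams):
    """Order exams from best to worst using only pairwise comparisons."""
    def compare(a, b):
        if is_better(a, b):
            return -1
        if is_better(b, a):
            return 1
        return 0
    return sorted(exams, key=cmp_to_key(compare))

# Toy usage: three exams, ranked before any absolute mark is assigned.
exams = [{"id": "A", "quality": 6.5},
         {"id": "B", "quality": 8.2},
         {"id": "C", "quality": 5.1}]
for position, exam in enumerate(rank_exams(exams), start=1):
    print(position, exam["id"])
```

The point of the exercise is that each judgment is relative ("better than") rather than absolute ("a 7 out of 10"), which is exactly the kind of comparison the authors argue people make more consistently.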
A good anecdote, but a bit repetitive
The book continues a trend of recent years: highly successful popular-science titles built on data analysis and social experiments. Among the best-known examples along the same lines, readers will remember Thinking, Fast and Slow by Kahneman himself, Nudge by Thaler and Sunstein, Freakonomics by Dubner and Levitt, or Predictably Irrational by Ariely. All of them have been best-sellers that have helped popularize research in the social sciences and develop areas such as behavioural economics and social psychology. In that sense, a book that unites Kahneman, a psychologist, and Sunstein, a lawyer, both authors of two of the best-selling books in the area (I don't know whether they have also been the most read), explaining the psychological causes of human error and how to correct them in order to improve institutions, seems like a winning proposition. In fact, from the very design of the cover to its structure, the book has practically been sold as a sequel to Kahneman's. A sort of "Thinking, Fast and Slow 2: The Noise Attack."
The book is motivating and includes many compelling and engaging examples that give the reader good stories to tell at social gatherings. The main argument, that we have underestimated the presence of noise in our society, and the mechanisms the book offers to address it, are intuitive. The effort the authors make to provide general guidelines for a more structured decision-making process is especially noteworthy.
However, I suspect that the book arrives a little late, in a market perhaps already saturated with writing of this genre, and I believe it will be less attractive to readers who have already read some of the references cited above. In the first place, because it is excessively long and repetitive: five hundred pages is a lot to develop an idea that some of the academic reviews of the book have already summed up precisely. The examples, many of them from the health field, are numerous, but the text ends up exhausting the reader, not by building conviction, since it only shows that noise is everywhere, but because the new examples do not teach new lessons.
In addition, the presence of three different authors who have not managed to unify their writing styles is all too noticeable. One senses that in most cases they try to simplify concepts of probability and statistics for the non-expert reader, but they sometimes miss the mark, both by oversimplifying and by under-explaining, which will frustrate different kinds of readers. It also feels as though the authors have already spent their best ammunition on previous books: many of the examples they use have already appeared in other works by the co-authors, and the practical implications of how to make better decisions are not entirely new. Unlike those other books, which are based largely on the authors' own research, many of the stories told here rest on later investigations by others, which makes the argument lose some rigour and credibility. A notable example is the confusion, which occurs in several chapters, between correlation and causality when analyzing the data.
In any case, I consider the book interesting for anyone involved in decision-making and in correcting the individual and institutional consequences of error. However, I venture to predict that for most readers the first two chapters, which motivate the topic with very good examples and possible solutions, and perhaps the decision guides included at the end, will be more than enough.