thinking

What are some of the problems with the way doctors think? The answers to this question are complex. The problems are many, the scope of each is fairly large, and the solutions are not obvious. But cognitive errors and mistakes are likely the leading causes of medical error: as many as 15% of all diagnoses made by physicians are incorrect, and the diagnoses and subsequent treatments that we currently consider correct are often based on studies and data that do not hold up to scrutiny. Here is an overview of some of the issues that we need to deal with in order to improve this situation:

Cognitive biases. Our susceptibility to cognitive biases is enormous, and they shape every thought and every clinical problem that we have. Even when we are supplied with accurate data, the use and interpretation of that data is more often than not clouded and perverted by our cognitive biases. Some of the most important biases are confirmation bias, base-rate neglect, commission and omission bias, premature closure, anchoring, framing, availability bias, loss aversion and extremeness aversion, and naive extensional reasoning.

Under-utilization of physical exam and history. Modern clinicians do not trust the physical exam and have relegated the history to a laborious chore of documentation necessary for billing. Understanding that the history and physical is essential in determining the pretest probability for tests that might be ordered will help to reemphasize the importance of these steps. Clinicians need to understand the true value of physical exam, which in some cases is actually superior to the “objective” tests that they choose to rely on instead. Perhaps more important, they need to understand that the tests they rely on are not interpretable without an assessment of the patient’s pretest probability.
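To make the point concrete, here is a minimal sketch of Bayes' theorem applied to a positive test result. The 90% sensitivity, 95% specificity, and the pretest probabilities are invented numbers for illustration, not drawn from any particular test.

```python
# A sketch of post-test probability after a positive result, using
# Bayes' theorem. Sensitivity, specificity, and the pretest
# probabilities are hypothetical numbers chosen for illustration.

def posttest_probability(pretest, sensitivity, specificity):
    """Probability of disease given a positive test result."""
    true_pos = pretest * sensitivity                 # P(disease and test+)
    false_pos = (1 - pretest) * (1 - specificity)    # P(no disease and test+)
    return true_pos / (true_pos + false_pos)

# The same positive result means very different things depending on the
# pretest probability established by the history and physical:
for pretest in (0.01, 0.30, 0.80):
    post = posttest_probability(pretest, sensitivity=0.90, specificity=0.95)
    print(f"pretest {pretest:.0%} -> posttest {post:.0%}")
```

With these assumed numbers, the same positive result takes a 1% pretest probability only to about 15%, but takes an 80% pretest probability to about 99%. The result is simply uninterpretable without the pretest probability that the history and physical establish.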

Over-utilization of medical tests. Not understanding the actual utility of medical tests leads to their overuse and to an over-reliance on them for clinical and diagnostic purposes. Inappropriate diagnosis, in turn, often stems from the over-utilization or inappropriate utilization of medical tests. This problem also promotes quasi-“standards of care” which have deleterious downstream effects, like creating a worse medico-legal environment and driving up cost without a corresponding improvement in patient outcomes.

Misdiagnosis and over-diagnosis. As the history and physical is more and more under-emphasized by clinicians and medical tests are inappropriately utilized (without an appropriate understanding of post-test probabilities and the true value of the test), an unacceptable number of patients receive the wrong diagnosis (misdiagnosis) or a correct diagnosis that is clinically inconsequential (over-diagnosis). In a recent study, among 208 patients who had a missed or incorrect diagnosis, 63% of the errors were related to a failure to perform a physical exam. These basic mistakes lead to inappropriate diagnoses which, in turn, lead to inappropriate or unnecessary interventions that drive up medical costs while increasing the net risk of harm to patients.

An inappropriate understanding of (and therefore an inappropriate utilization of) statistics. Our understanding of statistics is a bedrock of almost every activity of clinical reasoning and performance of medical care. This is true even if you think you don’t use statistics. It relates to how we think about the impact and importance of disease processes, the utility of the tests and therapies that we order, the risk:benefit ratios of our clinical interventions, the magnitude of the things that we do to and for patients, how we read and interpret scientific literature, and how we design and perform research studies. To be more technical, we are living in the midst of a revolution in the field of statistics: much of the 20th century was consumed with the dogma of the Frequentist or Orthodox school of statistics, while the 21st century is seeing the slow, painful, disruptive death of this philosophical view, which is being replaced by what some call Bayesian statistics or “probability as logic.” The importance of this goes far beyond the statistical methods we use. Indeed, it changes the entire way we think and has profound implications for what we accept as true and untrue. It changes the way we conduct experiments, and even undermines what we call “the scientific method.” It should, hopefully, change the way we publish, analyze, and appraise scientific literature.
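As a small illustration of the “probability as logic” approach, the sketch below, using invented trial numbers and a uniform prior, computes the posterior probability that a treatment’s response rate exceeds a threshold, rather than delivering a binary significant/not-significant verdict.

```python
import math

def posterior_above(successes, failures, threshold, grid=100_000):
    """P(rate > threshold | data) under a uniform Beta(1, 1) prior, so the
    posterior is Beta(successes + 1, failures + 1). Approximated on an
    evenly spaced grid; no external libraries required."""
    a, b = successes + 1, failures + 1
    total = above = 0.0
    for i in range(1, grid):
        p = i / grid
        # Unnormalized Beta(a, b) density at p
        d = math.exp((a - 1) * math.log(p) + (b - 1) * math.log(1 - p))
        total += d
        if p > threshold:
            above += d
    return above / total

# Invented example: 14 responders among 20 treated patients.
print(f"P(response rate > 50%) = {posterior_above(14, 6, 0.50):.2f}")
```

With these invented numbers the answer is about 0.96: a direct probability statement about the quantity we actually care about, which no p-value provides.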

Discomfort with uncertainty. Clinicians are notoriously uncomfortable with uncertainty. We feel better when we are definitive; we like to tell patients exactly what’s wrong with them and exactly what’s best for them. We like things black and white: one treatment is better than another, one test is better than another, this study proves this, that study proves that, etc. But the reality is, all of those types of statements are absurd. In philosophical terms, there are profoundly important epistemological issues at play regarding our views of certainty and doubt that are framed in our understanding of probability. In practical terms, we are never “certain” about the diagnosis, but we do accept some diagnoses as more probable than others. Appreciating the uncertainty in every diagnosis and, indeed, in every decision, helps to de-bias our cognitive processes and de-dogmatize many of the things we hold to be true. It also changes what we think it means to be “wrong” or “right” and helps us think in inferential terms, always assessing new evidence and remaining comfortable with changing our beliefs (diagnoses, etc.) as new data develops. Plausible inference or Bayesian inferential thinking is the key to dealing with uncertainty in a logical manner.
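One sketch of this kind of inferential thinking is updating the odds of a diagnosis as each new piece of evidence arrives; the starting suspicion and the likelihood ratios below are made up for illustration.

```python
# Plausible (Bayesian) inference in odds form, with hypothetical numbers:
# each new finding multiplies the odds of the diagnosis by its
# likelihood ratio, and no single result ever yields certainty.

def update_odds(prior_prob, likelihood_ratios):
    """Convert a probability to odds, apply each likelihood ratio,
    and convert back to a probability."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Start at 20% clinical suspicion; a supportive finding (LR 4.0) raises
# the probability, then a reassuring test (LR 0.3) lowers it again.
after_finding = update_odds(0.20, [4.0])        # -> 0.50
after_both = update_odds(0.20, [4.0, 0.3])      # -> about 0.23
print(f"{after_finding:.2f}, {after_both:.2f}")
```

The diagnosis is never “proved” or “disproved”; its plausibility simply rises and falls as the evidence accumulates, which is exactly the comfort with uncertainty described above.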

The misunderstanding, underutilization, and perversion of Evidence Based Medicine (EBM). Many believe that EBM is at a crossroads. Its most ardent detractors believe that it is standing in the way of progress. Yet for its supporters, it’s hard to imagine that any alternative to EBM is valid. Should we rely on the common sense and anecdotal experiences of a single practitioner when deciding how to diagnose and treat complex medical problems? Or should we rely on our collective experiences? Is the knowledge of one doctor with a few dozen patients more relevant than the knowledge of hundreds of doctors with thousands of patients? If you agree that the collective knowledge and experience of the many is more valuable than the experience of the one, then you believe in EBM. EBM is simply the organized system for how we collect, analyze, and report those collected experiences and collective knowledge. If we look more closely at the criticisms of EBM, they are not in fact criticisms of what I just wrote. Rather, they are criticisms of Frequentist statistical techniques, which are at the root of the perversion and manipulation of EBM, and of the incorrect use of the results of those techniques. It is often stated that statistics can be used to prove anything; in a sense, this is true. However, Bayesian inference cannot be so easily bent to prove whatever one wishes. What’s missing from our current model of EBM is Bayesian inferential thinking or plausibility reasoning. By adding this small but profound component to EBM, all of the currently popular criticisms fall away.

An educational system which does not adequately teach critical thinking skills. When we start to talk about things like Dual Process Theory, Cognitive Biases, Bayesian Inference, Plausibility Reasoning, Epistemology, etc., most people stop reading. It’s hard to convince many medical students, residents, and physicians that these topics are the root cause of most medical errors and waste, and even more, that these things stand in the way of scientific progress in medicine and many other fields. It is a struggle to make these topics palatable to folks who would rather be reading about seemingly more “practical” ways to help their patients. Doctors want to be told “do this” or “do that.” They want to organize knowledge algorithmically in the form of “if this, then that.” Ironically, this type of information often comes to them in the form of a practice guideline, which many also reject because it seemingly negates their individual experiences.

We have selected a particular type of person for medical training by emphasizing certain intellectual attributes that are demonstrable on standardized tests. This unfortunately means that a good memory and efficient recall are more important than critical thinking abilities. Philosophy, statistics, and logic are not prerequisites to medical school. Most practicing physicians see little value in the writings of Plato or Socrates, Boole or Cantor, Frege or Russell, Fisher or Jeffreys, Kant or Nietzsche, Jaynes or Cox, Allais or Kahneman, etc. And the process of medical training does not emphasize critical thinking; there is no thesis to defend, no requirement to do meaningful research, no immersive view of knowledge. Rather, students are taught to memorize “truths” and regurgitate them when asked. This is made more convenient by review books and exam courses, which teach students lists of “facts,” all out of context. All propositions are held to be the same, in simple, Aristotelian terms: either true or untrue. Multiple choice questions determine whether or not we may practice medicine. We treat the “truth” of the boiling point of water in the same way as the “truth” of the lipid hypothesis of coronary artery disease; yet, these two “truths” have profoundly different levels of certainty or degrees of plausibility. So our approach to education needs a fundamental overhaul in order to teach the skills necessary for critical thinking.

We will work through each of these issues on howardisms and I’ll add links as we go to this post.