Medical tests are not infallible

The man was 66 when he came to the hospital with a serious skin infection. He had a fever, low blood pressure, and a headache. His doctors gave him a brain scan just to be safe. They found a very small bulge in one of his cranial arteries, which probably had nothing to do with his headache or the infection.

Nevertheless, doctors ordered an angiogram to get images of the blood vessels in his brain. This test, in which doctors insert a plastic tube into a patient's arteries and inject dye, found no evidence of any blood vessel problems. But the dye injection caused multiple strokes, leaving the man with permanent problems with his speech and memory.

That case, recounted in JAMA Internal Medicine three years ago, is no surprise. As a doctor in a large urban hospital, I know how much modern medicine has come to rely on tests and scans. I review about 10 cases per day, ordering and interpreting more than 150 tests for those patients. Every year, doctors in this country order more than 4 billion tests in total. Tests have gotten more sophisticated and easier to run as technology has advanced, and they're essential to helping doctors understand what might be wrong with their patients.

But my research has found that many physicians misunderstand test results or think tests are more accurate than they are. Doctors especially fail to grasp how false positives work, which means they make crucial medical decisions--sometimes life-or-death calls--based on the incorrect assumption that patients have ailments they probably don't have. When we do this without understanding the science of risk and probability, we unacceptably increase the chances of making the wrong choice. In the worst cases, we increase the odds of unnecessarily putting patients in danger.

The first problem that doctors (and patients) face is a basic misunderstanding of probability. Say that Disease X has a prevalence of 1 in 1,000 (meaning that 1 out of every 1,000 people will have it), and the test to detect it has a false-positive rate of 5 percent (meaning 5 of every 100 subjects test positive for the ailment even though they don't really have it). If a patient's test result comes back positive, what are the chances that she actually has the disease? In a 2014 study, researchers found that almost half of doctors surveyed said patients who tested positive had a 95 percent chance of having Disease X.

This is catastrophically wrong. Imagine 1,000 people, all with the same chance of having Disease X. We already know that just one of them has the disease. But a 5 percent false-positive rate means that 50 of the remaining 999 would test positive for it nonetheless. That means 51 people would have positive results, but only one of those would really have the illness. So if your test comes back positive, your true chance of having the disease is actually 1 out of 51, or 2 percent--a heck of a lot lower than 95 percent.
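
The arithmetic here is just Bayes' rule applied to a positive result. A minimal sketch in Python makes the calculation explicit; the function name is mine, and it assumes a perfectly sensitive test (every true case tests positive), which mirrors the numbers in the example.

    # A rough sketch of the Disease X arithmetic; illustrative only.
    def positive_predictive_value(prevalence, false_positive_rate, sensitivity=1.0):
        """Chance that a positive result reflects real disease."""
        true_positives = prevalence * sensitivity
        false_positives = (1 - prevalence) * false_positive_rate
        return true_positives / (true_positives + false_positives)

    # Disease X: 1-in-1,000 prevalence, 5 percent false-positive rate.
    print(positive_predictive_value(prevalence=0.001, false_positive_rate=0.05))
    # ~0.02 -- about a 2 percent chance of disease, not 95 percent.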

A 5 percent false-positive rate is typical of many common tests. The primary blood test to check for a heart attack, known as high-sensitivity troponin, has a 5 percent false-positive rate. U.S. emergency rooms often administer the test to people with a very low probability of a heart attack; as a result, 84 percent of positive results are false, according to a study published last year. These false-positive troponin tests often lead to stress tests, observation visits with expensive co-pays and sometimes invasive cardiac angiograms.
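
The troponin figure follows from the same arithmetic. Purely as an illustration--the pretest probability below is my assumption, not a number from the study--a low-risk emergency-room population with roughly a 1 percent chance of a heart attack and a 5 percent false-positive rate yields a positive predictive value of about 17 percent, so roughly 83 percent of positive results are false, close to the 84 percent reported:

    # Illustrative only: assumes ~1 percent pretest probability of heart attack
    # and a perfectly sensitive test, reusing the sketch above.
    ppv = positive_predictive_value(prevalence=0.01, false_positive_rate=0.05)
    print(ppv)        # ~0.17
    print(1 - ppv)    # ~0.83 -- roughly the reported share of false positives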

In one study, gynecologists estimated that a woman whose mammogram was positive had a higher than 80 percent chance of having breast cancer; the reality is that her chance is less than 10 percent. Women who have a positive mammogram often undergo other tests such as an MRI and a biopsy, which can offer more precision about the presence of cancer. But researchers have found that even after the battery of exams, about 5 of every 1,000 women will have a false-positive result and will be told they have breast cancer when they do not.

These women are likely to receive unnecessary treatment--generally some combination of surgery, radiation or chemotherapy--all of which have serious side effects and are stressful and expensive. Switzerland and France, grasping this problem, are halting or reconsidering their mammogram programs. Switzerland is moving away from screening ahead of time, preferring to manage cases of breast cancer as they are diagnosed. In France, doctors are letting women decide for themselves whether to have the tests.

Studies have found that doctors make similar errors with tests for prostate and lung cancer, heart attack, asthma and Lyme disease. No test is perfect, and even statistically sophisticated doctors can make mistakes. But occasional slip-ups are not the real problem; the deeper issue is how routinely tests are overused and their results overvalued.

In a study I published last year with several colleagues, we reviewed the treatment of 177 patients who were admitted to hospitals with a wide range of problems, from broken bones to severe intestinal pain, to see how necessary their tests were, as judged by the latest medical guidelines. We found that nearly 90 percent of the patients received at least one unnecessary test and that overall nearly one-third of all the tests were superfluous.

In another paper from 2016, my colleagues and I interviewed more than 100 doctors to gauge their understanding of the risks and benefits of 10 common medical tests or treatments. We found that nearly 80 percent of our subjects overestimated the benefits. Tellingly, the doctors themselves seemed aware of the problem: two-thirds rated themselves as not confident in their understanding of tests and probability. Eight out of 10 said they rarely, if ever, talked to patients about the probability of test results being accurate.

I too sometimes fall prey to overvaluing a test result, even when the disease it points to is improbable. Last year, I saw a patient who had problems breathing. His symptoms were typical of chronic obstructive pulmonary disease (COPD), but a test for a blood clot in the lung came back positive. That test has a relatively high false-positive rate, but we still started the patient on a blood thinner, which can treat clots but also carries serious risks, such as internal bleeding. Within a few days, another test confirmed that he did not have a blood clot, so we discontinued the blood thinner; fortunately, it caused no permanent harm. But things could have gone much worse.

Basic misunderstandings about how tests work and how accurate they are contribute to a bigger problem. Although precise numbers are hard to come by, every year, many thousands of patients are diagnosed with diseases that they don't have. They receive treatments they don't need, treatments that may have harmful side effects. Perhaps just as important, they and those around them often experience enormous stress from these incorrect diagnoses. Treating nonexistent diseases is wasteful and often expensive, not only for patients but for hospitals, insurance companies and governments.

Doctors also tend to overuse some tests. In a paper last year, my colleagues and I highlighted some key examples. One was computed tomography (CT), a high-tech scanning technology that is increasingly used in patients with nonspecific respiratory symptoms. In cases with only mild respiratory problems, the test does not improve patient outcomes, and it can lead to false positives. Often the scan shows small lung nodules that prompt doctors to follow up with a high-risk surgical biopsy to check for cancer--even though cancer is very unlikely to be the cause of the symptoms. The scan also exposes patients to radiation, which is a risk in itself: studies have estimated that between 1.5 and 2 percent of all cancers in the United States are caused by radiation from CT scans.

It is not surprising that doctors tend to overestimate the precision and accuracy of medical tests. The companies that provide tests work hard to promote their products. Doctors often think that ordering more tests will protect them from lawsuits. Moreover, medical schools offer limited instruction on how to interpret test results, which means many doctors are not equipped to do this well. Even when medical students get brief classroom instruction in test interpretation, the subject is rarely taught in the clinic with actual patients.

One key step is for doctors to acknowledge the gaps in our understanding and to improve our knowledge of what each test can accurately tell us. Medical schools and professional associations can also do a much better job of educating doctors about how risk and probability work. Patients play an important role, too. They should realize that doctors, even quite capable ones, may not fully understand the statistical underpinnings of the tests they use.

In essence, your doctor may have a blind spot, an unconscious tendency to have too much trust in a test. Being aware of this problem and asking your doctor about disease probability can reduce hassles and anxiety--and sometimes even save lives.

Editorial on 10/14/2018
