High towers and low tenements, the crowd surging and ebbing,
Fame contested, profit pursued, the joys and sorrows, partings and reunions of a million households.

An idle cloud drifts past, the new moon just appearing,
Lamps blaze over the harbour city; between heaven and earth I am left alone.

Old histories raised again, old books reread,
Gazing idly with a cold eye: the passes and mountains are as lonely as ever!

I think of growing old adrift in the world, heart broken for home and country,
A hundred years in a flash; gain and loss a single grain in the vast sea!

徐訏 (Xu Xu), "New Year Musings" (《新年偶感》)

Wednesday, April 25, 2012

John Allen Paulos: Cancer by the Numbers

PHILADELPHIA – It is difficult to communicate medical risk to a large audience, especially when official recommendations conflict with emotional narratives. That is why the public responded with confused fury when the United States Preventive Services Task Force (USPSTF) presented its 2009 guidelines for breast cancer screening, which recommended against routine screening for asymptomatic women in their 40s and called for biennial, rather than annual, mammograms for women over 50.

The key to understanding this response lies in the nebulous zone between mathematics and psychology. People’s discomfort with the findings stemmed largely from a faulty intuition: if earlier and more frequent screening increases the likelihood of detecting a possibly fatal cancer, then more screening must always be desirable. If more screening can detect breast cancer in asymptomatic women in their 40s, wouldn’t it also detect cancer in women in their 30s? And, if so, why not, reductio ad absurdum, begin monthly mammograms at age 15?

The answer, of course, is that such intensive screening would cause more harm than good. But striking the proper balance is challenging: it is not easy to weigh breast cancer’s dangers against the cumulative effects of radiation from dozens of mammograms over the years, the invasiveness of biopsies, and the debilitating impact of treating slow-growing tumors that would never have proven fatal.

The USPSTF recently issued an even sharper warning about the prostate-specific antigen test for prostate cancer, after concluding that the test’s harms outweigh its benefits. Chest X-rays for lung cancer and Pap tests for cervical cancer have received similar, albeit less definitive, criticism.

The next step in the reevaluation of cancer screening came last year, when researchers at the Dartmouth Institute for Health Policy announced that the costs of screening for breast cancer were often minimized and the benefits much exaggerated. Indeed, even a mammogram that detects a cancer (almost 40 million mammograms are given annually in the US) does not necessarily save a life.

The Dartmouth researchers found that, of the estimated 138,000 breast cancers detected annually in the US, the test did not help 120,000-134,000 of the afflicted women. The cancers either were growing so slowly that they did not pose a problem, or they would have been treated successfully if discovered clinically later (or they were so aggressive that little could be done).

A related concern is how survival is measured. Since a patient’s duration of survival is counted from the time of diagnosis, more sensitive screening starts the clock sooner. Survival times can therefore appear longer even when the earlier diagnosis has no real effect on the outcome.
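
A small, purely hypothetical illustration of this lead-time effect, in a few lines of Python (the ages below are invented for the sketch, not taken from the article): if the patient dies at the same age either way, finding the tumor four years earlier adds four years to the measured survival time without changing the outcome.

# Hypothetical numbers, chosen only to illustrate lead-time bias.
age_at_death = 70             # assumed unchanged by when the tumor is found
age_clinical_diagnosis = 65   # tumor found when symptoms appear
age_screening_diagnosis = 61  # same tumor found earlier by screening

survival_clinical = age_at_death - age_clinical_diagnosis     # 5 "years survived"
survival_screening = age_at_death - age_screening_diagnosis   # 9 "years survived"
print(survival_clinical, survival_screening)  # measured survival grows; the outcome does not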

Naturally, individual cases dictate which tests and treatments are best, but an additional concern about frequent screenings is the problem of false positives. When one is looking for something relatively rare (whether cancer or terrorists), it is wise to remember that a positive result is often false. Either the “detected” pathology is not there, or it is not the sort that will kill you.

Consider the following hypothetical example. Assume that the screening test for a certain cancer is 95% accurate, meaning that if someone has the cancer, the test will be positive 95% of the time. Next, assume that if someone does not have the cancer, the test will be positive only 1% of the time. Finally, assume further that 0.5% of people – one of every 200 – actually have this type of cancer. If your doctor tells you that you have tested positive, does this mean that you are likely to have the cancer? Surprisingly, the answer is no.

A little arithmetic shows why. Suppose that 100,000 screenings are conducted. On average, 500 people will have the cancer. Since 95% of them will test positive, there will be, on average, 475 positive tests. Of the 99,500 people without cancer, 1% will test positive, yielding 995 false positives out of 1,470 positive tests. In other words, even if you tested positive for the cancer, the probability that you actually have it is only about 32%. 
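
Under the article’s hypothetical numbers (95% sensitivity, a 1% false-positive rate, and 0.5% prevalence), the same 32% figure falls out of Bayes’ rule directly. The short Python sketch below simply redoes the arithmetic; the function name and structure are illustrative, not taken from the article.

def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """Probability of actually having the cancer, given a positive test (Bayes' rule)."""
    true_positives = sensitivity * prevalence                  # 0.95 * 0.005 = 0.00475
    false_positives = false_positive_rate * (1 - prevalence)   # 0.01 * 0.995 = 0.00995
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(sensitivity=0.95,
                                false_positive_rate=0.01,
                                prevalence=0.005)
print(f"P(cancer | positive test) = {ppv:.1%}")  # about 32%, matching 475 out of 1,470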

That answer is decidedly counterintuitive and hence easy to reject. Most people do not think in terms of probabilities other than “50-50” and “one in a million.” But, whatever the probabilities, the fact remains that screening for rare conditions will generally produce a high percentage of false positives. Moreover, patients who receive these faulty diagnoses will usually undergo further tests and treatments, which often have harmful consequences.

The “availability heuristic” – a pervasive cognitive bias caused by people’s tendency to estimate the likelihood of a phenomenon by how easily an example of it comes to mind – routinely clouds the issue. People relate much more readily to a friend dying of cancer than they do to statistics about strangers suffering from the consequences of testing.

But the bottom line is that the ongoing reevaluation of cancer screening is evidence-based. When it comes to policymaking, decisions must be based on facts and argument, not anecdotes and stories, however compelling those narratives may be.


John Allen Paulos is Professor of Mathematics at Temple University and the author of Innumeracy and A Mathematician Reads the Newspaper.