No, the PSA test probably didn’t save Ben Stiller’s life [Respectful Insolence]


A frequent topic of discussion on this blog is the concept of overdiagnosis, a topic I’ve been writing about regularly since around 2007. Overdiagnosis is defined as the detection in an asymptomatic person of disease that, if left alone, would never progress to endanger that person’s life or well-being within his or her lifetime. The problem with overdiagnosis is that it pretty much always leads to overtreatment, the treatment of overdiagnosed disease that is not health- or life-threatening. The key shortcoming in our knowledge that leads to overtreatment is that, once we detect disease with many screening tests, we usually do not have any tests that can identify which lesions are potentially dangerous and which ones can be safely watched because they are unlikely to progress (or to progress fast enough) to be a problem within the patient’s lifetime.

Most commonly, I’ve discussed overdiagnosis in the context of mammographic screening programs, but I’ve also looked at the overdiagnosis of thyroid cancer, prostate cancer, and other diseases. The bottom line, from my perspective, is that screening asymptomatic populations for breast and prostate cancer does have some benefit, but over the years that benefit has likely been exaggerated while the downside of screening (overtreatment) has been underestimated. On the other hand, there are those out there whom I view as nihilists, who have gone too far in the other direction, declaring certain forms of screening (particularly mammography) to be worthless or even harmful.

The purpose of that preamble is to let new readers know where I’m coming from, as what I’ve just written will be no surprise to regular readers. The longer you’ve been a reader, basically, the less of a surprise it will be. So you can probably predict how I reacted to an article that’s been going around social media since it was first published four days ago. In fact, I was going to write about it yesterday, but my Dug The Dog complex hit, and I was distracted by two articles in the latest New England Journal of Medicine. Still, many of you can guess which article I’m talking about, as my title is akin to my retort to an article by a fellow physician that, no, the New York Times did not kill your patient, only in the other direction. This time around, it’s not a physician, but rather an actor and comedian, Ben Stiller, who wrote about The Prostate Cancer Test That Saved My Life. My response is not quite as definitive as my response about the NYT. To Mr. Stiller, I can only say: “No, the prostate cancer test probably didn’t save your life, but we can’t know for sure.”

Stiller is clearly very intelligent. His essay is well written and well-argued and even includes a poignant description of his reaction to his diagnosis of prostate cancer:

I got diagnosed with prostate cancer Friday, June 13th, 2014. On September 17th of that year I got a test back telling me I was cancer free. The three months in between were a crazy roller coaster ride with which about 180,000 men a year in America can identify.

Right after I got the news, still trying to process the key words echoing dimly in my head (“probability of survival–vival-vival-val…” “incontinence-nence-nence-ence…”), I promptly got on my computer and Googled “Men who had prostate cancer.” I had no idea what to do and needed to see some proof this was not the end of the world.

John Kerry… Joe Torre… excellent, both still going strong. Mandy Patinkin… Robert DeNiro. They’re vital. OK great. Feeling relatively optimistic, I then of course had to do one more search, going dark and quickly tapping in “died of” in place of “had” in the search window.

If you’ve just been diagnosed with cancer, never do that search, people.

Pretty much any cancer patient can relate to Stiller’s story, particularly if the cancer is a common one, like prostate, for which there will likely be lots of news stories about celebrities bravely battling the disease. Here’s where Stiller goes wrong, but understandably so given the issues involved:

Taking the PSA test saved my life. Literally. That’s why I am writing this now. There has been a lot of controversy over the test in the last few years. Articles and op-eds on whether it is safe, studies that seem to be interpreted in many different ways, and debates about whether men should take it at all. I am not offering a scientific point of view here, just a personal one, based on my experience. The bottom line for me: I was lucky enough to have a doctor who gave me what they call a “baseline” PSA test when I was about 46. I have no history of prostate cancer in my family and I am not in the high-risk group, being neither — to the best of my knowledge — of African or Scandinavian ancestry. I had no symptoms.

What I had — and I’m healthy today because of it — was a thoughtful internist who felt like I was around the age to start checking my PSA level, and discussed it with me.

If he had waited, as the American Cancer Society recommends, until I was 50, I would not have known I had a growing tumor until two years after I got treated. If he had followed the US Preventive Services Task Force guidelines, I would have never gotten tested at all, and not have known I had cancer until it was way too late to treat successfully.

It’s understandable why Stiller feels this way and believes this. Also, in fairness, one aspect of Stiller’s case that argues in his favor is how young he is; prostate cancer before the age of 50 is very uncommon. Be that as it may, the notion that early detection always saves lives is maddeningly intuitive. Yet, as I’ve pointed out time and time again, beginning back in 2007, the relationship between the early detection of cancer and the likelihood of cure and long-term survival is not nearly as straightforward as it seems on the surface. There are a number of factors that confound this seemingly obvious and intuitive relationship, but the two most important are lead time bias and length bias.

I’ll illustrate the concept of lead time bias with a graph that I’ve used many times before:

[Figure: lead time bias]

And a slightly more complex one:

[Figure: lead time bias, a more detailed version]

I have explained the concept of lead time bias in more depth here and here, but I use these graphs to illustrate how lead time bias can make it look as though survival from a screen-detected cancer is longer even in the absence of any therapeutic effect whatsoever from treatment. Since it’s been a while since I’ve shown this particular graph, I’ll add it to the mix, as it shows the effect of lead time bias on cancer survival curves:

[Figure: effect of lead time bias on cancer survival curves]

Before I move on, I’ll even cite Aaron Carroll again, as he gives one of the simplest and most straightforward explanations of lead time bias that I’ve yet seen. Of course, the phenomena of lead time bias and overdiagnosis are intertwined. Think of overdiagnosis as the detection of a tumor with a lead time, as illustrated above, that is longer than the patient’s remaining expected lifespan, and you’ll see what I mean (I hope). In such a case, survival with respect to the cancer is, in essence, infinite: the patient dies of something else before the tumor progresses significantly.
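For readers who like to see this play out in numbers rather than graphs, here is a minimal toy simulation (my own sketch with made-up distributions, not anything from Carroll or from the screening studies discussed here) in which treatment does absolutely nothing and every patient dies at the same biological moment, yet the screened group appears to survive longer after diagnosis simply because the diagnostic clock starts earlier:

```python
# A toy simulation of lead time bias. Every tumor here is fatal at a fixed
# biological time and treatment does nothing, yet "survival after diagnosis"
# looks longer in the screened group because the clock simply starts earlier.
# The lead time and survival distributions are invented for illustration.
import random

random.seed(0)

def simulate(n=100_000, max_lead_time_years=3.0):
    screened, unscreened = [], []
    for _ in range(n):
        # Years from symptomatic (clinical) diagnosis to death; identical in
        # both groups because treatment has no effect in this toy model.
        clinical_to_death = random.uniform(1.0, 5.0)
        # Screening finds the tumor some years before symptoms would appear.
        lead_time = random.uniform(0.0, max_lead_time_years)
        unscreened.append(clinical_to_death)
        screened.append(lead_time + clinical_to_death)
    mean = lambda xs: sum(xs) / len(xs)
    print(f"Mean survival after diagnosis, unscreened: {mean(unscreened):.2f} years")
    print(f"Mean survival after diagnosis, screened:   {mean(screened):.2f} years")
    # Everyone dies at exactly the same biological moment in both scenarios;
    # only the measured survival from the date of diagnosis differs.

simulate()
```

Run it and the screened group “gains” about a year and a half of measured survival without a single death being delayed, which is lead time bias in a nutshell.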

The other problem with screening tests that confounds the relationship between early detection and improved cancer survival is a phenomenon known as length bias. Basically, length bias is the term used to describe how regular screening tests preferentially detect slower-growing, more indolent disease. How many of you know someone who had a mammogram on schedule as recommended but came back the next year with a big, advanced breast cancer? Did mammography fail? Probably not. Rather, the tumor was just too fast-growing; it went from undetectable by the test (mammography) to advanced during the interval between screenings. More often, what is detected by screening tests like mammography is disease that grows relatively slowly. This concept is illustrated here, by another chart that I also like to use a lot:

[Figure: length bias]

The other problem with length bias is that the more sensitive the test, the more likely it is to detect tiny indolent tumors that are so slow-growing that they would never progress to the point of endangering the patient’s life within the patient’s lifetime. Some cancers, particularly screen-detected ones, even spontaneously regress. Length bias and overdiagnosis are also related concepts, in that length bias contributes to the tendency of screening tests to overdiagnose the disease they are looking for.
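To make the length bias point concrete, here is another toy sketch (again, my own illustration with invented numbers, not data from any screening trial): half the simulated tumors are aggressive with a short preclinical, screen-detectable window, half are indolent with a long one, and a screen is performed every couple of years. The screen-detected group ends up heavily enriched for indolent tumors, while the aggressive ones mostly surface as interval cancers between screens:

```python
# A toy illustration of length bias with made-up numbers. Tumors are
# "screen-detectable but not yet symptomatic" for a sojourn time that is
# long for indolent tumors and short for aggressive ones. A tumor is caught
# by periodic screening only if a screening date falls inside that window,
# so slow growers dominate the screen-detected group.
import random

random.seed(0)

def simulate(n=100_000, screen_interval_years=2.0):
    screen_detected, interval_cancers = [], []
    for _ in range(n):
        aggressive = random.random() < 0.5
        # Sojourn time: short for aggressive tumors, long for indolent ones.
        sojourn = random.uniform(0.1, 1.0) if aggressive else random.uniform(2.0, 8.0)
        # Time from the start of the detectable window to the next scheduled
        # screen, assuming the window opens at a random point in the cycle.
        time_to_next_screen = random.uniform(0.0, screen_interval_years)
        if time_to_next_screen < sojourn:
            screen_detected.append(sojourn)    # a screen caught it first
        else:
            interval_cancers.append(sojourn)   # symptoms appeared between screens
    mean = lambda xs: sum(xs) / len(xs)
    print(f"Screen-detected:  n={len(screen_detected):>6}, "
          f"mean sojourn {mean(screen_detected):.1f} years (mostly indolent)")
    print(f"Interval cancers: n={len(interval_cancers):>6}, "
          f"mean sojourn {mean(interval_cancers):.1f} years (all aggressive here)")

simulate()
```

The enrichment happens purely because slow-growing tumors spend more time in the detectable-but-asymptomatic window, not because screening “works” better on them in any therapeutic sense.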

So, unfortunately, Ben Stiller is wrong: there’s no way of knowing that the PSA test “saved” his life or that, if he hadn’t been screened at all, his tumor would not have been found until it was “too late.” Those are common assumptions on the part of patients and even many physicians. In brief, there is no surefire way of knowing whether or not he was overdiagnosed. However, as we’ve come to appreciate over the last decade, prostate cancer is very commonly overdiagnosed. In fairness, Stiller is definitely more savvy and informed than the average celebrity writing about medical issues. He clearly at least recognizes the controversies involved in screening:

The criticism of the test is that depending on how they interpret the data, doctors can send patients for further tests like the MRI and the more invasive biopsy, when not needed. Physicians can find low-risk cancers that are not life threatening, especially to older patients. In some cases, men with this type of cancer get “over-treatment” like radiation or surgery, resulting in side effects such as impotence or incontinence. Obviously this is not good; however it’s all in the purview of the doctor treating the patient.

This is exactly what happened to Stiller. His PSA rose for a year and a half before he was referred to a urologist, who did an examination and ordered an MRI. This led to a biopsy, which diagnosed a tumor with a Gleason score of 7 (3+4), which is categorized as “mid-range aggressive” cancer. The Gleason score, however, is not the be-all and end-all as a criterion for starting treatment. Stiller doesn’t tell us how many cores of his biopsy contained cancer, whether cancer was found in both sides of the prostate, how much of each core contained cancer, or how fast his PSA was rising. Be that as it may, he is far more likely than not to be mistaken when he declares so bluntly that the PSA test definitely saved his life. This infographic from the National Cancer Institute shows why there’s at least a 95% chance he was mistaken: for every 1,000 men who undergo PSA screening, only one life is saved, while 100-120 men get false positive results and 110 receive a prostate cancer diagnosis. Again, it’s understandable why he would believe as he does, but that doesn’t make it any less misguided.
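If you want to see where that “at least a 95% chance” comes from, the arithmetic is simple. Here it is using the per-1,000-men figures cited above (as quoted from the NCI infographic; the exact numbers vary by trial and length of follow-up, so treat this as a rough sketch rather than a precise estimate):

```python
# Back-of-the-envelope arithmetic using the per-1,000-men figures cited in
# the post (from the NCI infographic); exact numbers vary by trial and
# follow-up, so this is a rough sketch, not a precise estimate.
men_screened = 1_000
lives_saved = 1
false_positives = (100, 120)  # men with an abnormal PSA but no cancer
cancers_diagnosed = 110       # men who receive a prostate cancer diagnosis

# Of the men actually diagnosed through screening, roughly what fraction
# can credit the test with saving their life?
p_saved_given_diagnosis = lives_saved / cancers_diagnosed

print(f"Per {men_screened} men screened: {false_positives[0]}-{false_positives[1]} "
      f"false positives, {cancers_diagnosed} diagnoses, {lives_saved} life saved")
print(f"P(test was life-saving | screen-detected diagnosis) ≈ "
      f"{p_saved_given_diagnosis:.1%}")  # ≈ 0.9%, i.e., well under 5%
```

In other words, for any individual screen-detected prostate cancer, the odds that the test was actually life-saving are on the order of 1 in 110.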

Let’s just put it this way: if even Dr. Mehmet Oz throws cold water on your claims, you should rethink.

Of course, Oz then went on to oversell dietary and exercise interventions for preventing death from prostate cancers, but you can’t expect Oz to do a whole segment without getting into some woo, now, can you?

The mistaken argument that PSA screening almost certainly saved his life isn’t the only problem with Stiller’s article. To support his point, he also cited a study by Edward Schaeffer, his surgeon and the chair of urology at Northwestern University, stating, “There is growing evidence that these guidelines have led to increased cases of prostate cancers that get detected too late for the patient to survive the disease.”

You might remember this study, as it was in the news back in July. Basically, it purported to find that the incidence of metastatic prostate cancer has been climbing since rates of PSA screening began to fall, the implication being that decreased screening is leading to more men being diagnosed with metastatic (and therefore incurable) prostate cancer. I first saw the study at a time when I had a lot of other things going on, so I didn’t look at it carefully. Concerned, I whipped off a quick e-mail to a relevant expert whom I trusted and had published with before, asking him what he thought of the study and whether I should perhaps consider starting PSA screening again. His response was blistering.

I’m not going to quote his e-mail directly because I do not publicly reveal e-mail contents without the permission of the person who sent them to me. I can say that he dismissed the study, pointing out that, although the authors used the word “incidence,” what they were really counting were gross numbers of cases reported to the National Cancer Data Base (NCDB) from over 1,000 health care facilities and changes in those gross numbers from 2004 to 2013. He pointed out that true incidence is the number of cases divided by the number of people in the population (usually expressed per 100,000). So here’s the problem: Schaeffer’s group reported the numerator but had no idea what the denominator was (the number of people in the population from which the cases came), as they even basically admitted:

Limitations to the current study include the lack of national annual incidence rates in the NCDB. Thus, our outcome variable was annual incidence of prostate cancer at over 1000 health-care facilities in the United States relative to that of 2004, the initial year of our study period.

In fact, the incidence of metastatic prostate cancer was stable from 2004 to 2012 (the time period of the study cited by Stiller), and it was actually lower in 2012 than it was in 2000. Clearly, when measured by more standard methods, the incidence of metastatic prostate cancer is not increasing.
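To see why counting cases without a denominator is so misleading, here is a minimal, entirely hypothetical example (the numbers below are invented for illustration and are not NCDB data): if the population served by the reporting facilities grows while the true rate stays flat, the raw case counts climb anyway:

```python
# An entirely hypothetical illustration of the numerator/denominator problem:
# the rate of metastatic disease per 100,000 is flat, but the population
# covered by the reporting facilities grows, so raw case counts climb anyway.
# These numbers are invented for illustration; they are not NCDB data.
def incidence_per_100k(cases, population):
    return cases / population * 100_000

true_rate = 8.0  # hypothetical, unchanging rate per 100,000
catchment = {    # hypothetical population served by the reporting facilities
    2004: 10_000_000,
    2007: 10_800_000,
    2010: 11_700_000,
    2013: 12_600_000,
}

for year, pop in catchment.items():
    cases = round(true_rate * pop / 100_000)
    print(f"{year}: {cases:>5} cases reported, "
          f"incidence {incidence_per_100k(cases, pop):.1f} per 100,000")
# Raw counts rise from 800 to 1,008 even though the true rate never moves,
# which is why a numerator without a denominator tells you very little.
```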

Suffice it to say, I was horribly embarrassed for having asked about this study when, in retrospect, I knew enough to have figured out for myself how useless it was.

Others have been critical of Stiller as well, for example Kevin Lomangino, the managing editor of HealthNewsReview.org, who wrote Ben Stiller’s misguided prostate cancer recommendations aren’t based on evidence. He hit many of the same points I did (though he didn’t go into lead time bias and length bias) and some that I didn’t, noting, as I do, that Stiller is “smart, persuasive, and famous,” but that “his skewed piece may do a great deal of harm to men who may be led astray by his faulty reasoning.” He also cites oncologist Vinay Prasad, MD, MPH, who made a point similar to one that I’ve made many times regarding mammography:

Ben Stiller says everyone over 40 should get a PSA, but why does he discriminate against 39 year olds? If you accept Ben Stiller’s logic, that we should do anything to find cancer early (with near total disregard for net effects, harms or overdiagnosis), why is 40 Ben Stiller’s cutoff? He criticizes the American Cancer Society for 50, and yet equally arbitrarily chooses 40. If Ben Stiller thinks a 40 year old should be offered a PSA, why not a 39 year old? Why not every man? Since Ben Stiller does not employ careful scientific reasoning to reach his position, I would argue that Ben Stiller is logically inconsistent.

I’ve basically said the same thing about mammography to those who criticize the newer recommendations that raised the age at which mammographic screening should begin for the average-risk woman and who argue for retaining the existing guidelines recommending that screening begin at age 40. Why not begin at age 35? Or 30? Or even 20? In other words, even the most enthusiastic advocates of screening realize that there is an age below which the incidence of breast cancer is so low that the harms from screening far outweigh the potential benefits. Setting those cutoffs is always a judgment call, and there is always harm from overtreatment whenever a screening test carries a significant incidence of overdiagnosis. Balancing the risks and benefits is what makes constructing screening programs for common cancers like breast and prostate so difficult. I only wish that Ben Stiller had done more than acknowledge those risks in passing and had avoided concrete declaratory statements like, “Taking the PSA test saved my life. Literally.” And: “I believe the best way to determine a course of action for the most treatable, yet deadly cancer, is to detect it early.”

Unfortunately, we’ve been learning that this is not always true. It’s complicated. I am gratified, though, that the reaction to Stiller’s article has been, unlike what would likely have been the case in the past, to take him to task for making assertions not supported by evidence. Even so, Stiller’s celebrity will likely trump evidence.



from ScienceBlogs http://ift.tt/2dzLYD4
