
The CDC whistleblower documents: A whole lot of nothing and no conspiracy to hide an MMR-autism link [Respectful Insolence]

One of the stories dominating my blogging in 2015 was a manufactroversy that started in August 2014 when, after several months of rumbling in the antivaccine crankosphere that a CDC scientist was ready to blow the whistle on an alleged coverup of evidence that vaccines cause autism, Andrew Wakefield, ever the publicity hog, released a video entitled CDC Whistleblower Revealed, in which he claimed to have evidence of a “high level deception” of the American people about vaccine safety and revealed the “CDC whistleblower” to be one William W. Thompson, PhD, a psychologist by training who worked for the CDC studying vaccine safety in epidemiological studies and who had had many telephone conversations with Brian Hooker, a biochemical engineer turned incompetent epidemiologist. Unbeknownst to Thompson, Hooker had been recording those conversations, and carefully cherry-picked excerpts were included in the video, interspersed with Wakefield making hyperbolically offensive claims that this “coverup” was as bad as the Tuskegee syphilis experiment and that the CDC was worse than Hitler, Stalin, and Pol Pot. (I kid you not.) We now know that Thompson had been assisting Hooker in a “reanalysis” of a pivotal vaccine safety study by DeStefano et al, on which Thompson had been a co-author, that looked at whether the MMR vaccine was associated with autism in children in the Atlanta area. (Spoilers: It wasn’t.) The reanalysis claimed to have found an increased risk of autism for a small subset: African American males who had been vaccinated before age 3. Of course, even with Hooker’s incompetent reanalysis, the study failed to find a correlation in any other subgroup, leading me to refer to it as having proven Andrew Wakefield wrong.

Thus was born the saga of the “CDC whistleblower,” a.k.a. William Thompson, which has dominated Twitter through the #CDCwhistleblower hashtag for over a year now. There’s been a major new development in this story that I just couldn’t wait to tell you about: Matt Carey now has the CDC whistleblower documents, and, as a result, so do I, and so can you. Let me explain.

But first, let me note that Hooker’s study was unbelievably incompetently done: he failed to control for some obvious key confounders, which is not surprising given his misplaced love of “simplicity” in statistical analysis, and he performed a cohort study using data that had been collected for a case-control study. Epic incompetence indeed, so much so that his study was ultimately retracted by the journal—and rightly so. Unfortunately, what had infuriated Thompson, who felt dismissed and used, was the supposed “coverup” of a preliminary “positive” result in a small subset of the study population, a result that didn’t hold up once confounders were controlled for. He’s been silent since, but the events he set into motion fueled more paranoid conspiracy theories in the antivaccine movement, which teamed up with the Nation of Islam, pulled Robert F. Kennedy, Jr. out from under whatever rock he had been hiding and dusted him off, and held a protest at the CDC in October. Meanwhile, Kevin Barry published a book of transcripts of four of Thompson’s conversations with Hooker, which revealed a rather angry, troubled man eager to lash out at his former CDC colleagues even though he still works at the CDC in another branch.

Right now, here’s where the manufactroversy stands. Thompson provided Rep. Bill Posey (R-FL) with a bunch of documents that he claimed to have saved from disposal and that supposedly “prove” there was a coverup of unwanted results. One of his key claims was that the CDC changed the analysis plan after the study had started, a definite no-no, because the CDC didn’t like the race results implying a correlation between MMR vaccination and autism in African American boys. He also accused them of destroying original documents. Posey called for an investigation in a little-seen speech a couple of days before Congress left for its summer recess, resulting in a resounding yawn and no action. Ultimately, Ben Swann, an opportunistic Alex Jones wannabe who anchors the news at the Atlanta CBS affiliate, acquired the documents from Posey in late November and promised a report on them. There has been no story yet, and now, thanks to Matt, I know why: there is nothing in those documents that supports the allegations of a coverup.

How did Matt acquire the documents? Let him explain:

Congressman Posey released the documents to a journalist recently and, given that they are now in the public domain, Dorit Reiss and I requested that they be made available to us as well. Mr. Posey’s office graciously granted our request and I have spent some time going through them.

Matt has also made the documents available to several other bloggers, including me, and I thank him for that. I, too, have gone over the documents, albeit not every single one of them and not in as much detail as Matt. He has also made them available at a Dropbox link for anyone out there who is curious and wants to read them. I warn you, though: it’s very tedious reading, particularly the various meeting agendas and the like, as well as the SAS spreadsheets. In all, there are over 100 MB worth of scanned PDFs. However, there are most definitely not 100,000 documents there, as some antivaccine cranks have claimed. Matt says there are about 1,000 pages, and that seems about right to me, though I haven’t counted them all myself.

There are a few key points that arise from this document dump. First, there are multiple drafts of the analysis plan; that is, the protocol for collecting and analyzing the data that Hooker and Wakefield claim was changed after the first analysis of race data. They confirm what we already know, namely that the final analysis plan was dated September 5, 2001 and the first race analysis didn’t occur until October or November. But there’s more than that. Matt found what appears to be the first draft of the analysis plan, complete with markup and notes in the margins:

Note that this draft analysis plan is dated April 3, 2001, well before the final version (the “protocol”) of September 5 and, more importantly, long before a race analysis was started. Notice, too, the annotation “I would include race as a covariate, not as an exposure variable.” That’s critical: they decided against using race as an exposure variable from the start, before they did a race analysis. Another point: they were already planning on using birth certificate data right from the start.

A word of explanation here. In his original video, Wakefield zeroed in on a single sentence that says “The only variable available to be assessed as a potential confounder using the entire sample is child’s race.” Based on that, and allegedly confirmed by Thompson during conversations with Hooker, Wakefield and Hooker claimed that “decisions were made regarding which findings to report after the data was collected,” further claiming, “Thompson’s conversations with Hooker confirmed that it was only after the CDC study coauthors observed results indicating a statistical association between MMR timing and autism among African-Americans boys, that they introduced the Georgia birth certificate criterion as a requirement for participation in the study. This had the effect of reducing the sample size by 41% and eliminating the statistical significance of the finding, which Hooker calls a direct deviation from the agreed upon final study protocol – a serious violation.” Of course, the reason they did the birth certificate analysis is that it allowed them to “obtain additional information, such as each child’s birth weight and gestational age and the mother’s parity, age, race, and education.” More importantly, as Matt discovered, the very first draft of the analysis plan indicated that the investigators were already planning on using birth certificate data. There was no change in protocol to “cover up” results the investigators found “inconvenient,” namely the initial finding of a seeming correlation between a specific age range of MMR vaccination and autism in African American boys.

There’s no way Thompson didn’t know this, at least at the time he was working on this study with his collaborators. Perhaps he forgot. (I’m being charitable.) Of course, I’m not so charitable about Wakefield and Hooker, who also had all these documents. Surely they were poring over them with a fine-toothed comb for any dirt they could find, and in doing so they had to have read the early versions of the analysis plan. I also noticed, as did Matt, that Thompson annotated a number of the documents, in particular a file containing all the agendas for meetings on the study. It’s impossible to know when he did this, whether contemporaneously or long after the fact, but it looks as though it was probably after, given how prominently some dates are circled. As Matt notes, it also looks as though Thompson was trying to make the data fit his story, rather than the other way around. His purple marker is all over the place.

There are other things in these documents as well. For instance, there is this statement by William Thompson with a timeline of his version of events dated September 9, 2014. The funny thing is, even his own timeline doesn’t really support the allegation being made by Hooker and Wakefield that the protocol was altered post hoc in order to “hide” the effect. Rather, he claims:

The final analysis plan described analyses for the TOTAL sample and the BIRTH CERTIFICATE sample which included assessment of the RACE variable. (See pages 7 and 8 of the Final Analysis Plan). There were two primary endpoints for the study. One was using a threshold of 36 months (see Table 3a of Final Analysis Plan), and the second was a threshold of 18 months. (See Table 3b of Final Analysis Plan). We hypothesized that if we found statistically significant effects at either the 18-month or 36-month threshold, we would conclude that vaccinating children early with the MMR vaccine could lead to autism-like characteristics or features. We never claimed or intended that if we found statistically significant effects in the TOTAL SAMPLE, we would ignore the results if they could not be confirmed in the BIRTH CERTIFICATE SAMPLE.

I note that the protocol didn’t mandate reporting effects whose statistical significance went away when tested in the birth certificate cohort, either. Whether or not to report spurious results that disappeared when more confounders were accounted for would have been a matter of judgment more than anything else. We can argue whether it was good judgment to leave the preliminary result out as insignificant (more on that later), but it wasn’t a violation of the protocol as far as I can tell. Also, as yet I haven’t seen anything objective or contemporaneous that even hints at hiding data, destroying data, or otherwise manipulating data. Instead, there are plenty of comments about *including* things to avoid any appearance of willful omission. If this is a coverup, it’s the worst coverup ever.
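To make the confounding logic concrete, here is a minimal simulation of how a crude association can appear and then vanish once you stratify on a confounder. To be clear, this is not the CDC’s data or analysis; every rate below is invented, and the hypothetical “preschool” variable merely stands in for any factor (one commonly suggested is enrollment in programs with immunization requirements) tied to both on-time vaccination and an autism diagnosis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Invented rates: children later diagnosed with autism are more likely
# to attend a (hypothetical) preschool program that requires on-time MMR.
autism = rng.random(n) < 0.01
preschool = np.where(autism, rng.random(n) < 0.6, rng.random(n) < 0.3)

# Vaccination by 36 months depends only on preschool enrollment here,
# not on autism status itself -- so any crude link to autism is spurious.
vax = rng.random(n) < np.where(preschool, 0.95, 0.80)

def odds_ratio(exposed, outcome):
    a = np.sum(exposed & outcome);  b = np.sum(exposed & ~outcome)
    c = np.sum(~exposed & outcome); d = np.sum(~exposed & ~outcome)
    return (a * d) / (b * c)

print("crude OR:", round(odds_ratio(vax, autism), 2))        # comes out > 1
for stratum in (True, False):
    m = preschool == stratum
    print("OR, preschool =", stratum, ":",
          round(odds_ratio(vax[m], autism[m]), 2))            # each ~ 1
```

With these made-up numbers the crude odds ratio comes out noticeably above 1, while both stratum-specific odds ratios hover around 1: the apparent vaccine–autism link is entirely an artifact of the confounder.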

We also learn that Thompson was causing trouble, trouble that landed him in hot water at the CDC. Matt was too circumspect to mention that, but I think it’s important to bring up, (1) to show that Thompson has an axe to grind and (2) because you know that when it comes out, the cry from the antivaccine crankosphere will be that Thompson was being “persecuted.” Thompson’s description in his own words is in the timeline:

On March 9th, I was put on administrative leave. In the Annex to the memorandum, they provided a list of my “inappropriate and unacceptable behavior in the work place” which included “you criticized the NIP/OD for doing very poor job of representing vaccine safety issues, claimed that NIP/OD had failed to be proactive in their handling of vaccine safety issues, and you requested that Dr. Gerberding reply to your letter from a congressional representative before you made your presentation to the IOM.” (See scanned Memorandum dated January 9, 2004.). I stand by that statement and I do not think it was unacceptable to convey that to Dr. Gerberding.

Elsewhere, there is a handwritten note from 2/4/2004:

[image: handwritten note, labeled “DeStefanoFire”]

And another annotation from 2/12/2004:

[image: annotation]

Why would Bob Chen have wanted to fire Thompson? It’s not entirely clear from the documents, but Thompson was clearly making trouble. This letter telling Thompson he was being put on administrative leave lists several instances of inappropriate and unacceptable behavior and makes it sound as though this action was being taken out of concern that Thompson was under extreme stress, which was certainly possible. The letter notes the issue described above by Thompson as well as:

  • Refusing to assist Dr. Gina Mootrey when she asked Thompson to clarify some points in a slide presentation regarding influenza so that Dr. Walter Orenstein could modify some of the slides for a different presentation.
  • Approaching Dr. Orenstein in the parking lot and demonstrating “inappropriate anger towards Dr. Orenstein, his request, and your perception that Dr. Orenstein was responsible for permitting a hostile environment within your organizational unit.”
  • Sending emails to Dr. Orenstein requesting an apology.
  • Writing emails to senior staff complaining about Dr. Orenstein, accusing him of harassment.

The final paragraph:

The general tone and content of your e-mails were inappropriate and gave the appearance that senior management had not fulfilled their public health obligations as they pertain to vaccine safety. Your actions had the effect of eroding the employment relationship between supervisor and subordinate, and appear to make a mockery of management’s authority to direct the activities of this office. Furthermore, your interaction with Dr. Orenstein created concern about your level of anger being out of proportion to the facts.

Interestingly, the memo specifically said that it would not be placed in Thompson’s Official Personnel Folder, which means Thompson must have included it in the document dump to Rep. Posey’s office himself; in other words, he wanted Posey to see it, perhaps as “evidence” of “persecution” or of retaliation for his complaints about the study that became DeStefano et al. In any event, it’s clear that Thompson had (and probably still has) what we refer to as anger issues. This is consistent with previous evidence that Thompson doesn’t play well with others.

So what emerges from all these documents? One thing that doesn’t emerge is any evidence of a coverup. There’s no contemporaneous documentation to suggest an effort to “hide” findings viewed as “inconvenient,” although Thompson’s retroactive markups of the meeting agendas sure try to make it seem as though there were. In the end, after this document dump, we’re left with no evidence of scientific malfeasance or attempts to whitewash data. Even in the part where Thompson states that the co-investigators got together to throw unneeded documents in the wastebasket, one has to wonder: what was thrown away? If this document dump is any indication, they probably got rid of old meeting agendas and old drafts of the protocol. No wonder Matt quipped, “I hope people at CDC are not keeping all this paper.” Even Thompson notes that all the original computer files still reside on CDC servers.

All of this brings us back to a point that Matt makes regarding whether it was a good idea to leave out the spurious statistically significant result:

Ah, one will say, what about the finding of an association between the MMR and autism for African American boys vaccinated late (between 18 months and 36 months)? Why wasn’t that included in the published paper or public presentations? The reasons given by Thompson/Hooker/Wakefield don’t hold water as I’ve shown. So, what was the scientific reason for not including this result in the paper? Many online writers have discussed how weak this result is; how it is a spurious result. But I’d like to know the reasoning at the time behind the CDC decision to leave this out. As a community member–an autism parent–I’d like to see all the results and understand the reasons why certain results are spurious. Of course it is easy to say now, but leaving this out of the public’s eye was a mistake. It gave Thompson, Hooker and Wakefield the chance to cherry pick, hide information and craft a story that has been very damaging to the autism communities and to public health.

Matt has a point. On the other hand, as a scientist myself, I realize that decisions are made all the time about what data to include in and exclude from a manuscript. We frequently leave out data that seemed statistically significant at first but didn’t hold up after correcting for confounders. But, then, I don’t do research in an area where antiscience loons are waiting to pounce on any inconsistency in order to sow fear and doubt, something we know antivaccinationists were doing even in 2004, when the manuscript that became DeStefano et al was being written and submitted for publication. Even so, although it’s easy to ask why the CDC didn’t see the potential for mischief at the time, we’re viewing history through the retrospectoscope, which is 100% accurate. At the time, how could anyone have predicted that Thompson’s disillusionment and anger at his colleagues would lead him to pal around with Brian Hooker and give Hooker and Wakefield enough material to make so much mischief?
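It’s also worth spelling out, with simple arithmetic, why an isolated subgroup “hit” like this one is so unremarkable. Slice a dataset into enough subgroups and test each at p < 0.05, and the chance of at least one false positive grows quickly. A back-of-the-envelope sketch (assuming, unrealistically, independent tests):

```python
# Chance of at least one spurious p < 0.05 "finding" among k subgroup
# tests, under the simplifying assumption that the tests are independent.
alpha = 0.05
for k in (1, 5, 10, 20):
    print(f"{k:2d} tests: P(at least one false positive) = "
          f"{1 - (1 - alpha) ** k:.2f}")
```

Ten independent comparisons already give roughly a 40% chance of a “significant” result where none exists, which is exactly why a finding confined to one small, post hoc subgroup deserves deep skepticism rather than headlines.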

One thing’s for sure. As unrevealing as Thompson’s document dump is, you can be sure that the antivaccine movement will, reality be damned, continue to spin it as proof of a coverup. Same as it ever was.



from ScienceBlogs http://ift.tt/1SwuBkJ


January 6 before dawn: Moon, Venus, Saturn

Wednesday before dawn – January 6, 2016 – get up early to see the waning crescent moon and the dazzling planet Venus near each other in the twilight. In fact, get up early and be rewarded four times over! You can see four visible planets – Venus, Mars, Jupiter and Saturn – in the morning sky now.

The moon and Venus rank as the second-brightest and third-brightest celestial bodies, after the sun. Even if you get up as little as one-half hour before sunrise, you’ll still have a good chance of spotting these two brilliant beauties in the morning twilight glare.

If you want to see the planet Saturn and the star Antares in the vicinity of the waning moon and Venus, you probably need to get up more than an hour before sunrise. Ideally, you’ll be up and about one and one-half hours before the sun to view Saturn and Antares in the neighborhood of the moon and Venus.

When will all five visible planets appear simultaneously?


Watch the waning crescent moon swing by Venus, Saturn and Antares over the next few days. The green line highlights the ecliptic – Earth’s orbital plane projected onto the great dome of sky.

But we said four planets, and so far we’ve mentioned only two. The other two are Mars and Jupiter.

The sky chart below shows the approximate positions of the morning planets as seen from mid-northern latitudes. From anywhere worldwide, though, the lineup of planets enables you to planet-hop – jump from one planet to another.

For instance, you can locate Mars roughly midway between the sky’s two brightest planets, Venus and Jupiter. You can find the planet Saturn very close to Venus, the sky’s brightest planet.

[Sky chart: Venus, Jupiter, Mars and the moon in the early January morning sky]

Starting around January 20, 2016, Mercury will join the predawn/dawn planet festival to showcase all five visible planets – Mercury, Venus, Mars, Jupiter and Saturn – in the same sky for the first time since January of 2005.

Bottom line: The moon and Venus are close together on the morning of January 6, 2016. Saturn and the star Antares are nearby. The other very bright planet, besides Venus, is Jupiter. It’s higher in the sky before dawn than the other planets. Mars is about midway between Jupiter and Venus.

Astronomy events, star parties, festivals, workshops for 2016




from EarthSky http://ift.tt/1Swtj9b


125/366: Flash! (Aaah-aaaaaahhhhh!) [Uncertain Principles]

One of my Christmas gifts this year was an external flash unit for my DSLR, replacing one that broke a while back. Given that the camera has a built-in flash, you might wonder why I need this extra bulky gadget, so to answer that question, here’s a composite of five pictures I took today:

Composite image showing various flash settings. The top is the intentionally underexposed case with no flash. Top left is the direct flash, bottom left direct flash with the diffuser. Top right is indirect flash at a 45 degree angle, bottom right is indirect flash straight up.

I put the camera in full manual mode, and set it to be a little underexposed when photographing The Pip’s PAW Patrol figures and other crap on our dining room table. That’s the top center image.

The other four shots use the external flash unit in various modes. The pair on the left have the flash pointed directly forward, basically like the built-in flash unit. The top one is the regular flash, and you can see that it looks a little harsh, with a big glare spot in the glass behind the toys, and the central dog a little overexposed. The lower one is still straight ahead, but with the diffuser in place (a piece of textured plastic looking a little like a bike reflector that breaks up the outgoing light).

On the right are two images with the indirect flash. The top has the flash tipped up at a 45 degree angle, and the bottom has it straight up. These bounce the light off the ceiling, and as you can see, this gives a less harsh illumination. There’s still a bit of glare on the glass in the 45-degree shot, but that’s almost completely gone in the straight-up one.

The indirect flash images look a whole lot nicer to me than the basic direct flash (the diffuser one looks pretty similar to the 45-degree shot, so it would also be okay). This also almost completely avoids the red-eye effect you get with a direct flash – you can sorta-kinda fix that in software, but as a general rule the less work you need to do in GIMP, the better.

(The optics here is basically an inverse-square sort of thing: with the direct flash, it’s acting sort of like a point source, so the light is much more intense on objects close to the lens than those farther away. Bouncing the light off the ceiling first means that there isn’t too much difference in distance between the nearest and farthest objects in the image, so everything is lit about the same. You also get less glare and red-eye because you don’t have light coming straight from the flash and reflecting straight back into the camera.)
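If you want to put rough numbers on that hand-waving, here is a toy inverse-square calculation. The distances are invented for a tabletop scene like mine, and it ignores the light lost to absorption and spread at the ceiling:

```python
# Inverse-square falloff: direct flash vs. bounce flash.
# All distances are made up for illustration (metres).
d_near, d_far = 0.5, 1.5   # direct flash-to-subject distances
b_near, b_far = 2.2, 2.5   # flash -> ceiling -> subject path lengths

direct = (d_far / d_near) ** 2   # nearest toy gets ~9x the light
bounce = (b_far / b_near) ** 2   # nearest toy gets only ~1.3x the light

print(f"direct: nearest object {direct:.1f}x brighter than farthest")
print(f"bounce: nearest object {bounce:.1f}x brighter than farthest")
```

A factor of nine between the front and back of the scene is the harsh, blown-out look; a factor of 1.3 is the nearly even illumination you see in the bounce shots.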

And that’s why I like having the external flash unit. You can get some of the same effect by taping a white piece of paper in front of the built-in flash, but that’s inelegant and not quite as flexible (there are other tricks you can pull with the external unit, too). Which is why it was a great Christmas gift…



from ScienceBlogs http://ift.tt/1Rdq0og


ZOMG: ExxonMobil and Sierra Club Agreed on Climate Policy—and Kept It Secret [Stoat]

So, I’ve decided to try the “ZOMG” prefix for these things, instead of postfixing a mark of interrogation. Perhaps it makes things clearer. Anyway, the latest breathless nonsense is ExxonMobil and Sierra Club Agreed on Climate Policy—and Kept It Secret from Bloomberg (h/t JS). Why is it nonsense? Firstly, they’re pretending this is news. It isn’t news: it is essentially a re-tread of How two ExxonMobil and Sierra Club lawyers agreed on a carbon tax, a much better article that is more than a year old. Notice that that article doesn’t make any foolish claims about secrets. Secondly, Exxon’s support for a carbon tax in 2009 was public; see the Calgary Herald: “ExxonMobil Corp., the world’s largest crude oil refiner, supports taxing carbon dioxide as the most efficient way of curbing greenhouse gas emissions, its chief executive said,” via the highly-sekrit [[carbon tax]]. Note that this precedes the document Bloomberg swoons over by months.



from ScienceBlogs http://ift.tt/1mw5e6p


Dispersion in organic chemistry – a review and another example

The role of dispersion in organic chemistry has slowly been recognized as quite critical in a variety of systems. I have blogged on this subject many times, discussing new methods for properly treating dispersion within quantum computations along with a variety of molecular systems where dispersion plays a critical role. Schreiner (1) has recently published a very nice review of molecular systems where dispersion is a key component of understanding structure and/or properties.

In a similar vein, Wegner and coworkers have examined the Z to E transition of azobenzene systems (1a–g → 2a–g) using both experiment and computation (2). They excited the azobenzenes to the Z conformation and then monitored the rate of conversion to the E conformation. In addition, they optimized the geometries of the two conformers and of the transition state for their interconversion at both B3LYP/6-311G(d,p) and B3LYP-D3/6-311G(d,p). The optimized structures of the t-butyl-substituted system are shown in Figure 1.


a: R=H; b: R=tBu; c: R=Me; d: R=iPr; e: R=Cyclohexyl; f: R=Adamantyl; g: R=Ph

[Figure 1 images: 1b, the transition state 1b-TS-2b, and 2b]

Figure 1. B3LYP-D3/6-311G(d,p) optimized geometries of 1b, 2b, and the TS connecting them.

The experiment finds that the largest activation barriers are for the adamantyl 1f and t-butyl 1b azobenzenes, while the lowest barriers are for the parent 1a and methylated 1c azobenzenes.

The trends in these barriers are not reproduced at B3LYP but are reproduced at B3LYP-D3. This suggests that dispersion is playing a role. In the Z conformations, the two phenyl groups are close together, and if appropriately substituted with bulky substituents, contrary to what might be traditionally thought, the steric bulk does not destabilize the Z form but actually serves to increase the dispersion stabilization between these groups. This leads to a higher barrier for conversion from the Z conformer to the E conformer with increasing steric bulk.
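For readers who want a feel for what a “-D3” correction actually adds, here is a deliberately oversimplified sketch of the pairwise form such corrections take. Real D3 uses geometry-dependent C6 and C8 coefficients and damping parameters fitted per functional; the single C6 and damping radius below are invented, purely to show why bringing bulky groups closer together increases the dispersion stabilization:

```python
import numpy as np

def dispersion_energy(coords, c6=10.0, r0=3.0):
    """Toy damped -C6/r^6 pairwise dispersion sum (arbitrary units).

    c6 and r0 are invented constants, not real D3 parameters.
    """
    coords = np.asarray(coords, dtype=float)
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r6 = np.sum((coords[i] - coords[j]) ** 2) ** 3   # r**6
            e -= c6 / (r6 + r0 ** 6)   # damping keeps E finite at small r
    return e

# Two "atoms" far apart vs. close together: the closer pair is more
# stabilized, mimicking the bulky substituents in the Z conformer.
print(dispersion_energy([[0, 0, 0], [8, 0, 0]]))   # small stabilization
print(dispersion_energy([[0, 0, 0], [4, 0, 0]]))   # larger stabilization
```

Since plain B3LYP contains no such attractive term at all, it misses this stabilization entirely, consistent with the barrier trends being reproduced only once the D3 correction is included.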

References

(1) Wagner, J. P.; Schreiner, P. R. "London Dispersion in Molecular Chemistry—Reconsidering Steric Effects," Angew. Chem. Int. Ed. 2015, 54, 12274-12296, DOI: 10.1002/anie.201503476.

(2) Schweighauser, L.; Strauss, M. A.; Bellotto, S.; Wegner, H. A. "Attraction or Repulsion? London Dispersion Forces Control Azobenzene Switches," Angew. Chem. Int. Ed. 2015, 54, 13436-13439, DOI: 10.1002/anie.201506126.

InChIs

1b: InChI=1S/C28H42N2/c1-25(2,3)19-13-20(26(4,5)6)16-23(15-19)29-30-24-17-21(27(7,8)9)14-22(18-24)28(10,11)12/h13-18H,1-12H3/b30-29-
InChIKey=SOCNVTNVHBWFKC-FLWNBWAVSA-N

2b: InChI=1S/C28H42N2/c1-25(2,3)19-13-20(26(4,5)6)16-23(15-19)29-30-24-17-21(27(7,8)9)14-22(18-24)28(10,11)12/h13-18H,1-12H3/b30-29+
InChIKey=SOCNVTNVHBWFKC-QVIHXGFCSA-N



from Computational Organic Chemistry http://ift.tt/1PbL5JD


How warm was 2015, how warm will 2016 be? [Greg Laden's Blog]

The year that just finished, 2015, was the warmest year recorded in the instrumental record. The actual data for December is not officially available yet, but my friend and colleague John Abraham keeps track of the global surface temperature daily and has done an amazing job at estimating the final temperature anomaly value that is eventually reported in each of several databases. He has provided a graph using his estimated value, above.

There are two major contributing factors, maybe three depending on how you count everything, to 2015 being the warmest year. The main factor is, of course, global warming. The Earth’s surface temperature is going up because of the greenhouse effect, and along with that we are seeing remarkable climate disruption, including floods, other inclement weather, and a host of problems. On top of this, the last part of 2015 saw a strong El Niño, the strongest on record. This weather event, which involves the release of ocean-stored heat from the Pacific into the atmosphere, is continuing, though it will likely peak soon and begin to decline (but see below). That is all we need, really, to explain 2015, but there may be a third factor, overlapping with those two, worth singling out. Some areas of the world’s oceans, including parts of the Atlantic and the Pacific (outside the usual Pacific El Niño warming region), have been exceptionally warm at the surface. This is really just part of the whole anthropogenic global warming picture, but it seems more extreme this year. In other words, it appears the ocean is putting more stored heat into the atmosphere than El Niño alone contributes, and the surface temperature measurements include sea surface temperature.

How warm will 2016 be? Playing the odds, it would always be a good bet that next year will be warmer than this one, on average, because global warming continues. However, even as the surface temperature trends upward over time, the actual measurements wiggle up and down a fair amount from year to year owing to a number of factors. So if you bet on warming every year, you would win overall, but you would lose in some years. (In fact, you could lose your shirt if the warming happens to occur in infrequent large spikes interspersed among years of modest cooling, so be careful!)
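
To see why this is a good bet but not a sure thing, here is a toy simulation (illustrative numbers of my own, not from the post): treat each year's anomaly as a steady warming trend plus a random wiggle, and count how often the following year actually comes out warmer.

    import random

    TREND = 0.017    # assumed warming rate, deg C per year
    NOISE = 0.10     # assumed std. dev. of year-to-year wiggles, deg C
    YEARS = 100_000  # many simulated years, to pin down the frequency

    random.seed(42)
    anomalies = [TREND * year + random.gauss(0.0, NOISE) for year in range(YEARS)]

    # Count consecutive-year pairs where the later year is warmer.
    wins = sum(1 for prev, nxt in zip(anomalies, anomalies[1:]) if nxt > prev)
    print("'next year is warmer' wins %.0f%% of the time" % (100.0 * wins / (YEARS - 1)))

With those numbers the bet wins only about 55% of the time: the difference between consecutive years is the trend t plus noise with standard deviation sigma times the square root of two, so the winning probability is Phi(t / (sigma * sqrt(2))), which improves as the trend grows or the wiggles shrink.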

However, the odds that 2016 beats 2015 are better than that generic 50-something percent bet. One reason is that El Niño will continue through the first part of 2016, and its effect on surface temperature is delayed: the peak effect occurs several months after the peak of the El Niño itself. So if El Niño peaks in February, for example, we will have global warming plus the El Niño enhancement through early summer, meaning at least half the months of 2016 will be very warm. There is a very good chance, then, that 2016 will be warmer even than 2015.

Mark Boslough, a physicist who writes quite a bit about global warming, has made a bet along these lines. He is not betting that 2016 will be warmer than 2015; rather, he is betting on the long-term upward trend of the Earth's surface temperature. He is really putting his money where his mouth is, to the tune of $25,000. The details of his bet are here. So far, as far as I know, no one in the climate science denial world has taken him up on it.



from ScienceBlogs http://ift.tt/1Rc7oVM


After multiple changes, fewer patients suffer hospital-acquired conditions [The Pump Handle]

As 2015 drew to a close, the Agency for Healthcare Research and Quality announced some good news: Fewer US patients are dying from hospital-acquired conditions (HACs) like pressure ulcers and catheter-associated infections. Between 2011 and 2014, patients had 2.1 million fewer HACs than they would have if the 2010 baseline rate had continued. The drop translated to an estimated $20 billion in healthcare-cost savings and 87,000 fewer deaths.

Of the HACs averted over the four-year period, 40% were adverse drug events, 28% were pressure ulcers, and 16% were catheter-associated urinary tract infections. AHRQ’s report explains that the reasons for the HAC decline aren’t fully understood, but it highlights some things that probably helped:

Likely contributing causes are financial incentives created by CMS and other payers’ payment policies, public reporting of hospital-level results, technical assistance offered by the QIO program to hospitals, and technical assistance and catalytic efforts of the HHS PfP [Partnership for Patients] initiative led by CMS. Numerous other public and private initiatives to improve healthcare quality and patient safety were implemented during these years; for example, the widespread implementation and improved use of Electronic Health Records at hospitals. And crucially, the progress was made possible by the results of investments made by the Agency for Healthcare Research and Quality in producing evidence about how to make care safer, investing in tools and training to catalyze improvement, and investments in data and measures to be able to track change.
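
For scale, it is worth converting the percentage breakdown above into absolute counts of the 2.1 million averted HACs (a back-of-the-envelope calculation using the figures already quoted):

    # Back-of-the-envelope counts behind AHRQ's percentage breakdown of the
    # 2.1 million hospital-acquired conditions averted from 2011 to 2014.
    TOTAL_AVERTED = 2_100_000
    shares = {
        "adverse drug events": 0.40,
        "pressure ulcers": 0.28,
        "catheter-associated UTIs": 0.16,
    }
    for condition, share in shares.items():
        print(f"{condition}: roughly {share * TOTAL_AVERTED:,.0f} averted")

That works out to roughly 840,000 adverse drug events, 590,000 pressure ulcers, and 340,000 catheter-associated urinary tract infections avoided over the four years.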

The HHS news release about the HAC findings describes some of the relevant AHRQ tools:

AHRQ has produced a variety of tools and resources to help hospitals and other providers prevent hospital-acquired conditions, such as reducing infections, pressure ulcers, and falls. Recently the agency released the Toolkit for Reducing CAUTI in Hospitals, which is based on the experiences of more than 1,200 hospitals nationwide that participated in an AHRQ-funded project to apply the Comprehensive Unit-based Safety Program to reducing catheter associated urinary tract infections (CAUTI). Preliminary data indicate that hospitals using these tools reduced CAUTIs by approximately 15 percent overall. AHRQ works with its HHS colleagues and researchers across the country to create new knowledge about how to improve care, particularly in understudied areas such as diagnostic error and antibiotic resistance.

As these paragraphs make clear, there isn’t one single solution to reducing hospital-acquired conditions – much as there isn’t one single solution to reducing US healthcare costs while providing high-quality care. Having good data is essential for knowing where the problems are, and whether new initiatives to address them actually work. Progress may seem maddeningly incremental at times, but it is possible, as this new AHRQ report demonstrates. The many health professionals who’ve contributed to reducing hospital-acquired conditions should be proud of what they’ve accomplished.



from ScienceBlogs http://ift.tt/1Z2uGli
