Project TENDR: A call to action to protect children from harmful neurotoxins [The Pump Handle]

Just 10 years ago, it wouldn’t have been possible to bring leading physicians, scientists and advocates together in a consensus on toxic chemicals and neurological disorders in children, says Maureen Swanson. But with the science increasing “exponentially,” she said the time was ripe for a concerted call to action.

Swanson is co-director of Project TENDR (Targeting Environmental Neuro-Developmental Risks), a coalition of doctors, public health scientists and environmental health advocates who joined forces in 2015 to call for reducing chemical exposures that interfere with fetal and child brain development. In July, after more than a year of work, the group published its TENDR Consensus Statement in Environmental Health Perspectives, laying down a foundation for developing future recommendations to monitor, assess and reduce neurotoxic chemical exposures. The consensus concludes that a new framework for assessing such chemicals is desperately needed, as the “current system in the United States for evaluating scientific evidence and making health-based decisions about environmental chemicals is fundamentally broken.”

Swanson said the consensus statement is a first of its kind, adding that it’s “unprecedented” to have such a breadth of scientists come together and agree that the “science is clear” on toxic chemicals and neurodevelopmental disorders.

“Part of the urgency is because these toxic chemicals are in such widespread use and exposures for children and pregnant women are so widespread — they’re just ubiquitous,” Swanson, who also directs the Healthy Children Project at the Learning Disabilities Association of America, told me. “The urgency is also in seeing the trends in learning and developmental disorders and cognitive and behavioral difficulties — they’re problems that only seem to be increasing.”

According to the consensus statement, which Swanson said involved hundreds of studies and “countless hours” of reviewing and assessing the evidence, the U.S. is home to an alarming increase in childhood learning and behavioral problems, with parents reporting that one in six American children are living with some form of developmental disability, such as autism or attention deficit hyperactivity disorder. That statistic is an increase of 17 percent from a decade ago. The statement offers examples of toxic chemicals that can contribute to such disorders and lays out the argument for a new approach to chemical safety.

For example, the statement notes that many studies offer evidence that “clearly demonstrates or strongly suggests” adverse neurodevelopmental toxicity for lead, mercury, organophosphate pesticides, combustion-related air pollution, PBDE flame retardants and PCBs. Lead, as Swanson noted, is a perfect example of a widely used chemical that contributes to cognitive problems and intellectual impairment — “and yet it’s still everywhere, in water pipes, in cosmetics. We thought we’d done a good job of eliminating lead problems, but we haven’t done enough,” she said. Another prime example is chemical flame retardants, one of the most common household toxic exposures associated with neurodevelopmental delays in children.

“Of course these disorders are complex and multifactorial, so genetics plays a role, nutrition does and social stressors do,” Swanson told me. “But the contribution of toxic chemicals is a piece that we can prevent. We can do something about this part to decrease the risk to children.”

On taking action, the consensus argues that the current system for evaluating the human health effects of chemicals is “broken,” noting that of the thousands of chemicals now on the market, only a fraction have been tested for health impacts. The consensus reads:

Our failures to protect children from harm underscore the urgent need for a better approach to developing and assessing scientific evidence and using it to make decisions. We as a society should be able to take protective action when scientific evidence indicates a chemical is of concern, and not wait for unequivocal proof that a chemical is causing harm to our children.

Evidence of neurodevelopmental toxicity of any type — epidemiological or toxicological or mechanistic — by itself should constitute a signal sufficient to trigger prioritization and some level of action. Such an approach would enable policy makers and regulators to proactively test and identify chemicals that are emerging concerns for brain development and prevent widespread human exposures.

As many of you know, President Obama signed the Frank R. Lautenberg Chemical Safety for the 21st Century Act into law in June, reforming the woefully outdated federal Toxic Substances Control Act (TSCA), which hadn’t been updated since 1976. And while TSCA reform is certainly a “step in the right direction,” Swanson said the sheer backlog of chemical safety testing as well as the pace of testing set forth in the new law means the process of reducing or removing toxic exposures will likely be incredibly slow — and even that’s still dependent on whether the U.S. Environmental Protection Agency is fully funded to implement TSCA reform.

“TSCA reform by itself is insufficient to address the magnitude of these problems,” she said, noting that pesticides are outside of TSCA’s and the new law’s jurisdiction.

In turn, consensus authors called on regulators to follow scientific guidance when assessing a chemical’s impact on brain development, with a particular emphasis on fetuses and children; called on businesses to eliminate neurotoxic chemicals from their products; called on health providers to integrate knowledge about neurotoxicants into patient care and public health practice; and called on policymakers to be more aggressive in reducing childhood lead exposures.

The problem of harmful chemical exposures can seem like an overwhelming one — “nobody can shop their way out of this problem,” Swanson said — but there are steps that can be taken right away to reduce exposures. For example, Swanson noted that when the U.S. phased out the use of lead in gasoline, children’s blood lead levels plummeted. Similarly, after Sweden banned PBDEs in the 1990s, levels of the chemical found in breast milk dropped sharply.

In terms of next steps, Swanson said Project TENDR will continue reaching out to policymakers, health professionals and businesses on how to work together toward safer chemical use and healthier children.

“There’s a lot we can do that can make a substantial difference in a relatively short time frame,” Swanson told me. “Our key message is that this is a problem we can do something about. There’s reason for alarm, but also reason to get working and take care of it collectively so that our children are not at greater risk for neurodevelopmental disorders.”

To download a full copy of the Consensus Statement, as well as find tips on reducing harmful exposures on an individual level, visit Project TENDR.

Kim Krisberg is a freelance public health writer living in Austin, Texas, and has been writing about public health for nearly 15 years.



from ScienceBlogs http://ift.tt/2dh1yEc


Friday Cephalopod: Net traps and chiller [Pharyngula]

If you ever wondered how to breed nautiluses



from ScienceBlogs http://ift.tt/2durJ7b


More Wadhams [Stoat]

Browsing Twitter after a break I was unsurprised to see the usual suspects dissing that fine chap, Peter Wadhams. Heaven forfend that I should ever stoop so low. It is tempting to describe the “lame article” they were dissing as the usual stuff, but alas it isn’t. It lards extra Yellow Peril guff onto the pre-existing guff. Incidentally the author, Paul Brown, was once a respectable chap – my great-aunt Proctor knew him somewhat. But that was many years ago. Bizarrely, the first “related posts” link in the article is to a far better article by Ed Hawkins pointing out how bad the previous article about Wadhams was.

[You may be wondering “why the image?” At least, if you’re new here you might. The answer is that web-indexers tend to throw up the first image on a page that they find; and I didn’t want that nice PH to be the “image” for this post.]

Bordering on dishonest

“What is needed is something that has not been invented yet − a way of stopping elderly scientists from talking nonsense” (I may have fabricated part of that quote). And my section header is a sub-headline in the article, so they can hardly complain if I reproduce it to my own ends. The rest of this post is just character assassination (or, to dignify it somewhat, trying to work out what his current status is); look away if you like Wadhams.

I was intrigued by the article calling him “former head of the Polar Ocean Physics Group at the University of Cambridge” (my bold). It appears to be wrong; as far as I can tell, he is still head of this little known group. The article may have got confused by him also being (accurately) the former head of SPRI. But the POPG is an odd little thing. Just look at its web page: Professor Peter Wadhams has run the Group since January 1976, which until December 2002 was based in the Scott Polar Research Institute (SPRI), University of Cambridge. In January 2003 the Group moved within the university from SPRI to the Department of Applied Mathematics and Theoretical Physics. It was previously called the Sea Ice and Polar Oceanography Group. Text by Prof Peter Wadhams, 2002. Updated by Oliver Merrington, POPG Webmaster, October 2005. This is certainly not an active web page. [I’ve just re-read that. He’s run the group for forty years!?! Can that be healthy?]

Poking further, I find his Clare Hall bio, where he self-describes as “became Emeritus Professor in October 2015” (and there’s also this little letter which may or may not be deliberately public). So his DAMTP page is clearly out of date (it still describes him as “Professor”, which he isn’t, any more than Murry Salby is). I think the best explanation is that the DAMTP pages are just out of date and unloved; they don’t give the impression of vibrancy.

Within DAMTP, the POPG is a touch anomalous, to my eye. It fits within the highly-respected Geophysics group, and might be compared (in the sense that it appears to sit on the same organisational level as), say, to the Atmosphere-Ocean Dynamics Group. This is a highly active research group featuring hard man (just look at those eyes; you wouldn’t want to run across him on a dark river) Peter Haynes (who, I might perhaps hasten to add, has nothing at all to do with the story (story? This post has a plot? Well no. OK then, ramble) I’m telling) and mysterious old wizard Michael McIntyre (famous for telling you to repeat words). Compare that to the POPG page and it looks a touch moribund; poke further into the list of projects and the impression re-surfaces. Can an active research group be led by an emeritus professor? It seems odd to me, but what do I know?

There’s also this bio which, perhaps fittingly, ends on the ludicrous Vast costs of Arctic change.



from ScienceBlogs http://ift.tt/2dxMPSP


Sensitivity training

Climate scientists are certain that human-caused emissions have increased carbon dioxide in the atmosphere by 44 per cent since the Industrial Revolution. Very few of them dispute that this has already caused average global temperatures to rise roughly 1 degree. Accompanying the warming is disruption to weather patterns, rising sea levels and increased ocean acidity. There is no doubt that further emissions will only make matters worse, possibly much worse. In a nutshell, that is the settled science on human-caused climate change.

What scientists cannot yet pin down is exactly how much warming we will get in the future. They do not know with precision how much a given quantity of emissions will lead to increased concentrations of greenhouse gases in the atmosphere. For climate impact it is the concentrations that matter, not the emissions. Up until now, 29 per cent of human emissions of carbon dioxide has been taken up by the oceans, 28 per cent has been absorbed by plant growth on land, and the remaining 43 per cent has accumulated in the atmosphere. Humans have increased carbon dioxide concentrations in the atmosphere from a pre-industrial level of 280 parts per million to over 400 today, a level not seen for millions of years.
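As a quick back-of-the-envelope check on those figures, here is a minimal Python sketch that uses only the approximate numbers quoted above (the “over 400” concentration is taken as 403 ppm for illustration); it is not drawn from the underlying studies.

# Rough check of the figures quoted above; all values are the article's
# approximate numbers, not authoritative data.
preindustrial_ppm = 280.0   # pre-industrial CO2 concentration
current_ppm = 403.0         # "over 400 today" (illustrative value)

increase = (current_ppm - preindustrial_ppm) / preindustrial_ppm
print(f"CO2 increase since pre-industrial times: {increase:.0%}")  # ~44%

# Approximate fate of cumulative human CO2 emissions so far
fractions = {"oceans": 0.29, "land plants": 0.28, "atmosphere": 0.43}
for sink, share in fractions.items():
    print(f"{sink}: {share:.0%} of emitted CO2")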

There’s a possibility that the 43 per cent atmospheric fraction may increase as ocean and terrestrial carbon sinks start to become saturated. This means that a given amount of emissions will lead to a bigger increase in concentrations than we saw before. In addition, the warming climate may well provoke increased emissions from non-fossil fuel sources. For example, as permafrost thaws, the long-frozen organic matter contained within it rots and oxidizes, giving off greenhouse gases. Nature has given us a major helping hand, so far, with the oceans and plants taking up more than half of our added fossil carbon, but there’s no guarantee that it will continue to be so supportive forever. These so-called carbon-cycle feedbacks will play a big role in determining how our climate future will unfold, but they are not the largest unknown.

Feedbacks

Atmospheric physicists have long tried to pin down a number to express what they refer to as climate sensitivity, the amount of warming we will get from a certain increase in concentration of greenhouse gases. Usually, this is expressed as the average global warming, measured in degrees Celsius, that results from a doubling of carbon dioxide concentrations. The problem is not so much being able to calculate how much warming the doubling of carbon dioxide alone will cause – that is relatively easy to estimate and is about 1 degree C. The big challenge is in figuring out the size of the feedbacks. These are the phenomena that arise from warming temperatures and that amplify or dampen the direct effects of the greenhouse gases that humans have added to the atmosphere.
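The no-feedback figure of about 1 degree C can be reproduced with a standard textbook approximation (this sketch is illustrative and not taken from the article): the radiative forcing from a doubling of CO2 is roughly 5.35 × ln(2) ≈ 3.7 watts per square metre, and the no-feedback (Planck) response is roughly 0.3 degrees C per watt per square metre. The feedback gains used below are purely illustrative.

import math

# Illustrative climate-sensitivity sketch (not the article's calculation).
# Standard logarithmic forcing approximation: dF = 5.35 * ln(C/C0) in W/m^2.
forcing_2xco2 = 5.35 * math.log(2.0)         # ~3.7 W/m^2 for a doubling of CO2

planck_response = 0.3                        # ~deg C per W/m^2, no feedbacks (assumed)
no_feedback_warming = planck_response * forcing_2xco2
print(f"No-feedback warming for doubled CO2: {no_feedback_warming:.1f} C")  # ~1.1 C

# With a net feedback gain g, equilibrium warming becomes dT / (1 - g).
for gain in (0.0, 0.45, 0.65):               # illustrative feedback gains only
    print(f"feedback gain {gain:.2f}: ~{no_feedback_warming / (1 - gain):.1f} C per doubling")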

The biggest feedback is water vapour, which is actually the most important single greenhouse gas in the atmosphere. Warm air holds more water vapour. As carbon dioxide increases and the air warms, there is plenty of water on land and in the sea available to evaporate. The increased amount of vapour in the air, in turn, provokes more warming and increased evaporation. If temperatures go down, the water vapour condenses and precipitates out of the atmosphere as rain and snow. Water vapour goes quickly into and out of the air as temperatures rise and fall, but the level of carbon dioxide stays around for centuries, which is why water vapour is considered a feedback and not a forcing agent. Roughly speaking, the water vapour feedback increases the sensitivity of carbon dioxide alone from 1 to 2 degrees C.

Another feedback results from the melting of sea ice in the Arctic. Ice reflects the sun’s energy back out into space, whereas oceans that are free of ice absorb more of the sun’s radiated heat. As warming temperatures melt the sea ice, the Earth absorbs more solar energy and the surface warms faster. The loss of sea ice is a major reason that Arctic temperatures are increasing about twice as fast as the rest of the globe. The Antarctic has gained rather than lost sea ice over recent decades due to the effects of ocean currents and other factors. But this gain is much smaller than the Arctic ice loss, so the overall effect of all polar sea ice on the climate is to amplify the global response to increased carbon dioxide concentrations.

The least well-defined feedback is the effect of clouds. The quantity and distribution of clouds is expected to change in a warming climate, but exactly how is not yet fully known and is debated. High clouds tend to keep more heat in, while low clouds tend to reflect more sunlight back into space, providing a cooling effect. Most experts estimate that clouds, on balance, will have anywhere from a slight cooling feedback to a significant warming feedback.

On top of the variance in the estimates of the feedbacks, the sizes of the human and natural factors that drive climate change, apart from carbon dioxide, also span a wide range. Greenhouse gases like methane play a big role in warming, while sulphate-particle pollution from coal-burning plants actually cools the planet by blocking the sun. Land-use changes – clearing forests, for example – also affect climate by either reflecting or absorbing more of the sun’s energy. Natural ejections of reflective particles from volcanoes can also influence the climate in significant, but unpredictable ways.

This year’s model

An early estimate of climate sensitivity was made in 1979 by the American scientist Jule Charney. He based his estimate on just two sets of climate calculations – or models – that were available at that time. One set of models predicted a sensitivity of 2 degrees, the other, 4 degrees, which he averaged to get a mean value of 3 degrees. Rather arbitrarily, he subtracted or added half a degree from the two model estimates to produce a minimum-to-maximum range of 1.5 to 4.5 degrees. Despite the shakiness of this approach, Charney’s estimate has proved remarkably durable.


The five Intergovernmental Panel on Climate Change (IPCC) reports produced between 1990 and 2013 drew upon the results of many more climate models that were also much more sophisticated. Nevertheless, all of the reports came up with estimates of the minimum, maximum and most likely sensitivities that were within half a degree of Charney’s rough estimate. In 2007, the fourth assessment report (AR4) provided a climate sensitivity range of 2 to 4.5 degrees C, with a most likely value of 3 degrees C. The latest report, AR5 in 2013, estimates the likely range of sensitivity at 1.5 to 4.5 degrees C, exactly the range Charney provided 34 years earlier with his educated guesswork.

It is worth noting that climate sensitivity is not an input factor into the climate models but a calculated result.

In the past few years, some scientists have made calculations based on recent temperature measurements and simple energy-balance climate models. This approach has tended to produce an estimate of a most-likely climate sensitivity number around 2 degrees, which is significantly lower than the previous best estimate of around 3 degrees from more complex climate models. Taking account of this work, the IPCC adjusted its lower estimates downward in the 2013 AR5 report and, because of the newly increased range, opted not to settle upon a most-likely central value. These new, lower values suggest that the average, complex climate models may be predicting too much warming.

However, a recent publication in the journal Nature Climate Change by NASA scientist Mark Richardson and his colleagues has exposed flaws in those simple, low-sensitivity models. One problem is that the simple calculations took ocean temperatures measured just below the surface (which is the common measurement made by climate scientists) and compared them to the calculated air temperatures near the Earth’s surface that are output by climate models. Since air above the ocean warms more than the water, the comparison is not valid over the oceans. Richardson and his colleagues also factored in the effect of retreating Arctic sea ice on temperature measurements, and the lack of measured historical data in some regions. They then checked the calculations again, and as Richardson explained to Corporate Knights:

“Once you do a fair test then you get the same result from both the simple calculation using real-world data and from complex climate models. We took model water temperatures when the measurements are of water temperatures, and didn’t use model output when and where there were no measurements. This matters because fast-warming areas like the Arctic, where there is now less summer sea ice than in at least 1,450 years, have not historically been measured by thermometers. All of the effects combined in the same way; they hid warming. This is the main reason that climate models looked like they were warmed a bit too much since 1861.”

Additional recent research from a NASA team led by scientist Kate Marvel took a hard look at some other simplifying assumptions made in the low-sensitivity calculations. Marvel and her colleagues modified the inputs to more complex climate models to explore how much certain factors, like sulphate pollution or land-use changes, affected the climate when modelled in isolation. They found that these agents are more effective in causing temperature changes because they tend to be located in the northern hemisphere and on land where they carry a bigger punch than if it is simply assumed that their effect is distributed evenly across the planet, as some of the simpler, low-sensitivity studies have done.

Combining the Richardson and the Marvel results brings estimates of climate sensitivity back to, or even a little above, Jule Charney’s estimates. To the non-specialist, all of this may seem like a rather pointless process where we end up where we started from, still stuck with a stubbornly wide range of a factor of 3 or so from minimum (1.5 degrees) to maximum (4.5 degrees). But as Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies, told Scientific American last year: “We may be just as unsure as before, but we are unsure on a much more solid footing.”

Uncertainty provides no comfort

Climate sensitivity is not estimated solely by climate models using modern data. Scientists also have observations of how the Earth behaved in periods of past climatic change. From the ice-age cycles that occurred over the past 800,000 years there are samples of past atmospheres trapped in gas bubbles in ice cores that reveal the chemical mix of the air and the temperatures at the time.

Scientists can look back much further in time, many millions of years ago, when the Earth was in a hot-house state. In those times there was little ice even at the poles and sea levels were several tens of metres higher than they are today.

These observations of the geological past have their own considerable ranges of uncertainty, but, taken together, they produce estimates of climate sensitivity that are broadly consistent with the range calculated by climate models of the modern era. This consilience, which is to say, different approaches pointing to the same general result, explains why climate scientists are so confident that increasing concentrations of greenhouse gases lead to increased warming, even if nobody can yet be sure how much the human-induced warming will be over this century and beyond.

One thing we do know with great confidence is that if we continue to emit greenhouse gases at the current rate, then sometime in the second half of this century we will have doubled the concentration of carbon dioxide in the atmosphere. The last time concentrations were that high, 30 million years ago, there was no ice on Greenland and little on Antarctica.
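For a rough sense of the timing, the sketch below extrapolates today’s concentration at an assumed recent growth rate of about 2.3 ppm per year; the numbers are illustrative assumptions, not a projection from the article.

# Rough extrapolation of when CO2 would reach double its pre-industrial level.
# Starting point and growth rate are assumed illustrative values.
preindustrial_ppm = 280.0
current_ppm = 403.0                  # "over 400" today, taken as of 2016
growth_ppm_per_year = 2.3            # assumed recent growth rate

doubling_ppm = 2 * preindustrial_ppm            # 560 ppm
years_to_doubling = (doubling_ppm - current_ppm) / growth_ppm_per_year
print(f"~{doubling_ppm:.0f} ppm reached around {2016 + years_to_doubling:.0f}")  # ~2080s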

Click here to read the rest



from Skeptical Science http://ift.tt/2cRl5dK


New MIT app: check if your car meets climate targets

In a new study published in the journal Environmental Science & Technology, with an accompanying app for the public, scientists at MIT compare the carbon pollution from today’s cars to the international 2°C climate target. In order to meet that target, overall emissions need to decline dramatically over the coming decades.

The MIT team compared emissions from 125 electric, hybrid, and gasoline cars to the levels we need to achieve from the transportation sector in 2030, 2040, and 2050 to stay below 2°C global warming. They also looked at the cost efficiency of each car, including vehicle, fuel, and maintenance costs. The bottom line:

Although the average carbon intensity of vehicles sold in 2014 exceeds the climate target for 2030 by more than 50%, we find that most hybrid and battery electric vehicles available today meet this target. By 2050, only electric vehicles supplied with almost completely carbon-free electric power are expected to meet climate-policy targets.

Figure: Cost-carbon space for light-duty vehicles, assuming a 14-year lifetime, 12,100 miles driven annually, and an 8% discount rate. Data points show the most popular internal-combustion-engine vehicles (black), hybrid electric vehicles (pink), plug-in hybrid electric vehicles (red), and battery electric vehicles (yellow) in 2014, as well as one of the first fully commercial fuel-cell vehicles (blue). Illustration: Miotti et al. (2016), Environmental Science & Technology.

The MIT app allows consumers to check how their own vehicles – or cars they’re considering purchasing – stack up on the carbon emissions and cost curves. As co-author Jessika Trancik noted,

One goal of the work is to translate climate mitigation scenarios to the level of individual decision-makers who will ultimately be the ones to decide whether or not a clean energy transition occurs (in a market economy, at least). In the case of transportation, private citizens are key decision-makers.

How can electric cars already be the cheapest?

The study used average US fuel and electricity prices over the decade of 2004–2013 (e.g. $3.14 per gallon of gasoline and 12 cents per kilowatt-hour), and the app allows consumers to test different fuel costs. The lifetime of a car is estimated at 14 years and about 170,000 miles. As co-author Marco Miotti explained,

We use parameters that reflect the U.S. consumer experience when going to buy a new car.

As the chart in the lower right corner of the above figure shows, when only accounting for the vehicle purchase cost, gasoline-powered cars are the cheapest. However, hybrid and electric vehicles have lower fuel and regular maintenance costs. In the US, there are also federal rebates that bring the consumer costs down further yet, so the cheapest electric cars (in cost per distance driven) cost less than the cheapest gasoline cars. When including state tax rebates like in California, electric cars are by far the best deal available.
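To see how such a comparison can come out in favour of electric cars, here is a sketch of a discounted lifetime cost-per-mile calculation in the spirit of the study. The fuel prices, lifetime, annual mileage and discount rate are the values quoted in this article; the purchase prices, efficiencies, maintenance costs and rebate are hypothetical placeholders, not figures from the MIT paper.

# Sketch of a discounted lifetime cost-per-mile comparison. Fuel prices,
# lifetime, mileage and discount rate are the article's quoted values; the
# vehicle numbers below are hypothetical placeholders, not the study's data.
LIFETIME_YEARS = 14
MILES_PER_YEAR = 12_100
DISCOUNT_RATE = 0.08
GASOLINE_PER_GALLON = 3.14      # USD
ELECTRICITY_PER_KWH = 0.12      # USD

def cost_per_mile(purchase_price, fuel_cost_per_mile, maintenance_per_year):
    """Net-present lifetime cost divided by lifetime miles."""
    annual_cost = fuel_cost_per_mile * MILES_PER_YEAR + maintenance_per_year
    npv = purchase_price + sum(
        annual_cost / (1 + DISCOUNT_RATE) ** year
        for year in range(1, LIFETIME_YEARS + 1)
    )
    return npv / (LIFETIME_YEARS * MILES_PER_YEAR)

# Hypothetical example vehicles (illustrative numbers only)
gasoline_car = cost_per_mile(
    purchase_price=22_000,
    fuel_cost_per_mile=GASOLINE_PER_GALLON / 30,    # assume 30 mpg
    maintenance_per_year=900,
)
electric_car = cost_per_mile(
    purchase_price=30_000 - 7_500,                  # assume a federal rebate applies
    fuel_cost_per_mile=ELECTRICITY_PER_KWH * 0.30,  # assume 0.30 kWh per mile
    maintenance_per_year=500,
)
print(f"Gasoline car: ${gasoline_car:.2f} per mile")
print(f"Electric car: ${electric_car:.2f} per mile")

With these placeholder numbers the electric car comes out cheaper per mile despite its higher sticker price, which is the general pattern the study describes once fuel, maintenance and rebates are included.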

Some might object that it’s unfair to include tax rebates, and the MIT app allows for a comparison with or without those rebates. However, as Trancik noted, the app is aimed at consumers, and for them the rebates are a reality. Moreover, the cost of gasoline in the USA does not reflect the costs inflicted by its carbon pollution via climate change. Over the lifetime of the car, the electric car tax rebates roughly offset the gasoline carbon pollution subsidy. Gasoline combustion releases other pollutants that result in additional societal costs as well.

To meet climate targets we need EVs and clean energy

As the study notes, most of today’s hybrids and plug-in hybrids meet the 2030 climate targets, and electric cars beat them. However, if we’re going to stay below the internationally-accepted ‘danger limit’ of 2°C global warming above pre-industrial temperatures, the US needs to achieve about an 80% cut in emissions by 2050. As co-author Geoffrey Supran noted,

The bottom line is that meeting long-term targets requires simultaneous and comprehensive vehicle electrification and grid decarbonization.

The good news is that we already have the necessary technology available today. Another recent study from MIT’s Trancik Lab found that today’s lowest-cost electric vehicles meet the daily range requirements of 87% of cars on the road, even if they can only recharge once a day. And the technology is quickly advancing; Chevrolet and Tesla will soon release electric cars with 200 mile-per-charge range and before-rebate prices under $40,000. Solar and wind energy prices continue to fall rapidly as well, as we progress toward a “renewable energy revolution.”

Click here to read the rest



from Skeptical Science http://ift.tt/2cRlsoI


IPCC special report to scrutinise ‘feasibility’ of 1.5C climate goal

This is a re-post from Carbon Brief by Roz Pidcock

The head of the United Nation’s climate body has called for a thorough assessment of the feasibility of the international goal to limit warming to 1.5C.

Dr Hoesung Lee, chair of the Intergovernmental Panel on Climate Change (IPCC), told delegates at a meeting in Geneva, which is designed to flesh out the contents of a special report on 1.5C, that they bore a “great responsibility” in making sure it meets the expectations of the international climate community.

To be policy-relevant, the report will need to spell out what’s to be gained by limiting warming to 1.5C, as well as the practical steps needed to get there within sustainability and poverty eradication goals.

More than ever, urged Lee, the report must be easily understandable for a non-scientific audience. The IPCC has come under fire in the past over what some have called its “increasingly unreadable” reports.

Feasibility

In between the main “assessment reports” every five or six years, the IPCC publishes shorter “special reports” on specific topics. Past ones have included extreme weather and renewable energy.

The IPCC was “invited” by the United Nations Framework Convention on Climate Change (UNFCCC) to do a special report on 1.5C after the Paris Agreement codified a goal to limit global temperature rise to “well below 2C” and to “pursue efforts towards 1.5C”.

The aim for this week’s meeting in Geneva is, in theory, simple: to decide on a title for the report; come up with chapter headings; and write a few bullet points summarising what the report will cover.

On day two of three, Carbon Brief understands six “themes” have emerged as contenders. Judging by proceedings so far, it seems likely that the feasibility of the 1.5C goal will feature highly on that list.

Referring to a questionnaire sent out to scientists, policymakers and other “interested parties” ahead of the scoping meeting to ask what they thought the 1.5C report should cover, Lee told the conference:

“One notion that runs through all this, is feasibility. How feasible is it to limit warming to 1.5C? How feasible is it to develop the technologies that will get us there?…We must analyse policy measures in terms of feasibility.”

The explicit mention of 1.5C in the Paris Agreement caught the scientific community somewhat off-guard, said Elena Manaenkova, incoming deputy secretary-general of the World Meteorological Organization.

Speaking in Geneva yesterday, she told delegates she felt “proud, but also somewhat concerned” about the outcome of the Paris talks. She said:

“I was there. I know the reason why it was done…[P]arties were keen to do even better, to go faster, to go even further…The word ‘feasibility’ is not in the Paris Agreement, is not in the decision. But that’s really what it is [about].”

Overshoot

Dr Andrew King, a researcher in climate extremes at the University of Melbourne, echoes the call for a rational discussion about the way ahead, now that the dust has settled after Paris. The question of what it would take to achieve the 1.5C goal has been largely sidestepped so far, he tells Carbon Brief:

“I think one unintended outcome of the Paris Agreement was that it made the public think limiting warming to 1.5C is possible with only marginally stronger policy from government on reducing emissions and this is simply not the case.”

Carbon Countdown: How many years of current emissions would use up the IPCC’s carbon budgets for different levels of warming? Infographic by Rosamund Pearce for Carbon Brief.

The reality is that staying under the 1.5C threshold is now nigh-on impossible, says King. Meeting the 1.5C target now means overshooting and coming back down using negative emissions technologies that “suck” carbon dioxide out of the air. The report will need to be explicit about this, he says.
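The question posed by the Carbon Countdown infographic above comes down to a single division: the remaining carbon budget for a given warming level divided by current annual emissions. A minimal sketch, using round illustrative figures rather than the infographic’s actual numbers:

```python
# Years of current emissions left before a carbon budget is exhausted.
# The budgets and the emissions rate below are illustrative round numbers,
# not the values used in the Carbon Brief infographic.

ANNUAL_EMISSIONS_GTCO2 = 40.0  # assumed current global CO2 emissions per year

illustrative_budgets_gtco2 = {
    "1.5C": 400.0,   # assumed remaining budget
    "2C": 1000.0,    # assumed remaining budget
}

for target, budget in illustrative_budgets_gtco2.items():
    years_left = budget / ANNUAL_EMISSIONS_GTCO2
    print(f"{target}: roughly {years_left:.0f} years at current emissions")
```

The point of the exercise is the order of magnitude: at anything like current emission rates, the tighter budget is exhausted within years, not decades, which is why overshoot and negative emissions enter the discussion.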

King is cautious about overstating the world’s ability to meet the 1.5C goal, given that no single technology yet exists approaching the scale that would be required. He tells Carbon Brief:

“We will need negative emissions on a large-scale and for a long period of time to bring global temperatures back down to 1.5C. This isn’t possible with current technologies.”

Earlier this year, Carbon Brief published a series of articles on negative emissions, including a close up on the most talked-about option – Bioenergy with Carbon Capture and Storage (BECCS) – and a survey of which technologies climate experts think hold the most potential.

‘A great responsibility’

Another point on which the special report must be very clear is the difference between impacts at 1.5C compared to 2C, noted Thelma Krug, chair of the scientific steering committee for the special report.

The first study to compare the consequences at both temperatures found that an extra 0.5C could see global sea levels rise 10cm more by 2100 and is also “likely to be decisive for the future of coral reefs”.

King tells Carbon Brief:

“We need to know more about the benefits of limiting warming to 1.5C. If scientists can demonstrate to policymakers that we would see significantly fewer and less intense extreme weather events by putting the brakes on our emissions then it might lead to the necessary action to protect society and the environment from the worst outcomes of climate change.”

Infographic: How do the impacts of 1.5C of warming compare to 2C of warming? By Rosamund Pearce for Carbon Brief.

The timing of the 1.5C special report is critical, said Lee yesterday. Due for delivery in September 2018, the IPCC’s aim is that the report should be “in time for” the UNFCCC’s “facilitative dialogue” scheduled that year.

This will be the first informal review under the global stocktake – a process that will enable countries to assess progress towards meeting the long-term goals set out under the Paris Agreement.

Expectations will be high, Lee told delegates yesterday:

“You can be sure that the report, when it is available in two years’ time…will attract enormous attention. So you have a great responsibility.”

Any scientist wishing their research to be included in the special report on 1.5C will need to submit it to a peer-reviewed journal by October 2017, and have it accepted for publication by April 2018, according to the IPCC’s timeline.

The scientific community is already mobilising behind this tight deadline. An international conference at Oxford University in September will see scientists, policymakers, businesses and civil society gather to discuss the challenges of meeting the 1.5C goal, which the organisers say “caught the world by surprise”.

Clearer communication

More than ever, the IPCC should strive to communicate the special report on 1.5C as clearly and accessibly as possible, Lee told the conference yesterday.

Given the primary audience will be non-specialists, the authors should think from the outset about how FAQs (Frequently Asked Questions) and graphics could be used to best effect, he said.

“The special report on 1.5C is not intended to replicate a comprehensive IPCC regular assessment report. It should be focused on the matter at hand.”

The importance of the 1.5C topic calls for a different approach to previous IPCC reports, says King. He tells Carbon Brief:

“The report will fail to have much effect if the findings aren’t communicated well to policymakers and the public. This could be seen as a failing of the climate science community in the past. It has led to much weaker action on reducing climate change than is needed; this report needs to change this.”

A couple of recently published papers might give the authors some food for thought on this point.

The first study looks at how the process by which governments approve the IPCC’s Summaries for Policymakers (SPMs) affects their “readability”. Of the eight examples the study considers, all got longer during the government review stage. On average, they expanded by 30% or 1,500-2,000 words. The review process improved “readability” in half of cases, though all eight scored low for “storytelling”.

A second paper explores the power of visuals for communicating climate science to non-specialists, and highlights where the IPCC may be falling short. Giving the examples below from the IPCC’s third and fourth assessment reports, the paper notes:

“A feeling of confusion among non-climate students is certainly not congruent with positive engagement yet this emotional state was frequently reported for SPM visuals.”

Images and infographics can be powerful, but only if the trade-off between scientific credibility and ease of understanding is carefully handled, the paper concludes.

Four examples of visuals used in the IPCC’s third and fourth assessment reports. Source: McMahon et al., (2016)

With all this in mind, the scientists will leave the Geneva conference on Wednesday and prepare an outline for the 1.5C report based on their discussions over the previous three days.

They will submit the proposed plan to the IPCC panel at its next meeting in Bangkok in October. If the outline meets the panel’s expectations, it will be accepted and things will move forward. If it falls short, the panel can request changes. The discussions in Geneva are, therefore, unlikely to be the last word.



from Skeptical Science http://ift.tt/2cRl1uv

The Madhouse Effect of climate denial

A new book by Michael Mann and Tom Toles takes a fresh look at the effects humans are having on our climate and the additional impacts on our politics. While there have been countless books about climate change over the past two decades, this one – entitled The Madhouse Effect – distinguishes itself by its clear and straightforward science mixed with clever and sometimes comedic presentation.

In approximately 150 pages, this book deals with the basic science and with the denial industry, which has lost the battle in the scientific arena and is working feverishly to confuse the public. The authors also cover potential solutions to halt or slow our changing climate. Perhaps most importantly, this book gives individual guidance – what can we do, as individuals, to help the Earth heal from the real and present harm of climate change?

To start the book, the authors discuss how the scientific method works, the importance of the precautionary principle, and how delayed action has cost us precious time in this global race to halt climate change. And all of this is done in only 13 pages!

Next, the book dives briefly into the basic science of the greenhouse effect. Readers of this column know that the science of global warming is very well established with decades of research. But some people don’t realize that this research originated in the early 1800s with scientists such as Joseph Fourier. The book takes us on a short tour of history. Moving beyond these early works that focused exclusively on global temperatures, the authors come to expected impacts. They explain that a warming world, for instance, can be both drier and wetter!

This seeming paradox is a result of our expectation that areas which are currently wet will become wetter, with an increase in the heaviest downpours. A great example is the flooding this year in the Southeast United States. The reason for this is simple: a warmer atmosphere holds more water vapor. In other areas – especially those already dry – it will become drier because evaporation will speed up. If there is no available moisture to enter the atmosphere (i.e. in an arid region), then the result is more drying. Using their candid, humorous, and clear language, the authors also discuss impacts on storms, hurricanes, rising oceans, and so on.

With the basic science covered, the authors quickly move into reasons why readers should care. Many of the big risks climate change presents are covered. Included here are security, water availability, food production, land use and availability, human health, and risks to the world economies and ecosystems. Simply put, these are not future and abstract risks. They are current risks that are becoming more severe and will affect all of us.

The chapter I found most interesting covers the stages of climate denial. The first stage is “it’s not happening” – outright denial of the reality of climate change. The authors’ example of this stage comes from the climate contrarians Roy Spencer and John Christy, who claimed that the Earth wasn’t warming.

Technically, Christy and Spencer claimed that a part of the Earth’s atmosphere was cooling – in contrast to mountains of contrary evidence from around the globe. It was discovered that these contrarians had made some elementary errors in their calculations. When those calculations were corrected, their results fell into line with other research. I’ve written many times about the many errors from contrarian scientists – certainly not limited to Christy and Spencer. Within the scientific community, the contrarians have made so many technical errors that their work is no longer taken seriously by many scientists. But this hasn’t stopped the denial industry from showcasing their work, even though it is discredited. 

The next stages of denial involve admissions that climate change is happening but that it is natural, that it will self-correct, or that it will benefit us. The book tells the tale of mistakes made in the science underlying these arguments, and it would be humorous if it weren’t so serious. The tales involve cherry-picked data, resigning editors, and other missteps. For readers who may be concerned about getting lost in the weeds, don’t worry. The authors do an excellent job hitting the high points in an intelligible manner, so readers don’t need a PhD in climate science to see the patterns.

Next, the book covers the denial industry, including some of the key groups and people responsible for a systematic rejection of the prevailing scientific view, or at least for insidious questioning designed to leave the public with the impression that there is no consensus. This effort, which denied not only the causes and effects of climate change but also the health risks of tobacco, was coupled with funding to scientists, “petitions” designed to appear as if they originated from the National Academy of Sciences, and an interconnected network of funded think tanks.

One result of all of this is that actual scientists who spend their lives studying the Earth’s climate have been attacked. One of the authors of this book (Michael Mann) is perhaps in the world’s best position to tell this tale because he personally knows many of the scientists who have been attacked professionally and personally, including the late Stephen Schneider, Ben Santer, Naomi Oreskes, and himself. 

I have had the fortune of knowing these scientists and I can attest that they (and others) go into this field because they have an intense curiosity.

Click here to read the rest



from Skeptical Science http://ift.tt/2cRkXL6

This Week in EPA Science

By Kacey Fitzpatrick

You know what would go great with that pumpkin spice latte? Reading about the latest in EPA science!

Indoor Chemical Exposure Research
Many cleaning products, personal care products, pesticides, furnishings, and electronics contain chemicals known as semivolatile organic compounds (SVOCs). The compounds are released slowly into the air and can attach to surfaces or airborne particles, allowing them to enter the body by inhalation, ingestion, or absorption through the skin. Because SVOCs have been associated with negative health effects, EPA is funding research to learn more about exposure to these compounds and how to reduce it. Learn more about this research in the blog Indoor Chemical Exposure: Novel Research for the 21st Century.

Empowering a Community with Scientific Knowledge
EPA researchers are working with a small community in Puerto Rico to install and maintain low-cost air monitoring devices. These devices will help community members analyze local pollutant levels and better understand the local environmental conditions. Learn more about the project in the blog Air Sensors in Puerto Rico: Empowering a Community with Scientific Knowledge.

Navigating Towards a More Sustainable Future
With the help of a smartphone, navigating from point A to point B is easier than ever. EPA is bringing that kind of convenience to environmental decision making with the release of the Community-Focused Exposure and Risk Screening Tool (C-FERST), an online mapping tool. The tool provides access to resources that can help communities and decision makers learn more about their local environmental issues, compare conditions in their community with their county and state averages, and explore exposure and risk reduction options. Learn more about the tool in the blog C-FERST: A New Tool to Help Communities Navigate toward a Healthier, More Sustainable Future.

EPA Researchers at Work
EPA scientist Joachim Pleil is the EPA “breath guy” and was involved with the founding of the International Association of Breath Research and the Journal of Breath Research. He started off developing methods for measuring volatile organic carcinogens in air, and then progressed to linking chemical biomarkers to absorption, metabolism and elimination by analyzing human blood, breath, and urine. Meet EPA Scientist Joachim Pleil!

About the Author: Kacey Fitzpatrick is a writer working with the science communication team in EPA’s Office of Research and Development. She is a regular contributor to It All Starts with Science and the founding writer of “The Research Recap.”



from The EPA Blog http://ift.tt/2dhxerj

Gas and the Size of a Marshmallow

Kitchen science: explore laws of chemistry that can be observed by experimenting with the air surrounding a marshmallow!

from Science Buddies Blog http://ift.tt/2cGFK5C

Can bees experience positive emotions? [Life Lines]

By Uroš Novina from Semič, Slovenia – Bee close up, CC BY 2.0, http://ift.tt/2dFG4Ps

A new study was designed to test whether bees can experience some kind of primordial “emotions”. In the study, bees were trained to associate a tunnel marked with a blue flower with a sugar-water treat at its end. In contrast, a green flower meant no reward at the end of the tunnel. However, when bees were then presented with an ambiguous flower combining both hues, they either chose not to enter the tunnel or took a long time to decide to enter. But when half of the bees were given a sugar-water treat first, they chose to enter the tunnel marked with the green-blue flower much more quickly. When faced with a simulated attack from a predator, bees that were not given a sugar treat also took longer to begin foraging again.

While the experiments above suggest that bees can learn avoidance behaviors or where to find the best treats, the more interesting part of the study involved inhibiting dopamine, the neurotransmitter that signals reward-seeking behavior. When the researchers blocked the effects of this neurotransmitter, the sugar treat no longer had an effect on the bees. The authors suggest that these findings indicate bees may have some primordial form of positive or optimistic emotion. However, it is equally plausible that the sugar water simply triggered reward-seeking behavior without making the bees “optimistic” in the sense that humans understand the word.

Source:

Perry CJ, Baciadonna L, Chittka L. Unexpected rewards induce dopamine-dependent positive emotion-like state changes in bumblebees. Science. 353(6307): 1529-1531, 2016.



from ScienceBlogs http://ift.tt/2dfXDHP

Emory's 'Rolosense' rolling to finals of Collegiate Inventors Competition

“I think the advantage we have with our technology is that it's so simple," says Aaron Blanchard, left, a PhD student in Emory's Laney Graduate School, shown using the Rolosense with his advisor, Emory chemist Khalid Salaita. 

By Carol Clark

The first rolling DNA motor – the biological equivalent of the invention of the wheel for the field of DNA machines – is headed from its origins in an Emory University chemistry lab to the finals of the 2016 Collegiate Inventors Competition in Washington D.C.

Kevin Yehl and Aaron Blanchard make up one of six teams of graduate students who will be flown to the finals in early November. Yehl and Blanchard developed the DNA motor (dubbed Rolosense), and its application as a chemical sensor, in the laboratory of their advisor – Emory chemist Khalid Salaita.

The entries of the elite student teams represent the most promising inventions from U.S. universities. “Their ideas will shape the future,” wrote Michael Oister, CEO of the National Inventors Hall of Fame, in a letter announcing the finalists.

The Collegiate Inventors Competition annually gives out about $100,000 in cash prizes and is considered the foremost program in the country encouraging invention and creativity in undergraduate and graduate students. The competition also promotes entrepreneurship, by rewarding ideas that hold value for society.

The Rolosense is 1,000 times faster than any other synthetic DNA motor. Its speed means a simple iPhone microscope can capture its movement through video, giving it potential for real-world applications, such as disease diagnostics.

Kevin Yehl sets up a smart-phone microscope to get a readout for the particle motion of the rolling DNA-based motor.

"It's exciting," Yehl says. "Previous winners have gone on to start companies with their inventions and become successful scientists. It will be great to get feedback from the judges on the Rolosense."

The judges will include inductees to the National Inventors Hall of Fame, officials from the U.S. Patent and Trademark Office, and scientists from the global healthcare firm AbbVie.

Some of the best discoveries involve serendipity, and that was the case for the Rolosense. Yehl was working last year as a post-doctoral fellow in the Salaita lab, which specializes in visualizing and measuring mechanical forces at the nano-scale. He was conducting experiments using enzymatic nano-particles – micron-sized glass spheres. “We were originally just interested in understanding the properties of enzymes when they’re confined to a surface,” Yehl says.

During the experiments, however, he learned by accident that the nano-particles roll. That gave him the idea of constructing a rolling DNA-based motor using the glass spheres.

The field of synthetic DNA-based motors, also known as nano-walkers, is about 15 years old. Researchers are striving to duplicate the action of nature’s nano-walkers. Myosin, for example, is a tiny biological machine that “walks” on filaments to carry nutrients throughout the human body. 

So far, however, mankind’s efforts have fallen far short of nature’s myosin, which speeds effortlessly about its biological errands. Some synthetic nano-walkers move on two legs. They are essentially enzymes made of DNA, powered by the fuel RNA. These nano-walkers tend to be extremely unstable, due to the high levels of Brownian motion at the nano-scale. Other versions with four, and even six, legs have proved more stable, but much slower. In fact, their pace is glacial: A four-legged DNA-based motor would need about 20 years to move one centimeter.

A cell phone app is in the works.

The Rolosense design overcomes these limitations. Hundreds of DNA strands, or “legs,” are allowed to bind to the sphere. These DNA legs are placed on a glass slide coated with the reactant: RNA.

The DNA legs are drawn to the RNA, but as soon as they set foot on it they destroy it through the activity of an enzyme called RNase H. As the legs bind and then release from the substrate, they guide the sphere along, allowing more of the DNA legs to keep binding and pulling.

“The Rolosense can travel one centimeter in seven days, instead of 20 years, making it 1,000 times faster than other synthetic DNA motors,” Salaita says. “In fact, nature’s myosin motors are only 10 times faster than the Rolosense, and it took them billions of years to evolve.”
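Salaita’s comparison checks out against the figures quoted in the piece: one centimeter in seven days versus one centimeter in roughly 20 years works out to about a factor of a thousand. A quick sketch of that arithmetic (only the quoted distances and times are used):

```python
# Speed ratio implied by the figures quoted in the article:
# Rolosense covers 1 cm in 7 days; a four-legged DNA walker needs ~20 years.

DAYS_PER_YEAR = 365.25

rolosense_cm_per_day = 1.0 / 7.0
walker_cm_per_day = 1.0 / (20 * DAYS_PER_YEAR)

ratio = rolosense_cm_per_day / walker_cm_per_day
print(f"Rolosense is roughly {ratio:,.0f}x faster")  # ~1,000x
```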

The researchers next demonstrated that the Rolosense could be used to detect a single DNA mutation by measuring particle displacement. Yehl simply glued lenses from two inexpensive laser pointers onto an iPhone to turn the phone’s camera into a microscope and capture videos of the particle motion.

The simple, low-tech method could come in handy for doing diagnostic sensing in the field, or anywhere with limited resources.
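As a rough illustration of what “measuring particle displacement” could look like in software, the sketch below compares a tracked particle’s net displacement across video frames against a threshold, the idea being that conditions which stall the motor show up as reduced displacement. The function names, threshold, and data format are hypothetical and are not taken from the published Rolosense workflow.

```python
import math

# Hypothetical sketch of a displacement-based readout from tracked particle
# positions (e.g. extracted from smartphone video). Not the published
# Rolosense analysis; the threshold and data format are illustrative only.

def net_displacement_um(track):
    """Straight-line distance between the first and last tracked positions (micrometers)."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return math.hypot(x1 - x0, y1 - y0)

def classify(track, moving_threshold_um=5.0):
    """Label a particle as 'moving' or 'stalled' based on its net displacement."""
    return "moving" if net_displacement_um(track) >= moving_threshold_um else "stalled"

# Example: two made-up tracks of (x, y) positions in micrometers.
fast_track = [(0.0, 0.0), (2.0, 1.0), (5.0, 3.0), (9.0, 6.0)]
stuck_track = [(0.0, 0.0), (0.3, 0.1), (0.2, 0.4), (0.5, 0.3)]

for name, track in [("fast", fast_track), ("stuck", stuck_track)]:
    print(name, classify(track), f"{net_displacement_um(track):.1f} um")
```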

Nature Nanotechnology published the work on the rolling DNA motor. The researchers have filed an invention disclosure patent for the concept of using the particle motion of the Rolosense as a sensor for everything from a single DNA mutation in a biological sample to heavy metals in water.

Yehl has since left Emory for a position at MIT, but he continues to work with Salaita and Aaron Blanchard, a second-year biomedical engineering student in Emory’s Laney Graduate School, on refining the Rolosense.

Blanchard, who has a background in computer coding, is integrating the data analysis of the Rolosense into a smart phone app that will provide a readout of the results.

“I feel really fortunate as a graduate student to be working on this project,” Blanchard says. “As the molecular detection field grows, I think that Rolosense will grow with it.”

For their demonstration during the finals, Yehl and Blanchard plan to hand the judges smart phones and samples of water (including some containing lead), and let the judges use Rolosense to test the samples.

“It can be easy to dazzle with complex technologies like a robot,” Blanchard says, “but I think the advantage that we have with our technology is that it’s so simple. We can let the judges see for themselves how they can use Rolosense to quickly learn something useful, like whether a water source is contaminated with a heavy metal.”

Related:
Nano-walkers take speedy leap forward with first rolling DNA motor
Chemists reveal the force within you
Molecular beacons shine light on how cells crawl

from eScienceCommons http://ift.tt/2dKpdxQ

How do photons experience time? (Synopsis) [Starts With A Bang]

“Everyone has his dream; I would like to live till dawn, but I know I have less than three hours left. It will be night, but no matter. Dying is simple. It does not take daylight. So be it: I will die by starlight.” –Victor Hugo

Whether you’re at rest or in motion, you can be confident that — from your point of view — the laws of physics will behave exactly the same no matter how quickly you’re moving. You can move slowly, quickly or not at all, up to the limits that the Universe imposes on you: the speed of light.

Light, in a vacuum, always appears to move at the same speed — the speed of light — regardless of the observer’s velocity. Image credit: pixabay user Melmak.

But what if you’re actually a photon? What if you don’t move near the speed of light, but at the speed of light? As it turns out, the way any massless particle experiences time, distance, and the Universe in general is entirely counterintuitive, and there’s nothing in our common experience that matches up.
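For readers who want the one-line math behind that statement, the standard special-relativistic relation between coordinate time and proper time (not written out in the post itself) is:

```latex
% Proper time elapsed for a traveler moving at speed v, in terms of the
% coordinate time \Delta t measured by a stationary observer:
\Delta\tau = \Delta t \,\sqrt{1 - \frac{v^2}{c^2}}
% As v \to c the square root goes to zero, so in the photon limit
% no proper time elapses between emission and absorption.
```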

A relativistic journey toward the constellation of Orion. Image credit: Alexis Brandeker, via http://ift.tt/1kvZ4B9. StarStrider, a relativistic 3D planetarium program by FMJ-Software, was used to produce the Orion illustrations.

It’s a relatively interesting story if you want to think about it deeply, and yet once you arrive at the answer, it couldn’t be any simpler.



from ScienceBlogs http://ift.tt/2dKrCsb

September Pieces Of My Mind #3 [Aardvarchaeology]

  • Just got the application referees’ evaluation for a job I’ve been hoping for. I’m afraid to read it. Taking a walk first.
  • I’m really tired of this thankless shit. Impatient for December, when I’ll know if I’ll have money to write that castles book or if I should start calling people about a steady job in contract archaeology. The one I stupidly turned down in fucking 1994.
  • Osteologist Rudolf Gustavsson has documented traces of flaying on cat bones that we’ve found at Stensö Castle. Reading the ribald 15th century “Marriage Song” that has just appeared in a new critical edition, I found a passage where the poet warns a father of daughters: “If he has beautiful cats, then my advice is not to invite many furriers to his feast”.
  • Affluent Chinese Swedes who want to throw out the Middle Eastern refugees. I can’t even.
  • Torbjörn Lodén writes something interesting in the latest issue of the Swedish-Chinese Association’s monthly. According to him, the Chinese Communist Party’s continued emphasis on Mao Zedong Sixiang, “Mao Zedong’s Thought” after the man’s death should be read as an implicit step away from his many disastrous actions. Apparently Mao Zedong failed to follow his own Thought.
  • I see flashing knobs!
  • These castle sites really aren’t very rich in pottery. Birgittas udde didn’t yield a single sherd. Skällvik offered only 68 g of Medieval wares and 332 g of Modern stuff.
  • Not-so-great usability. The interface of our front door’s inside lets you do four things. Press the handle, pull the handle, turn the lock knob one way, turn the lock knob the other way. When the door is locked, if you press the handle, the pressure of the door’s rubber insulation seal disables the lock knob. You can then only leave our house if you know that you have to pull the handle first.
  • Sewed the buttons back onto my crappy Lithuanian shirt and imposed Ordnung on the needlework box.
  • Much-needed encouragement: a journalist did a long interview with me about my new book.
  • It’s 100 meters high and several hundred degrees Celsius inside. It’s a pile of ashes from a mid-20th century shale oil plant. It’s just outside Örebro.
  • Reasonable commentators unanimously declare Trump unfit for office. Sadly this makes Trump voters unfit for democracy.
I’ve never tossed a grocery basket onto a roof. Have I really lived at all?



from ScienceBlogs http://ift.tt/2df8iSO

Why do leaves change color in fall?

Autumn 2016 in the Colorado Rocky Mountains. Photo via Jessi Leigh Photography. Thanks Jessi!

The vivid yellow and orange colors have actually been there throughout the spring and summer, but we haven’t been able to see them. The deep green color of chlorophyll, which helps plants absorb life-giving sunlight, hides the other colors. In the fall, trees break down the green pigments and nutrients stored in the leaves. The nutrients are shuttled into the roots for reuse in the spring.

As leaves lose their chlorophyll, other pigments become visible to the human eye, according to Bryan A. Hanson, professor of chemistry and biochemistry at DePauw University who studies plant pigments. Some tree leaves turn mostly brown, indicating that all pigments are gone.
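
As a toy way to picture that masking effect, here is a minimal sketch with made-up numbers (my own illustration, assuming chlorophyll decays with a fixed half-life once autumn breakdown begins while the yellow-orange carotenoids hold roughly steady):

    # Toy illustration: chlorophyll masks carotenoids until it breaks down.
    # All numbers are invented for illustration (arbitrary pigment units).
    CAROTENOIDS = 1.0          # yellow/orange pigments, roughly constant
    CHLOROPHYLL_START = 5.0    # green pigment dominates during summer
    HALF_LIFE_DAYS = 7.0       # assumed decay half-life once breakdown begins

    for day in range(0, 43, 7):
        chlorophyll = CHLOROPHYLL_START * 0.5 ** (day / HALF_LIFE_DAYS)
        look = "green" if chlorophyll > CAROTENOIDS else "yellow/orange"
        print(f"day {day:2d}: chlorophyll {chlorophyll:4.2f} vs "
              f"carotenoids {CAROTENOIDS:4.2f} -> leaf looks {look}")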

Autumn leaves at Hurricane Mountain in the Adirondacks, New York, September, 2015. Photo by John Holmes. Thank you John!

Autumn leaf in about mid-September 2012 from our friend Colin Chatfield in Saskatoon, Saskatchewan.

Burgundy and red colors are a different story. Dana A. Dudle is a DePauw professor of biology who researches red pigment in plant flowers, stems and leaves. Dudle said:

The red color is actively made in leaves by bright light and cold. The crisp, cold nights in the fall combine with bright, sunny days to spur production of red in leaves – especially in sugar maple and red maple trees. Burgundy leaves often result from a combination of red pigment and chlorophyll. Autumn seasons with a lot of sunny days and cold nights will have the brightest colors.

Image via treehouse1977

In some cases, about half of a tree’s leaves are red/orange and the other half green. Dudle says that results from micro-environmental factors – such as only half the tree being exposed to sunlight or cold.

Hardwoods in the Midwest and on the East Coast are famous for good color selections. Some of the more reliably colorful trees, Hanson notes, are liquidambar trees (also called sweet gum) that turn a variety of colors on the same tree, and sometimes the same leaf. Ash tree leaves often turn a deep burgundy color. Ginkgo trees, although not native to North America, will feature an intense yellow, almost golden, color.

A lone red tree against bare branches. Photo via Daniel de Leeuw Photog.

“Autumn picture from Sweden…” from our friend Jörgen Norrland

The colors are doing something for the plant, or they wouldn’t be there, said Hanson. But what is the colors’ purpose?

Scientists think that with some trees, pigments serve as a kind of sunscreen to filter out sunlight. Hanson said:

It’s an underappreciated fact that plants cannot take an infinite amount of sun. Some leaves, if they get too much sun, will get something equivalent of a sunburn. They get stressed out and die.

Image via Tosca Yemoh Zanon in London.

Another theory is that the color of a plant’s leaves is often related to the ability to warn away pests or attract insect pollinators. Hanson said:

In some cases, a plant and insect might have co-evolved. One of the more intriguing scientific theories is that the beautiful leaf colors we see today are indicative of a relationship between a plant and insects that developed millions of years ago. However, as the Earth’s climate changed over the years, the insects might have gone extinct, but the plant was able to survive for whatever reason.

Because plants evolve very slowly, we still see the colors. So leaf color is a fossil memory, something that existed for a reason millions of years ago but that serves no purpose now.

Image Credit: Ross Elliott

Autumn, early October 2012 in Hibbing, Minnesota. Photo by EarthSky Facebook friend Rosalbina Segura.

Enjoying EarthSky? Sign up for our free daily newsletter today!

Bottom line: Biologists discuss why leaves change color.

Read more from DePauw University



from EarthSky http://ift.tt/1vDGYQe

Star party ahead

Just before the start of a star party, at McDonald Observatory in West Texas. Photo by Karen Janczak.

Star parties at McDonald Observatory (reservations required)

Visit EarthSky’s events page: Star parties and other astro events

Recommend your favorite stargazing spot at EarthSky’s Best Places to Stargaze page



from EarthSky http://ift.tt/2dAniwp

The trans-Atlantic quest to find a winning combination of cancer drugs

Cancer drugs

In a corner of our head office sits a team of six people. And since 2010, they’ve been quietly but resolutely helping to develop new treatment options for cancer patients.

This is the team behind the Combinations Alliance, one of several projects run through the Experimental Cancer Medicine Centre network (ECMC).

The Alliance brings together UK researchers and drug companies from around the world to explore new combinations of cancer drugs. By combining multiple drugs in a single clinical trial, they can test whether or not the combination is better at treating cancer than the standard treatment available.

It’s a unique scheme that’s increasing treatment options for patients, and tackling drug resistance – arguably one of the biggest problems in cancer treatment.

And a new deal signed today will see a combination of drugs tested in mesothelioma, non-small cell lung and pancreatic cancers for the first time.

A match made in heaven

The Combinations Alliance works very much like a match-making agency for researchers and drug companies. The goal of developing these relationships is to hopefully launch clinical trials that test promising new combinations of drugs to treat different types of cancer.

The Combinations Alliance focuses on therapies that wouldn’t progress without our support

– Dr Ian Walker, director of clinical research and strategic partnerships at Cancer Research UK

So why exactly was the Combinations Alliance set up? Surely companies have thought of collaborating in this way before? Well, this can be tricky. Each company has a different way of working.

So it’s often much simpler for many drug companies to go it alone. But they’re beginning to see how limiting this approach is. Despite huge resources, it’s simply impossible for any company to test every combination of drugs.

But our Combinations Alliance offers a structured way for drug companies – alongside researchers – to work together. And, ultimately, get better treatments to patients, sooner.

How the initiative works

Our team begins by meeting a drug company that’s developing promising cancer drugs. Researchers within our ECMC network then consider which drugs could work together in a clinical trial.

So far twelve partners have signed onto the scheme, with many more set to join in the next few years.

Alternatively, the team receives an idea from a researcher to combine two or more drugs, which aren’t necessarily owned by an existing Alliance partner.

The team then approaches the company who owns that drug for permission to use it in a clinical trial.

Most trials to date have tested a single company’s experimental new drug in combination with a drug that’s already available as standard treatment for patients – or with radiotherapy. There have also been a few trials involving two new experimental drugs owned by the same company.

But for the first time today, the Alliance has brought together two drug companies to test an exciting combination of two drugs: one an immunotherapy drug, the other a so-called ‘targeted’ cancer treatment.

And it’s thanks to the inspirational idea of two researchers – Dr Stefan Symeonides, at the University of Edinburgh, and Professor Dean Fennell, at the University of Leicester.

The bright idea

Together, Symeonides and Fennell designed the combination clinical trial that will be managed at Cancer Research UK’s Clinical Trials Unit in Glasgow. It’s an idea that’s based on work from another of our researchers, Professor Margaret Frame, at the University of Edinburgh.

But designing the trial was only one part of the story. To run it, Symeonides and Fennell needed approval to use the drugs, which are owned and made by two different drug companies.

Let’s step back a couple of years to see how the team made this happen.

Finding a drug that works

Back in 2014, a small drug company called Verastem, based in Massachusetts in the US, got in touch with the team.

They’re an existing Alliance partner, introduced by Fennell, and have been working on a drug called VS-6063, which switches off a molecule called FAK that’s found inside cells.

The drug works by stopping FAK forming a cellular barrier that blocks the body’s immune system.

But once the barrier is down, how can the body attack the tumour itself?

Symeonides and Fennell believe that Verastem’s drug could work better with another drug that boosts the immune system and the army of cells it unleashes. And they had a good idea of what could work.

The final piece of the jigsaw

Fast-forward to 2015, and the answer to this problem could be found in New Jersey, at a different drug company: MSD, one of the largest drug companies in the world.

Although not an Alliance partner at the time, they were in talks with the team, and agreed to let Symeonides and Fennell use an immunotherapy drug called pembrolizumab (Keytruda) for the trial. Pembrolizumab is designed to target an antenna-like molecule that sticks out on the surface of certain forms of immune cell.

Normally, this ‘antenna’ – called the programmed cell death 1 (PD-1) receptor – picks up signals, preventing the immune system from inappropriately reacting to certain triggers. But in people with some types of cancer, the same antenna receives signals stopping the body’s immune system from recognising the cancer cells, allowing the tumour to remain undetected.

Pembrolizumab blocks these signals, jumpstarting the immune system into recognising, targeting and destroying tumours.

Symeonides and Fennell thought that once the barrier surrounding the cancer cells has been taken down by Verastem’s drug, MSD’s pembrolizumab could then activate cancer-killing immune cells to attack the tumour.

Pembrolizumab on its own has shown promise in treating bladder, melanoma, kidney and non-small cell lung cancer, but it’s had little effect in people with other types of cancer. So this would be a chance to see if the drug could treat more types than originally thought.

Fast-forward to today, and both Verastem and MSD have agreed to the use of their drugs in combination for the first time in a clinical trial. The trial will look at whether the two drugs can be used safely together, and test whether the combination is better for treating people with mesothelioma, non-small cell lung and pancreatic cancers – all of which have very low survival.

What’s next?

With UK researchers and drug companies around the world on board, there are still many more trial ideas to be explored. These aren’t limited to drug combinations – trials can involve radiotherapy and surgery too. It’s just about getting the right people with the right drugs to start working together.

What the Combinations Alliance team have achieved to date is no small feat. They’re responsible for pulling in some of the best science and making sure that these trials are run in the UK (MSD and Verastem are both American companies and logistically, it’d be a lot easier for them to run the trial in the US), so that patients here can benefit first.

It’s early days – the trial isn’t recruiting patients yet – but it should be up and running later this year.

So far 356 people have taken part in a Combinations Alliance trial and have been given another shot at tackling cancer that’s come back.

One trial in particular was so promising that it’s now progressed to the next step: a larger clinical trial to see how well the treatment actually works in more people.

The Combinations Alliance team might be sitting quietly in the corner in our office, but make no mistake, they’re causing quite the stir outside it.

Amille



from Cancer Research UK – Science blog http://ift.tt/2cQegZI

A victory and a more substantial defeat for the cruel sham known as “right to try” [Respectful Insolence]

I’ve referred to so-called “right to try” laws as a cruel sham on more than one occasion. Since 2014, these laws, all based on a template provided by the libertarian Goldwater Institute, have been proliferating at the state level with the help of lobbying by the aforementioned Goldwater Institute and a concept that makes it pitifully easy to caricature opposition to these laws as wanting to heartlessly snatch away from terminally ill patients the last chance at life while laughing and twirling one’s mustache like Snidely Whiplash. Not surprisingly, state legislatures all over the country have found such laws irresistible, leading to their passage in over 30 states in just two and a half years. Over the last week, right-to-try has had a major victory in that, contrary to what he did last year and contrary to the hope of science advocates, California Governor Jerry Brown signed the right-to-try bill (AB-1668) that was passed earlier this month. However, it has also suffered a major defeat in that a couple of days ago the federal right-to-try bill was blocked in the Senate.

The basic premise behind right-to-try laws is that people are dying in droves because the FDA is too slow and too hidebound to allow dying patients access to experimental drugs that are still undergoing clinical trials to be approved by the FDA. No, really, that’s the argument libertarians make, that the FDA is literally (yes, I mean literally—just ask Nick Gillespie and Ronald Bailey) “killing” people. Enter right-to-try, laws that purport to allow terminally ill patients (or, in some cases, patients with life-threatening but not necessarily terminal illnesses) to access experimental therapeutics in a desperate bid to save their lives. Sounds reasonable on the surface, right? What is assiduously not mentioned are other libertarian-based aspects of these laws. For instance, there is no mechanism in most right-to-try laws to help patients seeking to access experimental therapeutics financially. Indeed, pointedly, such bills go out of their way to state that health insurance companies do not have to pay for such treatments and can be interpreted to state that they don’t have to pay for treating complications arising from the use of right-to-try drugs or devices. Given that such bills also allow pharmaceutical companies to charge for experimental therapeutics and such expenses can be very high, this effectively means that only the rich or those skilled (or whose families are skilled) at using social media to raise a lot of money fast could potentially access right-to-try.

These laws also explicitly remove patient protections in that most of them state that doctors recommending right-to-try can’t be sued for malpractice or disciplined by their state medical boards, seemingly no matter how inappropriate or incompetently executed such a request might be. Nor can drug manufacturers be sued. Basically, these laws tell terminally ill patients: Good luck. You’re on your own. And don’t sue if things go bad, no matter what. Given that right-to-try laws also only require that experimental therapeutics have passed phase I trials and still be in clinical trials to be eligible, there’s a high probability of adverse events and harm. Indeed, I not uncommonly laugh derisively and contemptuously whenever I hear a Goldwater Institute flack claim with a straight face that right-to-try only allows drugs that have been shown to be safe to be used, because phase I trials generally only have a few dozen patients followed briefly. Let’s just put it this way: No one who knows what he’s talking about views drugs that have passed phase I trials as having been shown to be safe. At best, such drugs have been shown not to have high levels of life-threatening toxicity.
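
To put a rough number on that point: by the standard “rule of three” approximation, if a serious side effect is never observed in n patients, its true rate could still plausibly be as high as about 3/n. A minimal sketch (illustrative sample sizes only):

    def zero_event_upper_bound(n, confidence=0.95):
        """Exact upper bound on an event rate when 0 events are seen in n patients.

        Solves (1 - p)**n = 1 - confidence for p.
        """
        return 1.0 - (1.0 - confidence) ** (1.0 / n)

    for n in (20, 30, 50, 100):
        exact = zero_event_upper_bound(n)
        rule_of_three = 3.0 / n
        print(f"{n:3d} patients, zero serious events: true rate could still be "
              f"up to {exact:.1%} (rule of three says about {rule_of_three:.1%})")

So a phase I trial of 30 patients that sees no serious toxicity at all is still consistent with a serious-adverse-event rate of nearly one in ten.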

Of course, the biggest flaw in these laws is that it is federal law, not state law, that controls drug approval. Right-to-try laws can say that terminally ill patients have the “right” to access experimental therapeutics, but it is the FDA that determines whether they, in fact, do. Companies are understandably reluctant to grant access to experimental therapeutics without the FDA’s prior approval because (1) the FDA will not look kindly upon it and they want FDA approval and (2) if there are any adverse events it could harm their chances of winning approval for their drugs. Also, the FDA does have what it calls its Expanded Access Program (sometimes referred to as Compassionate Use) already to allow terminally ill patients to access experimental therapeutics, and it does it without removing patient protections under Institutional Review Board (IRB) supervision. Moreover, the FDA already grants the overwhelming majority of Expanded Access requests. Indeed, as I pointed out, thus far, after two and a half years of existence, right-to-try has been a miserable failure. The Goldwater Institute can’t identify a single patient who has received an experimental drug under a state right-to-try law, although it claims to know of 40-60. Meanwhile a quack like Stanislaw Burzynski has abused right-to-try. Meanwhile, the only patient I’ve been able to find who actually used right-to-try died.

So it was that I was very disappointed to learn that Governor Jerry Brown had betrayed the citizens of the State of California by buckling under this time:

Terminally ill patients in California will be able to try potentially life-saving medication before it passes FDA final review thanks to a new piece of legislation inspired by the movie “Dallas Buyers Club.”

The bill, dubbed the “Right to Try” law, makes California the 32nd state to allow patients with terminal illnesses to try drugs that have passed the FDA’s Phase 1, but haven’t been fully approved.

Phase 1 is the first stage of drug testing in human patients. Drugmakers earn approval to conduct clinical trials on people after presenting the results of successful trials on animals, according to the FDA. After presenting the data — and a plan for human trials — the FDA determines whether drug companies can go forward with additional testing.

Patients can try experimental treatments only after exhausting all other options, according to the libertarian think tank Goldwater Institute, and the patients’ treatments with Phase 1 drugs cannot be included as data in ongoing clinical trials.

Of course, as I pointed out before when I discussed the California bill, this description is utter bollocks. In a way, the California bill (now law) is worse than the average state right-to-try law. It doesn’t actually require that the patient be terminally ill, only that he has an “immediately life-threatening disease or condition.” As I put it at the time, that’s incredibly broad. A severe case of pneumonia could be “immediately life-threatening.” A heart attack is definitely “immediately life-threatening.” A stroke is “immediately life-threatening.” “Immediately life-threatening” is not the same thing as “terminal.” Yet AB-1668 tries to have it both ways, as it defines “immediately life-threatening disease or condition” as “a stage of disease in which there is a reasonable likelihood that death will occur within a matter of months.” That implies something less acute, but “within a matter of months” encompasses more immediately life-threatening diseases as well.

You could ask, quite reasonably: Why does this matter? One reason is that it’s California, the most populous state in the country. Any law passed in California matters. California is always the biggest prize, and right-to-try advocates were bitterly disappointed when Gov. Brown vetoed a previous right-to-try bill last year.

Unfortunately, the Goldwater Institute has been very effective in co-opting terminally ill patients to use the considerable justified sympathy voters and legislators feel for them to lobby for right-to-try:

In a guest column for the Washington Post, 32-year-old Matthew Bellina says Right to Try laws are an improvement on the FDA’s Expanded Access program because they stipulate that the FDA won’t shut down or delay clinical trials if an experimental treatment goes wrong.

Because derailing clinical trials can deal a significant blow to drug companies that have poured millions of dollars into research and development, drug companies are less likely to sponsor terminally ill patients without guarantees that the FDA won’t retaliate for failed treatment.

Bellina, a military veteran and father who has terminal ALS — amyotrophic lateral sclerosis, also called Lou Gehrig’s disease — testified before the Senate on a federal version of Right to Try.

Turning down someone dying of Lou Gehrig’s disease is pretty much close to impossible for a politician, even if the legislation being proposed is profoundly anti-patient, as right-to-try is. Also, this is one of the most pernicious aspects of right-to-try, as you will see. So let’s segue to the federal right-to-try bill. Gov. Brown might have buckled and signed what I like to refer to as “placebo legislation,” which makes legislators feel good and believe that they’ve done something when in reality they’ve done nothing, but the federal bill is where the action is because that’s the real goal of the Goldwater Institute, to weaken and then ultimately neuter the FDA. The Goldwater Institute is politically savvy enough not to come right out and explicitly say this, but other libertarians are not.

A week ago, hearings were held on a federal right-to-try bill, S.2912, known as The Trickett Wendler Right to Try Act of 2016. This is a bill being pushed by Republican U.S. Sen. Ron Johnson. Interestingly, although the bill has been referred to the US Senate Committee on Health, Education, Labor, and Pensions, Johnson used his position as chair of the US Senate Committee on Homeland Security and Governmental Affairs to hold hearings on the bill, even though it has nothing to do with his committee’s purview. Sen. Johnson’s opening statement is basically a rehash of Goldwater Institute talking points, complete with the usual anecdotes about patients with terminal illnesses who might have been saved:

Despite the legal uncertainty there are doctors willing to jeopardize their practice to give patients needed, but unfortunately unapproved, treatments. One of them is Houston oncologist Dr. Ebrahim Delpassand. Even though the FDA has told him no, he bravely continues to treat patients under his state’s law. Now nearly 80 patients, whose chance of survival would be, as he puts it, “close to none,” are alive thanks to his treatment.

This caught my attention, as this is a potentially verifiable claim. There is a video of Dr. Delpassand giving a statement included in the testimony:

Whoa. He’s with Excel Diagnostics, the very same company that I discussed when I looked into the one patient in Texas whom I could find who had accessed right-to-try and who had not been saved. Contrary to Sen. Johnson’s claims, he is not an oncologist; he is a radiologist. Of course, nothing Dr. Delpassand claims in his video statement shows that 80 patients who would have died have been saved, thanks to his being a brave maverick doctor willing to buck the FDA. One notes a highly one-sided account designed to make Dr. Delpassand look as good as possible. In any case, the therapy discussed by Dr. Delpassand does have potential, as I mentioned before. However, one thing that stood out to me was how the Goldwater Institute reached out to Dr. Delpassand. So basically, right-to-try allowed Dr. Delpassand to charge for the use of his treatment, even though it is not FDA-approved. Not surprisingly, the Goldwater Institute is painting this example as the nefarious FDA preventing patients from saving their lives, even though this treatment is not curative, as I described. He’s also a flack for the Goldwater Institute, having participated in a promotional video touting right-to-try:

So his evidence is that “many of these patients were given three or six months to live” and are alive a year later? Seriously? Stanislaw Burzynski uses the same argument about the patients he treats.

In any case, as I pointed out three weeks ago, the federal right-to-try bill is even worse than state right-to-try bills because (1) it would actually do something and (2) what it would do would be very, very bad for patients indeed. For example, it would forbid the FDA from considering adverse events suffered by patients utilizing experimental drugs under right-to-try when considering a drug for approval. Seriously, it says that. A patient could die, clearly as a result of an experimental drug, and the FDA would be explicitly barred from considering that information when deciding whether to approve the drug or not.

Speaking of Stanislaw Burzynski, I couldn’t help but note the testimony of Peter Lurie, MD, MPH. He repeated the same points about how the FDA’s Expanded Access Program rarely rejects requests and then notes:

However, even patients with serious or life-threatening diseases and conditions require protection from unnecessary risks, particularly as, in general, the products they are seeking through expanded access are unapproved – and may never be approved. Moreover, FDA is concerned about the ability of unscrupulous individuals to exploit such desperate patients. Thus, with every request, FDA must determine that the potential patient benefit from the investigational drug justifies the potential risks and that the potential risks are not unreasonable in the context of the disease or condition to be treated.

“Unscrupulous individuals”? That would well describe Stanislaw Burzynski. It could also describe pharmaceutical companies willing to profit off of drugs that made it through phase I studies but are not approved yet.

Fortunately, for now at least, the federal right-to-try bill has been blocked:

Republican U.S. Sen. Ron Johnson’s push for a right-to-try bill ran up against the reality of hardball politics Wednesday.

Johnson’s measure to allow terminally ill patients to receive experimental drugs not approved by the Food and Drug Administration was blocked by Senate Minority Leader Harry Reid (D-Nev.).

Johnson sought to move the bill through unanimous consent, meaning one senator could halt its progress. And that’s what Reid did, blunting a Johnson initiative for the second time in recent months. In July, Reid blocked Johnson’s bill to protect federal whistleblowers from retaliation.

Johnson faces a tough re-election fight against Democrat Russ Feingold, so any move to get legislation through by a parliamentary maneuver was always going to be difficult.

And it’s even harder since Democrats are still upset that Republicans have blocked President Barack Obama’s nominee to the U.S. Supreme Court, Merrick Garland.

Reid said he understood the “seriousness” of the Johnson proposal and acknowledged “the urgency that patients and their families feel when they’re desperate for new treatments.”

In objecting to the measure, Reid said Johnson’s bill didn’t have bipartisan support — there were 40 Republican co-sponsors and two Democrats. He said the bill didn’t go through the hearing process where all the major players on the issue have voice. The Johnson-chaired Homeland Security & Governmental Affairs panel held two hearings on the subject.

“I think we should have had a hearing on Merrick Garland,” Reid said on the Senate floor.

This is what I would call doing the right thing for the wrong reason. Unfortunately, that’s what happens in politics a lot. I’ll take it, though. If state right-to-try bills are basically symbolic rants against the FDA, the passage of a federal right-to-try bill would be a disaster for patients and the clinical trial process.



from ScienceBlogs http://ift.tt/2dwd1gJ
