Declare energy independence with carbon dividends

Taking action on climate is about a lot more than our energy economy. Climate disruption is the leading threat to our built environment, an accelerant of armed conflict, and a leading cause of mass migration. Its effects intensify and prolong storms, droughts, wildfires, and floods — resulting in the US spending as much on disaster management in 2017 as in the three decades from 1980 to 2010.

Out-of-control wildfire approaching Estreito da Calheta, Portugal, September 2017. Photograph: Michael Held

Fiscal conservatism and national security require a smart, focused, effective solution that protects our economy and our values.

Political division between the major parties in Washington has left the burden of achieving that solution largely on Democratic administrations using regulatory measures that — for all their smart design and ambition — cannot be transformational enough to carry us through to a livable future.

Conservatives say the nation needs an insurance policy. Business leaders want to future-proof their operations and investments. Young people are demanding intervention on the scale of the Allies’ efforts to rebuild Europe after World War II. 

The International Monetary Fund – whose mission is to ensure that national dysfunction doesn't undermine the solvency of public budgets and lead to failed states – warns that nations that depend heavily on publicly subsidized fossil fuels endanger their future solvency by investing in ways that erode economic resilience. Building that resilience requires diversification and innovation on a massive scale.

The rapid expansion of green bonds makes clear the deep demand among major banks and institutional investors for clean-economy holdings. Climate-smart finance, still a new concept, is expected to become the standard for both public- and private-sector actors at all levels within 10 to 20 years.

Republican former Secretaries of the Treasury James Baker and George Shultz have called for a carbon dividends strategy, because: 

  1. it avoids new regulation,
  2. it abides by conservative principles of market efficiency, and
  3. it leverages improvements to the Main Street economy to ensure a future of real energy freedom.

 Main Street economies suffer when too much of the money in circulation flows to finance, without clear incentives to lend to small businesses. The steadily rising monthly carbon dividend makes sure more of the money in circulation flows through small businesses, locking in that incentive and making the whole economy more efficient at creating wealth for the average household. Photograph: Joseph Robertson

Unpaid-for pollution and climate disruption limit our personal freedom and then, by adding cost and risk to the whole economy, undermine our collective ability to defend our freedom and secure future prosperity. Even with record oil and gas production, the US still depends heavily on foreign regimes hostile to democracy that manipulate supply and undermine the efficiency of our everyday economy. 

Energy freedom means reliable, low-cost clean energy available everywhere, ready to meet the demands of expanded Main Street economic activity.

A study by Regional Economic Models, Inc., which modeled the interacting economy-wide impacts of monthly household carbon dividends, found real disposable personal income rising for at least 20 years after the first dividends show up in the mail. Details at https://ift.tt/2lUxIs1 Illustration: Regional Economic Models, Inc.

Ask any small business owner if they would rather have higher or lower hidden business costs built into everything they buy from their suppliers. Of course, they would prefer lower hidden costs and risks, and for consumers to have more money in their pockets. 

That is how carbon dividends work. 

  • A simple, upstream fee, paid at the source by any entity that wants to sell polluting fuels that carry such hidden costs and risks. This is administratively simple, light-touch, economy-wide, and fair to all.
  • 100% of the revenues from that fee are returned to households in equal shares, every month. This ensures the Main Street economy keeps humming along. 
  • Because both the fee and the dividend rise steadily, pollution-dependent businesses – and the banks that finance them – can plan the optimal pace of innovation and diversification to liberate themselves from the subsidized-pollution trap. The whole economy becomes more competitive and more efficient at delivering real-world value to Main Street.
  • To keep energy-intensive, trade-exposed industries from being drawn away by nations that keep carbon fuels artificially cheap, a simple border carbon adjustment levels the playing field, while adding negotiating power to US diplomatic efforts on every issue, everywhere.
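The fee-and-dividend arithmetic described above can be sketched in a few lines of Python. This is a toy illustration only; the fee level, tonnage, and household count are made-up numbers, not values proposed by any actual policy:

```python
# Toy sketch of carbon fee-and-dividend accounting (illustrative numbers only).

def annual_dividend_per_household(tons_co2_sold: float,
                                  fee_per_ton: float,
                                  num_households: int) -> float:
    """Upstream fee is collected at the source; 100% of the revenue
    is returned to households in equal shares."""
    revenue = tons_co2_sold * fee_per_ton
    return revenue / num_households

# Hypothetical example: 5 billion tons taxed at $40/ton, 120 million households.
dividend = annual_dividend_per_household(5e9, 40.0, 120_000_000)
print(f"${dividend:,.2f} per household per year")  # about $1,666.67
```

Because the fee rises on a published schedule, both the per-household dividend and the cost advantage of cleaner alternatives grow predictably over time.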

Click here to read the rest



from Skeptical Science https://ift.tt/2zdfBat


Future projections of Antarctic ice shelf melting

This is a re-post from ClimateSight

Climate change will increase ice shelf melt rates around Antarctica. That’s the not-very-surprising conclusion of my latest modelling study, done in collaboration with both Australian and German researchers, which was just published in Journal of Climate. Here’s the less intuitive result: much of the projected increase in melt rates is actually linked to a decrease in sea ice formation.

That’s a lot of different kinds of ice, so let’s back up a bit. Sea ice is just frozen seawater. But ice shelves (as well as ice sheets and icebergs) are originally formed of snow. Snow falls on the Antarctic continent, and over many years compacts into a system of interconnected glaciers that we call an ice sheet. These glaciers flow downhill towards the coast. If they hit the coast and keep going, floating on the ocean surface, the floating bits are called ice shelves. Sometimes the edges of ice shelves will break off and form icebergs, but they don’t really come into this story.

Climate models don’t typically include ice sheets, or ice shelves, or icebergs. This is one reason why projections of sea level rise are so uncertain. But some standalone ocean models do include ice shelves. At least, they include the little pockets of ocean beneath the ice shelves – we call them ice shelf cavities – and can simulate the melting and refreezing that happens on the ice shelf base.

We took one of these ocean/ice-shelf models and forced it with the atmospheric output of regular climate models, which periodically make projections of climate change from now until the end of the century. We completed four different simulations, consisting of two different greenhouse gas emissions scenarios (“Representative Concentration Pathways” or RCPs) and two different choices of climate model (“ACCESS 1.0”, or “MMM” for the multi-model mean). Each simulation required 896 processors on the supercomputer in Canberra. By comparison, your laptop or desktop computer probably has about 4 processors. These are pretty sizable models!

In every simulation, and in every region of Antarctica, ice shelf melting increased over the 21st century. The total increase ranged from 41% to 129% depending on the scenario. The largest increases occurred in the Amundsen Sea region, marked with red circles in the maps below, which happens to be the region exhibiting the most severe melting in recent observations. In the most extreme scenario, ice shelf melting in this region nearly quadrupled.

Percent change in ice shelf melting, caused by the ocean, during the four future projections. The values are shown for all of Antarctica (written on the centre of the continent) as well as split up into eight sectors (colour-coded, written inside the circles). Figure 3 of Naughten et al., 2018, © American Meteorological Society.

So what processes were causing this melting? This is where the sea ice comes in. When sea ice forms, it spits out most of the salt from the seawater (brine rejection), leaving the remaining water saltier than before. Salty water is denser than fresh water, so it sinks. This drives a lot of vertical mixing, and the heat from warmer, deeper water is lost to the atmosphere. The ocean surrounding Antarctica is unusual in that the deep water is generally warmer than the surface water. We call this warm, deep water Circumpolar Deep Water, and it’s currently the biggest threat to the Antarctic Ice Sheet. (I say “warm” – it’s only about 1°C, so you wouldn’t want to go swimming in it, but it’s plenty warm enough to melt ice.)
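The density effect driving this sinking can be illustrated with a simplified linear equation of state for seawater. The reference values and expansion coefficients below are rough textbook numbers for illustration, not the ones used in the study's ocean model:

```python
# Simplified linear equation of state for seawater (rough textbook coefficients).
RHO_0 = 1027.0        # reference density, kg/m^3
T_0, S_0 = 10.0, 35.0 # reference temperature (degC) and salinity (psu)
ALPHA = 2e-4          # thermal expansion coefficient, 1/degC
BETA = 8e-4           # haline contraction coefficient, 1/psu

def density(temp_c: float, salinity_psu: float) -> float:
    """Density rises as water gets colder or saltier."""
    return RHO_0 * (1 - ALPHA * (temp_c - T_0) + BETA * (salinity_psu - S_0))

# Brine rejection: surface water near the freezing point (-1.8 degC)
# gets saltier as sea ice forms, so it becomes denser and sinks.
before = density(-1.8, 34.0)
after = density(-1.8, 34.5)   # +0.5 psu from brine rejection
assert after > before
```

This is why less sea ice formation means less brine rejection, weaker vertical mixing, and better-preserved warmth at depth.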

In our simulations, warming winters caused a decrease in sea ice formation. So there was less brine rejection, causing fresher surface waters, causing less vertical mixing, and the warmth of Circumpolar Deep Water was no longer lost to the atmosphere. As a result, ocean temperatures near the bottom of the Amundsen Sea increased. This better-preserved Circumpolar Deep Water found its way into ice shelf cavities, causing large increases in melting.

Slices through the Amundsen Sea – you’re looking at the ocean sideways, like a slice of birthday cake, so you can see the vertical structure. Temperature is shown on the top row (blue is cold, red is warm); salinity is shown on the bottom row (blue is fresh, red is salty). Conditions at the beginning of the simulation are shown in the left 2 panels, and conditions at the end of the simulation are shown in the right 2 panels. At the beginning of the simulation, notice how the warm, salty Circumpolar Deep Water rises onto the continental shelf from the north (right side of each panel), but it gets cooler and fresher as it travels south (towards the left) due to vertical mixing. At the end of the simulation, the surface water has freshened and the vertical mixing has weakened, so the warmth of the Circumpolar Deep Water is preserved. Figure 8 of Naughten et al., 2018, © American Meteorological Society.

This link between weakened sea ice formation and increased ice shelf melting has troubling implications for sea level rise. Unfortunately, models like the one we used for this study can’t actually be used to simulate sea level rise, as they have to assume that ice shelf geometry stays constant. No matter how much ice shelf melting the model simulates, the ice shelves aren’t allowed to thin or collapse. Basically, this design assumes that any ocean-driven melting is exactly compensated by the flow of the upstream glacier such that ice shelf geometry remains constant.

Of course this is not a good assumption, because we’re observing ice shelves thinning all over the place, and a few have even collapsed. But removing this assumption would necessitate coupling with an ice sheet model, which presents major engineering challenges. We’re working on it – at least ten different research groups around the world – and over the next few years, fully coupled ice-sheet/ocean models should be ready to use for the most reliable sea level rise projections yet.



from Skeptical Science https://ift.tt/2lZDUPr


Republicans try to save their deteriorating party with another push for a carbon tax


The Republican Party is rotting away. The problem is that GOP policies just aren’t popular. Most Americans unsurprisingly oppose climate denial, tax cuts for the wealthy, and putting children (including toddlers) in concentration camps, for example.

The Republican Party has thus far managed to continue winning elections by creating “a coalition between racists and plutocrats,” as Paul Krugman put it. The party’s economic policies are aimed at benefitting wealthy individuals and corporations, but that’s a slim segment of the American electorate. The plutocrats can fund political campaigns, but to capture enough votes to win elections, the GOP has resorted to identity politics. Research has consistently shown that Trump won because of racial resentment among white voters.

While that strategy has worked in the short-term, some Republicans recognize that it can’t work in the long-term, and they’re fighting to save their party from extinction.

Can a carbon tax save the GOP?

Climate change is one of many issues that divides the Republican Party. Like racial resentment, climate denial is a position held mostly by old, white, male conservatives. There are generational, ethnic, and gender gaps on climate change: 61% of Republicans under the age of 50 support government climate policies, compared to just 44% of Republicans over 50. Similarly, a majority of Hispanic and African Americans accept human-caused global warming, and 70% express concern about it, compared to just 41% of whites who accept the scientific reality and 50% who worry about it.

But the plutocratic wing of the GOP loves fossil fuels. Republican politicians rely on campaign donations from the fossil fuel industry, and quid pro quo requires them to do the industry’s bidding. It might as well be called the Grand Oil Party.

Otherwise, there is no reason the GOP should not unify behind a revenue-neutral carbon tax. This free-market, small-government climate policy – which taxes carbon pollution and returns all the revenue to American households – is indeed supported by many conservatives. A group of Republican elder statesmen created a coalition called the Climate Leadership Council to build conservative support for a revenue-neutral carbon tax. They’re now backed by Americans for Carbon Dividends (AfCD), led in part by former Republican Senate Majority Leader Trent Lott, in a renewed effort to build support for this policy.

AfCD recently released polling results showing that:

  • 55% of Americans believe US environmental policy is headed in the wrong direction (29% say it’s on the right track);
  • 81% of likely voters, including 58% of Strong Republicans, agree the government should take action to limit carbon emissions; and
  • by a 56% to 26% margin (including a 55% to 32% margin among Strong Republicans), Americans support a revenue-neutral carbon tax.

It’s not a wildly popular policy proposal, but it does have broad bipartisan support. It’s also a smart way to curb climate change with minimal economic impact, and in fact with a massive net economic benefit compared to unchecked climate change. That’s why economists overwhelmingly support a carbon tax.

The GOP was on the wrong side of history on civil rights and gay marriage and has paid the price, having largely become the party of old, straight, white men. Climate change is a similarly critical historical issue, but one that will directly impact every single American. Some smart Republicans recognize that the party can’t afford to be on the wrong side of history again on this issue.

Racial politics slapped a band-aid on the GOP’s gaping wound

Donald Trump managed to win the presidency in 2016 by stoking racial resentment among white Americans, but still lost the popular vote by a margin of nearly 3 million, and Republicans have only won the presidential popular vote once in the past two decades. They’re winning elections by relying on structural advantages (gerrymandering and weighting of rural votes), voter suppression, and mobilizing older white voters.

Trump seems to be doubling down on the latter strategy ahead of the 2018 midterm elections, for example by claiming that illegal immigrants are “infesting” America and by putting immigrant children in concentration camps. While only 25% of Americans support separating immigrant children from their parents, 49% of Republicans favor the policy. It’s a recipe for turning out the racist base (who also tend to be climate deniers), but not for winning a general election, especially over the long term as America becomes less white and as younger, more tolerant Americans become a larger proportion of the electorate.

When asked about the child concentration camps at a press conference, Senator David Perdue (R-GA) made the connection between the GOP coalition of plutocrats and racists, telling reporters:

Click here to read the rest



from Skeptical Science https://ift.tt/2zd9NNV



First contact

Salma Fahmy, team member on the Solar Orbiter Project Office at ESTEC Credit: ESA/D. Lakey


ESA’s Solar Orbiter team have been busy for the last few months preparing for the first ‘Spacecraft Validation Test’ – referred to in engineering-speak as ‘SVT-0’ – which is the first opportunity for the mission control team to establish a data link to the actual flight hardware and send commands to the spacecraft.

The mission controllers are working at ESA’s ESOC control centre in Darmstadt this week, joined by representatives from the mission’s two instrument teams, the ESA Project Team based at ESTEC in the Netherlands and the AirbusDS-UK industrial team. The spacecraft itself is located in Stevenage, UK.

Jose-Luis Pellon-Bailon & Matthias Eiblmaier Credit: ESA/D. Lakey


Yesterday and today, the team is validating flight control procedures and the database that describes the commands and telemetry of the spacecraft. It’s a lot of work, but at the end of it a real milestone will have been passed.

Spacecraft Operations Engineer Daniel Lakey explains, “This is the culmination of months of work by us, our colleagues across ESA and, of course, the teams at AirbusDS-UK, who are leading the build of the spacecraft and are supporting these test connections from the cleanroom in Stevenage.”

“We have a list of over 250 procedures that we will methodically go through, to ensure they are ready for flight. This first contact with the real spacecraft is an exciting step after having spent years working on paper!”

More tests are planned over the coming months, and next year.

#Solo

#ESOC



from Rocket Science https://ift.tt/2NrqlVJ

A grave tale: The case of the corpse-eating flies


Dozens of ceramic vessels from West Mexico, part of the collection of Emory's Michael C. Carlos Museum, were believed to be "grave goods," traditionally placed near bodies in underground burial chambers almost 1,500 years before the Aztecs. The compact figures depict humans and animals engaged in everyday activities, vividly capturing a place and time. Residue and wear patterns suggested that the vessels had once been filled with food and drink, perhaps to accompany the departed along their journey.

But were the figures authentic?

Seeking answers, the museum invited forensic anthropologist Robert Pickering, who uses entomology among other techniques, to examine the vessels with the help of Emory scholars.

His quest? Locate telltale insect casings likely left by coffin flies, corpse-eating insects that fed on decomposing bodies interred in the ancient underground shaft tombs of Western Mexico.

"Not to be impolite, but where you have dead people, you have bugs," Pickering explains.

Read more about the project here.

from eScienceCommons https://ift.tt/2KFnoCp


New Atlanta NMR Consortium links resources of Emory, Georgia Tech and Georgia State

The Atlanta Nuclear Magnetic Resonance Consortium "lowers the activation energy to take advantage of partners’ expertise," says Emory chemist David Lynn.

NMR – nuclear magnetic resonance – is a powerful tool to investigate matter. It is based on measuring the interaction between the nuclei of atoms in molecules in the presence of an external magnetic field; the higher the field strength, the more sensitive the instrument.

For example, high magnetic fields enable measurement of analytes at low concentrations, such as the compounds in the urine of blue crabs, opening doors to understanding how chemicals invisibly regulate marine life. High-field NMR also allows scientists to “see” the structure and dynamics of complex molecules, such as proteins, nucleic acids, and their complexes.

NMR is used widely in many fields, from biochemistry, biology, chemistry, and physics to geology, engineering, pharmaceutical sciences, medicine, food science, and many others.

NMR instruments, however, are a major investment. The most advanced units can cost up to several million dollars apiece, and maintenance can cost tens of thousands of dollars a year. The investment in people is also significant: it can take years of training before a user can perform some of the most advanced techniques.

For these and other reasons, Emory University, Georgia Institute of Technology, and Georgia State University have formed the Atlanta NMR Consortium. The aim is to maximize use of institutional NMR equipment by sharing facilities and expertise with consortium partners.

Through the consortium, students, faculty, and staff of a consortium member can use the NMR facilities of their partners. The cost to a consortium member is the same as what the facility charges its own constituents.

“NMR continues to grow and develop because of technological advances,” says David Lynn, a chemistry professor at Emory University. To keep up, institutions need to keep buying new, improved instruments. Such a never-ending commitment is becoming untenable and redundant across Atlanta, Lynn says. Combining forces is the way to go.


Immediately, the consortium offers access to the most sensitive instruments now in Atlanta – the 700- and 800-MHz units at Georgia Tech. Georgia Tech invested more than $5 million to install the two high-field units, as well as special capabilities, in 2016.
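As a back-of-the-envelope illustration (not from the article), the megahertz figures used to name these spectrometers refer to the proton resonance frequency, which maps directly to magnetic field strength through the Larmor relation ν = γB₀/2π, with γ/2π ≈ 42.577 MHz per tesla for protons. A short sketch:

```python
# Converting an NMR spectrometer's quoted proton frequency (MHz) to its
# magnetic field strength (tesla) via the Larmor relation nu = gamma*B0/(2*pi).
GAMMA_OVER_2PI_MHZ_PER_T = 42.577  # proton gyromagnetic ratio / 2*pi, MHz/T

def field_strength_tesla(proton_freq_mhz: float) -> float:
    """Return B0 in tesla for a spectrometer quoted at proton_freq_mhz."""
    return proton_freq_mhz / GAMMA_OVER_2PI_MHZ_PER_T

for mhz in (400, 700, 800):
    print(f"{mhz} MHz spectrometer -> B0 = {field_strength_tesla(mhz):.1f} T")
```

So the 700- and 800-MHz units at Georgia Tech correspond to fields of roughly 16.4 and 18.8 tesla, respectively.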

Partners can gain access to Georgia State’s large variety of NMR probes. Solid-state capability, which is well established in Emory and advancing at Georgia Tech, will be available to partners.

Needless to say, the consortium offers alternatives when an instrument at a member institution malfunctions.

Beyond maximizing use of facilities, the consortium offers other potential benefits.

“The biggest benefit is community,” says Anant Paravastu. Paravastu is an associate professor in the Georgia Tech School of Chemical and Biomolecular Engineering. He is also a member of the Parker H. Petit Institute for Bioengineering and Bioscience (IBB).

“Each of us specializes the hardware and software for our experiments,” Paravastu says. “As we go in different directions, we will benefit from a cohesive community of people who know how to use NMR for a wide range of problems.”

Paravastu previously worked at the National High Magnetic Field Laboratory at Florida State University. That national facility sustains a large community of NMR researchers who help each other build expertise, he says. “We Atlanta researchers would benefit from a similar community, and not only for the scientific advantage.”

Both Lynn and Paravastu believe the consortium will help the partners jointly compete for federal grants for instrumentation. “A large user group will make us more competitive,” Lynn says. “The federal government would much rather pay for an instrument that will benefit many scientists rather than just one research group in one university,” Paravastu says.

“The most important goal for us is the sharing of our expertise,” says Markus Germann, a professor of chemistry at Georgia State. A particular expertise there is the study of nucleic acids. More broadly, Georgia State has wide experience in solution NMR. Researchers there have developed NMR applications to study complex structures of biological and clinical importance.

Germann offers some examples:
Structure and dynamics of damaged and unusual DNA
Structure and dynamics of protein—DNA and protein—RNA complexes
Structural integrity of protein mutants
Small ligand-DNA and -RNA binding for gene control
Protein-based contrast agents for magnetic resonance imaging

“For me, there’s a direct benefit in learning from people at Georgia State about soluble-protein structure,” Paravastu says. He studies the structures of peptides; of particular interest are certain water-soluble states of beta-amyloid peptide, in Alzheimer’s disease. These forms, Paravastu says, have special toxicity to neurons.

Paravastu also studies proteins that self-assemble. “People at Emory have a different approach to studying self-assembling proteins,” he says. “We have a lot of incentive to strengthen our relationships with other groups.”

“Different labs do different things and have different expertise,” Lynn says. “The consortium lowers the activation energy to take advantage of partners’ expertise.”

Even before the consortium, Germann notes, his lab has worked with Georgia Tech’s Francesca Storici on studies of the impact of ribonucleotides on DNA structure and properties. Storici is a professor in the School of Biological Sciences and a member of IBB.

Germann has also worked with Georgia Tech’s Nicholas Hud on the binding of small molecules to duplex DNA. Hud is a professor in the School of Chemistry and Biochemistry and a member of IBB.

“While collaborations between researchers at Atlanta universities are not new,” Paravastu says, “the consortium will help facilitate ongoing and new collaborations.”

What will now be tested is whether the students, faculty, and staff of the partners will take advantage of the consortium.

Travel from one institution to another is a barrier, Lynn says. “Are people going to travel, or will they find another way to solve the problem? How do you know that the expertise over there will really help you?” he asks.

“The intellectual barrier is very critical,” Lynn says. “We address that through the web portal.”

The website defines the capabilities, terms of use, training for access, and institutional fees, among other details. Eventually, Lynn says, it will be a place to share papers from the consortium partners.

“Like many things in life, the consortium is about breaking barriers,” Paravastu says. It’s about students meeting and working with students and professors outside their home institutions.

Already some partners share a graduate-level NMR course. For the long-term, Paravastu suggests, the partners could work together on training users to harmonize best practices and ease the certification to gain access to facilities.

“We can think of students being trained by the consortium rather than just by Georgia Tech, or Emory, or Georgia State,” Paravastu says. “By teaming up, we can create things that are bigger than the sum of the parts.”

Written by Maureen Rouhi, Georgia Tech 

Related:
How protein misfolding may kickstart chemical evolution
Peptides may hold 'missing link' to life


from eScienceCommons https://ift.tt/2z4ywUO

Cell ‘chatter’ discovery could open clinical trial opportunity for fatal childhood brain tumour

Brain tumours are hard to treat. But even this is a harrowing understatement for some forms of the disease.

Diffuse intrinsic pontine glioma (DIPG) is one such example. These rare brain tumours almost exclusively affect children, and they’re invariably fatal.

“Almost all children with DIPG sadly die within a couple of years of diagnosis,” says Professor Chris Jones from the Institute of Cancer Research, London, a Cancer Research UK-funded expert on the disease.

Prof Chris Jones and his team are finding the key gene faults driving childhood brain tumours.

“There aren’t any effective treatments.”

One of the main reasons that the outlook for DIPG is so poor is down to where it grows in the brain. These tumours start in the brainstem, which lies at the base of the brain and connects the spinal cord with deeper brain regions. This crucial piece of machinery controls many of the body’s vital processes, such as breathing and heartbeat.

That means surgery – a cornerstone treatment for many cancers – is out of the question. Drugs are also notoriously ineffective for brain tumours, because most are shut out by the protective blood-brain barrier. DIPG is no exception, and Jones says that no chemotherapy has convincingly shown a beneficial effect, despite many clinical trials testing a variety of drugs. This leaves radiotherapy as the only option, but it isn’t a cure.

“Radiotherapy is the only treatment that’s been shown to have any effect on DIPG,” he says.

“Usually patients will be given a drug as well in an attempt to find something that works, but the cancer usually comes back within 6-9 months.”

Difficult by name and by nature

This situation leaves a pressing need for new treatments. Behind every cancer treatment is research, but that’s where the nature of DIPG presents scientists with yet another challenge.

Studying samples of patients’ tumours in the lab helps scientists understand the biology of the disease and leads them towards new treatments. But for many years biopsy samples weren’t taken from children with DIPG, because the procedure was too dangerous due to the tumours’ delicate position. That left scientists with a shortage of tissue to work with and learn from.

“DIPG is diagnosed by imaging, so questions were raised over the need for invasive and risky biopsies. That set back the collection of tissue for study,” Jones says.

But the field was reawakened in 2012 when a new way of taking biopsy samples with a thin needle was shown to be safe. Using this brain tissue, and also samples taken from children who have died from the disease, scientists can now grow DIPG cells in the lab and in mice, boosting research efforts and uncovering the genes and molecules that may fuel the disease.

And Jones’ latest research, published in Nature Medicine, is testament to how important these samples are.

More than meets the eye

Scientists already knew that DIPG doesn’t grow as a uniform bundle of cells. Instead, these tumours resemble a diverse patchwork of cells with distinct genetic and molecular fingerprints.

“Down the microscope it looks like adult glioblastoma,” says Jones. “So, a variety of drugs designed against the biology of this tumour type have been tried in DIPG patients, but none of them have worked.”

The tumour isn’t limited to the brainstem either; it spreads throughout the brain, seeding new patchworks of cancer cells in distant regions.

We think these different populations of cells are cooperating, helping one another to grow or spread.

– Prof Chris Jones

Armed with this knowledge, Jones and his team studied the brains of children who had died of DIPG, comparing the genetic features of different populations of cells. By creating a map of their DNA faults, the scientists showed that spreading cells move early in the tumour’s development, although they tended to grow more slowly than those in the original tumour.

Next, they grew samples taken from the brains of children with DIPG into balls of cells in the lab, observing their behaviour and characteristics.

“We found that they were very different; some grew very fast while others didn’t, and some could spread extensively when others couldn’t,” Jones says.

But when they mixed cells together, those that previously had weaker characteristics became more aggressive. “We think these different populations are cooperating, helping one another to grow or spread,” he adds.

This helping hand seems to come from molecular signals that the cancer cells send out, since bathing cells in the liquid that more aggressive cells had been grown in also boosted their ability to divide and spread.

Trials and tribulations

Alongside revealing the intricacies of the disease, Jones hopes that his research brings new, smarter ways to treat DIPG.

“This work opens up a new way of thinking about how we may treat tumours,” he says. “If we can better understand what these different populations of cells are doing, and how they’re interacting, maybe we can identify which ones are the key to go after with drugs.”

This work opens up a new way of thinking about how we may treat tumours.

– Prof Chris Jones

With support from Cancer Research UK, the next stage of this research aims to find out precisely that. Hopefully, discoveries that emerge could make their way towards patients sooner, as Jones is also part of a Cancer Research UK-funded clinical trial that’s treating children with DIPG based on the biology of their disease. Because the study is designed to be adaptive, meaning the treatment a child receives on the trial isn’t set in stone, promising new treatments being developed could be added in and tested out in the trial as it progresses.

Supporting this type of research is exactly why we’ve made brain tumours a top priority, and why we’re committing an extra £25 million over the next 5 years specifically for research in this area.

“New, targeted drugs are now starting to make their way into clinical trials for DIPG,” says Jones.

“We don’t yet know whether they’ll work, but ultimately we want to combine targeted drugs with other treatments, such as radiotherapy or immunotherapy.

“For the first time, these kinds of trials are now opening for DIPG.”

And it’s research like this that hopefully means cancers like DIPG will no longer be defined by how hard they are to treat.

Justine 

Vinci, M. et al. (2018). Functional diversity and cooperativity between subclonal populations of pediatric glioblastoma and diffuse intrinsic pontine glioma cells. Nature Medicine. https://ift.tt/2z3UJSP.



from Cancer Research UK – Science blog https://ift.tt/2MJhiOT
