The Most Wanted Particle (Synopsis) [Starts With A Bang]


“Innovation is taking two things that already exist and putting them together in a new way.” -Tom Freston



Yes, the Universe can be considered the ultimate innovator, taking the fundamental particles and forces of nature and assembling them into the entirety of what we know, interact with, and observe today.


Illustration credit: NASA / CXC / M.Weiss.



But what is it all made out of, at a fundamental level? And how did we figure it all out? ATLAS physicist and University College London professor Jon Butterworth is all set to give a free public lecture (live-streamed, online) tomorrow, and I’ll be live-blogging it here!


Image credit: Perimeter Institute.



Check it out at 7 PM EDT / 4 PM PDT or, if that’s inconvenient, come back after it’s over and watch the permanent version. See you then!






from ScienceBlogs http://ift.tt/19zrMMC


With Global Warming, Will Cold Outbreaks Be Less Common? [Greg Laden's Blog]

Maybe, maybe not. There is a new paper that looks at what climate scientists call “synoptic midlatitude temperature variability” and the rest of us call “cold snaps” and “heat waves.” The term “synoptic” simply means occurring over a reasonably large area, the sort of scale you would expect a cold snap or heat wave to cover. Specifically, the paper (Physics of Changes in Synoptic Midlatitude Temperature Variability, by Tapio Schneider, Tobias Bischoff and Hanna Plotka, published in the Journal of Climate) concludes that as human-caused greenhouse gas pollution increases, the frequency of cold snaps in the Northern Hemisphere will go down. Naturally, as temperatures warm up we would expect the highs to get higher, the averages to be higher, and the lows to be higher as well (and thus fewer cold spells). But the new research argues that cold spells (the cold extremes at synoptic spatial scales) will become even less common than the warmer averages alone would imply. This is potentially controversial and conflicts with other recently published research.


The paper is rather technical, so I’ll give you the abstract so you can go take a class in climate science, then come back and read it:



This paper examines the physical processes controlling how synoptic midlatitude temperature variability near the surface changes with climate. Because synoptic temperature variability is primarily generated by advection, it can be related to mean potential temperature gradients and mixing lengths near the surface. Scaling arguments show that the reduction of meridional potential temperature gradients that accompanies polar amplification of global warming leads to a reduction of the synoptic temperature variance near the surface. This is confirmed in simulations of a wide range of climates with an idealized GCM. In comprehensive climate simulations (CMIP5), Arctic amplification of global warming similarly entails a large-scale reduction of the near-surface temperature variance in Northern Hemisphere midlatitudes, especially in winter. The probability density functions of synoptic near-surface temperature variations in midlatitudes are statistically indistinguishable from Gaussian, both in reanalysis data and in a range of climates simulated with idealized and comprehensive GCMs. This indicates that changes in mean values and variances suffice to account for changes even in extreme synoptic temperature variations. Taken together, the results indicate that Arctic amplification of global warming leads to even less frequent cold outbreaks in Northern Hemisphere winter than a shift toward a warmer mean climate implies by itself.
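
The core scaling argument can be stated compactly. In rough notation (my paraphrase; the same quantities appear, inverted, in panel (d) of the figure caption at the end of this post), the synoptic variance of near-surface potential temperature goes like the square of a mixing length times the square of the meridional gradient of the mean potential temperature:

```latex
% Advective scaling for synoptic potential temperature variance:
% variance ~ (mixing length)^2 x (meridional mean gradient)^2
\overline{\theta'^2} \;\sim\; L'^{\,2} \left( \partial_y \overline{\theta} \right)^{2}
```

Polar amplification preferentially warms the Arctic, which weakens the gradient; unless mixing lengths grow enough to compensate, the variance, and with it the cold tail of the temperature distribution, shrinks.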



Why is this controversial? Because we have seen research in recent years indicating that with Arctic amplification (the Arctic warming faster than the rest of the planet as global warming proceeds), the way warm air is redistributed from equatorial regions towards the poles changes, which in turn changes the behavior of the polar jet stream. Rather than being relatively straight as it rushes around the globe, separating temperate and sub-polar regions (defining the boundaries of trade winds and steering storms along), the jet stream is thought to have become more often very curvy, forming what are called Rossby waves. These waves, recent research has suggested, can become stationary, and the wind within them moves relatively slowly. A curvy jet stream forms meteorological features such as the “ridiculously resilient ridge,” which has brought California nearly continuous dry conditions for at least two years now, resulting in an unprecedented drought. A curvy jet stream also forms meteorological features called “troughs,” such as the excursion known last year (incorrectly) as the Polar Vortex, which returned in less severe form this year: a bend in the jet stream that brings polar air farther south than usual, causing a synoptic cold spell of extensive duration. These changes in the jet stream also seem to have brought some unusual winter weather to the American Southeast last year, and have been implicated in steering Superstorm Sandy into the US Northeast a few years ago. And that flood in Boulder, and the flood in Calgary, and the June Of All Rain here in Minnesota last year, and so on. This is the main global-warming-caused change in weather systems responsible for what has been termed “weather whiplash,” and it may rank up there with increased sea surface temperatures as a factor underlying the observable, day-to-day effects of human-caused climate disruption.


I’ve talked about jet streams, Rossby waves, and such in a few places:



More recently still, a paper by Dim Coumou, Jascha Lehmann, and Johanna Beckmann, “The weakening summer circulation in the Northern Hemisphere mid-latitudes,” argued:



Rapid warming in the Arctic could influence mid-latitude circulation by reducing the poleward temperature gradient. The largest changes are generally expected in autumn or winter but whether significant changes have occurred is debated. Here we report significant weakening of summer circulation detected in three key dynamical quantities: (i) the zonal-mean zonal wind, (ii) the eddy kinetic energy (EKE) and (iii) the amplitude of fast-moving Rossby waves. Weakening of the zonal wind is explained by a reduction in poleward temperature gradient. Changes in Rossby waves and EKE are consistent with regression analyses of climate model projections and changes over the seasonal cycle. Monthly heat extremes are associated with low EKE and thus the observed weakening might have contributed to more persistent heat waves in recent summers.
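
For readers unfamiliar with the jargon: eddy kinetic energy is a standard textbook quantity (this definition is general knowledge, not something specific to this paper), the kinetic energy per unit mass carried by deviations of the horizontal wind from its mean:

```latex
% Eddy kinetic energy per unit mass; primes denote deviations of the
% horizontal wind components (u, v) from the time/zonal mean
\mathrm{EKE} = \tfrac{1}{2}\left( \overline{u'^2} + \overline{v'^2} \right)
```

Low EKE means weak, slow-moving eddies, which is why it serves as a measure of Rossby wave amplitude and storm activity.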



Coumou notes that “when the great air streams in the sky above us get disturbed by climate change, this can have severe effects on the ground. While you might expect reduced storm activity to be something good, it turns out that this reduction leads to a greater persistence of weather systems in the Northern hemisphere mid-latitudes. In summer, storms transport moist and cool air from the oceans to the continents bringing relief after periods of oppressive heat. Slack periods, in contrast, make warm weather conditions endure, resulting in the buildup of heat and drought.” Co-author Jascha Lehmann adds, “Unabated climate change will probably further weaken summer circulation patterns which could thus aggravate the risk of heat waves. Remarkably, climate simulations for the next decades, the CMIP5, show the same link that we found in observations. So the warm temperature extremes we’ve experienced in recent years might be just a beginning.”


These seem to be conflicting views.


So how do the authors of the new paper, which stands in stark contrast with these other recent findings, explain the difference? I asked lead author Tapio Schneider to comment.


He told me that yes, there is a tension between the other work (the Coumou et al. paper) and his work, but there is also overlap and similarity. “Coumou et al. state that amplified warming of the Arctic should lead to reduced zonal jet speeds at fixed levels in the troposphere. This is an uncontroversial and well known consequence of thermal wind balance. Then they say that the reduced zonal jet speeds may lead to reductions in eddy kinetic energy (EKE), which is a measure of Rossby wave amplitude. That this can happen is likewise well documented. What affects eddy kinetic energies is a quantity known as the mean available potential energy (MAPE), which depends on temperature gradients (which also affect jet speeds) and other quantities, such as the vertical temperature stratification. Coumou et al. focus only on one factor influencing the EKE, the temperature gradient.”


The tension, he told me, is in what the other researchers (Coumou et al.) draw from their results. “They show that warm summer months usually are associated with low EKE in the current climate, consistent with common knowledge: unusually warm conditions are associated with relatively stagnant air. They use this correlation in the current climate to suggest that reduced EKE in a future climate may also imply more (monthly) heat waves. While intuitive, this is not necessarily so. They say their suggestion is not in contradiction with our results because we considered temperature variability on shorter timescales (up to about two weeks), while their suggestion for more heat waves is made for monthly timescales. However, why the longer timescales should behave so differently is not made clear.”


As an onlooker, I take the following from this. First, there may be differences in time (and maybe space) scales of the analyses that might make them less comparable than ideal. Second, Schneider and Bischoff seem to be emphasizing synoptic cold outbreaks specifically. Schneider told me that they did look at temperature variability over longer time scales, but that did not make it into the paper. He said, “Even on monthly timescales, midlatitude temperature variance generally decreases as the climate warms, with a few regional exceptions (e.g., over Europe).”
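
To make “synoptic timescales” concrete: the paper’s figure caption (below) says synoptic variations are bandpass filtered to 3–15 days. Here is a minimal sketch of that kind of calculation on a daily temperature series; the filter order and method are my assumptions, not details taken from the paper.

```python
# A minimal sketch of isolating "synoptic" (3-15 day) variability from
# a daily temperature series, in the spirit of the bandpass filtering
# described in the paper's figure caption. Filter choices are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def synoptic_variance(temp_daily, long_period=15.0, short_period=3.0):
    """Variance of the 3-15 day band of a daily temperature series."""
    nyquist = 0.5                                # cycles/day for daily sampling
    low = (1.0 / long_period) / nyquist          # low cutoff, normalized
    high = (1.0 / short_period) / nyquist        # high cutoff, normalized
    b, a = butter(4, [low, high], btype="band")  # 4th-order Butterworth bandpass
    synoptic = filtfilt(b, a, temp_daily)        # zero-phase filtering
    return np.var(synoptic)

# Toy example: two years of daily "temperatures" (white noise)
rng = np.random.default_rng(42)
print(synoptic_variance(rng.normal(size=730)))
```

Everything slower than about two weeks (monthly heat waves included) is filtered out of such a measure, which is exactly where the two groups' claims stop overlapping.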


Also, note that Schneider, Bischoff and Plotka, in this paper, do not address the specific problem of stationary Rossby waves, which probably has more to do with rainfall (lacking or heavy) than temperature, but is an important part of current changes in weather.


There has been some additional criticism of Schneider’s work on social media and elsewhere, and perhaps the most significant point is this: Schneider, Bischoff and Plotka may have oversimplified the conditions in at least one of their models by leaving out continents. Also, Schneider et al. have been picked up by a few of the usual suspects as saying that climate change will result in milder winters or less severe storms. This is not actually what the paper says. When people think “milder winter” they usually mean fewer severe storms, but various lines of evidence suggest that the northeastern US will experience more storms. For example, see “Changes in U.S. East Coast Cyclone Dynamics with Climate Change” and “Global Warming Changing Weather in the US Northeast.”


Schneider, Bischoff and Plotka are well respected scientists, and they are using methods that are generally accepted within climate science, yet they have come to a conclusion different from what some of their colleagues have proposed. This is, in my opinion, a very good thing, and certainly interesting. I would worry if every climate scientist came up with the same result every time they tried something slightly different. The patterning (or changes in patterning) of air and sea currents under global warming has been the subject of a great deal of recent research, and there is strong evidence that changes are happening (such as in sea currents in the North Atlantic, and the jet stream effects discussed here) that have not been directly observed before. Because of the high level of internal (natural) variability, climate science works best when chunks of time 20 or 30 years long are considered. If we are seeing changes now that really started to take off only five or ten years ago, and that are still dynamically reorganizing, how can the more ponderous, long-term and large-scale thinking of climate science adjust to and address those rapid changes? Well, we are seeing that process now in the climate change literature, and this paper is one example of it. I look forward to an honest, fair, and vigorous discussion in the peer reviewed literature.




Caption for the figure at the top of the post: FIG. 6. CMIP5 multimodel median values of 850-hPa potential temperature statistics for (left) DJF and (right) JJA. (a) Synoptic potential temperature variance θ′² for the years 1980–99 of the historical simulations. (b) Percentage change of the synoptic potential temperature variance θ′² in the years 2080–99 of the RCP8.5 simulations relative to the years 1980–99 of the historical simulations shown in (a). (c) Percentage change of the squared meridional potential temperature gradient (∂yθ)² in the years 2080–99 of the RCP8.5 simulations relative to the years 1980–99 of the historical simulations. (To calculate the gradients, mean potential temperatures were smoothed with a spherical harmonics filter that damped spherical wavenumbers greater than 6 and completely filtered out wavenumbers greater than 10.) (d) Percentage change of the squared mixing length L′² = θ′²/(∂yθ)² implied by the variance and meridional potential temperature gradient, in the years 2080–99 of the RCP8.5 simulations relative to the years 1980–99 of the historical simulations. Synoptic potential temperature variations are bandpass filtered to 3–15 days. In the dark gray regions, topography extends above the mean 850-hPa isobar. The light gray bar blocks out the equatorial region, where potential temperature gradients are weak and their percentage changes become large.






from ScienceBlogs http://ift.tt/1IjFiiT


To What Extent Should Organisms Be Collected from the Wild?


Source: DoNow Science









from QUEST http://ift.tt/19zhKuY


Bubbly Soda Science: Weekly Science Activity




Making your own carbonated beverage can be a lot of fun. How much fizz do you like? What flavor? How sweet? The process of carbonating water and serving up a custom beverage is easier than ever before thanks to commonly available household devices like Sodastream®. But a pressurized approach to creating a carbonated beverage is not the only way to prepare a refreshing soda-style drink.



With a few simple ingredients, students can experiment with mixing up their own soda-style beverages at home using sodium bicarbonate and citric acid mixed with water. Experimenting with the quantity and ratio of these ingredients lets them observe the chemical reaction that occurs. But taste testing different ratios of the ingredients makes the whole process even more fun. Mix in a sweetener or natural flavor (like lemon juice), and see if you can find the perfect balance of ingredients for your taste buds: not too fizzy, not too gritty, not too sweet. Can you find the "just right" combination? Does everyone in your house agree? Find out with this easy kitchen chemistry family science experiment.
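
For the chemistry-minded, the fizz comes from a textbook acid-base reaction (standard chemistry, not something specific to the Science Buddies write-up): each molecule of citric acid neutralizes three molecules of sodium bicarbonate, releasing carbon dioxide gas (the bubbles) along with water and sodium citrate:

```latex
% Citric acid + baking soda -> sodium citrate + water + carbon dioxide
\mathrm{C_6H_8O_7 + 3\,NaHCO_3 \;\longrightarrow\; Na_3C_6H_5O_7 + 3\,H_2O + 3\,CO_2\!\uparrow}
```

That built-in 1:3 ratio is why the proportion of the two powders matters as much as the total amount: whichever ingredient runs out first limits the fizz, and any excess is what you taste as gritty or sour.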


You and your kids can explore this hands-on science activity using either the full project directions from Science Buddies or the shorter activity version:





For some non-edible fizzy science fun, try the Making Homemade Bath Bombs family science activity!


Note: The food coloring is just for fun. For the purists out there, no color is necessary!














from Science Buddies Blog http://ift.tt/1CJeGYD




Auroras in motion



Northern Lights from Sergio Garcia Rill on Vimeo.


Sergio Garcia Rill made this video from over 4,450 individual photos taken in the course of two nights at Chena Hot Springs in Alaska. Sergio said:



This was my first time shooting the northern lights, and I quickly discover that unlike shooting regular nightscapes one set of settings won’t work for all night. It depends on the intensity and speed of the aurora. Therefore the settings on the photos range from 1/3 of a second to 8 seconds, and from ISO 4,000 to 10,000. But everything was shot at 14mm and f/2.8; using Nikon D750 and D600 cameras.


Please excuse the sudden flashes of light, there were a lot of tourist nearby trying to get aurora photos with flash, or just afraid of stepping out without a flashlight, I tried to remove most of it but some happened in the middle of a nice aurora sequence so I decided to keep them.



Thank you so much Sergio for sharing this with us!


See more of Sergio’s work at his website.


Have you donated yet in EarthSky’s annual fund-raising campaign? Help EarthSky keep going. We need you!






from EarthSky http://ift.tt/1EZQ0Ig



Anti-Vaxx Loses its Edge [Page 3.14]

It’s getting harder and harder to hate vaccines in America. The trend will only continue as diseases like measles re-emerge because of some parents’ paranoia. Much of the anti-vaccine sentiment of the last twenty years resulted directly from scientific fraud—and most anti-vaccine propaganda likewise employs scientific terminology to sound credible. But more people are waking up to the fact that vaccines simply do not cause autism or other mental ‘disorders,’ and public figures are altering their stances accordingly. Some Republicans are embracing the right to withhold vaccines from a child based solely on the principle of parental sovereignty. Meanwhile, celebrity Bill Maher says he is really only against the flu vaccine despite arguing for the basic infallibility of an ‘all-natural’ lifestyle. Actress Mayim Bialik said on Facebook “I am not anti-vaccine. my children are vaccinated” despite her reputation for anti-vaccine attitudes. Watch as public opinion continues to shift: anti-vaxxers make indefensible decisions based on implausible explanations, endangering their children and other community members in the process.






from ScienceBlogs http://ift.tt/1EyQmLi


STEM Is Not an Alien Menace [Uncertain Principles]

Everybody and their extended families have been sharing around the Fareed Zakaria piece on liberal education. This, as you might imagine, is relevant to my interests. So I wrote up a response over at Forbes.


The basic argument of the response is the same thing I’ve been relentlessly flogging around here for a few years: that while I’m all for a broad education, the notion that studying a STEM subject and studying “the human condition” are in opposition or even cleanly separable is just foolish. But it’s a great excuse to start that argument at Forbes, so…






from ScienceBlogs http://ift.tt/1CIb2OK


Like coastlines? You’ll like this video



If you like science, and you like spending a day along a coastline – and if you live in the Pacific Northwest or Alaska – this citizen science project might be for you. It’s called COASST, and it’s a group of scientists and volunteers who monitor beach-cast seabird carcasses to learn more about bird populations in their local ecosystems.


Julia K. Parrish, the Executive Director of COASST, explains it all in this video.


It looks really worthwhile, and like a lot of fun!


Visit COASST’s website.






from EarthSky http://ift.tt/1Muc0Vh



How Sea Floor Ecosystems Are Damaged By, And Recover From, Abrupt Climate Change [Greg Laden's Blog]

A new study by Sarah Moffitt, Tessa Hill, Peter Roopnarine, and James Kennett (Response of seafloor ecosystems to abrupt global climate change) gets a handle on the effects of relatively rapid warming, and the associated oxygen loss in the sea, on invertebrate communities. The study looked at a recent warming event (the end of the last glacial) in order to understand the present warming event, which is the result of human-caused greenhouse gas pollution.


Here is what is unique about the study. A 30-foot-deep core, representing the time period from 3,400 to 16,100 years ago, was raised from a site in the Pacific, and the researchers tried to identify and characterize all of the complex invertebrate remains in the core. That is not usually how it is done. Typically a limited number of species, and usually only microscopic surface invertebrates (Foraminifera), are identified and counted. There are good reasons it is done that way. But the new study looks instead at non-single-celled invertebrates (i.e., clams and such) typically found at the bottom, not the top, of the water column. This study identified over 5,400 fossils and trace fossils from Mollusca, Echinodermata, Arthropoda, and Annelida (clams, worms, etc.).
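
Some quick arithmetic shows why such a core can resolve century-scale events. Using the core length and time span given in the figure caption at the end of this post (a back-of-envelope sketch, assuming roughly steady deposition):

```python
# Back-of-envelope sedimentation rate for the core described above,
# using the length and time span from the figure caption, and assuming
# roughly constant deposition (my approximation, not from the paper).
core_length_m = 9.2                 # core length (caption: 9.2 m)
span_years = 16_100 - 3_400         # years represented by the core
rate_mm_per_yr = core_length_m * 1000 / span_years
print(f"sedimentation rate: {rate_mm_per_yr:.2f} mm/yr")       # ~0.72 mm/yr
print(f"1 cm of core spans ~{10 / rate_mm_per_yr:.0f} years")  # ~14 years
```

At roughly 0.7 mm of sediment per year, a centimeter of core spans about 14 years, comfortably fine enough to distinguish century-scale from millennium-scale recovery.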


Complex invertebrates are important because of their high degree of connectivity in an ecosystem. In the sea, a clam, crab, or sea cucumber may be the canary in the proverbial coal mine. Study co-author Peter Roopnarine says, “The complexity and diversity of a community depends on how much energy is available. To truly understand the health of an ecosystem and the food webs within, we have to look at the simple and small as well as the complex. In this case, marine invertebrates give us a better understanding of the health of ecosystems as a whole.”


The most important finding of the study is this: the marine ecosystem sampled by this core underwent dramatic changes, including local extinctions, and took something like 1,000 years to recover from them. The amount of change in bottom ecosystems under these conditions was previously not well known, and recovery was previously assumed to be much faster, on the order of a century.


From the abstract of the paper:



Anthropogenic climate change is predicted to decrease oceanic oxygen (O2) concentrations, with potentially significant effects on marine ecosystems. Geologically recent episodes of abrupt climatic warming provide opportunities to assess the effects of changing oxygenation on marine communities. Thus far, this knowledge has been largely restricted to investigations using Foraminifera, with little being known about ecosystem-scale responses to abrupt, climate-forced deoxygenation. We here present high-resolution records based on the first comprehensive quantitative analysis, to our knowledge, of changes in marine metazoans … in response to the global warming associated with the last glacial to interglacial episode. The molluscan archive is dominated by extremophile taxa, including those containing endosymbiotic sulfur-oxidizing bacteria (Lucinoma aequizonatum) and those that graze on filamentous sulfur-oxidizing benthic bacterial mats (Alia permodesta). This record … demonstrates that seafloor invertebrate communities are subject to major turnover in response to relatively minor inferred changes in oxygenation (>1.5 to <0.5 mL·L−1 [O2]) associated with abrupt (<100 y) warming of the eastern Pacific. The biotic turnover and recovery events within the record expand known rates of marine biological recovery by an order of magnitude, from <100 to >1,000 y, and illustrate the crucial role of climate and oceanographic change in driving long-term successional changes in ocean ecosystems.



Lead author Sarah Moffitt, of the UC Davis Bodega Marine Laboratory and Coastal and Marine Sciences Institute, notes, “In this study, we used the past to forecast the future. Tracing changes in marine biodiversity during historical episodes of warming and cooling tells us what might happen in years to come. We don’t want to hear that ecosystems need thousands of years to recover from disruption, but it’s critical that we understand the global need to combat modern climate impacts.”


There is a video:





Caption from the figure at the top of the post: Fig. 1. Core MV0811–15JC’s (SBB; 418 m water depth; 9.2 m core length; 34.37°N, 120.13°W) oxygen isotopic, foraminiferal, and metazoan deglacial record of the latest Quaternary. Timescale (ka) is in thousands of years before present, and major climatic events include the Last Glacial Maximum (LGM), the Bølling and Allerød (B/A), the Younger Dryas (YD), and the Holocene. (A) GISP2 ice core δ18O values (46). (B) Planktonic Foraminifera Globigerina bulloides δ18O values for core MV0811–15JC, which reflects both deglacial temperature changes in Eastern Pacific surface waters and changes in global ice volume. (C) Benthic foraminiferal density (individuals/cm3). (D) Relative frequency (%) of benthic Foraminifera with faunal oxygen-tolerance categories including oxic–mildly hypoxic (>1.5 mL·L−1 O2; N. labradorica, Quinqueloculina spp., Pyrgo spp.), intermediate hypoxia (1.5–0.5 mL·L−1 O2; Epistominella spp., Bolivina spp., Uvigerina spp.), and severe hypoxia (<0.5 mL·L−1 O2; N. stella, B. tumida) (19). (E) Log mollusc density (individuals/cm3). (F) Ophiuroids (brittle star) presence (presence = 1, absence = 0, 5-cm moving average). (G) Ostracod valve density (circles, valves/cm3) and 5-cm moving average.






from ScienceBlogs http://ift.tt/1aeXqjj


Shrinking of Antarctic ice shelves is accelerating

Antarctica’s Brunt Ice Shelf photographed in October 2011 from NASA’s DC-8 research aircraft during an Operation IceBridge flight. Michael Studinger/NASA



By Laurence Padman, Earth and Space Research; Fernando Paolo, University of California, San Diego; and Helen Amanda Fricker, University of California, San Diego


Ask people what they know about Antarctica and they usually mention cold, snow and ice. In fact, there’s so much ice on Antarctica that if it all melted into the ocean, average sea level around the entire world would rise about 200 feet, roughly the height of a 20-story building.


Could this happen? There’s evidence that at various times in the past there was much less ice on Antarctica than there is today. For example, during an extended warm period called the Eemian interglacial about 100,000 years ago, Antarctica probably lost enough ice to raise sea level by several meters.


Scientists think that global average temperature back then was only about two degrees Fahrenheit warmer than today. Assuming we continue to burn fossil fuels and add greenhouse gases to the atmosphere, global temperature is expected to rise by at least two degrees Fahrenheit by 2100. What will that do to Antarctica’s ice sheet? Even one meter of worldwide sea level rise – that is, melting only a fiftieth of the ice sheet – would cause massive displacements of coastal populations and require major investments to protect or relocate cities, ports and other coastal infrastructure.
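
The “about 200 feet” figure is easy to sanity-check with round numbers (my own rough approximations, not the authors’ calculation, and ignoring complications like changes in ocean area):

```python
# Sanity check of "melting all of Antarctica raises sea level ~200 feet"
# using rounded inputs (my approximations, not the authors' numbers).
ice_volume_km3 = 26.5e6   # approximate volume of the Antarctic Ice Sheet
ocean_area_km2 = 3.61e8   # approximate global ocean surface area
ice_to_water = 0.9        # approximate ratio of ice to seawater density
rise_m = ice_volume_km3 * ice_to_water / ocean_area_km2 * 1000.0
print(f"~{rise_m:.0f} m, i.e. ~{rise_m * 3.28:.0f} ft")  # on the order of 200 ft
```

Spreading the ice sheet’s water over the world ocean gives a rise of roughly 60 to 70 meters, consistent with the 200-foot figure quoted above.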


Ice leaving Antarctica enters the ocean through ice shelves, which are the floating edges of the ice sheet. We expect that any changes to the ice sheet caused by changes in the ocean will be felt first by the ice shelves. Using satellite data, we analyzed how Antarctica’s ice shelves have changed over nearly two decades. Our paper published in Science shows that not only has ice shelf volume gone down, but losses have accelerated over the past decade, a result that provides insight into how our future climate will affect the ice sheet and sea level.


Cork in a champagne bottle


The link between changing global temperature and ice loss from Antarctica’s ice sheet is not straightforward. By itself, air temperature has a fairly small influence on the ice sheet, since most of it is already well below freezing.


It turns out that, to understand ice loss, we need to know about changes in winds, snowfall, ocean temperature and currents, sea ice, and the geology under the ice sheets. We don’t yet have enough information on any of these to build reliable models for predicting ice sheet response to climate changes.


We do know that one important control on ice loss from Antarctica is what happens where the ice sheet meets the ocean. The Antarctic Ice Sheet gains ice by snowfall. The ice sheet spreads under its own weight forming glaciers and ice streams that flow slowly downhill towards the ocean. Once they lift off the bedrock and begin to float, they become ice shelves. To stay in balance, ice shelves have to shed the ice they gained from glacier flow and local snowfall. Chunks break off to form icebergs and ice is also lost from the bottom by melting as warm ocean water flows under it.


Schematic diagram of an Antarctic ice shelf showing the processes causing the volume changes measured by satellites. Ice is added to the ice shelf by glaciers flowing off the continent and by snowfall that compresses to form ice. Ice is lost when icebergs break off the ice front, and by melting in some regions as warm water flows into the ocean cavity under the ice shelf. Under some ice shelves, cold and fresh meltwater rises to a point where it refreezes onto the ice shelf. Image credit: Helen Amanda Fricker, Professor, Scripps Institution of Oceanography, UC San Diego



An ice shelf acts a bit like a cork in a champagne bottle, slowing down the glaciers flowing from the ground into it; scientists call this the buttressing effect. Recent observations show that when ice shelves thin or collapse, the glacier flow from the land into the ocean speeds up, which contributes to sea level rise. So understanding what makes ice shelves change size is an important scientific question.


Building an ice shelves map


The first step towards understanding ice shelves is to work out just how much and how quickly they have changed in the past. In our paper, we show detailed maps of changes in ice shelves all around Antarctica based on the 18 years from 1994 to 2012. The data came from continuous measurements of surface height collected by three European Space Agency radar altimeter satellites. By comparing surface heights at the same point on the ice shelf at different times, we can build a record of ice height changes. We can then convert that to thickness changes using ice density and the fact that ice shelves float.
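
The height-to-thickness conversion relies on hydrostatic equilibrium: a floating shelf rides with most of its bulk underwater, so a small change in surface height (freeboard) implies a much larger change in total thickness. With typical densities (illustrative values, not necessarily the ones used in the paper):

```latex
% Freeboard-to-thickness conversion for floating ice in hydrostatic balance:
% Delta H = thickness change, Delta h = observed surface-height change,
% rho_i ~ 917 kg/m^3 (ice), rho_w ~ 1028 kg/m^3 (seawater)
\Delta H = \frac{\rho_w}{\rho_w - \rho_i}\,\Delta h
         \;\approx\; \frac{1028}{1028 - 917}\,\Delta h
         \;\approx\; 9.3\,\Delta h
```

So a 1-meter drop in satellite-measured surface height corresponds to roughly 9 meters of lost shelf thickness, which is why radar altimetry is such a sensitive probe of thinning.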


Prior studies of changes in ice shelf thickness and volume have given averages for individual ice shelves or approximated the changes in time as straight-line fits over short periods. In contrast, our new study presents high-resolution (about 30 km by 30 km) maps of thickness changes at three-month time steps for the 18-year period. This data set allows us to see how the rate of thinning varies between different parts of the same ice shelf, and between different years.


This map shows eighteen years of change in thickness and volume of Antarctic ice shelves. Rates of thickness change (meters/decade) are color-coded from -25 (thinning) to +10 (thickening). Circles represent percentage of thickness lost (red) or gained (blue) in 18 years. The central circle demarcates the area not surveyed by the satellites (south of 81.5ºS). Original data were interpolated for mapping purposes. Image credit: Scripps Institution of Oceanography, UC San Diego



We find that, if recent trends continue, some ice shelves will thin dramatically within centuries, reducing their ability to buttress the ice sheet. Other ice shelves are gaining ice, and so could slow down the loss of ice from the ground.


When we sum up losses around Antarctica, we find that the change in volume of all the ice shelves was almost zero in the first decade of our record (1994-2003) but, on average, over 300 cubic kilometers per year were lost between 2003 and 2012.


The pattern of acceleration in ice loss varies between regions. During the first half of the record, ice losses from West Antarctica were almost balanced by gains in East Antarctica. After about 2003, East Antarctic ice shelf volume stabilized, and West Antarctic losses increased slightly.


Changes in climate factors like snowfall, wind speed and ocean circulation will lead to different patterns of ice shelf thickness change in time and space. We can compare the “fingerprints” of these factors with our new, much clearer maps to identify the primary causes, which might be different in different regions around Antarctica.


Our 18-year data set has demonstrated the value of long and continuous observations of the ice shelves, showing that shorter records cannot capture the true variability. We expect that our results will inspire new ways of thinking about how the ocean and atmosphere can affect ice shelves and, through them, ice loss from Antarctica.


The Conversation


This article was originally published on The Conversation.


Read the original article.






from EarthSky http://ift.tt/1C1e2AH

New System Watches for Things that Go Bump in the Night

Imagine taking the world’s most powerful radio telescope, used by scientists around the globe, and piping a nearly continuous data stream into your research laboratory.


That is exactly what scientists at the Naval Research Laboratory (NRL) in Washington, D.C., have done in collaboration with the National Radio Astronomy Observatory’s Karl G. Jansky Very Large Array (NRAO VLA). The newly completed VLA Low Band Ionospheric and Transient Experiment (VLITE for short) has been built to piggyback on the $300 million infrastructure of the VLA.


Radio (VLITE) and optical (SDSS) image showing the giant radio galaxy IC 711 and companions IC 708 and IC 712. All three systems are part of the distant galaxy cluster Abell 1314 and were serendipitously located in a field pointed at an unrelated low redshift galaxy. The radio data were fully processed through the VLITE pipeline and show the power of this new instrument. The field shown is the size of a full moon. (Credit: Radio (blue) from VLA Low Band Ionospheric and Transient Experiment on the NRAO VLA. Optical (red and green) from the Sloan Digital Sky Survey. U.S. Naval Research Laboratory/Dr. Tracy Clarke/Released)




The primary scientific driver for VLITE is real-time monitoring of ionospheric weather conditions over the U.S. southwest.



“This new system allows for continuous specification of ionospheric disturbances with remarkable precision. VLITE can detect and characterize density fluctuations as small as 30 parts per million within the total electron content along the line of sight to a cosmic source. This is akin to being at the bottom of Lake Superior and watching waves as small as 1-cm in height pass overhead. This will have a substantial impact on our understanding of ionospheric dynamics, especially the coupling between fine-scale irregularities within the lower ionosphere and larger disturbances higher up,” says NRL ionospheric lead scientist Dr. Joseph Helmboldt.
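

A quick back-of-envelope check of that Lake Superior analogy, using an assumed round number of about 400 m for the lake's maximum depth (the article gives no figure), shows the two fractions really are of the same order:

    # A 1 cm wave seen from the bottom of Lake Superior, expressed
    # as a fraction of the total depth. Depth is an assumed value.
    wave_height_m = 0.01
    lake_depth_m = 400.0

    ppm = wave_height_m / lake_depth_m * 1e6
    print(round(ppm))  # ~25 ppm, the same order as VLITE's 30 ppm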



Ionospheric disturbances represent one of the most significant limitations to the performance of many radio-frequency applications like satellite-based communication and navigation (including the GPS in your phone) as well as ground-based, over-the-horizon systems (think ham radio or AM radio). While the fine-scale irregularities that VLITE is especially sensitive to aren’t large enough to make your smart phone think you are at your neighbor’s house when you’re really at home, they are quite problematic for vital remote sensing surveillance systems like over-the-horizon radar. The additional insights provided by VLITE into the nature of these ionospheric ripples will help us to better understand how to cope with their effects on such systems.


“VLITE is also a powerful new tool in our arsenal for astrophysical research,” says VLITE principal investigator Dr. Namir Kassim. He points out that “We know the Universe has many secrets including mysterious blips (so-called transients) that appear and vanish like fireflies in the night. Limited observing time at classical observatories hampers our ability to understand these intriguing objects. The power of VLITE is the nearly continual data stream over a large region of the sky. This opens up a new window on the transient Universe.” At any given time, the region of the sky that VLITE peers at is so large that nearly 20 full moons would fit inside it.
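

That field-of-view figure is easy to sanity-check with the textbook diffraction estimate for a single dish; the beam formula and the 25 m VLA dish diameter below are standard values, not numbers taken from the article:

    import math

    # Primary-beam width (FWHM) of a 25 m dish at VLITE's 352 MHz,
    # compared with the ~0.5 degree angular diameter of the full moon.
    c = 3.0e8          # speed of light, m/s
    freq_hz = 352e6    # VLITE center frequency
    dish_m = 25.0      # VLA antenna diameter

    wavelength_m = c / freq_hz                            # ~0.85 m
    fov_deg = math.degrees(1.02 * wavelength_m / dish_m)  # ~2 degrees
    moon_deg = 0.5

    print(round((fov_deg / moon_deg) ** 2))  # ~16 moon areas, same order as ~20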


Astrophysics lead scientist Dr. Tracy Clarke of NRL describes VLITE as “a symbiotic instrument that piggybacks on world-class science at the VLA. It operates as a stand-alone tool for ionospheric and astrophysical studies while at the same time VLITE provides the opportunity for enhanced science in the research program running on the VLA.”


VLITE operations started with first light on July 17, 2014, but the real fun began two days before Thanksgiving, on November 25, 2014, when VLITE moved from its commissioning phase into full scientific operations. The system operates in real time on 10 VLA antennas and provides 64 MHz of bandwidth centered on 352 MHz, with a temporal resolution of 2 seconds and a spectral resolution of 100 kHz.
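

Dividing those figures out (plain arithmetic on the numbers quoted above) gives a feel for the shape of the data stream:

    bandwidth_hz = 64e6    # total VLITE bandwidth
    channel_hz = 100e3     # spectral resolution
    integration_s = 2.0    # temporal resolution

    n_channels = int(bandwidth_hz / channel_hz)  # 640 channels
    per_day = int(86400 / integration_s)         # 43,200 integrations per day
    print(n_channels, per_day)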


This powerful new instrument operates in parallel with the VLA and is essentially ‘driven’ around the sky by the primary science observer. Data streams off the telescope through dedicated systems that bypass normal VLA operations. The data then take two roads, one through real-time processing on computers located at the VLA site, and the other through off-line processing at NRL’s facility in Washington.


Due to the large volume of nearly continuous incoming data, all data must be analyzed by an automated pipeline that was custom designed for VLITE. Pipeline designer Dr. Wendy Lane Peters of NRL describes this process as being like “sitting in the passenger seat of a Google car and not knowing where it is taking you. VLITE is along for the ride wherever the primary science program takes us. We have to anticipate what they might do so that our pipeline is smart enough to understand the incoming data.”


Professor Bryan Gaensler, Director of the Dunlap Institute for Astronomy and Astrophysics at the University of Toronto, says that this is going to become the new way of doing astronomy.



“It’s a tragedy and a travesty that most of the information our telescopes gather from the sky is ignored and discarded. VLITE is part of a new generation of experiments that fully utilize the massive data torrents collected by the world’s most powerful observatories.”



Over the first two months of science operations, VLITE has recorded observations of sources ranging from the Sun, nearby stars and galaxies, to some of the most distant sources in the Universe. NRL astronomers and their colleagues have been poring over the pipeline images, improving their analysis pipeline and exploring the scientific potential of the instrument.


Story and information provided by the Naval Research Laboratory

Follow Armed with Science on Facebook and Twitter!


———-


Disclaimer: The appearance of hyperlinks does not constitute endorsement by the Department of Defense of this website or the information, products or services contained therein. For other than authorized activities such as military exchanges and Morale, Welfare and Recreation sites, the Department of Defense does not exercise any editorial control over the information you may find at these locations. Such links are provided consistent with the stated purpose of this DOD website.






from Armed with Science http://ift.tt/19w9vQf


Striped sunrises and the shadows they cast



View larger. | Striped sunrise by Peter Lowenstein.



Peter Lowenstein of Mutare, Zimbabwe – who recently contributed an interesting photo of straight lightning to these pages – has submitted another set of unusual photos for us. One is above, and the other is at the bottom of this post. The photos were taken a year apart, but might have been taken on the same day if two photographers had been standing back to back, one shooting a cloud-striped sunrise and the other shooting the sun’s first light – showing banded cloud shadow – shining on a nearby mountain slope. Peter wrote:



The first picture was taken almost a year ago from a high vantage point in the Bvumba Mountains looking east over Chikamba in Mozambique and shows a glorious sun striped by rising through thin layers of early morning cloud and mist on the horizon. I captured it at 5:57 a.m. using a Panasonic Lumix DMC-TZ10 compact camera in sunset mode and x16 zoom setting.


The second picture was taken at sunrise yesterday morning (March 29, 2015) from the verandah of my house and shows alternate stripes of bright orange sunlight and the dark shadows of a thin strip of cloud and the eastern horizon being projected by the sun onto Murawa Mountain a few kilometers to the west. This spectacle lasted less than a minute before being faded by larger clouds passing in front of the sun. It was captured at 6:10 a.m. using a Panasonic Lumix DMC-TZ60 compact camera in sunset mode and x2 zoom setting.



Thank you, Peter! Interesting indeed.



View larger. | Striped sunrise’s shadow by Peter Lowenstein.



Bottom line: Peter Lowenstein in Zimbabwe took these photos a year apart. One shows a sunrise striped with cloud; the other shows the banded cloud shadows that a striped sunrise projects onto a nearby mountain.


Only two weeks left in our annual fund-raising campaign! Have you donated yet? Help EarthSky keep going.






from EarthSky http://ift.tt/1BGRC7Y


Moon close to Regulus on March 31


Tonight – March 31, 2015 – can you find the star that’s shining close to the big and bright waxing gibbous moon? That’s Regulus, the brightest star in the constellation Leo the Lion. In sky lore, Regulus is considered to be the Lion’s Heart. Regulus is also the only first-magnitude star to sit almost exactly on the ecliptic – the Earth’s orbital plane projected outward onto the sphere of stars. We often show the ecliptic on our sky charts, because the moon and planets are always found on it, or near it.


Only two weeks left in our annual fund-raising campaign! Have you donated yet? Help EarthSky keep going.



Don’t mistake the planet Jupiter, that much brighter starlike object in the moon’s vicinity, for Regulus. Although Regulus ranks as a first-magnitude star, it pales next to Jupiter, the fourth-brightest celestial object to adorn the heavens, after the sun, moon and Venus. (Venus is seen in the west after sunset.) In fact, Jupiter shines about 30 times more brightly than Regulus does.
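

That factor of 30 follows directly from the astronomical magnitude scale. The sketch below assumes typical apparent magnitudes of about -2.3 for Jupiter and +1.35 for Regulus, values not given in the article:

    # Each magnitude step is a factor of 100 ** (1/5) ~ 2.512 in brightness,
    # so the flux ratio is 10 ** (0.4 * (m_faint - m_bright)).
    jupiter_mag = -2.3   # assumed apparent magnitude
    regulus_mag = 1.35   # assumed apparent magnitude

    ratio = 10 ** (0.4 * (regulus_mag - jupiter_mag))
    print(round(ratio))  # ~29, i.e. "about 30 times brighter"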



An imaginary line drawn between the pointer stars in the Big Dipper – the two outer stars in the Dipper’s bowl – points in one direction toward Polaris, the North Star, and in the opposite direction toward Leo.



Use the moon to find Regulus tonight. Then you can refer to the “pointer stars” of the Big Dipper to locate Regulus on any night. The two outer stars making up the bowl of the Big Dipper point northward to Polaris, the North Star, and southward to the constellation Leo and its brightest star, Regulus. See the illustration above.


Four 1st-magnitude stars reside close enough to the ecliptic to be occulted – covered over – by the moon on occasion: Regulus, Spica, Antares and Aldebaran. In fact, the last lunar occultation of Regulus happened on May 12, 2008, and the next one will be on December 18, 2016.


Lunar occultations of bright stars are not terribly uncommon. A series of monthly occultations of Aldebaran began on January 29, 2015, and will end on September 3, 2018. Then Regulus will undergo a series of monthly occultations from December 18, 2016 to April 24, 2018, followed by a series of Antares occultations from August 25, 2023 to August 27, 2028.


An occultation of a first-magnitude star by a planet is extremely rare. The last time a planet occulted a first-magnitude star was when Venus occulted Regulus on July 7, 1959. The next time will be when Venus occults Regulus on October 1, 2044.


Before 1959, the most recent planet/first-magnitude star occultation took place on November 10, 1783, when Venus occulted Spica. Venus will again occult Spica on September 2, 2197.


Bottom line: Tonight – March 31, 2015 – the moon shines close to Leo’s brightest star, Regulus, the only 1st-magnitude star to sit almost exactly on the ecliptic.


EarthSky astronomy kits are perfect for beginners. Order today from the EarthSky store


Donate: Your support means the world to us






from EarthSky http://ift.tt/1CrGjEe


Ancient cancer [Respectful Insolence]

As I sat on my couch last night, laptop in front of me, I awaited the Ken Burns adaptation of Siddhartha Mukherjee’s excellent book The Emperor of All Maladies, a three-part television documentary airing on PBS. I’m not sure whether I’ll blog the show or not, but if I do I’ll probably wait until all three episodes have aired. In the meantime, this seems as good a time as any to go back to a story that I saw a week ago but, thanks to grants, travel to Houston, and other distractions I wanted to blog about more, never got around to. Since The Emperor of All Maladies, the book at least, is billed as “a biography of cancer,” I thought I’d indulge my interest in ancient medicine, including ancient Egyptian medicine, an interest that goes back to when I first wrote about the Edwin Smith papyrus, which I saw at the Metropolitan Museum of Art in New York nearly ten years ago.


If there’s one claim from proponents of alternative medicine that irritates me, it’s that cancer is a “modern” disease, one that was rare (or even nonexistent) before the rise of modern societies, particularly the industrial revolution. This viewpoint bubbled up five years ago, when a commentary in Nature Reviews Cancer (yep, the same journal in which I published my opinion piece on integrative oncology a few months ago) argued strongly that cancer was almost unknown (or at least very rare) in the ancient world, based on how rarely it turns up in mummies from Egypt and South America. The authors also looked at ancient texts and literature from Egypt and Greece and reported little sign that cancer was a common ailment. After all, cancer is mainly a disease of the elderly, with three-quarters of cases diagnosed in people over 60 and more than a third in people 75 or older. Life expectancy was much shorter in ancient times, so relatively few people made it to cancer-prone ages; most probably didn’t make it past age 40.



In any case, what caught my attention was a story reporting the finding of the oldest example of breast cancer:



Researchers working in Egypt say they have found the oldest example of breast cancer in the 4,200-year-old remains of an Egyptian woman — a discovery that casts further doubt on the common perception of cancer as a modern disease associated with today’s lifestyles.



Here’s the announcement from the Egyptian Ministry of Antiquities:










Antiquities Minister Dr. Mamdouh el-Damaty announces the discovery of the oldest evidence of breast cancer in the world. The discovery was made during the seventh archaeological season carried out by the University of Jaen (Spain) in the necropolis of Qubbet el-Hawa (West Aswan). Dr. Miguel Botella (University of Granada) and his team of anthropologists have identified, on the bones of an adult woman, extraordinary deterioration throughout her skeleton. The study of her remains shows the typical destructive damage caused by the spread of breast cancer as metastases in the bones.


The team from the University of Jaen has confirmed that the woman lived at the end of the 6th Dynasty (2200 BCE) and was part of the elite of the southernmost town of Egypt, Elephantine. The virulence of the disease prevented her from carrying out any kind of labor, but she was treated and cared for over a long period until her death.



It turns out that the apparently low incidence of cancer in mummies, skeletons, and other ancient remains might be an illusion. For example, investigators from the United Kingdom reported a year ago on the case of a 3,000-year-old skeleton, found in Sudan, of a man who appeared to have had metastatic prostate cancer, a study published in PLoS One by Michaela Binder and colleagues. It was the oldest complete example of a skeleton of an ancient human with cancer. The authors wrote:



The apparent absence of cancer in archaeological remains may also partly be an illusion created by issues of bone preservation, and due to the fact that methods of analysis are inadequate to detect initial changes within bone. Due to financial, time and logistical reasons, human remains are usually not systematically radiographed, and bone metastases originating in cancellous tissue only penetrate the bone surface in their advanced stages. If the immune system was already compromised by other negative influences in a person’s life, people may not have survived long enough to develop full skeletal metastases. Thus, evidence for a large proportion of tumours could be missed when skeletal remains are analysed [72]. Another challenge in detecting cancer in ancient human remains is the poor preservation of bone which often prevents the clear identification of lytic lesions and precludes the diagnosis of incomplete remains [27]. With increasing numbers of skeletal collections and more detailed analysis, as well as more readily available standard radiographic equipment, the evidence for cancer in antiquity could increase significantly.



In other words, for the most part, archaeologists haven’t been looking carefully for evidence of cancer in ancient remains, and if you don’t look for something you aren’t very likely to find it. Moreover, it’s not at all straightforward to find and confirm evidence of cancer in remains that are usually just skeletons, and usually just fragments of skeletons at that. Comparatively speaking, there aren’t that many mummies, and not that many remains with soft tissue that can be examined for evidence of cancer. In this case, Binder et al. studied the skeleton of a man aged 25-35 years recovered in 2013 from Amara West. It was found in a tomb whose characteristics suggest what the authors referred to as a “sub-elite” individual buried according to Egyptian customs.


The authors examined the bones and found lytic lesions (lesions that eat away bone tissue) affecting the ribs, vertebrae, clavicle, scapulae, pelvis, sternum, and humeral and femoral heads of the skeleton. Such lesions are very characteristic of some sorts of cancer metastases to bone. Radiographic, scanning electron microscope (SEM), and microscopic images were taken of the lesions, and a differential diagnosis constructed. The lesions were most consistent with bony metastases, but the authors had to weigh the other potential causes of lytic lesions in the differential: multiple myeloma (a cancer of the plasma cells of the bone marrow that can produce lesions very similar to metastatic carcinoma), fungal infections, and taphonomic damage. Taphonomy is the study of what happens to an organism after its death and until its discovery as a fossil, including decomposition, post-mortem transport, burial, compaction, and other chemical, biologic, or physical activity that affects the remains of the organism. About this last possibility, the authors noted that “[s]mall round holes similar to metastatic lesions can be caused by a variety of factors including roots, water, and termites [68] or dermestid beetles [69].” SEM examination, however, found characteristics more consistent with metastatic carcinoma than with any of the other things that can cause taphonomic damage.


The authors also reiterate:



The lack of evidence for cancer in antiquity may to a large extent, be the result of reduced life expectancy, and thus less time to develop skeletal lesions if the immune system is already compromised by an inadequate supply of nutrients and diseases. This represents one of the major problems in inferring the absence or presence of disease in the past in general [87]. The archaeological and historical record certainly provides plenty of evidence for possible causes of developing cancer. Despite recent advances, the genetic background for cancer predisposition is still far from being understood today [88], [89]. Even though it may perhaps remain unknown, there is no reason to assume that predisposing genetic factors were not present in the past. The man from Amara West does indicate that it was indeed possible to develop skeletal lesions of cancer, provides a glimpse into one individual’s life experience, and cautions against claims for the absence, or presence, of any disease based on skeletal evidence alone.



Besides, there’s also evidence that cancer has been with us since prehistoric times. For instance, there was Kanam Man, whose fossilized jawbone was found by Louis Leakey back in 1932. Leakey called it “Not only the oldest known human fragment from Africa, but the most ancient fragment of true Homo yet discovered anywhere in the world.” Kanam Man was controversial at the time – specifically, whether it was what Leakey proclaimed it to be – but it also had an unusual feature:



At the time of the discovery, it had seemed like a bother, detracting from Leakey’s find. He was working in his rooms at St. John’s College at the University of Cambridge, carefully cleaning the specimen, when he felt a lump. He thought it was a rock. But as he kept picking, he could see that the lump was part of the fossilized jaw. He sent it to a specialist on mandibular abnormalities at the Royal College of Surgeons of England, who diagnosed it as osteosarcoma — a cancer of the bone.


Others have not been as certain. As recently as 2007, scientists scanning the mandible with an electron microscope concluded that this was indeed a case of “bone run amok” while remaining neutral on the nature of the pathology.



Of course, from a science-based perspective, none of this should be surprising. We might argue over how large a contribution random chance makes to the development of cancer, but there’s little doubt that there is a large random component to it, a component of what can be called, for lack of a better term, “bad luck.” And, although cancer is primarily a disease of the elderly, young people can and do get it. As for environmentally caused cancer, ancient humans encountered plenty of carcinogens: sunlight leading to melanoma, cancer-causing infections, radon, naturally occurring chemicals. The ancient world was hardly as pristine as it’s often envisioned.


It’s not just cancer, either. Advocates of “paleo” diets, which, accurately or not, are designed to mimic what our paleolithic ancestors ate, frequently claim that heart disease would be virtually nonexistent if we all ate that way. Of course, as I’ve described on more than one occasion, ancient humans were prone to atherosclerotic heart disease as well. In fact, the evidence we have suggests that, for example, ancient Egyptians were prone to all manner of illnesses.


When it comes to cancer, in 1600 BC the Egyptian physician who wrote the Edwin Smith papyrus recommended cauterization of breast cancer with a tool called the “fire drill.” He also wrote about the disease, “There is no treatment.” If there’s one big difference between humans now and humans thousands of years ago, it was not biology or the factors that cause us to develop cancer. It was that there was no treatment. Now there is. What I’ve seen of The Emperor of All Maladies thus far demonstrates this.






from ScienceBlogs http://ift.tt/1GI3TQ5


A Farmer's Dilemma: To Till or Not To Till

As winter turns to spring, farmers are preparing to plant this year's crops. For some, tilling their fields is a thing of the past.



No-till farming
Photo: USDA Natural Resources Conservation Service






When you think of a farmer at work in the fields, do you picture a tractor pulling a plow and turning the soil? In my mind, it is a red tractor, and the soil is rich and dark.


For many people, turning the soil may seem an obvious part of growing crops. Of course it is required! Isn't that what farmers do!?! It turns out that the answer isn't so simple. Yes, many farmers turn, or till, the soil. But a growing share of farmers are opting not to till some or all of their fields, for a variety of reasons.


As farmers prepare to plant new crops this spring, they must weigh the pros and cons of till and no-till farming. On the one hand, tilling a field in preparation for planting aerates and warms the soil, and also buries weeds, animal waste, and leftover crops. However, once the soil is turned, it is much more vulnerable to erosion from wind and water and is likely to have increased run-off of soil and chemicals into local waterways.


On the other hand, leaving a field untilled allows leftover crops to act as mulch and helps protect the soil from erosion and run-off. However, planting seeds through this layer of mulch is more difficult and requires expensive machinery. This method also may require more herbicide to control weeds, and, in some places, crop yields may be lower because the mulch keeps the soil cooler and seeds germinate later in the season.




Can you Dig It? Science and Farming


So what is a farmer to do? With no one right answer, farmers must experiment to learn what works best with their soil and the crops they choose to grow. Do you have an interest in the science behind farming? Try out these Science Buddies Project Ideas.



Getting Dirty in the Name of Science


Spring is a great time to talk with kids about plant life cycles. Dig in the dirt, plant a few seeds, or just head outside and observe how plant life is changing as the weather changes where you live.







from Science Buddies Blog http://ift.tt/1NA9B8D

Before and after cyclone Pam


Photo credit: William Dyer



Cyclone Pam was a Category 5 storm when it struck the island nation of Vanuatu in the South Pacific on March 13 and 14, 2015. Pam tore down miles of dense foliage, stripped vegetation, and coated leaves in damaging salt spray, turning lush green landscapes brown.


Two of the hardest hit islands were Tanna and Erromango. On March 17 — three days after the storm hit — NASA’s Landsat 8 satellite acquired images that, compared with earlier images of the same islands, show the widespread effects of the storm. According to scientists from Tropical Storm Risk, the island of Erromango likely faced the most severe winds. Their analysis suggests that Erromango saw gusts up to 320 kilometers (200 miles) per hour. Before Pam, Erromango appeared dark green due to its lush tropical vegetation.


January 28, 2015. Image credit: NASA



Here’s how Erromango looked after cyclone Pam.


March 17, 2015. Image credit: NASA



While Erromango is home to just a few thousand people, about 30,000 people live on the island of Tanna. Here is Tanna before Pam.


January 28, 2015. Image credit: NASA



With top gusts of 260 kilometers (160 miles) per hour, Tanna fared slightly better than Erromango. Here’s Tanna days after cyclone Pam.


March 17, 2015. Image credit: NASA



Closer to the ground, 27-year-old pilot William Dyer, who helped conduct aerial assessments of Vanuatu’s many island chains, snapped photos of the islands in the days after Pam hit, and shared them alongside photos he’d taken before the storm.


Photo credit: William Dyer



Pam took a heavy human toll. The storm killed six people and seriously injured many more, according to media reports. Thousands of people have been left homeless. Meanwhile, ongoing water and food shortages mean the humanitarian situation could worsen.


How you can help: Donate to the disaster relief effort.


The same spot on the island of Emae before and after cyclone Pam. Photo credit: William Dyer



Read more from NASA’s Earth Observatory






from EarthSky http://ift.tt/1Fau0gs

Mostly Mute Monday: Volcanic Lightning (Synopsis) [Starts With A Bang]


“If you are caught on a golf course during a storm and are afraid of lightning, hold up a 1-iron. Not even God can hit a 1-iron.” -Lee Trevino



When it comes to lightning, you inevitably think of thunderstorms, rain, and the exchange of huge amounts of charge between the clouds above and the Earth. But there’s another sight that’s perhaps even more spectacular.


Image credit: Francisco Negroni / Associated Press, Agenci Uno / European Press Photo Agency.



During volcanic eruptions, the high temperatures, volatile atoms and molecules, and disrupted airflow can create an incredible separation of charge, leading to the remarkable phenomenon of volcanic lightning.


Image credit: Francisco Negroni / Associated Press, Agenci Uno / European Press Photo Agency.



Come see some spectacular examples (and science) on today’s Mostly Mute Monday!






from ScienceBlogs http://ift.tt/1Dltkre

Pine Beetle Caused Forest Death And Climate Change [Greg Laden's Blog]

There is some interesting new research on the relationship between the Mountain Pine Beetle, major die-offs of forests in North America, and climate change.


The Mountain Pine Beetle (Dendroctonus ponderosae) is a kind of “bark beetle” (they don’t bark, they live in bark) native to western North America. They inhabit a very wide range of habitats and are found from British Columbia all the way south to Mexico. In British Columbia alone, the pine beetle, through a fairly complex process, has managed to destroy 16 of 55 million acres of forest. This epidemic of tree death is seen in mountain forest regions all across the western United States. The beetles affect a number of species of pine trees.


The beetle lays its eggs under the pine tree bark and, in so doing, introduces a fungus that penetrates adjoining wood. This fungus suppresses the tree’s response to the Pine Beetle’s larvae, which proceed to eat part of the tree. The suppressive effect blocks water and nutrient transport, and this, together with the larvae eating part of the tree, quickly kills the host tree. The process can take just a few weeks. It takes longer for the tree to actually look dead (note that the evergreen tree you cut and put in your living room for Christmas is dead the whole time it is looking nice and green and cheery). By the time the tree looks dead, i.e., the needles turn brown and fall off, it has been a dead-tree-standing for months and the Pine Beetles have moved on to find other victims.


It has long been thought that climate change has contributed to the western epidemic of Pine Beetles, as well as to a similar epidemic in the Southeastern US (involving different species of beetles). The primary mechanism would be the warming of extreme winter low temperatures. Very low temperatures kill off the larvae, removing the threat of the beetle’s spread locally after that winter. Extreme winter temperatures have warmed by around 4 degrees C since 1960 across much of the beetle’s range. The lack of killing cold does not itself cause a beetle epidemic; it simply allows one, producing a “demographic release.” If the beetles are already there, they have the opportunity to spread.
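
To make the release mechanism concrete, here is a minimal sketch of the kind of threshold counting involved, assuming an invented 50-year series of winter minimums and a hypothetical larvae-killing temperature of -40°C; neither number comes from the study.

```python
import numpy as np

# A minimal sketch of "demographic release," not the study's actual analysis.
# The lethal threshold and the temperature series are invented for illustration.

rng = np.random.default_rng(0)

# Hypothetical 50-year series of extreme winter minimum temperatures
# (degrees C) for one cold ecoregion.
baseline_minima = rng.normal(loc=-38.0, scale=4.0, size=50)

# Apply the roughly 4 degrees C of warming in winter extremes observed
# since 1960 across much of the beetle's range.
warmed_minima = baseline_minima + 4.0

LETHAL_THRESHOLD = -40.0  # assumed larvae-killing temperature, degrees C

def lethal_winter_fraction(minima, threshold=LETHAL_THRESHOLD):
    """Fraction of winters cold enough to knock back beetle populations."""
    return float(np.mean(minima < threshold))

print(f"Lethal winters, baseline climate: {lethal_winter_fraction(baseline_minima):.0%}")
print(f"Lethal winters, warmed climate:   {lethal_winter_fraction(warmed_minima):.0%}")
```

Shifting the same series 4 degrees warmer sharply cuts the fraction of beetle-killing winters: that is the release, which permits an outbreak without causing one.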


A recent study, just out (see reference below), confirms this basic model but also adds a considerable degree of complexity. The study shows that there is not as strong a correlation as expected between winter temperatures rising above typical killing levels and the spread of the beetle. It indicates that demographic release from an increase in extreme winter lows is part of the equation, but the situation is more complex: warming in general likely enhances beetle spread and reproduction during the summer part of its life cycle, and may weaken the trees, making them more vulnerable to attack. In addition, other non-climate-related factors probably play a role.


The study looked at several regions and assembled data on beetle frequency and spread over time, and various climate related data. From the abstract:



We used climate data to analyze the history of minimum air temperatures and reconstruct physiological effects of cold on D. ponderosae. We evaluated relations between winter temperatures and beetle abundance using aerial detection survey data… At the broadest scale, D. ponderosae population dynamics between 1997 and 2010 were unrelated to variation in minimum temperatures, but relations between cold and D. ponderosae dynamics varied among regions. In the 11 coldest ecoregions, lethal winter temperatures have become less frequent since the 1980s and beetle-caused tree mortality increased—consistent with the climatic release hypothesis. However, in the 12 warmer regions, recent epidemics cannot be attributed to warming winters because earlier winters were not cold enough to kill D. ponderosae…There has been pronounced warming of winter temperatures throughout the western US, and this has reduced previous constraints on D. ponderosae abundance in some regions. However, other considerations are necessary to understand the broad extent of recent D. ponderosae epidemics in the western US.



“This amount of warming could be the difference between pests surviving in areas that were historically unfavorable and could permit more severe and prolonged pest outbreaks in regions where historical outbreaks were halted by more frequent cold bouts,” says first author Aaron Weed, an ecologist at the National Park Service.


In the 11 coldest regions, winter temperatures cold enough to be lethal to D. ponderosae have become less frequent since the 1980s, and this is associated with an increase in tree mortality, confirming the link between warming conditions and increased parasite-caused tree death. However, in the 12 regions with the warmest climate, recent epidemics are not clearly linked to warming winters, simply because the earlier, colder winters were already not cold enough to repress the tree-killing mountain pine beetle. This suggests that other factors may play a role in the epidemics in the western United States.


Even so, the pattern of warming (including the increase in minimum winter temperatures) correlates with the demographic release of the mountain pine beetle. The authors note that “warming year-round temperatures that influence generation time and adult emergence synchrony … and drought effects that can weaken tree defenses …” are plausible explanations, but further note that a simple single explanation is not likely to be sufficient to explain the overall phenomenon.


This is, in a sense, a numbers game. A cold winter does not kill off all of the beetles, but no matter how cold the winter is, no beetles will be wiped out where none are present to begin with. So demographic release, which makes an outbreak possible but does not cause one, can produce an abundance of beetles across a much larger area, where they will then become more abundant over time no matter what natural suppression may occur.


As noted, the trees themselves matter. We can safely assume that, in general, changes in climate will mean that plant communities adapted to a given region might lose that adaptive edge and become subject to a number of problems, which can then be exploited by a potentially spreading parasite. These changes in the viability of plant communities are not all climate-change related. Forest management, disturbance, and regional demographics (as forests age, they tend to change what they do) are also factors in this complex set of ecological relationships.


The bottom line: this study confirms the effects of warming, especially the increase in winter low temperatures, on the potential for D. ponderosae to spread rapidly, both locally and regionally. The study also calls into question the simplistic model that this alone explains the widespread epidemic of this beetle. Other factors, including other aspects of global warming, also contribute to the epidemics. In addition, and importantly, the study demonstrates a high degree of regional variability in the ecological outcomes of climate change.


This epidemic is probably the largest observed kill-off of forests caused by a parasite. So far it is much more severe in its effects than forest fires, but over the long to medium term, we will probably see increased frequency and severity of forest fires because of the abundance of fuel provided by the die-off.


Source:



Weed, A. S., Bentz, B. J., Ayres, M. P., & Holmes, T. P. (2015). Geographically variable response of Dendroctonus ponderosae to winter warming in the western United States. Landscape Ecology. doi:10.1007/s10980-015-0170-z


Text for the image at the top of the post, from the USDA:



The Mountain Pine Beetle is at epidemic levels throughout the western United States, including here in the Rocky Mountain Region … Forests affected here include several in Colorado, Wyoming, South Dakota and Nebraska. In northern Colorado and southeastern Wyoming, Mountain Pine Beetles have impacted more than 4 million acres since the first signs of outbreak in 1996. The majority of outbreaks have occurred in three forests: Arapaho-Roosevelt, White River and Medicine Bow/Routt.







from ScienceBlogs http://ift.tt/1EVj7wv

Passing comets painted Mercury black


A limb mosaic of the planet Mercury as seen from the MESSENGER spacecraft’s Wide Angle Camera & Dual Imaging System. Image via NASA/Johns Hopkins University/Applied Physics Laboratory/Carnegie Institution of Washington.



Scientists announced this morning (March 30, 2015) that Mercury’s dark, barely reflective surface may be the result of a steady dusting of carbon from passing comets. In other words, over billions of years, comets have slowly painted Mercury’s surface black. The scientists published their findings in the journal Nature Geoscience.


A body’s reflectivity is called its albedo by astronomers. One of the most reflective worlds in our solar system – a world with a very high albedo – is Saturn’s moon Enceladus, whose surface is covered by highly reflective ice. Now think of the opposite end of the albedo scale, of a dark surface, like that of Mercury. What could make a planet’s surface so dark? In fact, the dark surface of the sun’s innermost world has long been a mystery to scientists. According to a statement from Brown University:



On average, Mercury is much darker than its closest airless neighbor, our moon. Airless bodies are known to be darkened by micrometeorite impacts and bombardment of solar wind, processes that create a thin coating of dark iron nanoparticles on the surface.


But spectral data from Mercury suggests its surface contains very little nanophase iron, certainly not enough to account for its dim appearance.



Megan Bruck Syal is a postdoctoral researcher at Lawrence Livermore National Laboratory. She performed this research while a graduate student at Brown. She said:



One thing that hadn’t been considered was that Mercury gets dumped on by a lot of material derived from comets.



If you ever see a total eclipse of the sun, like the one in this photo by Paul D. Maley, you might see a little dot near the sun – Mercury! That’s because Mercury is the sun’s innermost planet. It orbits in the same realm of the solar system as many comets when they are at their closest to the sun, and hence at their most active.



Like planets, comets are bound in orbit by the sun. But unlike planets, many comets have highly elliptical orbits; that is, they swing out far from the sun at the outer part of their orbit, then dive in close to the sun, sometimes very close. Little Mercury is in the part of the solar system where many comets are coming nearest the sun in their orbits. At such times, comets are at their most active. The Brown University statement said:



As comets approach Mercury’s neighborhood near the sun, they often start to break apart. Cometary dust is composed of as much as 25 percent carbon by weight, so Mercury would be exposed to a steady bombardment of carbon from these crumbling comets.


Using a model of impact delivery and a known estimate of how many micrometeorites might be expected to strike Mercury, Megan Bruck Syal was able to estimate how often cometary material would impact Mercury, too. She also showed how much carbon would stick to Mercury’s surface, and how much would be thrown back into space.


Her calculations suggest that, after billions of years of bombardment, Mercury’s surface should be anywhere from 3 to 6 percent carbon.
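
As a back-of-envelope illustration of how such an estimate hangs together, here is a sketch in which every parameter except the 25 percent carbon content quoted above is an invented placeholder; the infall rate, retention fraction, mixing depth, and regolith density are not values from the study.

```python
# Back-of-envelope sketch of cumulative carbon delivery to Mercury's surface.
# All values are illustrative placeholders except the ~25% carbon content
# of cometary dust quoted above.

CARBON_FRACTION = 0.25      # cometary dust carbon content by weight (quoted above)
RETENTION = 0.5             # assumed fraction of delivered carbon that sticks
FLUX = 1.5e-8               # hypothetical cometary dust infall, kg per m^2 per year
YEARS = 4.0e9               # billions of years of bombardment

MIXING_DEPTH = 0.1          # hypothetical regolith layer the carbon mixes into, m
REGOLITH_DENSITY = 1800.0   # assumed loose-regolith density, kg per m^3

carbon_per_m2 = FLUX * YEARS * CARBON_FRACTION * RETENTION  # kg of carbon per m^2
regolith_per_m2 = MIXING_DEPTH * REGOLITH_DENSITY           # kg of regolith per m^2

print(f"Surface carbon mass fraction: {carbon_per_m2 / regolith_per_m2:.1%}")
```

With these placeholder values the result lands at a few percent carbon, the same ballpark as the study’s 3 to 6 percent; the point is the structure of the estimate, not the particular numbers.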



How much would Mercury darken, via all that impacting carbon? To find out, the team turned to the NASA Ames Vertical Gun Range. The 14-foot cannon simulates celestial impacts by firing projectiles at up to 16,000 miles (about 25,000 km) per hour. At the gun range, the team:



… launched projectiles in the presence of sugar, a complex organic compound that mimics the organics in comet material. The heat of an impact burns the sugar up, releasing carbon. Projectiles were fired into a material that mimics lunar basalt, the rock that makes up the dark patches on the nearside of the moon.



They fired into a material like that on the moon’s surface in order to see if they could make a dark material turn darker, and they did. The experiments showed that tiny carbon particles became deeply embedded in the target material, reducing the amount of light the material reflected to less than 5 percent – about the same as the darkest parts of Mercury.


Read more about this study from Brown University.


Bottom line: Scientists say that comets passing near the sun are painting the surface of the innermost planet Mercury black.






from EarthSky http://ift.tt/1bJoaJg

Big Blog News: I’m Now Also at Forbes [Uncertain Principles]

I hinted once or twice that I had news coming, and this is it: I’ve signed up to be a blog contributor at Forbes writing about, well, the sorts of things I usually write about. I’m pretty excited about the chance to connect with a new audience; the fact that they’re paying me doesn’t hurt, either…


The above link goes to my contributor page there, which will be your one-stop-shopping source for what I write at Forbes. There are two posts up this morning, a self-introduction, and an attempt to define physics and what makes it unique. The “Follow” button has an option for an RSS feed; this isn’t full-text, but that’s not my decision to make. I can’t do anything about the inspirational-quote splash pages, either, so don’t ask.


What does this mean for Uncertain Principles here at ScienceBlogs? Less than you might think– I’m not moving the whole operation, mostly because Forbes is interested in a specific set of things, and some of what I do is more appropriate for ScienceBlogs. In particular, more math-y physics education sorts of things will stay here (like last week’s angular momentum posts), and a lot of the inside-baseball stuff about academia. I’ll be sort of feeling out what goes where for a while, I’m sure, but you can expect new content in both places.


I have been and continue to be happy with ScienceBlogs and the folks who run it; they’ve done right by me over the years, and I’m happy to continue to support them. This move is a chance to write for a new platform, reaching a different audience than we get here at SB, and I’m excited to have that opportunity. And, of course, many thanks to Alex Knapp for inviting me to write for Forbes.


So, that’s the exciting news in Chateau Steelypips. The other big news is that today is the first day of Spring term classes, so I need to get back to my day job, now…






from ScienceBlogs http://ift.tt/1EqV65C

Greenland glacier melt increases mercury discharge


Zackenberg Research Station. Photo credit: Aarhus University, Department of Bioscience



This article is republished with permission from GlacierHub. This post was written by Yunziyi Lang.


Mercury contamination has long been a threat to animal carnivores and human residents in the Arctic. Mercury exports from river basins to the ocean form a significant component of the Arctic mercury cycle, and are consequently of importance in understanding and addressing this contamination.


Jens Søndergaard of the Arctic Research Centre of Aarhus University, Denmark and his colleagues have been conducting research on this topic in Greenland for a number of years. They published results of their work in the journal Science of the Total Environment in February 2015. Søndergaard and his colleagues assessed the mercury concentrations in and exports from the Zackenberg River Basin in northeast Greenland for the period 2009 – 2013. This basin is about 514 square kilometers in area, of which 106 square kilometers are covered by glaciers. Glacial outburst floods have been regularly observed in Zackenberg River since 1996. This study hypothesized that the frequency, magnitude, and timing of the glacial outburst floods and associated meteorological conditions would significantly influence the riverine mercury budget. Indeed, they found significant variation from year to year, reflecting weather and floods. The total annual mercury release varied from 0.71 kg to over 1.57 kg. These are significant amounts of such a highly toxic substance.


Stream in Zackenberg drainage. Image credit: Mikkel Tamstrof



Søndergaard and his colleagues found that sediment-bound mercury contributed more to total releases than mercury that was dissolved in the river. Initial snowmelt, sudden erosion events, and glacial lake outburst floods all influenced daily riverine mercury exports from Zackenberg River Basin during the summer, the major period of river flow. The glacial lake outburst floods were responsible for about 31 percent of the total annual riverine mercury release. Summer temperatures and the amount of snowfall from the previous winter also played important roles in affecting the annual levels of mercury release. The authors note that releases are likely to increase, because global warming is contributing to greater levels of permafrost thawing in the region; this process, in turn, destabilizes river banks, allowing mercury contained in them to be discharged into rivers.
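
For readers curious how a riverine export budget like this is assembled, here is a generic concentration-times-discharge sketch; it is standard bookkeeping rather than the authors’ exact method, and every period length, discharge, and concentration below is invented for illustration.

```python
# Generic concentration-times-discharge bookkeeping for a seasonal mercury
# export budget. The periods, discharges, and concentrations are invented.

SECONDS_PER_DAY = 86400.0

# (label, days in period, mean discharge in m^3/s, total Hg in ng/L)
periods = [
    ("early snowmelt", 30, 20.0, 2.0),
    ("peak summer flow", 30, 40.0, 3.0),
    ("outburst flood", 2, 200.0, 10.0),
    ("late season", 30, 15.0, 1.5),
]

export_kg = {}
for label, days, discharge_m3s, conc_ng_per_l in periods:
    volume_l = discharge_m3s * 1000.0 * days * SECONDS_PER_DAY  # m^3/s -> liters
    export_kg[label] = conc_ng_per_l * volume_l * 1e-12         # ng -> kg

total = sum(export_kg.values())
print(f"Seasonal Hg export: {total:.2f} kg")
print(f"Outburst flood share: {export_kg['outburst flood'] / total:.0%}")
```

With these made-up inputs the season totals roughly 0.8 kilograms, with the short outburst flood carrying an outsized share, which is the qualitative pattern the study reports.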


Greenland Seal. Image credit: Greenland Travel/Flickr



Mercury produces adverse health effects even at low levels, and it is well known to be toxic to the nervous system. According to the U.S. Environmental Protection Agency (EPA), consuming mercury-contaminated fish is the primary route of exposure for most human populations. Mercury can also threaten the health of the seabirds and marine mammals that consume fish—and that Greenlandic populations, in turn, rely on for food. The release of riverine mercury at Zackenberg might not have a strong influence in this remote region of northeast Greenland, far from human settlements and with few fisheries to date. However, the total mercury released each year from all the river basins in Greenland is more significant, and it is growing. There is a significant risk of transport through marine food chains, causing mercury poisoning among humans and wildlife in Greenland and in adjacent coastal countries.






from EarthSky http://ift.tt/1G8aT8c
