Ask Ethan #86: The last light in the Universe (Synopsis) [Starts With A Bang]

“A single tiny light creates a space where darkness cannot exist. The light vanquishes the darkness. Try as it might, the darkness cannot conquer the light.” –Donald L. Hicks

While it might seem like there is an endless supply of stars in the Universe, the process that powers each and every one requires fuel to burn. At some point — even though it’s far in the future — that fuel will all be spent, and all we’ll be left with are stellar corpses of various types.

Image credit: E. Siegel.

But the Universe is full of second chances, and opportunities not only to bring burned-out stars back to life, but also to give life to the failed stars-that-never-were. Of all the possibilities out there, what’s going to give rise to the very last light in the Universe?

Image credit: Janella Williams, Penn State University, via http://ift.tt/10EpghW.

The smart money is on colliding and merging (but not inspiraling) failed stars known as brown dwarfs. Come find out why!



from ScienceBlogs http://ift.tt/1E2LUP2


Man sings “If I only had a brain” during MRI

Researchers use a new technique that is 10 times faster than standard MRI to illustrate how the hundreds of muscles in our neck, jaw, tongue, and lips work together to make sound.

The sound of the voice is created in the larynx, located in the neck. When we sing or speak, the vocal folds — two small pieces of tissue — come together and, as air passes over them, they vibrate, producing sound.

Aaron Johnson, an assistant professor in Speech and Hearing Science and a Beckman Institute faculty member, is the guy singing in the video. Johnson said:

The fact that we can produce all sorts of sounds and we can sing is just amazing to me. Sounds are produced by the vibrations of just two little pieces of tissue. That’s why I’ve devoted my whole life to studying it: I think it’s just incredible.

The new MRI technique, developed by a team of researchers at the Beckman Institute, captures MRI images at a far faster rate than any other MRI technique in the world. This dynamic imaging is especially useful in studying how rapidly the tongue is moving, along with other muscles in the head and neck during speech and singing.

The researchers published their technique in the May 2015 issue of the journal Magnetic Resonance in Medicine.

Read more from the Beckman Institute



from EarthSky http://ift.tt/1JGpieW


The wisdom of worms [Pharyngula]

In my previous post about Paul Nelson’s weirdly ignorant view of nematode evolution, Kevin Anthoney made a prescient comment:

Remember that Nelson’s got this bizarre linear view of evolution which starts with a single cell creature, which evolves into a creature with a few cells, which evolves into one with a few more cells, and so on until you reach the 1031 cells in the nematode today. It wouldn’t surprise me at all if Nelson thought that the creature at the 150 cell stage in this process had to be like a modern nematode at the 150 cell stage of development.

The Discovery Institute has responded. I got as far as the massive projection in the following paragraph before giving up.

He acknowledges that the unit of selection (the stage of an organism’s life cycle that natural selection selects) is the individual capable of reproduction — in other words, the adult. And I infer from this that he believes there must have been a step-wise selectable pathway from a single cell to multicellular adult, each step of which was both viable and capable of reproduction. [Their emphasis –pzm]

No, I don’t think that. It seems Anthoney was right: these people have this imaginary model of development and evolution in their heads, in which the 150-cell stage had to have been viable and free-living, and capable of reproduction, in order for its specific pattern of differentiation to have been selected. I’m arguing the exact opposite.

A functional end result was selected for, just like in a game of poker, where a winning hand is ‘selected’ — however it got to that point. A full house is a full house whether it was dealt straight to you or whether you drew one card or two. As I mentioned in the previous post, there are multiple ways for development to produce a working worm, and I cited a paper that discussed the taxonomic variation present in various nematode species and genera.

Just to add another detail that kills their design model: that pedigree of cell divisions that produces the adult worm yields 1,090 cells…but 131 of them die during development, leaving no descendants, to produce a canonical nematode with 959 cells. That’s a wastage of 12%! Shouldn’t it be obvious that this animal was not optimized at each stage of development?
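
A quick check of that arithmetic, as a minimal sketch in Python (the only inputs are the cell counts quoted above):

    # Cell-count bookkeeping for the nematode lineage described above.
    cells_generated = 1090  # cells produced by the full division pedigree
    cells_dying = 131       # cells that die during development, leaving no descendants

    surviving = cells_generated - cells_dying
    wastage = cells_dying / cells_generated

    print(surviving)         # 959, the canonical adult cell count
    print(f"{wastage:.1%}")  # 12.0% of the cells produced are discarded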

Expect to see more from the Discovery Institute on nematodes in the future. It doesn’t matter how often they are refuted, or that all of the investigators of worm development use evolution as a framework to understand what’s going on — they’ll just hammer that dead worm into the ground.



from ScienceBlogs http://ift.tt/1bmotsI


MESSENGER Went Raving into That Good Night [Page 3.14]

Yesterday the MESSENGER spacecraft circled behind Mercury one last time, where no one on Earth could see it, and slammed into the surface of the intemperate planet at an estimated 8,750 miles per hour. It is only the second probe to have visited Mercury—Mariner 10 completed three fly-bys of the planet in 1974 and 1975, and according to NASA, is still orbiting the sun; probably fried, out of fuel, a derelict.

Mosaic Image of Mercury as photographed by Mariner 10. Image credit: NASA/JPL.

MESSENGER began its scientific mission in 2011. With its destination lying so deep in the sun’s gravitational well, MESSENGER had to fly by “Earth once, Venus twice, and Mercury itself three times” in order to slow down enough to finally achieve orbit. The spacecraft proceeded to take over 100,000 images of Mercury’s surface, allowing NASA to map 100% of the innermost planet. After completing its primary objectives in 2012, MESSENGER’s mission was twice extended. According to Wikipedia, “MESSENGER‘s instruments have yielded significant data, including a characterization of Mercury’s magnetic field and the discovery of water ice at the planet’s north pole.” Using an array of spectrometers and a laser altimeter, MESSENGER also revealed a ton about the geology and composition of the planet, including evidence of past volcanism and a liquid iron core.

Topography of Mercury’s northern hemisphere as measured by MESSENGER’s laser altimeter. Image credit: NASA/JPL.

MESSENGER continued its surveys until its dying breath. After running out of liquid propellant, its operations were extended four weeks with the innovative appropriation of gaseous helium as a fuel source. Finally, when there was no way left to steer itself, MESSENGER made its very own, brand new crater on the planet to which its life was dedicated.



from ScienceBlogs http://ift.tt/1KAsSVw


Students Experience the Field Trip of a Lifetime at the X-STEM Symposium! [USA Science and Engineering Festival: The Blog]

This past Tuesday, over 3,500 students in grades 6–12 traveled to the Washington D.C. Convention Center to engage in presentations and hands-on workshops from some of the most creative and inspiring minds in STEM at the 2nd X-STEM Symposium. Sponsored by MedImmune, the X-STEM Symposium featured over 30 speaker presentations, ranging from luminaries in STEM fields (Dean Kamen, inventor and founder of FIRST; Dr. Aprille Ericsson of NASA; Dr. Irwin Jacobs, founder of Qualcomm) to young up-and-coming innovators like 13-year-old Alyssa Carson, a.k.a. the NASA Blueberry, and 19-year-old Easton LaChappelle, inventor of a robotic arm.

Visionaries Dr. Irwin Jacobs from Qualcomm, Dr. Anthony Fauci from National Institute of Allergy and Infectious Diseases (NIAID), NASA Blueberry Alyssa Carson, Dr. James McLurkin from Rice University & Ben Gulak from BPG Motors

The Festival team would like to thank all of the X-STEM Speakers and Workshop Organizations for their enthusiastic commitment to our STEM education outreach efforts. We also appreciate the support of our X-STEM Sponsor MedImmune and the many Festival Volunteers who helped make the event a great success. And of course, thank you to the student and teacher attendees for recognizing the importance of connecting with STEM professionals and participating in hands-on workshops.

View photos of the event here on Facebook as well as here on Flickr.

Stay tuned to our newsletter, website, and social media sites (Facebook, Twitter, Google+ & LinkedIn) for updates on the 2016 X-STEM Symposium!



from ScienceBlogs http://ift.tt/1OMy26S


Friday Cephalopod: I just want to jump in with them! [Pharyngula]

Vast, dense swarms of migrating squid, all swirling about a boat. How can the sailors resist leaping into the water with them?



I’m picturing millions of tiny beaks, each taking a tiny nip, and millions of tentacles, each stroking and rasping away a thin layer of skin, all in endless succession. And then as the blood painted the waters, a boiling, roiling mass would heave over you, each little slimy creature frantically slurping up a tiny piece of you, until nothing was left but shiny white bones disarticulating to tumble down to the bottom of the sea, where the bone worms would gnaw you into a thin calcium slurry.

Nah, probably not. But a guy can fantasize, can’t he?



from ScienceBlogs http://ift.tt/1I3gqQ7


Overlooked evidence - global warming may proceed faster than expected

It’s known as “single study syndrome”. When a new scientific paper is published suggesting that the climate is relatively insensitive to the increased greenhouse effect, potentially modestly downgrading the associated climate change threats, that sort of paper will generally receive disproportionate media attention. Because of that media attention, people will tend to remember the results of that single paper, and neglect the many recent studies that have arrived at very different conclusions.

Clouds Point to a Sensitive Climate

For example, there have been several recent studies finding that the global climate models that most accurately simulate observed changes in clouds and humidity over the past 10–15 years also happen to be the ones that are the most sensitive to the increased greenhouse effect. One such study, a 2012 paper by Kevin Trenberth and John Fasullo, concluded,

These results suggest a systematic deficiency in the drying effect of either subsident circulations or spurious mixing of moister air into the region in low-sensitivity models that directly relate to their projected changes in cloud amount and albedo … the results strongly suggest that the more sensitive models perform better, and indeed the less sensitive models are not adequate in replicating vital aspects of today’s climate.

A 2014 paper led by Steven Sherwood took a similar approach with similar results. The paper concluded,

The mixing inferred from observations appears to be sufficiently strong to imply a climate sensitivity of more than 3 degrees for a doubling of carbon dioxide. This is significantly higher than the currently accepted lower bound of 1.5 degrees, thereby constraining model projections towards relatively severe future warming.

Figure (derived from Sherwood et al. 2014, Fig. 5c) showing the relationship between the models’ estimate of Lower Tropospheric Mixing (LTMI) and sensitivity, along with estimates of the same metric from radiosondes and the MERRA and ERA-Interim reanalyses. Source: RealClimate.

Another 2014 paper, published by scientists from Caltech and UCLA, arrived at a similar conclusion, as lead author Hui Su explains,

This study used an index that represents how models capture the observed spatial structure of the Hadley Circulation and associated humidity and cloud distributions. We showed that the inter-model spread in climate sensitivity and cloud feedback is closely related to models’ Hadley Circulation change and present-day circulation strength varies systematically with models’ climate sensitivity (Figure 9). The stronger Hadley Circulation in the models, the higher climate sensitivity the models have (Figure 10). The observed circulation strength is on the high end of the modeled ones.

Clouds Hold the Key

Clouds are a key to determining the Earth’s climate sensitivity. We know that by itself, a doubling of the amount of carbon dioxide in the atmosphere will cause about 1.2°C of global warming. A warmer atmosphere will hold more water vapor, and since water vapor is itself a greenhouse gas, we know that increase will roughly double the carbon-caused warming (a “positive feedback”). We also know of some other significant positive feedbacks, like melting ice decreasing the reflectivity of the Earth’s surface, causing it to absorb more energy from the sun.
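
To see how these numbers combine, here is a minimal sketch of the standard feedback arithmetic in Python. The logarithmic forcing formula and the roughly 3.2 W/m^2 per kelvin no-feedback (Planck) response are textbook values, not figures taken from this article:

    import math

    # Radiative forcing from a doubling of atmospheric CO2
    # (standard logarithmic approximation)
    F_2x = 5.35 * math.log(2)   # ~3.7 W/m^2

    # No-feedback (Planck) response: roughly 3.2 W/m^2 of extra
    # outgoing radiation per kelvin of surface warming
    planck = 3.2                # W/m^2/K
    dT0 = F_2x / planck
    print(f"No-feedback warming: {dT0:.1f} C")   # ~1.2 C, as stated above

    # Feedbacks amplify this: in the standard gain formulation, dT = dT0 / (1 - f).
    # A net feedback fraction f = 0.5 doubles the response, which is roughly
    # what water vapor alone does.
    f = 0.5
    print(f"With water vapor feedback: {dT0 / (1 - f):.1f} C")   # ~2.3 C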

Those who argue that the Earth’s climate is relatively insensitive to the increased greenhouse effect need a big negative feedback to offset those factors we know amplify global warming. Clouds represent the only such plausible mechanism, because we don’t have a very good grasp on how different types of clouds will change in a hotter world.

For example, climate scientist contrarian Richard Lindzen came up with what’s known as the “iris hypothesis” in 2001, suggesting that in a warmer world, high cirrus clouds will contract like the iris of an eye to allow more heat to escape. That hypothesis has not withstood the test of time, however, with four studies published within a year of Lindzen’s paper effectively refuting it. One recent paper found that even if the iris effect is real, it would reduce the Earth’s climate sensitivity by no more than 20%, still well within the range of possible values outlined by the IPCC (a 3°C sensitivity trimmed by 20%, for instance, is still 2.4°C).

Not only have the aforementioned studies found that changes in humidity and clouds are consistent with simulations from more sensitive climate models, but previous research led by Andrew Dessler, and more recently by Kevin Trenberth and colleagues, has shown that observed changes in water vapor are amplifying global warming as expected, and that clouds are thus far acting to weakly amplify global warming. These observations are inconsistent with the strong cloud dampening effect contrarians need to justify arguments for low climate sensitivity.

Low Sensitivity Single Study Syndrome

There have been a few recent studies using what’s called an “energy balance model” approach, combining simple climate models with recent observational data, concluding that climate sensitivity is on the low end of IPCC estimates. However, subsequent research has identified some potentially serious flaws in this approach.
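
For concreteness, the core of the energy balance approach is a one-line estimate: equilibrium climate sensitivity is inferred as ECS = F_2x × ΔT / (ΔF − ΔQ), where ΔT is the observed warming, ΔF the change in radiative forcing, and ΔQ the change in the Earth’s energy imbalance (mostly ocean heat uptake). Here is a minimal sketch in Python, with round illustrative inputs rather than the values from any particular study:

    import math

    F_2x = 5.35 * math.log(2)  # forcing from a CO2 doubling, ~3.7 W/m^2
    dT = 0.75   # observed surface warming over the period (K); illustrative
    dF = 2.0    # change in radiative forcing (W/m^2); illustrative
    dQ = 0.6    # change in energy imbalance, i.e. ocean heat uptake (W/m^2); illustrative

    ecs = F_2x * dT / (dF - dQ)
    print(f"Energy-budget ECS estimate: {ecs:.1f} C")  # ~2.0 C with these inputs

Because every input is itself an uncertain observational estimate, modest biases in ΔF or ΔQ can pull the result toward the low end, which is the kind of flaw the subsequent research mentioned above has focused on.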

These types of studies have nevertheless been the focus of disproportionate attention. For example, in recent testimony before the US House of Representatives Committee on Science, Space and Technology, contrarian climate scientist Judith Curry said,

Recent data and research supports the importance of natural climate variability and calls into question the conclusion that humans are the dominant cause of recent climate change: … Reduced estimates of the sensitivity of climate to carbon dioxide

Curry referenced just one paper (using the energy balance model approach) to support that argument – the very definition of single study syndrome – plus an interpretation of a second paper whose author objected.

Click here to read the rest



from Skeptical Science http://ift.tt/1QTJs7l
