Watch: Four Gas Giants In Orbit Around Another Star For The First Time (Synopsis) [Starts With A Bang]

“The Beta Pic animation looked so cool that we’ve wanted to do more. We wanted to make one that was even more impactful for the audience and could begin to show what one of these systems looks like.” -Jason Wang

In 2004, humanity was able to take the first direct image of an exoplanet around its parent star by going to infrared wavelengths. Four years later, the system HR 8799 was determined to have three (later upgraded to four) exoplanets orbiting it. They could all not only be imaged, but imaged over time. As the planets continue to move in their orbits, follow-up observations have continued to track them.

An exoplanet detected around the star Fomalhaut, seen to move in multiple images over time. Image credit: NASA, ESA, and P. Kalas, University of California, Berkeley and SETI Institute.


For the first time, we can directly determine the orbital periods of planets around distant stars from direct imaging. When the next generation of space- and ground-based telescopes comes online, we should be able to directly image worlds around thousands of stars, including Earth-like planets around the nearest ones.

Four gas giants orbiting the star HR 8799. Image credit: Jason Wang / Christian Marois.


Come get the full story in pictures, animations and no more than 200 words on this edition of Mostly Mute Monday.



from ScienceBlogs http://ift.tt/2k8MvxM


DoD’s ‘Organ-on-a-Chip’ Innovation Wins Big

The full PuLMo artificial lung system in operation. (Courtesy: DTRA)

By Yolanda R. Arrington
DoD News, Defense Media Activity

These days, we want everything smaller. From cell phones to computers, the trend is to get as mobile and pocket-sized as possible. That trend isn’t just for our digital gadgets; scientists are also aiming for smaller tools and devices.

Researchers have created a miniature artificial lung that acts just like a real lung when exposed to drugs and toxins. Their recreated lung was recently named one of the 100 most technologically significant products of 2016.

A product of R&D Magazine (research and development), the R&D 100 Awards are often called the “Oscars of Invention” because they highlight the top technology products of the year. The Defense Threat Reduction Agency’s Joint Science and Technology Office funded work on the Pulmonary Lung Model, or PuLMo, which recently took home one of the R&D 100 Awards. The Los Alamos National Laboratory and Wake Forest University worked under DTRA contract to develop PuLMo and other human organs as part of the eX-vivo Capability for Evaluation and Licensure program, commonly called “organ-on-a-chip.”

What is PuLMo?

The PuLMo alveolar unit is readied for testing. (Courtesy: DTRA)


PuLMo is a miniature, tissue-engineered lung that revolutionizes the screening of new drugs or toxic agents. Currently, screening methods may not always accurately predict how a drug or agent will interact with the human body. PuLMo changes that. It gives researchers the ability to assess, in real time, how human organs react to various drugs.

How does PuLMo work?
PuLMo works by incorporating cells that mimic the human airway. The Los Alamos team built two submodels, a bronchiole unit and an alveolar unit, to recreate the human airway. The first focuses on tissue engineering and the co-culture of multiple cell types along with breathable membranes. The second mimics the air-flow dynamics of the human lung by recreating the branching of the late generations of the respiratory bronchiole and the alveolar sacs. These submodel units are connected on a microfluidic chip, called the Fluid Circuit Board, which helps manage the flows of air and tissue media.

Both submodels co-culture at least three different cell types from three different regions of the lung and include several physiological characteristics, such as mucus production and cyclic stretching of membranes.

PuLMo mimics the lungs’ response to medications, allowing researchers to quickly test multiple compounds in a human-like environment without extensive animal testing.

“Historically, drug trials have been done in many different types of animals. Some of them have passed animal testing, but in human trials they’ve failed or even killed people,” said Jennifer Harris of Biosecurity and Public Health at Los Alamos.

This innovation is a big step toward reducing animal use, acquiring human data, and improving researchers’ understanding of compound liabilities and toxic agents’ interactions with the human body.

How will PuLMo benefit the military?

Dr. Ronald Hann, director of the Joint Science and Technology Office at the Defense Threat Reduction Agency, examines the liver module of the eX-vivo Capability for Evaluation and Licensure project at Los Alamos National Laboratory. (Defense Threat Reduction Agency Chemical and Biological Technologies Department photo)


Scientists can use PuLMo to study how particles flow inside the lung. Flow dynamics are essential in researching how drugs, agents, smoke and other fumes affect the lungs, which could prove vital for devising targeted medical countermeasures for the warfighter.

This technological feat could save lives and money by allowing new drugs to be screened more effectively and reliably for their toxicity, as well as enabling better prediction of a new drug’s efficacy in humans. The PuLMo model also makes some previously impossible human threat-agent assessments possible for DoD.

The 2016 award for this technological advancement highlights the work that DTRA is doing to keep military science on the cutting edge.

RELATED LINKS: Surgeons Using Robots in Operating Rooms
Sensors on Scan
Neutralizing the Threat Of Chemical Warfare, One ‘Inator’ at a Time

Follow the Department of Defense on Facebook and Twitter!

———

Disclaimer: The appearance of hyperlinks does not constitute endorsement by the Department of Defense of this website or the information, products or services contained therein. For other than authorized activities such as military exchanges and Morale, Welfare and Recreation sites, the Department of Defense does not exercise any editorial control over the information you may find at these locations. Such links are provided consistent with the stated purpose of this DOD website.



from Armed with Science http://ift.tt/2juSevR


Where’s the moon? Waxing crescent

Steve Pauken caught this beautiful shot of the waxing crescent moon on January 29, 2017.

A waxing crescent moon – sometimes called a young moon – is always seen in the west after sunset.

At this moon phase, the Earth, moon and sun are located nearly on a line in space. If they were more precisely on a line, as they are at new moon, we wouldn’t see the moon. The moon would travel across the sky during the day, lost in the sun’s glare.

But a waxing crescent moon is far enough away from that Earth-sun line to be visible near the sun’s glare – that is, in the west after sunset. Although there are seasonal effects, in general a waxing moon is seen one day to several days after new moon. It’s always seen in the evening, and it’s always seen in the west. On these days, the moon rises one hour to several hours behind the sun and follows the sun across the sky during the day. When the sun sets, and the sky darkens, the moon pops into view in the western sky.

The moon is now waxing toward first quarter. Next first quarter moon will be February 4, 2017 at 04:19 UTC.

Next full moon is February 11 at 00:33 UTC. This full moon will stage a penumbral lunar eclipse.

Translate to your time zone.
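
If you’d rather do the conversion yourself than follow the link, a few lines of code will translate the UTC times listed above into your local clock time. Here’s a minimal sketch in Python; the “America/Chicago” zone is only an example, so substitute your own:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# First quarter moon: February 4, 2017 at 04:19 UTC (from the text above)
first_quarter_utc = datetime(2017, 2, 4, 4, 19, tzinfo=ZoneInfo("UTC"))

# "America/Chicago" is only an example -- substitute your own time zone
local = first_quarter_utc.astimezone(ZoneInfo("America/Chicago"))
print(local.strftime("%B %d, %Y at %H:%M %Z"))  # February 03, 2017 at 22:19 CST
```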

In late January, the waxing crescent moon will join up with the planets Venus and Mars in the west after sunset. Read more.

The New Year started out with a beautiful crescent moon. This day-lapse composite image combines the earthshine moon from New Year’s Day with the crescent moon from the following day. A wide-field image with Venus at sunset and more information on how to make day-lapse images is available from Robert Pettengill of Austin, Texas.

Note that a crescent moon has nothing to do with Earth’s shadow on the moon. The only time Earth’s shadow can fall on the moon is at full moon, during a lunar eclipse. There is a shadow on a crescent moon, but it’s the moon’s own shadow. Night on the moon happens on the part of the moon submerged in the moon’s own shadow. Likewise, night on Earth happens on the part of Earth submerged in Earth’s own shadow.

Because the waxing crescent moon is nearly on a line with the Earth and sun, its illuminated hemisphere – or day side – is facing mostly away from us. We see only a slender fraction of the day side: a crescent moon. Each evening, because the moon is moving eastward in orbit around Earth, the moon appears farther from the sunset glare. It is moving farther from the Earth-sun line in space. Each evening, as the moon’s orbital motion carries it away from the Earth-sun line, we see more of the moon’s day side. Thus the crescent in the west after sunset appears to wax, or grow fatter each evening.

Here’s more to look for in tonight’s sky

The pale glow on the darkened portion (night side) of a crescent moon is called earthshine. It is caused by light reflected from Earth’s day side onto the moon. After all, when you see a crescent moon in Earth’s sky, any moon people looking back at our world would see a nearly full Earth. Read more: What is earthshine?

As the moon orbits Earth, it changes phase in an orderly way. Follow these links to understand the various phases of the moon.

Four keys to understanding moon phases

Where’s the moon? Waxing crescent
Where’s the moon? First quarter
Where’s the moon? Waxing gibbous
What’s special about a full moon?
Where’s the moon? Waning gibbous
Where’s the moon? Last quarter
Where’s the moon? Waning crescent
Where’s the moon? New phase

Check out EarthSky’s guide to the bright planets.



from EarthSky http://ift.tt/1trITpz


How big are the biggest monster stars?

Near-infrared image of the R136 cluster, from ESO’s Very Large Telescope. The most massive known star, labeled R136a1, is located at the center of the image. Image via ESO/ P. Crowther/ C.J. Evans.

When speaking of bigness among stars, you have to define your terms. There are very heavy stars. And there are gigantic stars, in terms of sheer physical size. The heaviest star is thought to be R136a1. It’s 265 times more massive than our sun – nearly twice as massive as what astronomers thought was possible. It’s the most massive star known at this time. But there are more ways than one to measure stars’ bigness. In terms of sheer physical size, the star UY Scuti is considered the biggest known. It’s only 30 times the sun’s mass, but has a radius more than 1,700 times greater than the sun’s. Follow the links below to learn more about these monster stars.

R136a1 is the heaviest star, with 265 times the sun’s mass

UY Scuti is just plain big, with a radius 1,700 times that of our sun

Left to right: a red dwarf, the Sun, a blue dwarf, and R136a1. R136a1 is not the largest known star in terms of radius or volume, only in mass and luminosity. Image via Wikipedia.

R136a1 is the heaviest star, with 265 times the sun’s mass. Located in the Large Magellanic Cloud – some 160,000 light-years away – R136a1 is what’s known as a Wolf–Rayet star. Its surface temperature is over 100,000 degrees F. It’s also the most luminous star known at more than 7 million times the luminosity of our sun.

For decades, theories have suggested that no stars can be born by ordinary processes above 150 solar masses. So how did R136a1 and stars like it grow so large? And why aren’t monster stars scattered throughout space?

One idea is that supermassive stars like R136a1 form through mergers of multiple stars. In 2012, astronomers at the University of Bonn suggested that the ultramassive stars in the Large Magellanic Cloud – such as R136a1 – were created when lighter stars in tight double-star systems merged.

Still, double-star systems are common. So why don’t we see more super-sized stars? The astronomers in Bonn say it’s because these stars formed under special conditions – in a densely packed star cluster. In a closely packed star cluster, double-stars are more likely to encounter each other and merge.

But if these ultramassive stars form in this way, why don’t we see more of them? After all, multiple star systems are common throughout space, while monster stars are few and far between.

The answer may be that monster stars don’t live very long. They evolve very quickly in contrast to less massive stars like our sun. They end their lives in violent supernova explosions.

List of most massive stars

Enjoying EarthSky so far? Sign up for our free daily newsletter today!

UY Scuti size comparison to the sun. Image by Philip Park (CC BY) via Jillian Scudder.

UY Scuti is just plain big, with a radius 1,700 times that of our sun. Located some 9,500 light-years away, this star is the leading candidate for being the largest known star. Astrophysicist Jillian Scudder of the University of Sussex has said of this star:

Mass and physical size don’t always correlate for stars, particularly for giant stars.

UY Scuti is thought to have a mass only slightly more than 30 times the mass of our sun. But its radius is thought to be something like 1,700 times greater than the radius of the sun. That would give this star a radius of nearly eight astronomical units – eight times the distance between the Earth and sun. In other words, this single star is so large that its outer surface would extend far beyond the orbit of the planet Jupiter (which lies about five times farther from the sun than Earth). Scudder said:

This star is one of a class of stars that varies in brightness because it varies in size, so this number is also likely to change over time. The margin of error on this measurement is about 192 solar radii. This uncertainty is why I used ‘possibly one of the largest stars’ in my description of UY Scuti. If it is smaller by 192 solar radii, there are a few other candidates that would beat UY Scuti.
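
The arithmetic behind that “nearly eight astronomical units” figure is easy to check. Here’s a rough back-of-the-envelope sketch in Python, using standard approximate constants that are not taken from the article itself:

```python
# Rough check of UY Scuti's size, using approximate constants
SOLAR_RADIUS_KM = 6.96e5   # about 696,000 km
AU_KM = 1.496e8            # about 149.6 million km (the Earth-sun distance)
JUPITER_ORBIT_AU = 5.2     # Jupiter's average distance from the sun, in AU

radius_au = 1700 * SOLAR_RADIUS_KM / AU_KM
print(f"Radius: {radius_au:.1f} AU")                                      # ~7.9 AU
print(f"Extends beyond Jupiter's orbit? {radius_au > JUPITER_ORBIT_AU}")  # True
```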

Who are those other candidates? They would include NML Cygni, whose estimated distance is about 5,300 light-years away and whose radius is thought to be 1,650 times greater than that of our sun. A recent study of this star suggested that it’s an unusual hypergiant star cocooned within a nebula and severely obscured by dust. Thus we don’t know its size exactly, and the true range might be between 1,642 and 2,775 solar radii. The upper part of the range would make it larger than UY Scuti.

Another hypergiant star is WOH G64, also in the Large Magellanic Cloud, and thus located at a distance of some 168,000 light years from Earth. At an estimated 1,540 times the sun’s radius, this star is thought to be the largest star in the Large Magellanic Cloud, in terms of sheer physical size. And, again, we’re talking size here, not mass. This star is thought to have only 25 times the sun’s mass.

List of largest known stars

So you can see that there are extremely heavy stars … and then there are simply gigantic stars. What makes a star big might be its mass (like R136a1) or its physical size (like UY Scuti and the two other stars mentioned here). Either way, it’s fun to imagine what it would be like to have one of these stars relatively close to us in space … say, the distance to the nearest star system, Alpha Centauri, only four light-years away.

At that distance, any of these stars would blaze in our night sky!

Bottom line: The most massive star known is R136a1, in the Large Magellanic Cloud. It’s thought to be about 265 times more massive than our sun. Stars are also considered big if their sheer physical size is big. In terms of sheer size, UY Scuti is the biggest known star.



from EarthSky http://ift.tt/16UpG8K


See it! Young moon, Venus, Mars

Mars, Venus, our moon and Earth, as seen in New Mexico by Peter Rodney Breaux, January 29, 2017. Venus is the bright one above the moon; Mars is fainter, reddish and above Venus.

Genevieve Martin in San Antonio, Texas caught the January 29, 2017 moon and Venus on her evening walk.

Here are Venus and Mars seen from Earth’s Southern Hemisphere on January 29, 2017. Notice that their orientation to the horizon is different from what we see in this hemisphere. Helio C. Vital, who captured this photo, wrote: “This photo shows the planets Venus and Mars separated by only 5.5° in the evening sky over Rio de Janeiro, Brazil. Venus was 5.6 magnitudes (or almost 200 times) brighter than Mars.”
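
For readers wondering how “5.6 magnitudes” becomes “almost 200 times” brighter: the astronomical magnitude scale is logarithmic, with every 5-magnitude step corresponding to a factor of 100 in brightness. A quick sketch of the conversion:

```python
# Each 5-magnitude difference corresponds to a factor of 100 in brightness,
# so the ratio is 100 raised to the power (delta_m / 5).
def brightness_ratio(delta_magnitude):
    return 100 ** (delta_magnitude / 5)

print(round(brightness_ratio(5.6)))  # ~174, i.e. "almost 200 times" brighter
```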

If you looked at Venus through a telescope now, here’s what you’d see. The planet will soon pass between the Earth and sun, and it’s showing a waning crescent phase toward Earth. January 29, 2017 photo by Alex Ustick. “Venus at -4.6 magnitude tonight. Brightness reduced to show detail.”

Moon and Venus on January 29, 2017 from Chintan Gadani in Ahmedabad, India.

This photo resonated with the weekend’s events. On a plaque mounted on this statue, it says: “Give me your tired, your poor, your huddled masses yearning to breathe free…” Waxing crescent moon partly shadowed by clouds, with Statue of Liberty below, as captured on January 29, 2017 by Gowrishankar Lakshminarayanan.

Kelly Thomas in Hetch Hetchy, California caught the very young moon on January 28, 2017.

By the way, Venus and Mars aren’t the only planets in the west after sunset. Uranus is there, too, invisible to the eye alone. Chirag Upreti in Rajasthan, India caught it on January 27. You’ll see it in the inset at right. He wrote: “In this image you can see Venus shine through the ‘Babul’ tree (known locally by that name; taxonomic synonym Acacia nilotica), a native of this dry and semiarid climate.”



from EarthSky http://ift.tt/2jJwRu8


Is there a reproducibility “crisis” in biomedical research? (2017 edition) [Respectful Insolence]

About a week ago, I happened upon a number of stories about a study and project that demonstrates a key difference between science and pseudoscience. They had titles like, “Rigorous replication effort succeeds for just two of five cancer papers” (Science), “Cancer reproducibility project releases first results: An open-science effort to replicate dozens of cancer-biology studies is off to a confusing start” (Nature), and “What Does It Mean When Cancer Findings Can’t Be Reproduced?” (NPR). Basically, these stories all reference a review of the initial results of the Reproducibility Project in Cancer Biology. The studies are summed up in an overview by Brian A. Nosek and Timothy M. Errington (“Making sense of replications“) and an editorial published by eLife, the open-access journal reporting the results of various reproducibility projects (“Reproducibility in cancer biology: The challenges of replication“). After all the politically-charged topics I’ve been dealing with the last week, a post purely about science is just the break I need. So let’s dig in, starting with some background, noting that only two of the five papers could be rigorously replicated.

Reproducibility: One cornerstone of science

Reproducibility is key to science. If science is the best method that we have of figuring out how nature works, if our hypotheses and theories are to have any basis in reality, then the observations upon which those hypotheses and theories are based must be reproducible. To the average lay person without a background in science, this doesn’t sound like a particularly difficult issue. After an interesting scientific paper is published, why can’t other scientists just do what the scientists publishing the paper did? However, as any scientist knows, particularly biological scientists, it’s nowhere near that simple. First, there is little or no reward for just reproducing the work of other scientists. Certainly, a scientist is not going to get a grant to reproduce those results, and publications reporting reproduced results will not be published in high impact journals. As the Reproducibility Project: Cancer Biology puts it:

Despite being a defining feature of science, reproducibility is more an assumption than a practice in the present scientific ecosystem (Collins, 1985; Schmidt, 2009). Incentives for scientific achievement prioritize innovation over replication (Alberts et al., 2014; Nosek, et al., 2012). Peer review tends to favor manuscripts that contain new findings over those that improve our understanding of a previously published finding. Moreover, careers are made by producing exciting new results at the frontiers of knowledge, not by verifying prior discoveries.

Which is, of course, true. Scientists go into science in the first place to make new discoveries, and translational scientists go into cancer research to discover new understandings of what causes cancer and how to use those new understandings to find new and innovative treatments for cancer.

Usually, one of the only times it’s deemed worthwhile to reproduce another scientist’s results is as the first step to trying to expand on the observations of that scientist, and that in fact is probably how most scientific research is replicated when it is replicated. Basically, you have to know that you’re doing things the same way and getting the same results using the same materials and methods before you can build on those results. Even so, such replications are usually not direct or complete replications; usually scientists only replicate as little as they need to assure themselves they’re on the right track. Complete sets of experiments are rarely replicated; the more expensive and time-consuming the experiment, the less frequently it is replicated.

Another aspect of reproducibility is how well scientists record their methods in scientific papers; i.e., the transparency of science. The standard should be to record the methods in sufficient detail that a scientist knowledgeable in the field could replicate the experiments using the published description alone, but that standard is rarely met. If you read a number of scientific papers, you will find that there is huge variability in the amount of detail provided in their Methods sections. For some journals, like Cell, the amount of detail is pretty high, although often still not high enough to easily reproduce an experiment. For other journals (like, ironically enough, very high impact journals like Science and Nature), the level of detail can be frustratingly low. For most journals, it’s somewhere in between. I, like any other scientist, know from personal experience, particularly during graduate school and my PhD studies, just how difficult it can be to look at the Methods section of a paper and figure out how to replicate an experiment as the first step toward performing additional experiments. Not uncommonly, it was necessary to contact the lab that published the work I was trying to replicate. Sometimes we needed their reagents, such as plasmids or other recombinant DNA constructs. Sometimes we needed help troubleshooting when we didn’t get the same results.

Again, as the Reproducibility Project: Cancer Biology puts it:

Reproducing prior results is challenging because of insufficient, incomplete, or inaccurate reporting of methodologies (Hess, 2011; Prinz et al., 2011; Steward et al., 2012; Hackam and Redelmeier, 2006; Landis et al., 2011). Further, a lack of information about research resources makes it difficult or impossible to determine what was used in a published study (Vasilevsky et al., 2013). These challenges are compounded by the lack of funding support available from agencies and foundations to support replication research. When replications are performed, they are rarely published (Collins, 1985; Schmidt, 2009). A literature review in psychological science, for example, estimated that 0.15% of the published results were direct replications of prior published results (Makel et al., 2012). Finally, reproducing analyses with prior data is difficult because researchers are often reluctant to share data, even when required by funding bodies or scientific societies (Wicherts et al., 2006), and because data loss increases rapidly with time after publication (Vines et al., 2014).

Finally, although not really discussed that much, there are intangible reasons—or seemingly intangible reasons—why it can be difficult to reproduce research. Some experimental techniques, for example, require considerable skill to produce meaningful measurements. Immunofluorescence, for instance, is one, particularly when using multiple antibodies to label different proteins with different fluorescent colors. Techniques that depend on surgical skill on small animals (e.g., mice and other rodents) are another. I’ve known a few scientists over the years who suddenly had trouble reproducing their own work when a skilled technician or postdoc left the lab. The explanation was not fraud but rather because the remaining personnel didn’t know all the ins and outs of the experimental technique. It’s not uncommon for a lot of time to be wasted due to loss of skilled personnel as those left behind troubleshoot and figure out subtleties of an experimental technique that aren’t recorded in their lab protocol books, no matter how detailed. Basically, the “institutional” memory of a laboratory is difficult to maintain, given that, other than the principal investigator and (sometimes) a permanent technician and/or lab manager, most personnel in labs are only there for at most a few years to get their PhD or do a postdoctoral fellowship. Turnover is high by design. Often there are little “tricks” or nuances to various experimental techniques to get them to work well that are lost when someone leaves a lab. That’s why maintaining protocol notebooks is so important, but few labs do this as rigorously as they should, and even detailed protocol books aren’t always enough.

Is there a “reproducibility crisis”?

Part of the impetus to form the Reproducibility Project, a collaboration between Science Exchange and the Center for Open Science, was based on the perception that there is a “crisis” in reproducibility. Although I know there were papers and commentaries dating long before that, the first commentary that brought this topic to the public consciousness in a big way was written by C. Glenn Begley, a consultant for Amgen, and Lee M. Ellis, a cancer surgeon at the University of Texas M.D. Anderson Cancer Center, that concluded that 47 out of 53 “landmark” preclinical studies in cancer (i.e., basic science studies in cancer) couldn’t be replicated by Amgen sufficiently rigorously to proceed with using the results to design drugs to target the interventions. As I pointed out at the time, that was a very high bar for any finding in science, given that not all discoveries of molecular targets or mechanisms would necessarily be druggable or suitable for therapy. I also noted that the papers were from high impact journals which are known for publishing only the most “cutting edge” science, which tends to be the kind of science whose findings are most often later overturned or found to be incorrect.

Of course, there are other studies and other indications. Just last year, Nature published a survey that found that more than 70% of scientists have failed to reproduce another scientist’s experiment and that 50% have even failed to reproduce their own. Some 52% of the scientists surveyed thought that there was a reproducibility “crisis.” I agreed that there was a problem, but I don’t believe it is a “crisis.”

Reproducibility: A personal anecdote

To illustrate the complexities of “reproducibility,” I often recount an incident from my early scientific career as a surgical oncology fellow working in a radiation oncology laboratory in the late 1990s. At the time, Dr. Judah Folkman had recently published papers describing the angiogenesis inhibitors angiostatin and endostatin and how strikingly they shrank tumors down to the point where they became dormant as a small clump of cells that didn’t grow. There were a lot of exaggerated headlines at the time along the lines of “Is this the cure for cancer?” (I shudder to think what reporting would have been like if Facebook, Twitter, and the like had existed at the time.) Angiogenesis inhibitors block the action of the factors a tumor secretes to induce the growth of new blood vessels that feed it, which is how the tumor hijacks the normal physiologic process of angiogenesis. Our laboratory wanted to combine angiostatin and radiation therapy in an animal model to see if the effects were additive or synergistic.

Our results were ultimately published in Nature, the only Nature paper on my CV, but the path to these results was not straight. It was widely known through the grapevine at the time that other laboratories were having difficulty reproducing Folkman’s striking results. In our case, we were not observing nearly as potent an antitumor effect as Folkman had described with angiostatin in our angiostatin-alone group, which we wanted to compare with a group of mice treated with both angiostatin and radiation therapy. We wondered if it had something to do with the angiostatin itself, which was being made in bacteria from a plasmid by our collaborators. Given that Folkman was one of the best scientists I ever met, none of us doubted his results, and we assumed that it must be something we were doing.

It actually was. We contacted Folkman, who provided reagents, protocols, and advice, as well as some angiostatin made in his laboratory. It turns out that the peptide we were making was easily denatured (unfolded), which was why it was not as potent as Folkman had reported. Now here’s why I say we couldn’t replicate his results. It’s because we couldn’t fully replicate his results. Our angiostatin inhibited the growth of a wide variety of tumors, but, even after applying the tweaks to our angiostatin production suggested by Folkman, in our hands angiostatin never inhibited tumor growth as potently as Folkman had reported. So in other words, there could easily have been something else going on that we never figured out. Be that as it may, Folkman had the best attitude I’ve ever seen in a scientist regarding reproducibility, as we learned later when we heard of how he had done the same thing for several other labs, even to the point of dispatching one of his postdocs to help other investigators to get angiostatin and endostatin to work. Still, few investigators could ever quite replicate Folkman’s initial results, although many, including our lab, demonstrated that angiostatin and endostatin were potent angiogenesis inhibitors.

So why do I repeat this anecdote almost every time discussions of scientific reproducibility come up? Simple. It’s to illustrate that reproducibility falls on a spectrum. Did we fail to reproduce Folkman’s results? Yes and no. Yes, we reproduced the key result that angiostatin inhibits tumor growth by blocking angiogenesis, but, no, we didn’t reproduce the same very powerful effect size reported by Folkman. The point is that replication of any given scientific finding can range from total failure to replicate (e.g., if we had failed to show any antitumor effect of angiostatin at all) to partial failure to replicate (e.g., what actually happened) to success at replication (e.g., if we had shown angiostatin blocking tumor growth in the angiostatin-alone group as powerfully as Folkman had). It all becomes even more confusing when we consider that there is no standard definition of or clear consensus over what does and doesn’t constitute scientific reproducibility.

One of the strengths of the initial results of the Reproducibility Project: Cancer Biology, is that this messiness is appreciated and explained.

Reproducibility in Cancer Biology Research: The setup

Before I discuss the results, I should explain a bit more about what the Reproducibility Project: Cancer Biology is and does. In brief, the Reproducibility Project: Cancer Biology, which was founded in 2014, established a core team to design, prepare, and monitor project operations dedicated to testing the reproducibility of results reported in 50 high-impact papers published between 2010 and 2012. The plan was to replicate a subset of experimental results from each article. For each chosen paper a Registered Report detailing the proposed experimental designs and protocols for each subset of experiments to be replicated was to be peer reviewed and published prior to data collection. Following completion of data collection, results were to be published as a Replication Study. The report that made the news last week represents the results of Replication Studies for the first five papers chosen.

Here are the five papers, with links to the Registered Report and the Replication Study for each:

  1. BET Bromodomain Inhibition as a Therapeutic Strategy to Target c-Myc (Registered Report; Replication Study).
  2. Coadministration of a Tumor-Penetrating Peptide Enhances the Efficacy of Cancer Drugs (Registered Report; Replication Study).
  3. Discovery and Preclinical Validation of Drug Indications Using Compendia of Public Gene Expression Data (Registered Report; Replication Study).
  4. The CD47-signal regulatory protein alpha (SIRPa) interaction is a therapeutic target for human solid tumors (Registered Report; Replication Study).
  5. Melanoma genome sequencing reveals frequent PREX2 mutations (Registered Report; Replication Study).

There’s your reading assignment. No, just kidding. I only provide the links to make it easier for interested readers to check out the studies and replication studies if they are so inclined. Also, it’s easier to refer to each study by number. But how were these studies chosen? That’s a fair question.

The sampling frame was defined as the 400 most cited papers from both Scopus and Web of Science using the search terms (cancer, onco*, tumor*, metasta*, neoplas*, malignan*, carcino*) for 2010, 2011, and 2012. Citations were counted from all sources, which include primary research articles and reviews. This produced an initial sample of 501 articles from 2010, 444 from 2011, and 438 from 2012. Altmetrics scores from Mendeley and Altmetric.com were collected for the entire dataset and used to create a final impact score for each paper. Citation rates and altmetric scores were each standardized by dividing each metric by the highest in the dataset to give each paper a normalized metric score between 0 and 1, which was summed to create an aggregate impact score. Within each year, articles were reviewed for inclusion eligibility starting with the highest aggregate impact article. Articles were removed if they were clinical trials, case studies, reviews, or if they required specialized samples, techniques, or equipment that would be difficult or impossible to obtain. Also, articles reporting sequencing results, such as publications from The Cancer Genome Atlas project, were excluded. However, if sequencing or proteomic experiments were only part of an article, the other experiments in those papers could still be eligible. Review of articles continued until a total of 50 articles, about one-third from each year, were identified as eligible. The final set included 17 papers from 2010, 17 from 2011, and 16 from 2012. From each paper, a subset of experiments were identified for replication, prioritizing those that support the main conclusions of the paper while also attending to feasibility and resource constraints.
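
In other words, each paper’s citation count and altmetric score were rescaled to a 0–1 range by dividing by the dataset maximum, and the two normalized values were summed. Below is a minimal sketch of that scoring step; the field names and example numbers are hypothetical, not taken from the project’s actual code or data:

```python
def rank_by_aggregate_impact(papers):
    """Sum of max-normalized citation and altmetric scores (each 0-1, so the sum is 0-2)."""
    max_citations = max(p["citations"] for p in papers)
    max_altmetric = max(p["altmetric"] for p in papers)
    for p in papers:
        p["impact"] = p["citations"] / max_citations + p["altmetric"] / max_altmetric
    # Papers would then be reviewed for eligibility starting from the highest score
    return sorted(papers, key=lambda p: p["impact"], reverse=True)

# Hypothetical example data
papers = [
    {"title": "Paper A", "citations": 850, "altmetric": 120},
    {"title": "Paper B", "citations": 400, "altmetric": 300},
]
print([p["title"] for p in rank_by_aggregate_impact(papers)])  # ['Paper B', 'Paper A']
```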

So what we are seeing is just one-tenth of the original plan. Actually, it’s a slightly larger fraction than that, because in late 2015, the Reproducibility Project: Cancer Biology announced that it had to cut back its ambitions and now plans to do only 37 papers, largely because of budgetary constraints. As a bit of a reality check for reproducibility efforts, I note that the project had originally budgeted around $25,000 to $35,000 for each experiment, but this figure turned out to be too low. Thanks to time-consuming peer reviews, material transfer agreements, and expensive animal experiments, the team came up with a new estimated cost of $40,000 per experiment on average. This is, of course, one reason why replications are not done nearly as often as they should be. In fact, only 29 will now be done thanks to both budgetary problems and the difficulties investigators had obtaining information and materials.

I also note an inherent bias in the choice of papers. Highly cited reports, or “high impact” reports, tend to be the most novel, interesting, or even controversial (i.e., again, “cutting edge”), which also means that, again, they are “frontier science,” whose results are more frequently overturned.

Before I discuss the actual results, here’s another wrinkle, thrown in to emphasize the complexity involved in these replication studies. I can’t help but briefly quote Nosek and Errington themselves:

There is no such thing as exact replication because there are always differences between the original study and the replication. These differences could be obvious (like the date, the location of the experiment, or the experimenters) or they could be more subtle (like small differences in reagents or the execution of experimental protocols). As a consequence, repeating the methodology does not mean an exact replication, but rather the repetition of what is presumed to matter for obtaining the original result.

Direct replication is defined as attempting to reproduce a previously observed result with a procedure that provides no a priori reason to expect a different outcome (Open Science Collaboration, 2015; Schmidt, 2009). In a direct replication, protocols from the original study are followed with different samples of the same or similar materials: as such, a direct replication reflects the current beliefs about what is needed to produce a finding. Conducting a direct replication tests those beliefs empirically. In a conceptual replication, on the other hand, a different methodology (such as a different experimental technique or a different model of a disease) is used to test the same hypothesis: as such, by employing multiple methodologies conceptual replications can provide evidence that enables researchers to converge on an explanation for a finding that is not dependent on any one methodology.

Most replication in science is conceptual replication.

Reproducibility in cancer research: The results

So here is how Nosek and Errington describe the results of the first five replication papers:

The first five Replication Studies have now been published. Two of the studies reproduced important parts of the original papers (Kandela et al., 2017; Aird et al., 2017), and one did not (Mantis et al., 2017). The other two Replication Studies were uninterpretable because the control tumors grew too quickly or too slowly (or exhibited spontaneous regressions) to reliably measure whether the experimental intervention had the predicted effect (Horrigan et al., 2017a; Horrigan et al., 2017b): however, in one of these two cases the original paper (Willingham et al., 2012) has led to clinical trials for anti-CD47 antibody therapy that will provide extensive additional data on the effectiveness of this approach. Three of the Replication Studies are also accompanied by Insight articles (Dang, 2017; Davis, 2017; Sun and Gao, 2017).

First, let’s look at the clear failure to replicate (#2). The original paper reported the effects of a tumor-penetrating peptide (a short protein), the iRGD peptide, which in the paper increased cellular uptake of the chemotherapy agent doxorubicin in a xenograft model of prostate cancer. A xenograft model involves injecting human tumor cells into immunosuppressed mice and measuring their growth and the ability of the intervention being tested to inhibit or reverse that growth. Basically, the Replication Study failed to find statistically significant differences in the penetration of doxorubicin into the tumor cells, in the tumor weight for mice treated with doxorubicin (DOX) plus iRGD compared to DOX alone, or in a measure of programmed cell death (apoptosis).

Neither of the two studies reported to be replicated (#1 and #3) was exactly a resounding replication, either. For example, in #3, the results were mixed, as described in an accompanying commentary. In the original paper, the authors noted that cimetidine induced the death of the human lung cancer cell line A549. So they tested three doses of the drug against A549 tumor xenografts (tumor cells implanted in mice) and observed that cimetidine decreased the growth rate of these tumors, an effect that was statistically significant and nearly as strong as that of low-dose doxorubicin, a chemotherapy agent. In the Replication Study, cimetidine still produced decreases in tumor size, but they were not statistically significant when a Bonferroni correction for multiple comparisons was applied. However, a statistically significant effect was observed when the dataset from the Replication Study was combined with that from the original paper in a meta-analysis. What stood out to me was that doxorubicin also failed to produce a statistically significant result in the Replication Study. Doxorubicin is a powerful chemotherapeutic agent, so I have to ask what was going on in the Replication Study. Be that as it may, as noted in the commentary, there can be many factors that influence the robustness of the xenograft models used in these experiments, including “batch effects on the efficacy of the drugs used; changes in the properties of cell lines over time; the strains of the mice used, and also their sex; factors related to microbiome and chow; circadian effects; temperature; and the antimicrobials that might be used in certain facilities.”
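
For readers less familiar with the statistics mentioned here, the sketch below shows, in the simplest possible form, the two steps in question: a Bonferroni correction across several dose comparisons, and a fixed-effect (inverse-variance) meta-analysis pooling the original and replication estimates. This is my own illustration with made-up numbers, not the actual analysis from the Replication Study.

```python
# Minimal sketch of a Bonferroni correction and a fixed-effect meta-analysis.
# All numbers are invented for illustration only.
import numpy as np
from scipy import stats

# Bonferroni: multiply each raw p-value by the number of comparisons (capped at 1).
raw_p = np.array([0.02, 0.04, 0.30])          # e.g., three doses vs. control
bonferroni_p = np.minimum(raw_p * len(raw_p), 1.0)
print("Bonferroni-adjusted p:", bonferroni_p)   # nominally significant results can become non-significant

# Fixed-effect meta-analysis: pool two effect estimates by inverse-variance weighting.
effects = np.array([-0.9, -0.4])               # hypothetical effect sizes (original, replication)
se = np.array([0.35, 0.30])                    # their standard errors
w = 1.0 / se**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
z = pooled / pooled_se
p = 2 * stats.norm.sf(abs(z))
print(f"pooled effect = {pooled:.2f} (SE {pooled_se:.2f}), p = {p:.3f}")
```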

The second study (#1) was also somewhat mixed. As noted in an accompanying commentary, the treatment tested in mice did decrease the level of c-Myc (the gene targeted) in multiple myeloma cell lines and did increase the overall survival of the mice in the tumor model used, but the results as measured by bioluminescence were not statistically significant. It is speculated that this was because many of the control mice had to be euthanized early, before their pre-specified endpoint, because of disease progression and high tumor burden.
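
As an aside, overall survival comparisons of this sort are typically done with Kaplan-Meier curves and a log-rank test, which can handle animals that reach a humane endpoint early (counted as events) alongside animals still alive at the end of the study (censored). A minimal sketch, with invented survival times rather than data from any of these studies, might look like this:

```python
# Minimal sketch of a mouse survival comparison of the kind described above.
# Survival times (days) are invented; 1 = died or euthanized for tumor burden, 0 = censored.
from lifelines.statistics import logrank_test

control_days  = [18, 21, 22, 25, 27, 30]
control_event = [1, 1, 1, 1, 1, 1]          # all controls reached a humane endpoint
treated_days  = [28, 33, 40, 45, 45, 45]
treated_event = [1, 1, 1, 0, 0, 0]          # three treated mice still alive at day 45 (censored)

result = logrank_test(control_days, treated_days,
                      event_observed_A=control_event,
                      event_observed_B=treated_event)
print(f"log-rank p-value: {result.p_value:.3f}")
```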

In fact, anyone who’s ever done tumor xenograft experiments in mice has encountered the problem of tumors either growing so fast that the mice have to be euthanized prior to the planned end of the experiment or not growing at all. Certainly I have. Indeed, that is the reason the other two studies were uninterpretable. In #4, several tumors exhibited spontaneous regression, and in the other Replication Study (#5) melanoma xenografts in the control group grew much faster than they did in the original study, which made the accelerated tumor growth due to mutations in the PREX2 gene very difficult to detect compared to the original study. Even the author of the accompanying commentary seemed puzzled:

This Replication Study represents a cautionary tale concerning the impact of biological variability on experimental design. While strenuous efforts were made to precisely copy the experimental conditions employed in the original study, the xenografts in the Replication Study behaved in a fundamentally different way to those in the original study. The mechanistic basis for the observed differences is unclear. Presumably, there was a difference in the melanoma cells and/or the mice. Although the cells were obtained from the same source, small differences in culture conditions or passage history could have contributed to differences between the studies. Similarly, although the mice were obtained from the same source, housing the animals in a different facility may have contributed to differences between the studies.

My guess, as someone who’s done quite a few xenograft experiments in a career dating back to the 1990s, is that, for whatever reason, the number of tumor cells injected in the Replication Study was too high for the mice and conditions. There is a lot of biological variability in the behavior of tumors and a lot of factors that can affect their growth that might not be obvious. That’s why, if you’re doing experiments at a different institution, doing preliminary dose-response experiments to determine the optimal number of cells to inject is highly advisable. Also, what all of the Replication Studies suggest, whether they replicated key parts of the original studies or not, is that many of the animal experiments reported in the literature are underpowered to test the hypotheses under consideration.
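
To give a feel for what "underpowered" means in practice, here is a quick back-of-the-envelope power calculation. The effect size and group sizes are assumptions I picked for illustration, not numbers from any of these studies.

```python
# Back-of-the-envelope power calculation for a two-group mouse experiment.
# Effect size and group sizes are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# With n = 8 mice per arm and a fairly large effect (Cohen's d = 1.0),
# a two-sided t-test at alpha = 0.05 still has well under 80% power.
power = analysis.power(effect_size=1.0, nobs1=8, alpha=0.05, ratio=1.0)
print(f"power with n=8 per group: {power:.2f}")

# Group size needed per arm to reach 80% power for the same effect size.
n_needed = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.8, ratio=1.0)
print(f"mice per group for 80% power: {n_needed:.1f}")
```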

Scientists react

Unsurprisingly, some scientists are not exactly fans of the Reproducibility Project. For example, Robert Weinberg, one of whose studies is the subject of a replication currently on hold, isn’t thrilled, having said, “It’s a naïveté that by simply embracing this ethic, which sounds eminently reasonable, that one can clean out the Augean stables of science.” Maybe, but he doesn’t say exactly what’s wrong with this ethic, either.

Other scientists react this way:

This past January, the cancer reproducibility project published its protocol for replicating the experiments, and the waiting began for [Richard] Young to see whether his work will hold up in their hands. He says that if the project does match his results, it will be unsurprising —the paper’s findings have already been reproduced. If it doesn’t, a lack of expertise in the replicating lab may be responsible. Either way, the project seems a waste of time, Young says. “I am a huge fan of reproducibility. But this mechanism is not the way to test it.”

Others say:

Almost every scientist targeted by the project who spoke with Science agrees that studies in cancer biology, as in many other fields, too often turn out to be irreproducible, for reasons such as problematic reagents and the fickleness of biological systems. But few feel comfortable with this particular effort, which plans to announce its findings in coming months. Their reactions range from annoyance to anxiety to outrage. “It’s an admirable, ambitious effort. I like the concept,” says cancer geneticist Todd Golub of the Broad Institute in Cambridge, who has a paper on the group’s list. But he is “concerned about a single group using scientists without deep expertise to reproduce decades of complicated, nuanced experiments.”

This is not an unreasonable concern. Nor is that of Erkki Ruoslahti, author of the one study that clearly failed to replicate:

Ruoslahti, a cancer biologist at the Sanford Burnham Prebys Medical Discovery Institute in La Jolla, California, disputes the verdict on his research. After all, at least ten laboratories in the United States, Europe, China, South Korea and Japan have validated the 2010 paper in which he first reported the value of the drug, a peptide designed to penetrate tumours and enhance the cancer-killing power of other chemotherapy agents. “Have three generations of postdocs in my lab fooled themselves, and all these other people done the same? I have a hard time believing that,” he says.

This is, of course, an argument in favor of conceptual replication. Yes, it is self-serving, but that doesn’t mean it’s not a valid argument.

Is there a crisis in cancer biology research reproducibility?

Traditionally, science is not usually replicated through the direct replication of key experiments in papers, as the Reproducibility Project: Cancer Biology is doing. It is usually replicated through other laboratories doing what Ruoslahti describes: testing the same hypothesis using different methods and taking the next steps. Unsurprisingly, Ruoslahti is concerned that the recently published report will harm his ability to raise capital for DrugCendR, a company in La Jolla that he founded to develop his therapy. Is that fair? It might not be, if his results have indeed been replicated by other groups. On the other hand, to me all science is fair game for attempts at replication.

Tim Errington, manager of the Reproducibility Project, emphasizes that a single failure to replicate is not proof that the initial findings were wrong and shouldn’t put a stain on individual papers, but surely he must know that scientists will interpret a failure to replicate in just that manner. After all, scientists have pride and ego as well. Not surprisingly, many are alarmed and defensive when informed that another scientist failed to replicate their findings. I’m sure even Judah Folkman was disturbed by the news that scientists were having a hard time replicating his results with angiostatin and endostatin. That’s where scientists need to strive to be more like Folkman. Most at least try to be, but far too many have a hard time separating their ego from their work.

Overall, I tend to look at the results of these five Replication Studies as two out of three being replicated, with the other two not counting because of flukes that introduced problems not seen in the original papers, meaning:

Such conflicts mean that the replication efforts are not very informative, says Levi Garraway, a cancer biologist at the Dana-Farber Cancer Institute in Boston, Massachusetts. “You can’t distinguish between a trivial reason for a result versus a profound result,” he says. In his study, which identified mutations that accelerate cancer formation, cells that did not carry the mutations grew much faster in the replication effort — perhaps because of changes in cell culture. This meant that the replication couldn’t be compared to the original.

Even so, whether one takes the optimistic interpretation (that more than 33% of studies are likely to be difficult to replicate) or the more conventional interpretation (that 60% of the studies were not replicated and the remaining 40% had some problems), I’d conclude that there is definitely a problem. However, one of the greatest strengths of science (and science-based medicine) is that it is self-correcting. The process is, of course, messy and slow, but it is ongoing. Also remember that this is a small sample, only five studies, all of the type considered “cutting edge” and therefore less “safe” than the average study. I look at research like the Reproducibility Project as the means to identify the problem. What we next need to do is to figure out what causes the problem and to focus on solutions. I’m not convinced that replicability problems in preclinical research are the major reason why so many drugs that make it to clinical trials fail to be approved by the FDA, given how much conceptual replication goes on before a drug ever makes it to clinical trials in the first place, but addressing them certainly couldn’t hurt.

The NIH agrees. It has recently released guidelines meant to improve the reproducibility of cancer research, which recommend that journals ask for more thorough methods sections and more sharing of data, and the Reproducibility Project has produced a wiki describing how it went about its work and the changes it would like to see. Change is coming, and it appears to be for the better.



from ScienceBlogs http://ift.tt/2kKz7yI

Holocaust Denial from the White House [denialism blog]

The White House, in its statement on Holocaust Remembrance Day, engaged in Holocaust denial. Then they doubled down, with Reince Priebus on Meet the Press expressing no regret about wording that made no mention of the Jews in their supposed “remembrance.” This has been criticized from both ends of the political spectrum, from John Podhoretz (a Reagan speechwriter and conservative columnist) in Commentary Magazine to Tim Kaine, who characterized it, correctly, as Holocaust denial.

You may ask: why is this denial? Is this hyperbole? You may even find plausible the administration’s excuse that it was trying to be more “inclusive” of all the others who were victimized in the Holocaust. To the uninitiated it probably seems reasonable, and so the administration will likely get away with it.

But the reality is that this is part of a long history of Holocaust denial, in which the experience, memory, and truth of Jewish survivors and victims are diminished and denied. The first step of Holocaust denial isn’t an outright denial of the Holocaust; the deniers have become more subtle in the decades since Paul Rassinier flatly denied its existence in the aftermath of WWII. Holocaust deniers instead start with the exact kind of minimization and distraction that the White House engaged in with this statement. They say, “well, the Holocaust was about so many other groups, not just the Jews.” This argument has a patina of credibility, but on any real inspection it is foolish.

The Holocaust was a deliberate, systematic attempt by the Nazis to eliminate Jews from the face of the Earth. That other groups such as homosexuals, dissidents, and others despised by Nazis were also targeted does not change or diminish the fact the primary intent was to destroy the Jewish people.

These distracting arguments undercut the truth about the purpose of the Final Solution and are thus a denial of history and truth. They are of a piece with the arguments that diminish the numbers of victims, that suggest the Nazis weren’t specifically targeting Jews, or, inevitably, that the crimes of the Allies (such as the bombing of Dresden) were just as bad.

Deborah Lipstadt wrote a book, Denying the Holocaust: The Growing Assault on Truth and Memory, that is as relevant today as it was two decades ago. It was a major influence on my writing about denialism, because Lipstadt systematically exposes the tactics deniers use to subvert legitimate scholarship and scientific fact, tactics that, it turns out, are pretty universal across denialist movements. One of the key points she makes is that denial doesn’t have to start with dismissal of the entire horror of the Holocaust; rather, it is a chain of lies that begins with exactly the kind of minimization of the Holocaust that the White House espoused on Remembrance Day. Lipstadt is now the focus of a film, “Denial,” about a libel case brought against her by the Holocaust denier David Irving. Irving, like most deniers, did not like being called a denier, even though it was true. Deniers know that being a Holocaust denier is bad and makes them bad people, so they like to pretend they’re not Holocaust deniers even while they deny the Holocaust. Spoiler alert: she won, because Holocaust deniers are lying liars who lie.

Finally, you may ask, what proof is there that this was purposeful? Should the accusation be one of Holocaust denial rather than mere incompetence? Well, for one thing, when this was pointed out to the White House, they defended the language twice: both Hope Hicks and Reince Priebus said that the specific language excluding Jews was purposeful and expressed no regret that it leaves Jews out of Holocaust remembrance. They deny that it was Holocaust denial, but Holocaust deniers usually lie and say they’re not denying the Holocaust. Second, we have seen a pattern from this administration of courting and hiring white nationalists, including Steve Bannon (also an alleged anti-Semite), and of repeating propaganda from white supremacists (e.g., “whitegenocide”) and neo-Nazis during the campaign (anyone remember the “Sheriff’s Star”?).

via USA Today

Chuck C. Johnson, who engaged in Holocaust denial in a recent Reddit thread, also claims to be advising the White House on nominees. Finally, you can see how these signals from the White House are received by racists like David Duke and by neo-Nazis, who now represent the administration’s most ardent supporters:

[Image: stormertweet]

To summarize, this is classic Holocaust denial from an administration that (1) has been documented courting racists and neo-Nazis, (2) has a known white nationalist as a political advisor to the president, (3) has admitted the exclusion of the Jews from the statement was purposeful, (4) has expressed no regret about excluding Jews from the statement, and (5) received acclaim from neo-Nazis for the use of this language.

This is a clear-cut case of deliberate Holocaust denial, presented on a day that was meant for remembrance of this specific history. It was accompanied by an attack, based on religion, on refugees, many of whom are fleeing murder and oppression, which is horrifically reminiscent of this period of history. This will surely represent a low point in the history of our own country, and it will forever stain the politicians and leaders who fail to speak out against this denial of history and human decency.



from ScienceBlogs http://ift.tt/2kJBl5N
