Feeling Sick? Blame Your Selfish Genes [The Weizmann Wave]

Why does infection with bacteria or viruses make you feel sick? Prof. Guy Shakhar and Dr. Keren Shakhar have proposed that your symptoms are not just a byproduct of your body’s attempt to get rid of the infection; they are also your genes’ way of ensuring they get passed down. The long and short of their argument is that the malaise, loss of appetite and lethargy are all ways of isolating you from your social group – so that your kin, who carry many of your genes, are not infected as well.

That means we share an evolutionary adaptation with such organisms as bees that go off to die far from the hive if they get sick. Shakhar and Shakhar note that we look, behave, sound (meh!) and even smell different when we are sick, and that these signals trigger the basic instinct in others to stay away.

Bees leave the hive when they are sick. Image: Wikimedia commons

The researchers say that their proposal is not just an interesting thought exercise. Modern medicine enables us to ignore our innate instincts when we’re sick, take a pill, and go to work. They think it might be time to start paying attention to what millions of years of evolution have written into our behavior, and maybe stop spreading our infectious diseases around the office.

And if you happen to be recovering at home (or just browsing at your office desk), you can read our other two stories today:

An atomic clock that Weizmann Institute scientists are developing for the ESA’s mission to Jupiter, which will probe the planet’s atmosphere and its moons’ gravity; and self-assembling nanoflasks that make chemical reactions run hundreds of times faster.


from ScienceBlogs http://ift.tt/1ZPf2Xl


Sun moves toward star Vega in journey around our galaxy

Tonight, as you contemplate the stars, think about the way our local star, the sun, moves through our Milky Way galaxy. A friend from Australia wrote:

I seek to find out what speed our sun is traveling at and also how many years it takes to circumnavigate the galaxy.

Our Milky Way galaxy is a collection of several hundred billion stars, with an estimated diameter of 100,000 light-years. Our sun does indeed circumnavigate the Milky Way galaxy. In space, everything moves. There are various estimates for the sun’s speed through the galaxy, but it travels at roughly 140 miles (about 225 km) per second.

Likewise, there are many estimates for the length of time it takes the sun to complete one circuit of the galaxy, but a typical estimate is about 230 million years.

That period of time – the length of the sun’s orbit around the Milky Way’s center – is known as a cosmic year.
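
As a sanity check on those figures, here is a minimal back-of-the-envelope calculation in Python. It assumes a roughly circular orbit and a sun-to-galactic-center distance of about 26,000 light-years, a standard estimate that isn’t quoted in the article itself.

    import math

    LIGHT_YEAR_KM = 9.4607e12     # kilometers in one light-year
    SECONDS_PER_YEAR = 3.156e7    # seconds in one year

    orbital_radius_ly = 26_000    # assumed distance of the sun from the galactic center
    speed_km_s = 225              # roughly 140 miles per second, converted to km/s

    circumference_km = 2 * math.pi * orbital_radius_ly * LIGHT_YEAR_KM
    period_years = circumference_km / speed_km_s / SECONDS_PER_YEAR

    print(f"One galactic orbit takes roughly {period_years / 1e6:.0f} million years")
    # Prints about 218 million years, in the same ballpark as the ~230 million years above.

The small mismatch with the 230-million-year figure is expected, since both the assumed distance and the assumed speed are only rough estimates.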

It so happens that astronomers know which star the sun is moving toward in its journey around the galaxy. Our sun and family of planets travel more or less toward the star Vega – and away from the star Sirius. Unsurprisingly, Vega and Sirius lie in opposite directions in Earth’s sky.

At this time of year, from mid-northern latitudes, the star Vega appears over the northwest horizon at dusk and in early evening, setting around mid-evening. It also appears low in the northeast sky in the predawn and dawn hours. It’s one of the loveliest stars you’ll ever see: Vega, in the constellation Lyra the Harp.

Vega is the brightest star in a famous star pattern known as the Summer Triangle. From mid-northern latitudes, as darkness falls on January evenings, the Summer Triangle sits close to the west-northwestern horizon.

Vega resides almost exactly opposite Sirius, the brightest star in the nighttime sky. If you have an unobstructed horizon, you may see Sirius low in the southeast as Vega sits low in the northwest.

At mid-northern latitudes, you can catch both stars at around 7 to 8 p.m. local time. Sirius swings low in the southwest sky by around 3 to 4 a.m., at which time Vega reappears in the northeast. Click here to find out when these two stars rise and set in your sky.

Use Orion’s Belt to find Sirius, the brightest star of the nighttime sky. From mid-latitudes in the Northern Hemisphere, you might see Sirius low in the southeast, as Vega sits low in the northwest.

Our sun’s direction of motion (and thus our Earth’s corresponding motion) toward Vega has a special name. It’s called the apex of the sun’s way. Vega – the solar apex star – can be found in the eastern sky during the dawn and predawn hours throughout the winter season.

Bottom line: Our sun moves toward the star Vega as it revolves around the center of the Milky Way galaxy. One circuit takes about 230 million years, or one “cosmic year.”

Read more about Vega: Blue-white Harp Star

A planisphere is virtually indispensable for beginning stargazers. Order your EarthSky Planisphere today!



from EarthSky http://ift.tt/1x6YCbB


Another Round On Specified Complexity [EvolutionBlog]

There’s a famous short story by Woody Allen called “The Gossage-Vardebedian Papers” that I like to reread from time to time. (It’s very short, so follow the link if you’ve never read it before.) The story is told through the correspondence of Gossage and Vardebedian, as they argue about a game of postal chess in which they are engaged. There’s one excerpt that keeps coming back to me, since it applies so perfectly in so many contexts:

Received your latest letter today, and while it was just shy of coherence, I think I can see where your bewilderment lies. From your enclosed diagram, it has become apparent to me that for the past six weeks we have been playing two completely different chess games—myself according to our correspondence, you more in keeping with the world as you would have it, rather than with any rational system of order.

I was reminded of that when reading the latest missive from Winston Ewert on the subject of specified complexity. It has become apparent to me that we have been engaged in two entirely different conversations–myself, according to what has actually been said, Ewert more in keeping with the world as he would have it.

For the background: Ewert’s original essay is here. I then replied in two parts: here and here.

As an example of how things are going to go, here is Ewert’s opening:

In a recent series of posts here at Evolution News, I answered objections to arguments for intelligent design based on specified complexity and conservation of information (see here, here, here, and here). The series, in turn, provoked responses that I would like to address now. Over at Panda’s Thumb, University of Washington geneticist Joe Felsenstein declares that it is “Game over for antievolutionary No Free Lunch argument.”

But if you follow the link Ewert provides, you will find that it was Nick Matzke who declared “Game Over,” not Joe Felsenstein.

Moving on, we quickly come to this:

In both cases, the authors claim I’ve essentially admitted that the arguments behind specified complexity and conservation of information were incorrect. Both accordingly declare that the arguments we have made are now vanquished. But this is simply not true. What I’ve called false is the straw-man version of these arguments, which our critics chose to attack over the real ones. And I haven’t “admitted” this so much as done my best, repeatedly, to clarify the difference between the straw man and genuine arguments for ID.

I’ll let Joe speak for himself, but I certainly never claimed any such thing. I did not say Ewert essentially admitted that the arguments behind specified complexity were incorrect. What I actually said, after providing relevant quotes from Ewert, was this:

Ewert has simply conceded Felsenstein’s point here. Assertions of specified complexity are entirely parasitic on prior arguments about irreducible complexity and the like. It is those prior arguments that are doing all the work, and not any subsequent claims about specified complexity. For example, if Behe’s claims about irreducible complexity were correct, then that would all by itself be a very strong argument against evolution. No probability calculation would be needed to make it stronger. But since Behe’s claims are not correct, no calculation based on the assumption that they are is going to be relevant.

That’s the point. Assertions of specified complexity contribute nothing to the argument between evolution and ID. Ewert “effectively admitted” that by acknowledging that any relevant probability calculation must assume the correctness of prior ID arguments, such as Michael Behe’s claims about irreducible complexity. It is those prior arguments that are doing all the work.

Ewert now continues:

Specified complexity has always required the calculation of probability. It has never been based on assuming that every outcome was equally probable.

After providing links to his previous writings he writes:

Yet Rosenhouse insists that specified complexity required calculation assuming every outcome is equally likely. For support, he points to a section of a critique of No Free Lunch by Richard Wein (“Not a Free Lunch but a Box of Chocolates”), describing this as an exhaustive documentation for his claim. But Wein himself describes the evidence as inconclusive. Wein points to examples and arguments that he thinks imply uniform probability, but he ignores the places where William Dembski explicitly states that the calculation has to be done according to the hypothesis under consideration.

Oh for heaven’s sake! Now he’s just making stuff up.

First of all, I said Wein provided “extensive” documentation, not “exhaustive.”

More importantly, did I insist that specified complexity required calculation assuming every outcome is equally likely? I don’t recall insisting on any such thing. The way I remember it, what I actually said was this:

[T]he problem here is that Dembski, in his various writings on this subject, is incredibly vague on how, precisely, one carries out the relevant probability calculations. Look hard enough and you can find him saying just about anything you want him to be saying. But as Richard Wein has documented extensively, in response to Dembski’s book No Free Lunch, Dembski routinely writes as though the probability calculations should always be carried out with respect to a uniform distribution, which is what Felsenstein had in mind in referring to chance alone.

I think I’m having a Nathan Thurm moment. Is it me or is it him? It’s him, right?

I mean, seriously, how could I have been more clear? How do you go from what I wrote to a charge that I am insisting that ID proponents insist every outcome must be assumed to be equally likely? My charge was that Dembski is vague, and that you can find him saying just about anything you want him to be saying on the subject of probability calculations. I then pointed out that Dembski often writes as though a uniform distribution should be used. Indeed he does, as Richard Wein documented extensively. In some places Dembski seems to assume arbitrarily that a uniform probability distribution should be used, in other places he suggests some other method. Hence the frustration of his many critics in trying to pin down precisely what he is claiming.

As for specified complexity, my whole freaking point was this:

But in any case where we are genuinely uncertain as to whether the event or object is the result of design, we are also going to lack the information to carry out relevant probability calculations and to identify the design-suggesting patterns.

Why would I have said this, if I thought we were meant to assume a uniform distribution? In trying to calculate the probability of a biological structure, like a flagellum, the information we lack is precisely the correct distribution to apply to our space. If I thought we could just assume a uniform distribution, the calculation would be simple.

And that’s what it all comes down to. Assertions about specified complexity contribute nothing to any arguments about evolution and ID because there is no way to carry out a meaningful probability calculation in any biological case. That is hardly the only problem with the concept, but it is sufficient for dismissing it from the conversation.

Finally, we come to this:

In my most recent discussion of specified complexity, I quoted Rosenhouse, which prompted his responses. His primary objection is that I’ve quoted him out of context. I quoted his presentation of the argument that improbable events happen all the time, but did not reference his later discussion of specified complexity.

The fact is, Rosenhouse has misread the context in which I discussed him. I dealt there with a particular objection to specified complexity. Felsenstein had claimed the correct version of specified complexity was useless. It is not useless, because specified complexity justifies rejecting an explanation that assigns too low a probability to the outcome it purports to explain. If I win the lottery every day for a month, you are entitled to conclude that I was not playing fairly. The objection is raised that this is plainly obvious, and it is sometimes suggested that no reasonable person would ever raise this objection. But they have indeed done so.

In quoting Rosenhouse and the others, my purpose was not to make them look bad or to suggest they don’t understand specified complexity. My point was to show that people do actually raise this objection. Critics point out that low probability events happen all the time, and that this undermines rejecting an explanation on improbability alone. This objection, a valuable one, was not a figment of Dembski’s imagination. It’s a real argument that needed to be addressed.

I misread the context? Let’s have a look. Ewert introduced his quotation of me like this:

Some critics of intelligent design regard this as an obvious point. If complex life were prohibitively improbable under Darwinian evolution (an idea these critics certainly reject), Darwinian evolution would clearly be false. They find it difficult to believe that specified complexity was developed to defend such an obvious point. However, other critics insist that low probabilities of complex life would not provide evidence that we should reject Darwinian evolution.

I was one of three people charged with rejecting an obvious point. After presenting these quotations, Ewert writes:

All of these critics argue that we cannot draw conclusions about Darwinian evolution from small probabilities. To be fair, many critics would not agree with these simplistic criticisms of probability arguments. They would in fact argue that evolution makes complex biological systems highly probable. It is unfortunate that these writers feel the need to disparage specified complexity, which exists to defend against an argument they would not make.

Now my argument is described as simplistic, to the point where fairness to other ID critics requires acknowledging that many would reject what I am saying. So I think I can be forgiven for concluding that the point here was to make me look bad.

Substantively, Ewert is not making any sense here. He seems to think it is silly of me to argue that we cannot reject Darwinian evolution just from finding that one of its outcomes was highly improbable. Another person who makes this argument is William Dembski, as I documented in my previous post. And rightly so! Of course low probability by itself is never enough to reject an explanation.

This is clear even from Ewert’s lottery example. Consider the last thirty winners of the lottery. The probability of precisely those thirty people winning those lotteries is exactly the same as the probability of one person winning the lottery thirty times. But the first scenario is not suspicious while the second one is. Of course I would suspect foul play if one person wins the lottery thirty times in a row, but it would not be just because it is very improbable for one person to win the lottery thirty times.
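
To make that point concrete, here is a tiny sketch in Python. The 1-in-a-million win probability is purely hypothetical; the only thing that matters is that the two probabilities come out identical.

    p_win = 1e-6   # hypothetical per-draw probability that one specific person wins

    # Probability that thirty specific, pre-named people each win one of thirty draws,
    # in a specific order:
    thirty_different_winners = p_win ** 30

    # Probability that one and the same person wins all thirty draws:
    same_winner_thirty_times = p_win ** 30

    assert thirty_different_winners == same_winner_thirty_times
    # Both outcomes are equally, and astronomically, improbable. Only the second looks
    # suspicious, so low probability by itself cannot be what justifies rejecting the
    # "fair lottery" explanation.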

Let’s try to end on a substantive note. If Ewert believes that specified complexity has contributed something to the discussion between evolution and ID beyond the arguments made by other ID writers, then let him point to a specific example where that is the case. The entire body of ID literature records precisely one example where the machinery of specified complexity was applied to a specific case: William Dembski’s flagellum calculation in No Free Lunch. That calculation was entirely parasitic on Michael Behe’s arguments about irreducible complexity, and was ludicrous even granting, for the sake of argument, the correctness of those arguments. Since Behe’s arguments were not correct (Dembski’s attempted refinements of those arguments notwithstanding) it’s unnecessary to pay any attention at all to the details of the calculation.

If Ewert has something better than that, I’m all ears. Since I suspect he does not, the rest of this is just posturing.



from ScienceBlogs http://ift.tt/22Mkvk2


Obesity and cancer – time for concerted action

Over the last twenty years, there’s been a steady increase in rates of obesity in the UK. Around two-thirds of adults in the UK are now overweight or obese.

This places them at increased risk of a range of health problems – including up to ten different types of cancer.

But despite this dramatic increase in the nation’s waistline, and the growing impact on its health, progress in tackling obesity has been slow. The recently published cancer strategy for England pointed out that:

There has, to date, not been co-ordinated and concerted action taken to address obesity, and it is essential that this now becomes a priority

Today we’ve published a new report – Tipping the Scales – to try to galvanise action on obesity, and to show that without it, the epidemic will only get worse – leading to hundreds of thousands of people falling ill, and costing the NHS billions.

Modelling the future

The UK has changed a lot since the 1990s – the decade of John Major, Take That, Sega Megadrives and Baywatch. But in the 25 years since, the nation’s excess bodyweight has led to tens of thousands of cases of cancer that could have been avoided.

And regardless of whatever cultural fads we see over the next 20 years, we want to ensure that we don’t see a similar failure to protect people from obesity-related cancers over the next 25 years.

But to make the case for political action, we need to understand the potential future impact of obesity on cancer and the NHS. And to do so, we teamed up with the UK Health Forum – the team behind the last major Government report on obesity trends, the Government Office for Science’s 2007 ‘Foresight’ report (which we blogged about at the time).

For the new report, we analysed a range of data on weight and lifestyle behaviours across the UK, together with information on changes in the population, and then used mathematical models to uncover the likely impact obesity will have on:

  • The proportion of people who will be overweight or obese by 2035 if current trends continue.
  • The number of cases of different diseases (cancer, diabetes, coronary heart disease and stroke) which could be avoided by reducing levels of obesity, both over the next 20 years and annually from 2035.
  • The ‘direct’ cost of obesity to the NHS and health care (calculated from primary and secondary care, urgent and emergency (A&E) care, community care, prevention and health promotion and social care) and the ‘indirect’ cost to society from lost economic productivity due to early illness and death.
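
To give a flavour of the kind of calculation such modelling involves, here is a deliberately crude Python sketch: a straight-line extrapolation of obesity prevalence combined with a population attributable fraction. Every input below is hypothetical, and the UK Health Forum’s model is far more detailed; this only shows the general shape of the arithmetic.

    current_prevalence = 0.27      # hypothetical adult obesity prevalence today
    annual_increase = 0.006        # hypothetical absolute increase per year
    years_ahead = 20

    projected_prevalence = current_prevalence + annual_increase * years_ahead

    relative_risk = 1.5            # hypothetical average relative risk of an obesity-linked cancer
    annual_cancer_cases = 100_000  # hypothetical annual UK cases of those cancer types

    def attributable_fraction(prevalence, rr):
        """Population attributable fraction: the share of cases due to the exposure."""
        return prevalence * (rr - 1) / (1 + prevalence * (rr - 1))

    extra_cases = annual_cancer_cases * attributable_fraction(projected_prevalence, relative_risk)
    print(f"Projected prevalence in {years_ahead} years: {projected_prevalence:.0%}")
    print(f"Obesity-attributable cancer cases per year: {extra_cases:,.0f}")
    # With these made-up inputs: prevalence of 39% and roughly 16,000 attributable cases a year.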

What we found

Our headline findings are a real cause for concern. If current trends continue, we predict that:

1) The UK will get heavier

The proportion of people who are obese looks set to continue increasing: by 2035, around four out of ten people will be obese:

[Figure: projected trend in UK obesity prevalence to 2035]

The poorest in our society will continue to be most affected, with almost half (49 per cent) of women from the poorest fifth of the UK projected to be obese in 2035, compared to a quarter (25 per cent) of women from the richest fifth. This trend, though less pronounced, is the same for men.

2) This will lead to millions of new cases of disease

Over the next 20 years, we’ve estimated that rising levels of obesity could lead to around 700,000 new cases of cancer – a staggering number.

This means that, by 2035, the UK’s weight problem could be causing around 38,500 extra cases of cancer a year. This estimate is higher than previous research we’ve funded, and emphasises the significant burden of obesity as a risk factor for cancer.

But on top of this, we also found that as well as cancer, obesity could also cause millions more cases of diseases like type 2 diabetes, heart disease and stroke – all of which cast a worrying shadow over the nation’s health prospects.

3) As well as hitting the country’s waistlines, it’ll also hit the purse-strings

We estimate that by 2035, these diseases could be costing the NHS and social care services an additional £2.5bn per year, every year – and we believe this is a conservative estimate. The striking thing is that while obesity-related cancers are a relatively small proportion of the total, the cost of treating them is high – £330m a year.

This adds weight to previous predictions about the eye-watering cost of obesity to society. For example, the Foresight report projected that overweight and obesity would cost the NHS a total of £8.3bn per year from 2030. But they all show a clear message – if the Government doesn’t act now, avoidable obesity-linked diseases will cost the NHS billions of pounds every year.

4) There’s hope – small reductions can have a substantial impact

As well as obesity’s impact, we also looked at what small reductions in obesity rates could do to mitigate it.

Reducing the prevalence by just one per cent each year below the predicted trends could avoid 64,200 new cases of cancer over the next 20 years.

In the year 2035 alone, this would likely save £300 million in NHS health and social care costs for all diseases caused by obesity, of which £42 million could be saved through NHS cancer care.

[Figure: impact of small annual reductions in obesity prevalence]

What can be done?

While these are just a brief summary of the report’s findings, they emphasise its central point – that the Government must act to help avoid obesity-linked cancers.

We want the Government to develop policies that could help achieve consistent reductions in obesity – we’ve discussed a range of recommendations in the report.  Many of these focus on preventing children from gaining weight, because there’s substantial evidence that the obesity problem starts early: obese children are much more likely to become obese adults. And in the UK, one-fifth of children are obese before their eleventh birthday.

Options the government could consider include:

  • A ban on TV advertising of junk food between 6am and 9pm
  • Restrictions on online marketing of unhealthy food and drinks
  • A 20p/litre tax on sugary drinks
  • Looking into new or higher taxes on foods high in sugar, salt and fat, while making healthy alternatives cheaper
  • Making healthier food more widely available in publicly-funded institutions such as schools and hospitals
  • Better food labelling
  • Wider availability of recreation facilities and open space, particularly for lower-income groups
  • Better promotion of walking and cycling as easy and accessible modes of transport
  • Putting more pressure on food producers to reduce free sugars, saturated fat and calories in their products

Happily, the Government has shown welcome signs that it intends to get serious and to try to reduce the number of children who are obese. Next month, it’s due to publish a new children’s obesity strategy – something we’re eagerly awaiting, and in which we hope to see practical policies to achieve consistent reductions in obesity.

2035 may seem a long time away. By then, there will have been ten Olympic Games, several UK Prime Ministers, and countless pop-culture fads. But in order to prevent the current obesity epidemic from wreaking havoc with the nation’s health, we need action – and we need it now.

  • Dan Hunt is a Policy Advisor at Cancer Research UK

Download a copy of our report – Tipping the Scales – or read an executive summary.



from Cancer Research UK - Science blog http://ift.tt/1Z6JKJn

Kepler Found Its Longest-Period Planet Ever (Synopsis) [Starts With A Bang]

“Mars is much closer to the characteristics of Earth. It has a fall, winter, summer and spring. North Pole, South Pole, mountains and lots of ice. No one is going to live on Venus; no one is going to live on Jupiter.” -Buzz Aldrin

When a planet passes in front of its star from our point of view, that transiting phenomenon can be detected as a dip in starlight. By surveying some 150,000 stars, the Kepler mission has detected close to 10,000 planetary candidates, many of which have been identified by the stellar wobble technique.
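
To get a rough sense of the size of that dip: to first approximation, the fractional drop in starlight is the square of the ratio of the planet’s radius to the star’s radius. The short Python sketch below assumes a Sun-like star and ignores limb darkening and grazing transits; none of these numbers come from the post itself.

    SUN_RADIUS_KM = 695_700
    JUPITER_RADIUS_KM = 69_911
    EARTH_RADIUS_KM = 6_371

    def transit_depth(planet_radius_km, star_radius_km=SUN_RADIUS_KM):
        """Fractional drop in observed starlight during a full central transit."""
        return (planet_radius_km / star_radius_km) ** 2

    print(f"Jupiter-sized planet: {transit_depth(JUPITER_RADIUS_KM):.3%} dip")   # about 1%
    print(f"Earth-sized planet: {transit_depth(EARTH_RADIUS_KM):.4%} dip")       # about 0.008%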

Image credit: NASA Ames.

But this “wobbling” also sometimes contains rising or falling trend lines, which can indicate a more massive, outer world orbiting the central star. Kepler could normally never detect some of these planets directly, since the required observing baseline is so long and their likelihood of transiting is low. But the inner worlds that Kepler can find, when we subject them to verification, can sometimes reveal the presence of an outer, massive world. A new record for the longest-period planet found as a result of Kepler observations was just set: an orbital period of about 1,000 days and a mass of around six Jupiter masses.

Image credit: D. Huber et al., Science 18 October 2013: Vol. 342 no. 6156 pp. 331-334; DOI: 10.1126/science.1242066.

Go get the whole story over at Forbes!



from ScienceBlogs http://ift.tt/1SAzgSE


Fatal work injury that killed Gerald Thompson was preventable, MN-OSHA cites DSM Excavating [The Pump Handle]

Gerald Lyle Thompson’s work-related death could have been prevented. That’s how I see the findings of Minnesota OSHA (MN-OSHA) in the agency’s citations against his employer, DSM Excavating.

The 51-year-old was working in June 2015 at a construction site for Ryland Homes in Lakeville, Minnesota. Initial press reports indicated that Thompson and his brother were installing drain tile inside a trench 6 to 8 feet deep. Thompson was trapped at the bottom of the trench when the soil collapsed onto him. I wrote about the incident shortly after it occurred.

Inspectors with MN-OSHA conducted an inspection at the construction site following the fatal incident. The agency recently issued citations to DSM Excavating for eight serious violations and proposed a $55,500 penalty. Among other violations, the company failed to have a system in place to protect against the cave-in, allowed workers to be inside a trench with accumulated water, and failed to have a competent person inspect the trench to ensure it was safe for workers to enter. Two of the serious violations come with a $25,000 penalty. (Under federal OSHA, the maximum penalty for a serious violation is only $7,000.) MN-OSHA’s records posted online indicate (as of January 5, 2016) that the company is contesting the citations. No citations were issued to Ryland Homes.

When some local press initially reported Gerald Lyle Thompson’s death, they called it an accident. An “accident” suggests the circumstances were unforeseen or could not have been avoided. MN-OSHA’s findings tell a different story. Call it cutting corners, call it poor management, call it breaking the law. Whatever you want to call it, Thompson’s work-related death could have been prevented; it was no accident.



from ScienceBlogs http://ift.tt/1n4JLS5
