A longer-than-usual gap between recap posts, but thanks to some kid illnesses and the Thanksgiving holiday, not all that many new physics posts over at Forbes:
— Physics Demands Many Kinds Of Literacy: Some musings about the many different ways physicists process information, prompted by graphs generated for the previous item.
— Football Physics: Safety Rules and Unintended Consequences: One popular suggestion for fixing the concussion problem is to remove pads, using rugby as an example. While rugby has fewer ultraviolent collisions than American football, that’s because of fundamental differences in the game – differences that, ironically, are partly due to rule changes meant to improve safety.
So, there you go: the usual assortment of random stuff. I will now return to fretting about preparing for my TEDx talk tomorrow.
from ScienceBlogs http://ift.tt/1lvx0Pw
New research that used mitochondrial DNA from living Adélie penguins to retrace the birds’ demographic history over the past 22,000 years revealed that penguin numbers swelled as Earth thawed from the last ice age. The study suggests that as Antarctic glaciers and sea ice shrink with rising global temperatures, Adélie penguins can thrive in a less icy Antarctic landscape, though for how long is unclear.
The study, published November 19, 2015 in BMC Evolutionary Biology, found that the penguins’ abundance grew roughly in inverse proportion to the decline of Antarctic glacial and sea ice following the last ice age. Over the past 14,000 years, the East Antarctic population increased 135-fold, the researchers report.
Photo credit: Louise Emmerson
Adélie penguins live along much of Antarctica’s coast. The penguins need three environmental conditions to flourish, said Steven Emslie, an ornithologist from the University of North Carolina:
They need ice-free terrain for their nests, they need open water access to their beaches, and they need a consistent food supply so they can forage and return to their colonies.
East Antarctica met those conditions, says the study, as glacial ice retreated inland and sea ice melted from penguin foraging grounds. In the past 30 years, as climate change ramped up and conditions continued to improve for the penguins, the Adélie population nearly doubled. Scientists estimate the current Adélie population at around 1.14 million breeding pairs, with 30% of them in the East Antarctic region.
But, the researchers say, even though the East Antarctic Adélie penguin population has grown abundantly, that doesn’t necessarily mean the population will continue to do so. Scientists don’t know how the penguin’s prey, Antarctic krill, will react to warming temperatures, and the temperatures might become too warm for the penguins themselves. Emslie said:
The warming trend does reach a threshold where it can be beneficial at first but then start having negative impacts.
Bottom line: A new study published November 19, 2015 in BMC Evolutionary Biology found that populations of Adélie penguins in East Antarctica grew roughly in inverse proportion to the decline of Antarctic glacial and sea ice following the last ice age.
Today I proofread the annual index for Fornvännen, the archaeology journal I co-edit. And I took the opportunity to look at our gender stats for full-length papers. There are 16 of these in this year’s four issues. Only 31% have female first authors. An additional 31% have a male first author and at least one female author. So women are involved as authors in 62% of this year’s full-length papers. That seems reasonably fair since several papers have only one author, so it would be impossible for each gender to be involved in all of them.
But you might wonder what a female author does to the chance of getting a full-length paper into Fornvännen. So I looked at the stats for March 2014 to now (because it takes 8-9 months from submission to publication). During this period 32% of full-length papers written by men were turned down. Only 9% of full-length papers written by women were turned down. (Here a paper co-written by two women counts as 1 paper by women and a paper co-written by one woman and one man counts as 0.5 paper by women and 0.5 by men.)
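The fractional counting described above is straightforward to make precise. Here is a small sketch using hypothetical submissions (not Fornvännen’s actual data): each paper counts toward each gender in proportion to its authors, and rejection rates are computed on those fractional totals.

```python
def gender_fractions(authors):
    """Return (female_share, male_share) of one paper's authorship.

    `authors` is a list of 'F'/'M' codes. A paper by two women counts as
    1.0 paper by women; a mixed woman/man pair counts as 0.5 and 0.5.
    """
    n = len(authors)
    f = authors.count('F') / n
    return f, 1.0 - f


def rejection_rate_by_gender(submissions):
    """`submissions` is a list of (authors, accepted) pairs."""
    totals = {'F': 0.0, 'M': 0.0}
    rejected = {'F': 0.0, 'M': 0.0}
    for authors, accepted in submissions:
        f, m = gender_fractions(authors)
        totals['F'] += f
        totals['M'] += m
        if not accepted:
            rejected['F'] += f
            rejected['M'] += m
    return {g: rejected[g] / totals[g] if totals[g] else 0.0 for g in totals}


# Hypothetical example: a two-woman paper (accepted), a mixed pair
# (rejected), and a single-male paper (rejected).
subs = [(['F', 'F'], True), (['F', 'M'], False), (['M'], False)]
rates = rejection_rate_by_gender(subs)
```

With these toy inputs, women account for 1.5 fractional papers of which 0.5 were rejected, and men for 1.5 of which 1.5 were rejected.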
This means that although women are submitting fewer papers than men do to Fornvännen, when they do submit they have a greater chance of getting published. This suggests that either Fornvännen has a pro-female bias, or that women are less likely than men to submit poor work. Given what patriarchy does to female self-confidence, I feel pretty safe in assuming that the latter is the case.
from ScienceBlogs http://ift.tt/1Rmkmik
Astrophotographers Jonathan Green and Amit Kamble in New Zealand collaborated on this photo, and submitted it to EarthSky. Jonathan captured the image data and Amit processed the image in PixInsight. Amit wrote:
The Small Magellanic Cloud is a satellite galaxy of the Milky Way. It’s thought to be about 200,000 light-years from the sun, and about 75,000 light-years from the Large Magellanic Cloud. That’s pretty close by galactic standards …
The Small Magellanic Cloud is classified as an irregular dwarf galaxy, and careful observations of the proper motions of its stars show that it’s stretched out along the line of sight, probably due to gravitational interactions with the Milky Way and the Large Magellanic Cloud. The Small Magellanic Cloud is an important object in astronomical history: it was by measuring the brightness of stars in this galaxy from photographic plates that Henrietta Leavitt discovered the period-luminosity relation of Cepheid variables.
To the right of the Small Magellanic Cloud, you will see the second-brightest globular cluster in the sky, 47 Tucanae. When you look at 47 Tucanae, you’re seeing the light of one million stars packed into a volume of space just 120 light-years across. That makes the heart of 47 Tucanae a very crowded place indeed! 47 Tucanae is thought to be around 16,000 light-years away from our sun, so as you can see it is completely unrelated to the Small Magellanic Cloud and just happens to occupy the same area of sky as the much more distant dwarf galaxy.
Canon 60da at ISO1250 through a Canon 200 mm lens set at f/3.2
I am 99.9 per cent the same as every single human on the planet.
By that definition, I’m nothing special. But the remaining 0.1 per cent makes me as unique and special as my mother told me I am.
That 0.1 per cent amounts to a few million switched A, T, C or Gs in the 3 billion ‘letter’ code that is my human genome.
These variations – called single nucleotide polymorphisms, or SNPs – are tiny inherited chemical changes in your DNA, and they can make all the difference. Some alter your appearance, while others can affect your chances of developing diseases, like cancer.
Scientists have been hunting for these cancer-related differences in our DNA for the past few decades. And the more of them we find, the more accurate a picture we build up about how cancer risk is linked to our genes.
Over the years, there have been some important findings. Recently, for example, our scientists found seven genetic changes that affect the risk of prostate cancer. And just a couple of years ago, around 80 more were linked to breast, prostate and ovarian cancers – and we continue to find more still.
There are also other variants linked to things like our smoking behaviour or how our body responds to exercise, which could also, indirectly, affect a person’s chances of developing diseases like cancer later in life.
In fact, researchers have discovered hundreds of gene variations that seem to affect cancer risk, either directly or indirectly. Theoretically, looking at a person’s DNA could reveal how likely they are to get cancer and so has the potential to help them change their behaviour to minimise their risk.
Although your DNA is no guarantee of cancer or any disease, that doesn’t stop companies from trying to sell tests that claim to predict your future.
To find out if these tests could show anything more useful than a horoscope, I volunteered to take an over-the-counter test, made by a US company called 23andMe… and this is what happened.
Prep
“Why do you want to do this?” asked Anna Middleton, a genetic counsellor from the Wellcome Trust Sanger Institute. “Other than for the article,” she added.
People get their DNA tested for many reasons, such as family planning, or to take preventative health measures – but I was just curious.
Anna asked me a few more questions about the small print of the test’s ‘Terms of Service’, which I hadn’t then read. But I made a mental note to myself to actually read them – after all, this wasn’t like downloading a new version of iTunes.
Then she asked something that took me by surprise.
“Well have you talked to your parents?”
I hadn’t.
“I mean, say you did find out you’re at risk for Alzheimer’s disease, how do you think your parents would feel about that? If you have it, then both or either could have passed it on to you and would also have an increased risk. Are they worried about that? How do you feel about telling them about that?”
She brought up a good point. My DNA is their DNA, and very similar to my brother’s.
After asking my family if they wanted to know the test results, they all agreed that they wanted to know the good, bad and the ugly.
But not everyone wants to know. For some, knowledge might not be power but rather a constant source of stress.
Can any good come from knowing you’re at risk of something you can do little to prevent, or that may never happen? Or does it just cause people to worry for no reason?
On one hand, there’s evidence that knowing might not actually cause stress.
For example, a study from 2010 randomly assigned 162 people who had parents with Alzheimer’s to find out if they had the genetic markers for the disease or not. Results showed that there were no differences in anxiety or test-related distress between the groups who found out their risk, and those who didn’t. The researchers concluded that there were no “significant short-term psychological risks”.
And for some, the knowledge can actually give peace of mind, especially when it comes to stigmatised conditions such as obesity. There is also evidence that even with cardiovascular diseases there can be a sense of relief.
Dr Susanne Meisel, a Cancer Research UK-funded research psychologist at King’s College London, explains: “Knowing one’s genetic make-up seems to provide an ‘explanation’ for their condition, even if the actual genetic risk is very small.”
But many experts feel that these tests are an unnecessary burden, a murky stain on an otherwise clean bill of health. Others argue that the studies that claim predictive genetic testing causes “no harm” are poorly designed, and have serious limitations, which could be skewing the results. And now some evidence is beginning to show that there might be long-term consequences to knowing your genetic risk for certain diseases – such as depression.
But I was still curious, so – having spoken to Anna – I forged ahead with the test.
The test
The test – which involves spitting into a test tube – arrived on an unassuming Tuesday and, heeding science journalist Ed Yong’s words after his experience, I decided not to do it at work. Instead, I took the tube home and – ironically – found myself spitting into it while both a repairman and my flatmate awkwardly watched.
But before the spitting starts, you need to register your kit and fill out some consent forms. I decided to actually read the terms of service, which mostly seemed to be legal jargon to protect 23andMe, such as:
“We do not provide medical advice.”
“The Services are not intended to be used by you for any diagnostic purpose and are not a substitute for professional medical advice.”
But embedded in the extensive legal document were a couple of things that caused me to sit up and pay attention.
For example, halfway through reading I came across this:
“Many of the genetic discoveries that we report have not been clinically validated, and the technology we use, which is the same technology used by the research community, to date has not been widely used for clinical testing.”
Also, at the time I took the test, none of the tests had been licensed – neither by the US Food and Drug Administration (FDA), nor the European Medicines Agency – this was pretty much ‘recreational genetics’.*
But what truly struck me was this: “Genetic Information you share with others could be used against your interests.”
This essentially refers to insurance companies’ ability to discriminate against you, based on your test results.
In the UK, there is no legislation preventing insurance companies or employers from discriminating on the basis of genetic differences. There is, however, a voluntary moratorium between the government and the insurance industry limiting access to genetic results – but it ends in 2019.
And while very few companies ask for genetic test results, as genetic testing becomes more commonplace, requesting the results could become standard, and it could have negative consequences.
The document ended in ‘shouty’ capitals saying it didn’t make any promises about the services, just in case I still hadn’t got the message.
Despite all this, I signed the consent form, spat in the tube and posted my kit back to the company. Eight weeks later an email told me my results were ready.
But it still took me three days to build up the courage to look at them.
To see what the results section looks like watch this video
Results
The 23andMe website breaks down your results into four categories:
Traits – physical traits like eye colour or if you’re likely to be lactose intolerant.
Inherited conditions – conditions, like cystic fibrosis, that you can pass on to your children if your partner is also a carrier.
Drug response – how gene variants are linked to how you react to different medicines.
Genetic risk factors – genetic variants that predict how likely you are to get a certain disease in the future, such as breast cancer.
Within the categories, all the things they test for are laid out in a list, each with a star rating next to it, indicating how robust the evidence is that a genetic variant is a good indicator of what it’s linked to.
You can expand and read more about the SNP they tested, and the different studies they used to validate it. There is also a section that helps you understand what your results mean.
For the genetic risk factors there’s an added step where you need to ‘unlock’ some of the results, so you’re actively choosing to find out your genetic risk – and the site provides advice on what to do with your results, plus a link to genetic counselling services.
On the whole everything is well communicated and quite clear. My results, thankfully, showed that I was pretty average. The only area where I deviated from the norm was how I respond to certain types of medicine.
But how good a risk predictor is it? And do these results really make a difference?
Answer Sheet
“Most SNPs for disease are only very weakly associated,” says Cancer Research UK’s Professor Doug Easton, who – as well as helping identify hundreds of cancer-linked SNPs – also helped track down the notorious BRCA1 gene in the 1990s.
While a combination of SNPs is better at predicting disease risk, 23andMe usually only looks at one or two variants for each disease they test for. That means that, for most people who take the test and show a variation, the risk of them getting that disease is only somewhat above or below the population average.
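One common way researchers combine several variants is a polygenic score: sum each SNP’s log odds ratio, weighted by how many risk alleles the person carries. Here is a toy sketch with made-up effect sizes, for illustration only (this is not 23andMe’s actual method or data):

```python
import math

# Toy polygenic risk sketch with invented per-allele odds ratios.
# Each SNP contributes its log odds ratio once per risk allele carried.
snps = [
    # (risk-allele count for this person: 0, 1 or 2, per-allele odds ratio)
    (1, 1.10),
    (2, 1.05),
    (0, 1.20),
]

# Sum the log odds ratios, weighted by allele count...
score = sum(count * math.log(odds_ratio) for count, odds_ratio in snps)

# ...and exponentiate to get combined odds relative to someone
# carrying no risk alleles at these SNPs.
combined_odds_ratio = math.exp(score)
```

With per-variant effects this small, even carrying several risk alleles only nudges the combined odds ratio modestly above 1, which is why one or two SNPs on their own say little about an individual’s risk.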
“SNPs are only a small percentage of the difference in a given trait, and only explain a proportion of the genetic risk of disease,” adds Prof Easton, explaining that there are many other risk factors that need to be taken into account. For example, family history, which could point to an inherited genetic fault in genes like BRCA1 or BRCA2 (which increases a person’s risk of getting breast cancer by up to 90 per cent), or the person’s overall health and lifestyle.
And this point becomes obvious after I speak with Sir David Spiegelhalter, Winton Professor for the Public Understanding of Risk in the Statistical Laboratory at the University of Cambridge, who took the 23andMe test back when it tested for diabetes – a test since removed to appease the FDA.
“They told me, amongst other information, that I had an increased risk of diabetes,” he said. “They said that 31 out of 100 European men who share my genotype would develop Type 2 diabetes between 20 and 79, compared to an average of 26 out of 100.”
In other words, the test’s ability to predict what will actually happen to someone – and thus their practical usefulness – isn’t that great.
“Of course I have already lived to 62 without getting diabetes,” Spiegelhalter points out.
Variation vs mutation
As well as looking at SNPs – which are variations in the letters that make up genes – the 23andMe test also looks at certain gene mutations, where letters are missing from, or added to, a gene. For example, it tests for certain mutations in the BRCA1 and BRCA2 genes, which cause the gene to stop working properly.
So are any of these mutations better predictors of risk? Not necessarily.
Prof Easton explains: “These mutations are special, in that they are quite common in Ashkenazi Jews. And in that population, a test for these mutations makes some sense. But in other populations, including the UK, that would be a poor test. If there are reasons to test BRCA1 and BRCA2, you’d need to do an analysis of the entire gene, not just one or two spelling mistakes.”
There are hundreds of different mutations in BRCA1 and BRCA2, and to look at all of those you would need to get the full DNA sequence – which means looking at all the letters not just the ones that might be switched. At the moment such sequencing is hugely complex and very expensive – so 23andMe doesn’t offer this.
So if you’re of Ashkenazi Jewish descent this test is useful to you. But if you’re of any other ethnic background this test tells you next to nothing.
And herein lies a problem. Many of 23andMe’s variants are validated only for certain populations – most often those of European backgrounds. But these tests are advertised as helping people make healthier choices and “plan for the future”, which is pretty hard to do if the results don’t apply to you.
Make a change
But even if the results are relevant to you personally, does anyone actually change their behaviour?
“The crucial issue is whether these risk numbers are useful – relevant to your current situation, and different enough from average to be interesting in their own right, or to motivate changes in behavior,” argues Spiegelhalter.
One would assume that, if you found out that you were at increased risk of developing cancer, you would change your behaviour to try and reduce that risk. Maybe you would quit smoking, drink less, eat more healthily or start exercising more. But that doesn’t seem to be the case.
“Unfortunately, studies have shown that although people intend to change their behaviour, they don’t actually change it,” says Dr Meisel. She highlighted a 2010 analysis by the Cochrane Collaboration that combined the results of 20 different studies on genetic risk, and found no evidence that knowing about genetic risk affected a person’s behaviour.
But Dr Meisel suggests that it’s not all bad. “On the other hand, people don’t become more reckless when they know their genetic risk, or lack of risk, for a condition,” she says.
So where does that leave us?
Advice
If you’re thinking about having your genes tested for curiosity’s sake, go for it. These tests can be interesting, just like a Buzzfeed quiz can be interesting.
But if you’re taking the test for health planning purposes, or to see if you’re at an increased risk of cancer, you need to be aware of the shortcomings. For example, if you belong to any ethnic minority group, many of the results of this test are a waste of money, because most of the results won’t apply to you.
On top of this, most of the things you can actually do about your results – giving up smoking, eating healthily, cutting down on alcohol and all the rest – are just sensible things to do for anyone, regardless of their genes.
And – most important of all – genetics isn’t a guaranteed indication of your risk, but rather a small piece of the big picture that is your overall health. Rather than spending a hundred quid on a test, our advice is: if you’re worried about your health, your GP is a better bet. There are also excellent genetic counselling services that can help people in families that have a higher risk of developing cancer.
As for my final takeaway: there may be science behind 23andMe’s test – but I reckon I might as well have read my horoscope.
from Cancer Research UK - Science blog http://ift.tt/1Owpq2A
I am 99.9 per cent the same as every single human on the planet.
By that definition, I’m nothing special. But the remaining 0.01 per cent makes me as unique and special as my mother told me I am.
That 0.01 per cent is just a few thousand switched A, T, C or Gs in the 3 billion ‘letter’ code that is my human genome.
These variations – called single nucleotide polymorphism or SNPs – are tiny inherited chemical changes in your DNA – and they can make all the difference. Some alter your appearance, while others can affect your chances of developing diseases, like cancer.
Scientists have been hunting for these cancer-related differences in our DNA for the past few decades. And the more of them we find, the more accurate a picture we build up about how cancer risk is linked to our genes.
Over the years, there have been some important findings. Recently, for example, our scientists found seven genetic changes that affect the risk of prostate cancer. And just a couple of years ago, around 80 more were linked to breast, prostate and ovarian cancers – and we continue to find more still.
There are also other variants linked to things like our smoking behaviour or how our body responds to exercise, that could also, indirectly, affect a person’s chances of diseases like cancer later in life.
In fact, researchers have discovered hundreds of gene variations that seem to affect cancer risk, either directly or indirectly. Theoretically, looking at a person’s DNA could reveal how likely they are to get cancer and so has the potential to help them change their behaviour to minimise their risk.
Although your DNA is no guarantee of cancer or any disease, that doesn’t stop companies from trying to sell tests that claim to predict your future.
To find out if these tests could show anything more useful than a horoscope, I volunteered to take an over-the-counter test, made by a US company called 23andMe… and this is what happened.
Prep
“Why do you want to do this?” asked Anna Middleton, a genetic counsellor from the Wellcome Trust Sanger Institute. “Other than for the article,” she added.
People get their DNA tested for many reasons, such as family planning, or to take preventative health measures – but I was just curious.
Anna asked me a few more questions about the small print of the test’s ‘Terms of Service’, which I hadn’t then read. But I made a mental note to myself to actually read them – after all, this wasn’t like downloading a new version of iTunes.
Then she asked something that took me by surprise.
“Well have you talked to your parents?”
I hadn’t.
“I mean, say you did find out you’re at risk for Alzheimer’s disease, how do you think your parents would feel about that? If you have it, then both or either could have passed it on to you and would also have an increased risk. Are they worried about that? How do you feel about telling them about that?”
She brought up a good point. My DNA is their DNA, and very similar to my brother’s.
After asking my family if they wanted to know the test results, they all agreed that they wanted to know the good, bad and the ugly.
But not everyone wants to know. For some, knowledge might not be power but rather a constant source of stress.
Can any good come from knowing you’re at risk of something you can do little to prevent , or may never happen? Or does it just cause people to worry for no reason?
On one hand, there’s evidence that knowing might not actually cause stress.
For example, a study from 2010 randomly assigned 162 people who had parents with Alzheimer’s to find out if they had the genetic markers for the disease or not. Results showed that there were no differences in anxiety or test-related distress between the groups who found out their risk, and those who didn’t. The researchers concluded that there was no “significant short-term psychological risks”.
And for some, the knowledge can actually give peace of mind, especially when it comes to stigmatised conditions such as obesity. There is also evidence that even with cardiovascular diseases there can be a sense of relief.
Dr Susanne Meisel, Cancer Research UK-funded research psychologist at Kings College London, explains: “Knowing one’s genetic make-up seems to provide an ‘explanation’ for their condition, even if the actual genetic risk is very small.”
But many experts feel that these tests are an unnecessary burden, a murky stain on an otherwise clean bill of health. Others argue that the studies that claim predictive genetic testing causes “no harm” are poorly designed, and have serious limitations, which could be skewing the results. And now some evidence is beginning to show that there might be long-term consequences to knowing your genetic risk for certain diseases – such as depression.
But I was still curious, so – having spoken to Anna – I forged ahead with the test.
The test
The test – which involves spitting into a test tube – arrived on an unassuming Tuesday and, heeding science journalist Ed Yong’s words after his experience, I decided not to do it at work. Instead, I took the tube home and – ironically – found myself spitting into it while both a repairman and my flatmate awkwardly watched.
But before the spitting starts, you need to register your kit and fill out some consent forms. I decided to actually read the terms of service, which mostly seemed to be legal jargon to protect 23andMe, such as:
“We do not provide medical advice.”
“The Services are not intended to be used by you for any diagnostic purpose and are not a substitute for professional medical advice.”
But embedded in the extensive legal document were a couple of things that caused me to sit up and pay attention.
For example, halfway through reading I came across this:
“Many of the genetic discoveries that we report have not been clinically validated, and the technology we use, which is the same technology used by the research community, to date has not been widely used for clinical testing.”
Also, at the time I took the test, none of the tests had been licensed – neither by the US’s Food and Drug Administration (FDA), nor the European Medicines Agency – this was pretty much ‘recreational genetics’.*
But what truly struck me was this: “Genetic Information you share with others could be used against your interests.”
This essentially refers to insurance companies’ ability to discriminate against you, based on your test results.
In the UK, there is no legislation preventing insurance companies or employers from discriminating on the basis of genetic differences. There is, however, a voluntary moratorium between the government and the insurance industry limiting access to genetic results – but it ends in 2019.
And while very few companies ask for genetic test results, as genetic testing becomes more commonplace requesting the results could become standard, and it could have negative consequences.
The document ended in ‘shouty’ capitals saying it didn’t make any promises about the services, just in case I still hadn’t got the message.
Despite all this, I signed the consent form, spat in the tube and posted my kit back to the company. Eight weeks later an email told me my results were ready.
But it still took me three days to build up the courage to look at them.
To see what the results section looks like, watch this video
Results
The 23andMe website breaks down your results into four categories:
Traits – physical traits like eye colour or if you’re likely to be lactose intolerant.
Inherited conditions – conditions, like cystic fibrosis, that you can pass on to your children if your partner is also a carrier.
Drug response – how gene variants are linked to how you react to different medicines.
Genetic risk factors – genetic variants that predict how likely you are to get a certain disease in the future, such as breast cancer.
Within the categories, all the things they test for are laid out in a list with a star rating next to each, indicating how robust the evidence is that a genetic variant is a good indicator of what it’s linked to.
You can expand and read more about the SNP they tested, and the different studies they used to validate it. There is also a section that helps you understand what your results mean.
For the genetic risk factors there’s an added step where you need to ‘unlock’ some of the results, so you’re actively choosing to find out your genetic risk – and the site provides advice on what to do with your results and a link to genetic counselling services.
On the whole everything is well communicated and quite clear. My results, thankfully, showed that I was pretty average. The only area where I deviated from the norm was how I respond to certain types of medicine.
But how good a risk predictor is it? And do these results really make a difference?
Answer Sheet
“Most SNPs for disease are only very weakly associated,” says Cancer Research UK’s Professor Doug Easton, who – as well as helping identify hundreds of cancer-linked SNPs – also helped track down the notorious BRCA1 gene in the 1990s.
While a combination of SNPs is better at predicting disease risk, 23andMe usually only looks at one or two variants for each disease they test for. That means that, for most people who take the test and show a variation, the risk of them getting that disease is only somewhat above or below the population average.
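To see why one or two weak variants barely move the needle, here is a minimal sketch of the usual multiplicative model for combining per-SNP effects. All the effect sizes below are illustrative, not 23andMe’s actual figures:

```python
# Hypothetical sketch: why a handful of SNPs shifts risk only slightly.
# Under a simple multiplicative model, each risk allele scales the odds
# by its (typically small) per-allele odds ratio. Numbers are invented.

def combined_odds_ratio(per_snp_odds_ratios):
    """Combine independent per-SNP odds ratios multiplicatively."""
    total = 1.0
    for odds_ratio in per_snp_odds_ratios:
        total *= odds_ratio
    return total

# Typical effect sizes for common variants are small, ~1.05-1.20 per allele.
two_snps = combined_odds_ratio([1.15, 1.10])   # the one-or-two-variant case
many_snps = combined_odds_ratio([1.1] * 20)    # a much larger panel

print(f"Two weak SNPs:    combined OR = {two_snps:.2f}")
print(f"Twenty weak SNPs: combined OR = {many_snps:.2f}")
```

With only two weak variants the combined odds ratio stays close to 1, which is why a positive result usually leaves you only somewhat above the population average.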
“SNPs are only a small percentage of the difference in a given trait, and only explain a proportion of the genetic risk of disease,” adds Prof Easton, explaining that there are many other risk factors that need to be taken into account. For example, family history, which could point to an inherited genetic fault in genes like BRCA1 or BRCA2 (which increases a person’s risk of getting breast cancer by up to 90 per cent), or the person’s overall health and lifestyle.
And this point becomes obvious after I speak with Sir David Spiegelhalter, Winton Professor for the Public Understanding of Risk in the Statistical Laboratory at the University of Cambridge, who took the 23andMe test back when it still covered diabetes – a test since removed to appease the FDA.
“They told me, amongst other information, that I had an increased risk of diabetes,” he said. “They said that 31 out of 100 European men who share my genotype would develop Type 2 diabetes between 20 and 79, compared to an average of 26 out of 100.”
In other words, the test’s ability to predict what will actually happen to someone – and thus their practical usefulness – isn’t that great.
“Of course I have already lived to 62 without getting diabetes,” Spiegelhalter points out.
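The gap between those two numbers is worth unpacking. A quick sketch of the arithmetic (the figures come from Spiegelhalter’s quote above; the framing calculations are my own):

```python
# The arithmetic behind Spiegelhalter's result: 31 vs 26 per 100 is a
# modest absolute difference, even though it can be presented as an
# "increased risk". Input figures are from the quote; framing is mine.

genotype_risk = 31 / 100  # men with his genotype who develop type 2 diabetes
average_risk = 26 / 100   # average for European men over the same age range

absolute_increase = genotype_risk - average_risk  # 5 extra cases per 100
relative_risk = genotype_risk / average_risk      # how "increased risk" sounds

print(f"Absolute risk increase: {absolute_increase:.0%}")
print(f"Relative risk: {relative_risk:.2f}x")
```

The same data can be read as “about 19 per cent higher risk” or as “5 more cases per 100 people” – the first sounds alarming, the second rather less so, which is exactly the communication problem these reports run into.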
Variation vs mutation
As well as looking at SNPs – which are variations in the letters that make up genes – the 23andMe test also looks at certain gene mutations, where letters are missing from, or added to, a gene. For example, it tests for certain mutations in the BRCA1 and BRCA2 genes that cause the gene to stop working properly.
So are any of these mutations better predictors of risk? Not necessarily.
Prof Easton explains: “These mutations are special, in that they are quite common in Ashkenazi Jews. And in that population, a test for these mutations makes some sense. But in other populations, including the UK, that would be a poor test. If there are reasons to test BRCA1 and BRCA2, you’d need to do an analysis of the entire gene, not just one or two spelling mistakes.”
There are hundreds of different mutations in BRCA1 and BRCA2, and to look at all of those you would need to get the full DNA sequence – which means looking at all the letters, not just the ones that might be switched. At the moment such sequencing is hugely complex and very expensive – so 23andMe doesn’t offer this.
So if you’re of Ashkenazi Jewish descent this test is useful to you. But if you’re of any other ethnic background this test tells you next to nothing.
And herein lies a problem. Many of 23andMe’s variants are only validated in certain populations – most often those of European backgrounds. But these tests are advertised as helping people make healthier choices and “plan for the future”, which is pretty hard to do if the results don’t apply to you.
Make a change
But even if the results are relevant to you personally, does anyone actually change their behaviour?
“The crucial issue is whether these risk numbers are useful – relevant to your current situation, and different enough from average to be interesting in their own right, or to motivate changes in behavior,” argues Spiegelhalter.
One would assume that, if you found out that you were at increased risk of developing cancer, you would change your behaviour to try and reduce that risk. Maybe you would quit smoking, drink less, eat more healthily or start exercising more. But that doesn’t seem to be the case.
“Unfortunately, studies have shown that although people intend to change their behaviour, they don’t actually change it,” says Dr Meisel. She highlighted a 2010 analysis by the Cochrane Collaboration that combined the results of 20 different studies on genetic risk, and found no evidence that knowing about genetic risk affected a person’s behaviour.
But Dr Meisel suggests that it’s not all bad. “On the other hand, people don’t become more reckless when they know their genetic risk, or lack of risk, for a condition,” she says.
So where does that leave us?
Advice
If you’re thinking about having your genes tested for curiosity’s sake, go for it. These tests can be interesting, just like a Buzzfeed quiz can be interesting.
But if you’re taking the test for health planning purposes, or to see if you’re at an increased risk of cancer, you need to be aware of the shortcomings. For example, if you belong to any ethnic minority group, many of the results of this test are a waste of money, because most of the results won’t apply to you.
On top of this, most of the things you can actually do about your results – giving up smoking, eating healthily, cutting down on alcohol and all the rest – are just sensible things to do for anyone, regardless of their genes.
And – most important of all – genetics isn’t a guaranteed indication of your risk, but rather a small piece of the big picture that is your overall health. Rather than spending a hundred quid on a test, our advice is: if you’re worried about your health, your GP is a better bet. There are also excellent genetic counselling services that can help people in families that have a higher risk of developing cancer.
As for my final take away: there may be science behind 23andMe’s test – but I reckon I might as well have read my horoscope.
Of all the slick woo peddlers out there, one of the most famous (and most annoying) is Deepak Chopra. Indeed, he first attracted a bit of not-so-Respectful Insolence a mere 10 months after this blog started, when Chopra produced the first of many rants against nasty “skeptics” like me that I’ve deconstructed over the years. Eventually, the nonsensical nature of his pseudo-profound blatherings inspired me to coin a term to describe it: Choprawoo. Unfortunately, far too many people find compelling Deepak Chopra’s combination of mystical-sounding pseudo-profundity, his invocation of “cosmic consciousness” and rejection of genetic determinism, and his advocacy of “integrating” all manner of quackery into real medicine (a.k.a. “integrative medicine”, formerly “complementary and alternative medicine,” or CAM) – to the point of his getting actual legitimate medical school faculty to assist him with an actual clinical trial. He is, alas, one of the most influential woo peddlers out there. Worse, he was once a legitimate MD; now he’s a quack. Indeed, as I’ve described before, of all the quacks and cranks and purveyors of woo whom I’ve encountered over the years, Deepak Chopra is, without a doubt, one of the most arrogantly obstinate, if not the most arrogantly obstinate. Right now he’s pushing his latest book, Supergenes: Unlock the Astonishing Power of Your DNA for Optimum Health and Well-Being, which asserts that you can control the activity of your genes.
So it was greatly amusing to me to see Deepak Chopra and his pseudo-profound bullshit (and I use the term because the source I’m about to look at uses the term) featured so prominently in a new study by Pennycook et al. entitled On the reception and detection of pseudo-profound bullshit. The study was performed at the Department of Psychology, University of Waterloo, and the School of Humanities and Creativity, Sheridan College. Indeed, Deepak Chopra’s pseudo-profound bullshit is a key component of the study. I love the way the abstract starts, too:
Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous.
First, what do the authors mean by pseudo-profound bullshit? I might as well quote their definition in full, even at the risk of a large block of quoted text:
The Oxford English Dictionary defines bullshit as, simply, “rubbish” and “nonsense”, which unfortunately does not get to the core of bullshit. Consider the following statement:
Hidden meaning transforms unparalleled abstract beauty.
Although this statement may seem to convey some sort of potentially profound meaning, it is merely a collection of buzzwords put together randomly in a sentence that retains syntactic structure. The bullshit statement is not merely nonsense, as would also be true of the following, which is not bullshit:
Unparalleled transforms meaning beauty hidden abstract.
The syntactic structure of a), unlike b), implies that it was constructed to communicate something. Thus, bullshit, in contrast to mere nonsense, is something that implies but does not contain adequate meaning or truth. This sort of phenomenon is similar to what Buekens and Boudry (2015) referred to as obscurantism (p. 1): “[when] the speaker… [sets] up a game of verbal smoke and mirrors to suggest depth and insight where none exists.” Our focus, however, is somewhat different from what is found in the philosophy of bullshit and related phenomena (e.g., Black, 1983; Buekens & Boudry, 2015; Frankfurt, 2005). Whereas philosophers have been primarily concerned with the goals and intentions of the bullshitter, we are interested in the factors that predispose one to become or to resist becoming a bullshittee. Moreover, this sort of bullshit – which we refer to here as pseudo-profound bullshit – may be one of many different types. We focus on pseudo-profound bullshit because it represents a rather extreme point on what could be considered a spectrum of bullshit. We can say quite confidently that the above example (a) is bullshit, but one might also label an exaggerated story told over drinks to be bullshit. In future studies on bullshit, it will be important to define the type of bullshit under investigation (see Discussion for further comment on this issue).
This is about as fantastic an introduction to a scientific paper as I’ve ever seen. It also defines a form of BS at whose production Deepak Chopra is expert. But how does one measure the inherent “BS-ness” of a statement? The way the authors did this was absolutely hilarious. Some of you might be aware of a website, The Wisdom of Chopra, which is a random Deepak Chopra quote generator. As the generator tells us, each “quote” is generated from a list of words found in Deepak Chopra’s Twitter stream, randomly stuck together in a sentence. This was one source of raw material for the authors. The other was the New Age Bullshit Generator, which was also inspired by Deepak Chopra and works on similar principles, but uses a list of profound-sounding words compiled by its creator, Seb Pearce. Examples include sentences like “Imagination is inside exponential space time events” and “We are in the midst of a self-aware blossoming of being that will align us with the nexus itself.” These sites were used to produce ten meaningless sentences.
Next, University of Waterloo undergraduate students were asked to rate the sentences using the following 5-point scale: 1 = not at all profound, 2 = somewhat profound, 3 = fairly profound, 4 = definitely profound, 5 = very profound. Before the study started, the same students answered demographic questions and completed five cognitive tasks intended to assess components of cognitive ability. They also answered questions designed to assess religious beliefs. These students rated the ten meaningless pseudo-profound statements. This first study was designed to assess the BS potential of the statements and validate the internal consistency of the measures – specifically the new measure, dubbed the “Bullshit Receptivity” (BSR) scale, which had good internal consistency. Basically, the higher the BSR values a subject attributed to these statements, the higher that subject’s receptivity to BS. The authors found that BSR was “strongly negatively correlated with each cognitive measure except for numeracy (which was nonetheless significant)” and that “both ontological confusions and religious belief were positively correlated with bullshit receptivity.”
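“Internal consistency” has a standard statistical meaning here: the items on the scale should move together across subjects. A minimal sketch of how that is commonly checked (Cronbach’s alpha, computed over invented ratings – the paper of course computes it from its own data):

```python
# Sketch of an internal-consistency check for a rating scale like the BSR:
# Cronbach's alpha over subjects' item ratings. All ratings are invented
# for illustration; they are NOT data from the Pennycook et al. study.
from statistics import pvariance

def cronbach_alpha(ratings):
    """ratings: one list per subject, each holding k item ratings (1-5)."""
    k = len(ratings[0])
    item_vars = [pvariance([subj[i] for subj in ratings]) for i in range(k)]
    total_var = pvariance([sum(subj) for subj in ratings])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical profundity ratings: 4 subjects x 5 pseudo-profound items.
ratings = [
    [4, 5, 4, 4, 5],  # highly receptive subject
    [4, 4, 5, 4, 4],
    [2, 1, 2, 2, 1],  # skeptical subject
    [1, 2, 1, 2, 2],
]
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # high: items move together
```

When subjects who rate one nonsense statement as profound tend to rate the others the same way, alpha approaches 1, which is what “good internal consistency” reports.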
The next study looked at some real world examples. Participants were recruited for pay from Amazon’s Mechanical Turk. In addition to the ten meaningless statements used in the above study, ten novel items were generated by the two websites, and the authors also obtained 10 items from Deepak Chopra’s Twitter feed; e.g.:
Subjects were also assessed by additional instruments, such as the Paranormal Belief Scale and measures of wealth distribution and ideology. In contrast to the first study, participants evaluated the meaningless statements before completing the cognitive tasks, and the items from Chopra’s Twitter feed followed directly after the meaningless statements. This time around, Chopra’s Twitter items were rated as slightly more “profound” than the nonsense items, but the mean ratings for the two scales were highly correlated. It also turned out that the BSR scale significantly correlated with each variable tested, except for the Need for Cognition. Specifically, BSR was negatively correlated with performance on the heuristics and biases battery and positively correlated with Faith in Intuition. As in the first study, cognitive ability measures were negatively correlated with BSR.
Finally, in the remaining two studies included in this paper, the authors wanted to test whether some people might be particularly sensitive to pseudo-profound BS because they are less capable of detecting conflict during reasoning. Basically, they wanted to try to get some insight into why some people are particularly prone to pseudo-profound BS and others are particularly resistant to it. To test this, they did more studies in which they created a scale using ten motivational quotations that are conventionally considered to be profound (e.g., “A river cuts through a rock, not because of its power but its persistence”) because they are written in plain language and don’t contain the vague buzzwords characteristic of statements in the first two studies. They also included mundane statements that had clear meaning but wouldn’t be considered “profound” (e.g., “Most people enjoy some sort of music”). They then compared these against the correlations found in the earlier studies.
They found that those more receptive to bullshit are “less reflective, lower in cognitive ability (i.e., verbal and fluid intelligence, numeracy), are more prone to ontological confusions and conspiratorial ideation, are more likely to hold religious and paranormal beliefs, and are more likely to endorse complementary and alternative medicine (CAM).” The authors also assessed the same correlations using a measure of sensitivity to pseudo-profound BS determined by computing a difference score between profundity ratings for pseudo-profound BS and legitimately meaningful motivational quotations. Thus, people who rated the truly profound statements a lot higher than the pseudo-profound BS will have higher scores in this measure, which the authors propose as an estimate of how sensitive an individual’s “bullshit detector” is. They found that BS sensitivity was associated with better performance on measures of analytic thinking and lower paranormal belief. It was not, however, correlated with increased conspiratorial ideation or acceptance of CAM, which surprised the authors, who noted:
This was not predicted as all three forms of belief are considered “epistemically suspect” (e.g., Pennycook, et al., in press). One possible explanation for this divergence is that supernatural beliefs are a unique subclass because they entail a conflict between some immaterial claim and (presumably universal) intuitive folk concepts (Atran & Norenzayan, 2004). For example, the belief in ghosts conflicts with folk-mechanics – that is, the intuitive belief that objects cannot pass through solid objects (Boyer, 1994). Pennycook et al. (2014) found that degree of belief in supernatural religious claims (e.g., angels, demons) is negatively correlated with conflict detection effects in a reasoning paradigm. This result suggests that the particularly robust association between pseudo-profound bullshit receptivity and supernatural beliefs may be because both response bias and conflict detection (sensitivity) support both factors.
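The difference score behind “bullshit sensitivity” is simple arithmetic. A minimal sketch, with invented ratings rather than the study’s data:

```python
# Sketch of the "bullshit sensitivity" difference score: mean profundity
# rating for genuinely meaningful motivational quotes minus the mean rating
# for pseudo-profound bullshit. All ratings below are invented.

def _mean(xs):
    return sum(xs) / len(xs)

def bs_sensitivity(motivational_ratings, bullshit_ratings):
    """Higher score = subject distinguishes real profundity from bullshit."""
    return _mean(motivational_ratings) - _mean(bullshit_ratings)

# A subject with a working bullshit detector:
print(bs_sensitivity([4, 5, 4], [1, 2, 1]))  # large positive score

# A subject who finds everything equally profound:
print(bs_sensitivity([4, 4, 5], [4, 5, 4]))  # near zero
```

The design choice matters: unlike raw BSR, this score controls for a subject’s overall tendency to call anything profound, isolating the ability to tell the two kinds of statement apart.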
The authors make a point about different kinds of open-minded thinking, an uncritical open mind versus a more reflective open mind:
As a secondary point, it is worthwhile to distinguish uncritical or reflexive open-mindedness from thoughtful or reflective open-mindedness. Whereas reflexive open-mindedness results from an intuitive mindset that is very accepting of information without very much processing, reflective open-mindedness (or active open-mindedness; e.g., Baron, Scott, Fincher & Metz, 2014) results from a mindset that searches for information as a means to facilitate critical analysis and reflection. Thus, the former should cause one to be more receptive of bullshit whereas the latter, much like analytic cognitive style, should guard against it.
Overall, the authors have made a significant contribution by coming up with their Bullshit Receptivity scale and Bullshit Sensitivity scale, but it is not without its limitations. For one thing, the authors focused on very brief statements – generally less than Twitter-length, capped at 140 characters. It isn’t clear whether these results can be generalized to what the authors refer to as more “conversational” BS, which can be quite different from pseudo-profound BS. More importantly, this is preliminary work. The scales used contained relatively few items, and there was arguably too much focus on one person’s work, or on pseudo-profound BS inspired by one person: Deepak Chopra.
Despite these limitations, I think this study is an interesting, albeit flawed, first step at elucidating what factors contribute to receptivity and resistance to BS. As the authors put it:
The construction of a reliable index of bullshit receptivity is an important first step toward gaining a better understanding of the underlying cognitive and social mechanisms that determine if and when bullshit is detected. Our bullshit receptivity scale was associated with a relatively wide range of important psychological factors. This is a valuable first step toward gaining a better understanding of the psychology of bullshit. The development of interventions and strategies that help individuals guard against bullshit is an important additional goal that requires considerable attention from cognitive and social psychologists. That people vary in their receptivity toward bullshit is perhaps less surprising than the fact that psychological scientists have heretofore neglected this issue. Accordingly, although this manuscript may not be truly profound, it is indeed meaningful.
I tell ya, social scientists are far more tolerant of self-deprecating humor than biomedical scientists are. There’s no way a statement like the last sentence would make it into a basic or clinical science paper.
Be that as it may, this study seems to confirm much that is instinctively known (or at least has been assumed): analytic thinking probably decreases susceptibility to BS; paranormal beliefs go hand-in-hand with such susceptibility. It also tells us that susceptibility to nonsense is quite widespread in the population, who tend to be far more easily persuaded by emotional, vague, seemingly “profound” appeals than they are by data, science, and evidence. The question that a study of this type always raises, of course, is whether correlation indicates causation in this case. Can deficiencies in analytic thinking and reasoning be remedied to decrease one’s susceptibility to BS, and if so, what is the best way to go about this?
These are the sorts of questions skeptics have been asking for a long time. They are questions with real world consequences, because BS is everywhere.
from ScienceBlogs http://ift.tt/1O2W33n
Of all the slick woo peddlers out there, one of the most famous (and most annoying) is Deepak Chopra. Indeed, he first attracted a bit of not-so-Respectful Insolence a mere 10 months after this blog started, when Chopra produced the first of many rants against nasty “skeptics” like me that I’ve deconstructed over the years. Eventually, the nonsensical nature of his pseudo-profound blatherings inspired me to coin a term to describe it: Choprawoo. Unfortunately, far too many people find Deepak Chopra’s combination of mystical sounding pseudo-profundity, his invocation of “cosmic consciousness” and rejection of genetic determinism, and his advocacy of “integrating” all manner of quackery into real medicine (a.k.a. “integrative medicine, formerly “complementary and alternative medicine,” or CAM) to the point of getting actual legitimate medical school faculty to assist him with an actual clinical trial compelling. He is, alas, one of the most influential woo peddlers out there. Worse, he was once a legitimate MD; now he’s a quack. Indeed, as I’ve described before, of all the quacks and cranks and purveyors of woo whom I’ve encountered over the years, Deepak Chopra is, without a doubt, one of the most arrogantly obstinate, if not the most arrogantly obstinate. Right now he’s pushing his latest book, Supergenes: Unlock the Astonishing Power of Your DNA for Optimum Health and Well-Being, which asserts that you can control the activity of your genes.
So it was greatly amusing to me to see Deepak Chopra and his pseudo-profound bullshit (and I use the term because the source I’m about to look at uses the term) featured so prominently in a new study by Pennycook et al entitled On the reception and detection of pseudo-profound bullshit. The study was performed at the Department of Psychology, University of Waterloo, and the School of Humanities and Creativity, Sheridan College. Indeed, Deepak Chopra’s pseudo-profound bullshit is a key component of the study. I love the way the abstract starts, too:
Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous.
First, what do the authors mean by pseudo-profound bullshit? I might as well quote their definition in full, even at the risk of a large block of quoted text:
The Oxford English Dictionary defines bullshit as, simply, “rubbish” and “nonsense”, which unfortunately does not get to the core of bullshit. Consider the following statement:
Hidden meaning transforms unparalleled abstract beauty.
Although this statement may seem to convey some sort of potentially profound meaning, it is merely a collection of buzzwords put together randomly in a sentence that retains syntactic structure. The bullshit statement is not merely non- sense, as would also be true of the following, which is not bullshit:
Unparalleled transforms meaning beauty hidden abstract.
The syntactic structure of a), unlike b), implies that it was constructed to communicate something. Thus, bullshit, in contrast to mere nonsense, is something that implies but does not contain adequate meaning or truth. This sort of phenomenon is similar to what Buekens and Boudry (2015) referred to as obscurantism (p. 1): “[when] the speaker… [sets] up a game of verbal smoke and mirrors to suggest depth and insight where none exists.” Our focus, however, is somewhat different from what is found in the philosophy of bullshit and related phenomena (e.g., Black, 1983; Buekens & Boudry, 2015; Frankfurt; 2005). Whereas philosophers have been primarily concerned with the goals and intentions of the bullshitter, we are interested in the factors that pre- dispose one to become or to resist becoming a bullshittee. Moreover, this sort of bullshit – which we refer to here as pseudo-profound bullshit – may be one of many different types. We focus on pseudo-profound bullshit because it rep- resents a rather extreme point on what could be considered a spectrum of bullshit. We can say quite confidently that the above example (a) is bullshit, but one might also label an exaggerated story told over drinks to be bullshit. In future studies on bullshit, it will be important to define the type of bullshit under investigation (see Discussion for further comment on this issue).
This is about as fantastic an introduction to a scientific paper as I’ve ever seen. It also defines a form of BS at whose production Deepak Chopra is expert at. But how does one measure the inherent “BS-ness” of a statement? The way the authors did this was absolutely hilarious. Some of you might be aware of a website, The Wisdom of Chopra, which is a random Deepak Chopra quote generator. As the generator tells us, each “quote” is generated from a list of words that can be found in Deepak Chopra’s Twitter stream randomly stuck together in a sentence. This was one source of raw material for the authors. The other was the New Age Bullshit Generator, which was also inspired by Deepak Chopra and works on similar principles, but uses a list of profound-sounding words compiled by its creator, Seb Pearce. Examples include sentences like “Imagination is inside expo- nential space time events” and “We are in the midst of a self-aware blossoming of being that will align us with the nexus itself.” These sites were used to produce ten meaningless sentences.
Next, Waterloo University undergraduate students were asked to rate the sentences using the following 5-point scale: 1= Not at all profound, 2 = somewhat profound, 3 = fairly profound, 4 = definitely profound, 5 = very profound. Before the study started, the same students answered demographic questions and completed five cognitive tasks intended to assess components of cognitive ability. They also answered questions designed to assess religious beliefs. These students rated the ten meaningless pseudo-profound statements. This first study was to assess the BS potential of the statements and validate the internal consistency of the measures, specifically the new measure, dubbed the “Bullshit Receptivity” (BSR) scale, which had good internal consistency. Basically, the higher the BSR values attributed to these statements, the higher the, well, receptivity to BS demonstrated by the subject. The authors found that BSR was “strongly negatively correlated with each cognitive measure except for numeracy (which was nonetheless significant)” and that “both ontological confusions and religious belief were positively correlated with bullshit receptivity.”
The next study looked at some real world examples. Participants were recruited for pay from Amazon’s Mechanical Turk. In addition to the ten meaningless statements used in the above study, ten novel items were generated by the two websites, and the authors also obtained 10 items from Deepak Chopra’s Twitter feed; e.g.:
Subjects were also assessed by additional instruments, such as the Paranormal Belief Scale and measures of wealth distribution and ideology. In contrast to the first study, participants evaluated the meaningless statements before completing the cognitive tasks, and the items from Chopra’s TWitter feed folowed directly after the meaningless statements. This time around, Chopra’s Twitter items were rated as slightly more “profound” than the nonsense items, but the mean ratings for the two scales were very correlated. It also turned out that the BSR scale significantly correlated with each variable tested, except for the Need for Cognition. Specifically, BSR was negatively correlated with performance on the heuristics and biases battery and positively correlated with Faith in Intuition. As in the first study, cognitive ability measures were negatively correlated with BSR.
Finally, in the remaining two studies included in this paper, the authors wanted to test whether some people might be particularly sensitive to pseudo-profound BS because they are less capable of detecting conflict during reasoning. Basically, they wanted to try to get some insight into why some people are particularly prone to pseudo-profound BS and others aren particularly resistant to it. To test this, they did more studies in which they created a scale using ten motivational quotations that are conventionally considered to be profound (e.g., “A river cuts through a rock, not because of its power but its persistence”) because they are written in plain language and don’t contain the vague buzzwords characteristic of statements in the first two studies. They also included mundane statements that had clear meaning but wouldn’t be considered “profound” (e.g., “Most people enjoy some sort of music”). They then compared the correlations they found before.
They found that those more receptive to bullshit are “less reflective, lower in cognitive ability (i.e., verbal and fluid intelligence, numeracy), are more prone to ontological confusions and conspiratorial ideation, are more likely to hold religious and paranormal beliefs, and are more likely to endorse complementary and alternative medicine (CAM).” The authors also assessed the same correlations using a measure of sensitivity to pseudo-profound BS determined by computing a difference score between profundity ratings for pseudo-profound BS and legitimately meaningful motivational quotations. Thus, people who rated the truly profound statements much higher than the pseudo-profound BS will have higher scores on this measure, which the authors propose as an estimate of how sensitive an individual’s “bullshit detector” is. They found that BS sensitivity was associated with better performance on measures of analytic thinking and lower paranormal belief. It was not, however, correlated with increased conspiratorial ideation or acceptance of CAM, which surprised the authors, who noted:
This was not predicted as all three forms of belief are considered “epistemically suspect” (e.g., Pennycook, et al., in press). One possible explanation for this divergence is that supernatural beliefs are a unique subclass because they entail a conflict between some immaterial claim and (presumably universal) intuitive folk concepts (Atran & Norenzayan, 2004). For example, the belief in ghosts conflicts with folk-mechanics – that is, the intuitive belief that objects cannot pass through solid objects (Boyer, 1994). Pennycook et al. (2014) found that degree of belief in supernatural religious claims (e.g., angels, demons) is negatively correlated with conflict detection effects in a reasoning paradigm. This result suggests that the particularly robust association between pseudo-profound bullshit receptivity and supernatural beliefs may be because both response bias and conflict detection (sensitivity) support both factors.
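The arithmetic behind the two measures is simple enough to sketch in a few lines of Python. To be clear, this is purely an illustration of the difference-score idea described above, not the authors’ actual analysis code, and the ratings shown are hypothetical:

```python
# Illustrative sketch only: NOT the authors' analysis code; the ratings
# below are hypothetical. It shows the arithmetic behind two measures:
# BSR (mean profundity rating of pseudo-profound items) and the
# "bullshit sensitivity" difference score.

def mean(xs):
    return sum(xs) / len(xs)

def bs_sensitivity(bs_ratings, genuine_ratings):
    """Mean profundity rating for genuinely meaningful quotations minus
    the mean rating for pseudo-profound bullshit; higher scores suggest
    a better-calibrated "bullshit detector"."""
    return mean(genuine_ratings) - mean(bs_ratings)

# Hypothetical participant rating items from 1 ("not at all profound") to 5.
bs_items = [4, 5, 3, 4, 4]        # pseudo-profound bullshit items
genuine_items = [5, 4, 5, 5, 4]   # motivational quotations

bsr = mean(bs_items)                                   # receptivity
sensitivity = bs_sensitivity(bs_items, genuine_items)  # difference score
print(round(bsr, 2), round(sensitivity, 2))            # 4.0 0.6
```

A participant who rates nonsense and genuine quotations as equally profound gets a sensitivity score near zero, regardless of how high their overall receptivity is.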
The authors make a point about different kinds of open-minded thinking, an uncritical open mind versus a more reflective open mind:
As a secondary point, it is worthwhile to distinguish uncritical or reflexive open-mindedness from thoughtful or reflective open-mindedness. Whereas reflexive open-mindedness results from an intuitive mindset that is very accepting of information without very much processing, reflective open-mindedness (or active open-mindedness; e.g., Baron, Scott, Fincher & Metz, 2014) results from a mindset that searches for information as a means to facilitate critical analysis and reflection. Thus, the former should cause one to be more receptive of bullshit whereas the latter, much like analytic cognitive style, should guard against it.
Overall, the authors have made a significant contribution by coming up with their Bullshit Receptivity scale and Bullshit Sensitivity scale, but the work is not without its limitations. For one thing, the authors focused on very brief statements, generally no longer than a tweet’s 140-character limit. It isn’t clear whether these results generalize to what the authors refer to as more “conversational” BS, which can be quite different from pseudo-profound BS. More importantly, this is preliminary work. The scales used contained relatively few items, and there was arguably too much focus on one person’s work or pseudo-profound BS inspired by one person: Deepak Chopra.
Despite these differences, I think this study is an interesting, albeit flawed, first step at elucidating what factors contribute to receptivity and resistance to BS. As the authors put it:
The construction of a reliable index of bullshit receptivity is an important first step toward gaining a better understanding of the underlying cognitive and social mechanisms that determine if and when bullshit is detected. Our bullshit receptivity scale was associated with a relatively wide range of important psychological factors. This is a valuable first step toward gaining a better understanding of the psychology of bullshit. The development of interventions and strategies that help individuals guard against bullshit is an important additional goal that requires considerable attention from cognitive and social psychologists. That people vary in their receptivity toward bullshit is perhaps less surprising than the fact that psychological scientists have heretofore neglected this issue. Accordingly, although this manuscript may not be truly profound, it is indeed meaningful.
I tell ya, social scientists are far more tolerant of self-deprecating humor than biomedical scientists are. There’s no way a statement like the last sentence would make it into a basic or clinical science paper.
Be that as it may, this study seems to confirm much that is instinctively known (or at least has been assumed): analytic thinking probably decreases susceptibility to BS; paranormal beliefs go hand-in-hand with such susceptibility. It also tells us that susceptibility to nonsense is quite widespread in a population that tends to be far more easily persuaded by emotional, vague, seemingly “profound” appeals than by data, science, and evidence. The question that a study of this type always raises, of course, is whether correlation indicates causation in this case. Can deficiencies in analytic thinking and reasoning be remedied to decrease one’s susceptibility to BS, and if so, what is the best way to go about it?
These are the sorts of questions skeptics have been asking for a long time. They are questions with real world consequences, because BS is everywhere.
You are probably thinking, whose bird-brained idea was that?
Well, as it turns out, a new study published in PLOS ONE shows that pigeons can be trained to accurately differentiate cancerous from healthy tissue biopsies. This is because the process of diagnosing cancer involves visual screening of MRIs and biopsies, and pigeons use visual processing similar to that of humans. Moreover, according to the article, pigeons are able to learn and memorize over 2000 images, a skill that likely helps in identifying cancerous cells.
Quoted in Scientific American, study author Dr. Richard Levenson (University of California, Davis) said, “The birds might be able to assess the quality of new imaging techniques or methods of processing and displaying images without forcing humans to spend hours or days doing detailed comparisons to figure out if certain innovations are in fact better or worse than current methods.”
Sources:
Levenson RM, Krupinski EA, Navarro VM, Wasserman EA. Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images. PLOS ONE. November 18, 2015. DOI: 10.1371/journal.pone.0141357