The news is filled with scientific studies that range from the lifesaving to the bizarre – and every week there seems to be a new alarming truth about something that might cure or kill us. How do we figure out what to take heed of and what we might happily disregard? Rebekah White explains how to sort the good research from the bad.
More than one million studies are published internationally every year. There are studies that claim hitchhiking is easier for blondes than brunettes, that men holding guns are perceived as physically larger than men not holding guns, that looking at Botticelli paintings is more effective at easing pain than gazing at empty spaces, and that people will buy more books if the bookstore smells like chocolate.
So far, so fascinating. But not all research is of equal authority. Some claims made in the media are drawn from years of research and a body of carefully considered evidence; others are based on a five-minute clipboard survey of half-a-dozen people at the shops. And certain trendy topics seem to attract conflicting information. A glance over recent studies on coffee reveals claims that it makes you fat, helps you lose weight, prolongs your life, increases the risk of child cancer and decreases the risk of breast cancer recurring. And that’s just the first handful. There are equally confusing and contradictory claims made about the perils of sugar, gluten, microwaves and cow farts.
How can we navigate this morass of information? Just because something is called a ‘study’ and is conducted by people with advanced degrees doesn’t automatically mean that it’s worth paying attention to, or that its findings have been reported correctly. And just because something is repeated ad nauseam in the media and online – like how you should drink eight cups of water per day, or that vaccinations can trigger autism – does not necessarily make it true. They might all be drawing on the same dubious source.
Getting it right matters. It’s one thing for a study of scented bookstores to contain dodgy data, but the stakes are much higher when the subject in question is a large-scale public health intervention, such as the addition of fluoride to the public water supply or a national immunisation programme against debilitating diseases.
Ultimately it’s about knowing whom we can trust. And for that, we need a bit of process. Here’s a simple guide to figuring out what to pay attention to and what to ignore …
1 Where has the study been published?
If a claim hasn’t been published in a reputable scientific journal, you can fairly confidently disregard it. You should be able to look up the publication details online, even if you can’t read the whole study. ‘Peer review’ means that before a study is published, it is scrutinised by qualified scientists who work in the same field and who check the data, conclusions and method to ensure the authors haven’t made any mistakes or overlooked confounding factors along the way. For bonus points, double-check that the journal it’s been published in is well reputed.
2 Who participated in the study, and how were they chosen?
Look for the study’s ‘sample size’ – the number of participants involved. The bigger the sample size, the more trustworthy the results. Research performed on a handful of people doesn’t automatically translate to the wider population, and small studies are more susceptible to chance results and to outside influences, or ‘confounding factors’, affecting the outcome (more on that later).
For example, ‘Smelling the Books: The Effect of Chocolate Scent on Purchase-Related Behaviour in a Bookstore’ might have been published in the perfectly decent Journal of Environmental Psychology, but it only surveyed one store.
By contrast, the ongoing Dunedin Study of more than 1,000 babies born in Dunedin between 1972 and 1973 surveys a wide cross-section of the New Zealand population over a long period of time. It has uncovered a number of fascinating trends and its findings are well regarded internationally.
Small studies also have less statistical power, as British doctor Alicia White writes in a National Health Service guide to understanding health news. “We know that if we toss a coin the chance of getting a head is the same as that of getting a tail – 50/50,” she explains. “However, if we didn’t know this and we tossed a coin four times and got three heads and one tail, we might conclude that getting heads was more likely than tails. But this chance finding would be wrong. If we tossed the coin 500 times – gave the experiment more ‘power’ – we’d be much more likely to get an even number of heads and tails, giving us a better idea of the true odds.”
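White’s coin-toss analogy is easy to try for yourself. The short sketch below is a hypothetical illustration (not from the NHS guide): it simulates a fair coin and shows why a four-toss ‘study’ can easily mislead, while a 500-toss one rarely does.

```python
import random

def observed_heads_fraction(num_tosses, seed=None):
    """Toss a fair coin num_tosses times and return the fraction of heads.
    The true probability is 0.5, but small samples can stray far from it."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(num_tosses))
    return heads / num_tosses

# A tiny 'study': with only 4 tosses, lopsided results like 3 heads are common.
print(observed_heads_fraction(4, seed=1))

# A bigger 'study': 500 tosses almost always lands close to 0.5.
print(observed_heads_fraction(500, seed=1))
```

Run it with different seeds and the small sample swings wildly while the large one stays near 0.5 – exactly the ‘power’ White describes.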
Finally, is the sample ‘representative’? It’s misleading to assume a study performed on men will have similar results for women – and the same goes for adults and children, animals and humans, or people with multiple diseases compared to people with just one. (Studies conducted on animals don’t directly translate to humans; they might suggest what the effects in people could be, but they don’t prove them.)
3 How were biases and confounding factors avoided?
Unfortunately for science, we don’t live in isolation, and our bodies are complex organisms. There are many factors that can affect the outcome of a study, so scientists need to take extra precautions to ensure any results they measure are caused by the chemical or issue in question rather than by something else.
There are two main ways of conducting research: observational studies and controlled experiments. The first involves simply watching and collecting information, without intervening; the second involves changing something, such as giving participants a drug, then seeing what happens.
A controlled experiment involves two groups of people: one that receives the treatment and one that doesn’t. This second group, the ‘control’ group, might be given a placebo or nothing at all. Randomly assigning participants between the two groups means that unexpected influences, or ‘confounding factors’, should be equally distributed between them.
Observational science tells us about how something plays out in the real world, but there isn’t a control group. It involves watching behaviour: coffee-drinking habits, what a group of people eats over a certain period of time, or which children out of a group born in a certain year develop allergies.
Finally, ‘blinding’ a study is important to ensure testers aren’t biased in the way they treat participants. In ‘double-blinded’ studies, neither experimenters nor participants know who is receiving what kind of treatment, in order to ensure all subjects are treated the same.
If in doubt, a ‘randomised double-blind placebo-controlled clinical trial’ is a pretty good bet.
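The ‘randomised’ part of such a trial is simple to sketch. This hypothetical helper (the function name and participant list are illustrative, not taken from any real trial) shuffles participants before splitting them, so unknown confounding factors should, on average, be spread across both groups.

```python
import random

def randomise_into_groups(participants, seed=None):
    """Shuffle participants, then split them into equal-sized treatment
    and control groups. Because assignment is random, confounding
    factors (age, diet, habits) should balance out between the groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# 100 hypothetical participants split into two groups of 50.
treatment, control = randomise_into_groups(range(100), seed=7)
print(len(treatment), len(control))
```

The shuffle is what does the work: any systematic difference between the groups can then only arise by chance, which larger samples make increasingly unlikely.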
4 How does the study fit real-world circumstances?
Laboratory studies offer an isolated environment that’s very different from the real world, while various confounding factors exist depending on where in the world a study was conducted.
For example, in 2012 Harvard University published a systematic review (more on that later) of 27 studies that showed an association between fluoride consumption and low IQ in children. This is widely quoted by anti-fluoride campaigners as evidence of the dangers of fluoride.
However, a closer look reveals that the Chinese children studied were consuming water with higher levels of fluoride than that available in a regulated New Zealand water supply.
What’s more, most of these studies didn’t control for any confounding factors, says Dr Jonathan Broadbent, senior lecturer in dentistry at the University of Otago. “There’s quite a bit of research that shows there’s so many different things that affect IQ, such as breastfeeding – you’ve got to account for those things in your analyses as well,” he says. “You also have to consider the rural and urban differences in IQ; there have been 50 years of research showing a slight difference in IQ between people who live in rural and urban areas. In China these areas have very high fluoride naturally. There are also frequently high levels of lead in the water – lead can be neurotoxic, so if there’s a lot of minerals dissolved in the water, that could potentially be the cause.”
5 Has anyone repeated the study?
Peer review isn’t a perfect system. “Errors, intentional or accidental, do appear in the literature,” said the Prime Minister’s chief science advisor Sir Peter Gluckman in an April 2013 discussion paper, ‘Interpreting Science’. “Replication is the key protection and for these reasons when a surprising result is found, the scientific community needs to remain skeptical until there is independent replication.”
It’s very rare that one study will change the course of thinking, points out Ben Goldacre in Bad Science. “If one study is produced that differs widely from other studies on the subject, that’s not taken as a conclusion but an invitation for other scientists to try and replicate the results to make sure it wasn’t a random blip.”
That’s why it’s important to look at all the literature around the subject – not just cherry-pick some studies and ignore others (unless they are faulty in one of the ways described above). That’s why a ‘meta-analysis’ or ‘systematic review’ is so important. These are studies of studies, summarising the results of tens or even hundreds. The criteria used to select or exclude studies from the review are also included so other scientists can judge whether or not it’s a fair summary.
“Reporting on new, single studies actually isn’t the best way to tell you about what’s happening in science,” writes health journalist Julia Belluz in Maclean’s. “A single study will not change clinical practice or thinking about health. Many studies, in different contexts, using different methods, on different populations, will.”
Fluoride is a naturally occurring chemical which can be found in drinking water and food around the world at different levels, depending on a region’s geology. Some parts of the world have naturally occurring high levels of fluoride in water, which leads to health problems – just as high levels of other minerals do.
The average concentration of fluoride in ocean water is 1.3 parts per million; New Zealand has naturally low levels of fluoride in its water, so it is added in some places to top it up to between 0.7 and 1.0 parts per million.
Several systematic reviews of fluoride studies found that safely fluoridated water supplies result in a reduction in incidences of dental cavities, and no adverse health effects beyond dental fluorosis, a cosmetic mottling of tooth enamel. Down under, the 2009 New Zealand Oral Health Survey found fewer instances of dental cavities (also known as ‘caries’) in areas with access to fluoridated water than in areas without. (Rates of caries have also been decreasing in unfluoridated areas as access to dental health services improves.)
Contrary to a popular theory, fluoride was not added to the water supply of concentration camps or ghettos by Nazis in World War II in order to make Jewish prisoners more docile. As part of a Pulitzer Prize-winning investigation into fluoride, Florida’s Tampa Bay Times newspaper contacted historians at the US Holocaust Memorial Museum who confirmed there was no evidence to support this. (In fact, Jewish prisoners hardly had access to water at all).
Dentists do not receive financial rewards for supporting fluoridation, says Dr Jonathan Broadbent, current president of the Otago branch of the New Zealand Dentists’ Association. “I don’t receive any money to promote fluoridation. I work in dental public health research; if the evidence pointed somewhere else, I wouldn’t promote it.”
Fluoride is often described as a byproduct of the phosphate fertiliser industry, as though it’s simply waste that’s collected, then added to the water supply. This isn’t the case. Hydrofluorosilicic acid (a form of fluoride) is deliberately made by causing a reaction with silicon tetrafluoride, a chemical left over from fertiliser manufacture (other leftover chemicals include water and carbon dioxide). This is seen as an environmentally friendly way of treating silicon tetrafluoride – transforming it from a waste product into something useful. Perhaps it would be more accurate to describe hydrofluorosilicic acid as ‘upcycled’.
When journals get things wrong
The Lancet is one of the most respected scientific journals in the world. In 1998, it published a study by a British doctor, Andrew Wakefield, which suggested a link between the measles, mumps and rubella vaccine (MMR) and the development of autism and bowel disease in children.
The study was initially criticised both for the way it was conducted and for the fact that it looked at only 12 children. More importantly, other researchers were unable to replicate Wakefield’s results. An investigation eventually found Wakefield’s research was fraudulent, and in 2010 he was stripped of his medical licence – but not before years of news stories and celebrity commentary had called the safety of vaccines into question.
Meanwhile the uptake of the MMR vaccine dropped in Britain from 94 percent in 1995 to 78 percent in 2003. The country experienced a mumps epidemic in 2005, with 28,470 cases recorded in the first third of the year alone (there had been 1,811 in the same period in 2004).
Scientific literature is overwhelmingly in support of the safety of vaccines. Most recently, a 2012 study published in The Journal of Pediatrics, which looked at almost 1,000 children, found that immunisation during the first two years of life was not linked to increased risk of autism.
But vaccine scares keep reappearing, as British doctor and journalist Ben Goldacre points out in his book Bad Science. In the 1970s, another British doctor posited that whooping-cough vaccine caused neurological damage; in France, hepatitis B immunisation has been incorrectly linked to multiple sclerosis; in the 1940s and 1950s, a theory circulated in America that the national immunisation programme was part of a Communist plot to weaken the general population. And in early 2013, the Taliban attacked polio vaccination workers in Afghanistan due to fears it was a plot to sterilise Muslim children.
Why do these fears persist in the face of so much evidence to the contrary? One factor is our inherent sympathy for the underdog and fundamental suspicion of the system. As Goldacre points out, it’s the perfect story – complete with a charismatic maverick fighting against a universal system of care. It “involves the government, and needles going into children; and it offers the opportunity to blame someone, or something, for a dreadful tragedy.”
When science gets confused with values
We don’t all have science degrees, we’re not all statisticians or experts on test design, and we don’t have time to wade through the often murky, convoluted language of abstracts or conclusions. So at some point we need to trust others to interpret results for us.
The scientific method is a process, not a series of concrete conclusions; new evidence and information is constantly arising and being incorporated into the body of knowledge. When you’re choosing whose analysis to believe, look for a person or organisation’s willingness to accept new, contradictory information.
The problem is, ideology and science sometimes get confused, and sometimes research becomes tossed about in a debate that’s really about personal values. “The language of evidence-based medicine can be confusing to the non-expert and easily exploited,” writes Sir Peter Gluckman. He adds in ‘Interpreting Science’: “Too often a piece of science is misunderstood, misused or overstated … Values-driven pressure groups will ‘cherry-pick’ studies they can present as credible and convincing to support their particular advocacy agenda.”
Let’s use the fluoride debate as an example once more. Arvid Carlsson, winner of the 2000 Nobel Prize in Physiology or Medicine, campaigned against water fluoridation in Sweden because he was philosophically opposed to the concept; it went against his belief that medications should be tailored to the individual rather than applied en masse.
On the other side are those who argue that the common good of a population is more important than individualised care. “We have very little in the way of a safety net for adults who cannot afford dental care,” says Dr Jonathan Broadbent. “I’m all in favour of having a health-conscious population but a lot of new innovative health-care strategies that require individual action can actually increase inequalities, since uptake is greatest among those who have better jobs and education.”
Sir Peter Gluckman says the debate about fluoride is really about how to balance the common good of a population with individual rights, and whether or not it’s okay to use food as a way of delivering health care. “There is no scientific issue here,” he says. “It is purely an issue of values.”