A week or so ago a news story came out about a new study, which found that people who take their coffee black are more likely to be psychopaths. Unsurprisingly, it went viral. But within a day the study had been debunked.
It’s not that drinking black coffee makes you a psychopath. It’s that, of the people tested, those who said they like bitter foods (like black coffee or grapefruit) also scored a tiny bit higher on a 50-question survey designed to detect personality traits like narcissism and Machiavellianism—traits thought to be tied to psychopathy.
If it sounds like that’s a lot of weasel words, congrats, you are correct. Scientists use phrases like “associated with” or “linked to” when the data shows that the variables increase or decrease at the same time, or that one increases as the other decreases. Those are called correlations. But the fact that two things are correlated does not mean that one causes the other. They might not actually be related to each other at all. For example, the number of films Nicolas Cage appeared in per year is correlated with the number of people who drowned by falling into a pool, but I think we can all agree that’s not Cage’s fault. Science is all about relationships, and relationships are complicated. But the complicated, somewhat weasel-y language scientists use to describe their findings doesn’t fit into a headline well.
I’m not just blaming the media for this one, though. Yeah, the headlines were sensationalist, and yeah, claims were exaggerated, but part of the job of being a scientist is being able to accurately tell people what you found in an understandable way. It’s a two-way street, and it’s a lazy scientist who blames it all on the media without asking whether they did a good job explaining themselves.
Even if the news articles had been completely accurate, there’s still a problem: the study itself. The researchers were looking at people’s food preferences without thinking about the fact that taste is incredibly subjective. It’s so subjective that the researchers and the participants didn’t even agree on what foods were bitter. The authors said grapefruit was a bitter food, but some participants said it wasn’t. So that’s a problem.
Taste is a very complicated sense, and it can be influenced by many factors. For example, cilantro tastes like soap to me because I have a genetic mutation that affects how my brain perceives the taste of cilantro. When I ask my friends what cilantro tastes like to them, the best description they can come up with is cilantro-y. It’s like trying to describe colors to someone who can’t see colors. You just can’t.
Taste can even be influenced by our other senses. No matter how good a stew is, if it has the misfortune of looking like vomit, you’ll most likely find it less tasty than you would if you ate it blindfolded. Same thing goes if you are told that a glass of wine is very expensive. You can throw Franzia in a fancy bottle and say it’s extremely expensive and people will think it’s absolutely amazing, because they are expecting that an expensive wine in a fancy bottle will taste better than cheap wine that comes in boxes.
And that’s not news to scientists. The researchers on the coffee study probably heard about that phenomenon when they were undergrads. But scientists are human, and humans forget things and make mistakes. Being a professional scientist does not mean you can always design a flawless experiment and always interpret the results accurately. Science has safeguards designed to catch as many of those errors as possible, but the safeguards aren’t perfect either. Scientists work together to catch each other’s mistakes. We have our peers read and critique our findings before they’re published, and we fully expect that other scientists will tell us if we messed up or missed something. Criticism is necessary for good science, but sometimes it just fails to show up for the party.
So how are we supposed to know what science news is legit and what science news is a giant cluster of communication breakdowns?
Remember that science is all about complicated relationships, so it’s very uncommon to be able to say that you are 100 percent sure that this thing causes that thing. Also, studies need to be replicated before we can be sure their results are legit. If someone else does exactly what you did but has different results, your findings probably aren’t that strong. It could be that your results were a fluke. It could be that somebody accidentally sneezed into a petri dish and contaminated it with their nose germs. No matter what, you need to be able to replicate your results in order to call them valid.
Also, dishonest scientists exist. A while back I was reading a study that compared several drugs and noticed that one of the graphs was missing a column. It seemed weird that the authors compared three drugs in all of the graphs except one, so I checked the financial disclosures section. Sure enough, the missing column was for a drug made by the company that the head researcher works for. I’ve also seen papers where the authors straight-up misquoted other research to back up their findings. Never mind that the paper they cite says the exact opposite of what the authors claim it says, and that anyone could figure it out just by reading the cited paper.
The most important thing is to be skeptical. Very few things in science are absolute, especially when you start talking about living things.
A welcome article. As a scientist I came to the conclusion that what is wrong with science is that scientists are people/humans. As such they have the same propensity to bullshit as any other humans. Add to that that scientists often work for, or have to suck for money from, non-scientists who control the money, and it becomes the dance of the bullshitters. Example: the recent WHO claim that cured and red meats are carcinogens. While this is not news, the reaction was predictable. If we like the science, it is “we must follow the science.” If we don’t like the science, we ignore what the science says. And beware of the politician, actor, or corporate spokesthingy who has no scientific training proclaiming that “we must follow THE SCIENCE!”
Here is an example of science double-talk: “Estimates of Food-borne Illness in Canada.
The Public Health Agency of Canada estimates that each year roughly one in eight Canadians (or four million people) get sick due to domestically acquired food-borne diseases. This estimate provides the most accurate picture yet of which food-borne bacteria, viruses, and parasites (“pathogens”) are causing the most illnesses in Canada, as well as estimating the number of food-borne illnesses without a known cause.
In general, Canada has a very safe food supply.” — http://www.phac-aspc.gc.ca/efwd-emoha/efbi-emoa-eng.php
What is the implied definition of safe? I think one of the biggest bullshit indicators is the use of “safe.” I like the statement of the British Medical Association on the back cover of their guide Living with Risk (1987): “Nothing in life is safe.” To that I add: not even a condom!
If only we could get the social scientists to go live under a bridge, look a criminal in the eye and get to know them (before they are in jail), or go live in the third-world country being analyzed. Then, rather than throwing out bubble sheets or trying to isolate the real world, a very dynamic process with many factors involved, into a lab, we’d instead make real OBSERVATIONS (which are supposed to be king in science), which could come much closer to identifying a cause. Instead the social sciences deal strictly with the end result and are full of correlations, just as this article spoke of. No wonder problems don’t get solved in society.
As for GMOs, all I want to say is yes, they feed hungry people in Africa, but this is not Africa and our problem is almost exactly the opposite of theirs, so please save the hero speech (to all you chem majors out there).
Good article Ms. Clark.