Spotting bullshit in science news: A how-to

A week or so ago, a news story came out about a new study claiming that people who take their coffee black are more likely to be psychopaths. Unsurprisingly, it went viral. But within a day the study had been debunked.

It’s not that drinking black coffee makes you a psychopath. It’s that, of the people tested, those who said they like bitter foods (like black coffee or grapefruit) also scored a tiny bit higher on a 50-question survey designed to detect personality traits like narcissism and Machiavellianism—traits thought to be tied to psychopathy.

If it sounds like that’s a lot of weasel words, congrats, you are correct. Scientists use phrases like “associated with” or “linked to” when the data shows that two variables increase or decrease together, or that one increases as the other decreases. Those are called correlations. But the fact that two things are correlated does not mean that one causes the other. Sometimes a hidden third factor drives both, and sometimes it’s pure coincidence. For example, the number of films Nicolas Cage appeared in per year is correlated with the number of people who drowned by falling into a pool, but I think we can all agree that’s not Cage’s fault. Science is all about relationships, and relationships are complicated. But the complicated, somewhat weasel-y language scientists use to describe their findings doesn’t fit into a headline well.
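If you want to see how little a correlation proves on its own, here’s a minimal sketch. The numbers are made up for illustration (they are not the real Cage or drowning figures), but they show how two series can move together convincingly without either causing the other:

```python
# A toy demonstration that a strong correlation proves nothing about cause.
# All numbers here are made up for illustration; they are NOT the real
# Cage-filmography or drowning statistics.
import numpy as np

# Two hypothetical yearly series that happen to drift up and down together.
cage_films = np.array([1, 2, 2, 3, 1, 2, 4, 3, 2, 4, 1])
pool_drownings = np.array([96, 104, 103, 116, 94, 107, 124, 113, 105, 122, 98])

# Pearson's r: +1 means the series rise and fall in lockstep,
# -1 means one rises as the other falls, 0 means no linear relationship.
r = np.corrcoef(cage_films, pool_drownings)[0, 1]
print(f"correlation: r = {r:.2f}")  # comes out close to 1 for these numbers

# Nothing in r says which variable caused which, or whether either did.
```

An r close to 1 looks impressive, but nothing in that number tells you which variable caused which, or whether either did.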

I’m not just blaming the media for this one, though. Yeah, the headlines were sensationalist, and yeah, the claims were exaggerated, but part of the job of being a scientist is being able to tell people what you found, accurately and understandably. It’s a two-way street, and it’s a lazy scientist who blames it all on the media without asking whether they did a good job explaining themselves.

Even if the news articles had been completely accurate, there’s still a problem: the study itself. The researchers were looking at people’s food preferences without accounting for the fact that taste is incredibly subjective. It’s so subjective that the researchers and the participants didn’t even agree on which foods were bitter: the authors classified grapefruit as bitter, but some participants said it wasn’t. When a study hinges on a preference for bitter foods and the people in it can’t agree on what’s bitter, the core measurement is shaky. So that’s a problem.

Taste is a very complicated sense, and it can be influenced by many factors. For example, cilantro tastes like soap to me because I carry a genetic variant that changes how I perceive its flavor. When I ask my friends what cilantro tastes like to them, the best description they can come up with is “cilantro-y.” It’s like trying to describe colors to someone who is colorblind. You just can’t.

Taste can even be influenced by our other senses. No matter how good a stew is, if it has the misfortune of looking like vomit, you’ll most likely find it less tasty than you would if you ate it blindfolded. The same goes for expectations about price: pour Franzia into a fancy bottle, say it’s extremely expensive, and people will think it’s absolutely amazing, because they expect a pricey wine in a fancy bottle to taste better than the cheap stuff that comes in a box.

And that’s not news to scientists. The researchers on the coffee study probably heard about that phenomenon when they were undergrads. But scientists are human, and humans forget things and make mistakes. Being a professional scientist does not mean you can always design a flawless experiment and always interpret the results accurately. Science is designed to catch as many of those errors as possible, but its safeguards aren’t perfect either. Scientists work together to catch each other’s mistakes: we have our peers read and critique our findings before they’re published, and we fully expect other scientists to tell us if we messed up or missed something. Criticism is necessary for good science, but sometimes it just fails to show up for the party.

So how are we supposed to know what science news is legit and what science news is a giant cluster of communication breakdowns?

Remember that science is all about complicated relationships, so it’s very uncommon to be able to say you are 100 percent sure that this thing causes that thing. Also, studies need to be replicated before we can be sure their results are legit. If someone else does exactly what you did but gets different results, your findings probably aren’t that strong. It could be that your results were a fluke. It could be that somebody accidentally sneezed into a petri dish and contaminated it with their nose germs. No matter what, you need to be able to replicate your results in order to call them valid.
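And here’s a rough sketch of what “fluke” means in practice. The setup is hypothetical (the variable names just echo the coffee study), and both series are pure random noise with no real relationship, yet some “studies” look significant anyway:

```python
# A toy simulation of why single small studies can "find" effects that
# aren't there. The variable names are hypothetical, echoing the coffee
# study; both series are random noise with no real relationship.
import numpy as np

rng = np.random.default_rng(0)
n_studies = 1000
sample_size = 20   # a small study, like many taste-preference surveys
flukes = 0

for _ in range(n_studies):
    bitter_preference = rng.normal(size=sample_size)   # pure noise
    psychopathy_score = rng.normal(size=sample_size)   # unrelated noise
    r = np.corrcoef(bitter_preference, psychopathy_score)[0, 1]
    if abs(r) > 0.444:  # roughly the two-tailed p < 0.05 cutoff for n = 20
        flukes += 1

print(f"{flukes} of {n_studies} no-effect studies looked 'significant'")
# Expect roughly 50, i.e. about 5 percent, from chance alone.
```

Run it and you’ll get roughly 50 hits, about 5 percent, from chance alone. That’s exactly why one small, unreplicated study shouldn’t convince anyone.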

Also, dishonest scientists exist. A while back I was reading a study that compared several drugs and noticed that one of the graphs was missing a column. It seemed weird that the authors compared three drugs in all of the graphs except one, so I checked the financial disclosures section. Sure enough, the missing column was for a drug made by the company the head researcher works for. I’ve also seen papers where the authors straight-up misquoted other research to back up their findings. Never mind that the cited paper says the exact opposite of what the authors claim, and that anyone could figure that out just by reading it.

The most important thing is to be skeptical. Very few things in science are absolute, especially when you start talking about living things.