How much can you really trust splashy study findings?

When I wrote about a clinical trial last week that found multivitamins have a modest protective effect against cancer, I wanted to compare how vitamins stacked up against other lifestyle measures like regular exercise or maintaining a healthy body weight. Previous studies have shown that exercising vigorously for three or more days a week could reduce a woman’s breast cancer risk by about 30 percent, while the multivitamin trial found that the supplements lowered cancer risk by 8 percent compared with a placebo.

Clearly exercise trumps supplements, right?

Well, not exactly. Study author Howard Sesso, an associate epidemiologist in Brigham and Women’s Hospital’s Division of Preventive Medicine, told me that population studies, which track people’s behaviors and disease diagnoses to find statistical trends, often produce more dramatic results than clinical trials, in which participants are randomly assigned to groups and given a specific pill to take or activity to perform.

That’s because researchers running a population study designed to measure the effects of exercise need to make a lot of calculations and adjustments to their data to take into account other differences between the two groups, such as body weight. All too often, those adjustments aren’t perfect, leading to flawed or exaggerated results.
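To make that concrete, here is a minimal simulation sketch in Python (the 5-kilogram weight gap, the disease rates, and the weight bins are invented numbers for illustration, not figures from any study). Exercise is given no true effect at all, yet a naive comparison makes it look protective, and a crude adjustment for weight only partly corrects the exaggeration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical population: exercisers tend to be leaner, and leanness
# itself lowers disease risk. Exercise is given NO true effect here.
exercises = rng.random(n) < 0.5
weight = rng.normal(75, 12, n) - 5 * exercises            # exercisers ~5 kg lighter
p_sick = 1 / (1 + np.exp(-(-4 + 0.05 * (weight - 75))))   # heavier -> higher risk
sick = rng.random(n) < p_sick

# Naive comparison credits exercise with weight's entire benefit.
naive_rr = sick[exercises].mean() / sick[~exercises].mean()

# "Adjusting" with crude weight bins, the kind of imperfect correction
# described above; residual confounding survives inside each bin.
bins = np.digitize(weight, [60, 75, 90])
per_bin = []
for b in np.unique(bins):
    in_bin = bins == b
    per_bin.append(sick[in_bin & exercises].mean() / sick[in_bin & ~exercises].mean())

print(f"naive risk ratio:    {naive_rr:.2f}")         # well below 1: looks protective
print(f"adjusted risk ratio: {np.mean(per_bin):.2f}")  # closer to 1, but not all the way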

Sometimes, though, even clinical trials can yield misleading findings. A new analysis published in the Journal of the American Medical Association examined more than 228,000 clinical trials and found that about 9 percent of them had “very large” findings: results suggesting that a particular treatment made people at least five times as likely to be helped, or harmed, as a placebo did.

These were mostly from small studies that recorded very few events, such as heart attacks or deaths. If 10 heart attacks occurred in one group and two heart attacks in another, for example, the researchers would have found a five-fold difference between the two groups. But the laws of statistics dictate that smaller samples have larger margins of error (we see this with political polling), so larger studies usually yield more reliable results.
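That fragility is easy to check with another quick sketch (again in Python; the 2 percent event rate and the group sizes are arbitrary assumptions). When a treatment truly does nothing, trials counting only a handful of events still stumble onto five-fold “effects” by chance, while large trials almost never do:

```python
import numpy as np

rng = np.random.default_rng(0)

def chance_of_fivefold(n_per_group, event_rate=0.02, n_trials=100_000):
    """Fraction of null trials (identical event rates in both groups)
    whose observed risk ratio reaches 5 purely by chance."""
    treated = rng.binomial(n_per_group, event_rate, size=n_trials)
    control = rng.binomial(n_per_group, event_rate, size=n_trials)
    ok = control > 0                      # skip trials with zero control events
    return np.mean(treated[ok] / control[ok] >= 5)

for n in (100, 1_000, 100_000):
    print(f"{n:>7} per group: {chance_of_fivefold(n):.2%} of null trials show a 5-fold effect")
```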

What’s more, these small studies with dramatic findings were often the first of their kind to be published, and repeat studies typically found more modest effects. And practically none of the studies found large differences when comparing the number of deaths between two groups; that outcome is considered the most important measure of how well a medical treatment works, or how much harm it causes.
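The shrinking-effect pattern also falls out of basic statistics. In the hypothetical sketch below (Python again; the effect size, study sizes, and publication cutoff are all made-up assumptions), small first studies get “published” only when they clear a significance-style threshold, which selects for overestimates, while large follow-ups regress back toward the modest truth:

```python
import numpy as np

rng = np.random.default_rng(2)

true_effect = 0.2                 # modest true benefit, in standard-deviation units
n_sims = 50_000

def observed_effects(n_per_group):
    # Each simulated trial estimates the true effect with noise that
    # shrinks as the groups grow (standard error ~ sqrt(2 / n)).
    return true_effect + rng.normal(0, np.sqrt(2 / n_per_group), n_sims)

small = observed_effects(40)
# A small "first" study is published only if its result looks striking
# (estimate more than two standard errors above zero, roughly p < 0.05).
published_first = small[small > 2 * np.sqrt(2 / 40)]

print(f"true effect:                 {true_effect:.2f}")
print(f"published small studies see: {published_first.mean():.2f}")   # inflated
print(f"large repeat studies see:    {observed_effects(4000).mean():.2f}")
```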

Bottom line: “Genuine large effects with extensive support from substantial evidence appear to be rare in medicine,” write the Brazilian study authors, and large benefits for reducing deaths are “virtually nonexistent.”

That means you should take any study showing something for the first time with a grain of salt, especially one with dramatic findings that seem too good (or too bad) to be true.
