### "why most published research findings are false"

With a title like that, you'd expect this article to be controversial, and it doesn't disappoint.

The article seems to focus on the field of medicine.

Basically, the author builds some statistical models of how research proceeds. He's concerned with how studies find "relationships," and defines the usual error rates: the probability of finding a relationship when none exists (alpha, the Type I error rate) and the probability of missing a relationship that does, in fact, exist (beta, the Type II error rate). The goal then becomes computing a metric called the PPV, the post-study probability that a claimed relationship is actually true.
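To make the setup concrete, here's the PPV written out as code. The formula is the standard positive-predictive-value algebra in terms of the pre-study odds R; the example numbers are my own illustrative picks, not the paper's.

```python
def ppv(R, alpha, beta):
    """Post-study probability that a claimed relationship is true.

    R     -- pre-study odds that the relationship is real
    alpha -- Type I error rate: "finding" a relationship that isn't there
    beta  -- Type II error rate: missing a relationship that is there
    """
    return (1 - beta) * R / (R - beta * R + alpha)

# A well-powered study (80% power, alpha = 0.05) of a long-shot
# hypothesis with 1-in-10 pre-study odds:
print(round(ppv(R=0.1, alpha=0.05, beta=0.2), 3))  # prints 0.615
```

Even with respectable power and significance thresholds, a claimed relationship here is only about 62% likely to be real -- when R is small, the alpha term dominates the denominator.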

His model gets more complicated when he introduces parameters like "bias." Bias is sort of a fudge factor: like alpha and beta, it has to be supplied from "outside" the model.
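As I read the paper's bias-adjusted formula, u is roughly the fraction of analyses that would not have been findings but get reported as findings anyway. A sketch, with my own illustrative inputs:

```python
def ppv_bias(R, alpha, beta, u):
    """Bias-adjusted PPV: u is the 'bias' fudge factor, the proportion
    of analyses that get reported as findings despite not being ones."""
    num = (1 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u - u * alpha + u * beta * R
    return num / den

print(round(ppv_bias(0.1, 0.05, 0.2, u=0.0), 3))  # no bias: 0.615
print(round(ppv_bias(0.1, 0.05, 0.2, u=0.3), 3))  # moderate bias: 0.204
```

With u = 0 this reduces to the plain PPV formula; a moderate bias drags the PPV from about 62% down to about 20%, which is why the fudge factor matters so much.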

Anyway, once this is all laid out, he starts computing PPVs for different kinds of studies and finds some disturbing patterns. Based on these purely statistical arguments, a lot of medical research could be false.

I'm not sure if I really agree with the contention that "most published research findings are false." If that were really true, why would we bother funding medicine at all? But it may well be true that we need to re-examine some of the methodologies here. For example, small sample sizes are inherently dangerous: a small study has low power (high beta), which drags the PPV down.

Another problem is that medicine is increasingly focused on smaller and smaller incremental gains. This makes things a lot harder on study designers... to take an extreme example, if a medicine decreases your chance of cancer by 0.0001%, is that effect even detectable with realistic sample sizes? The author presents some ideas at the end of the paper on how to improve the situation-- the most memorable one for me is that we should try to estimate the pre-study odds that a relationship is true before running the study.

I'm not sure if I agree with the author's assessment of the effect of having multiple teams work on the same problem. His model seems to say that the more studies there are of a given phenomenon, the less helpful each one is, to the point where additional studies actually lower the PPV. To quote:

> "With increasing number of independent studies, PPV tends to decrease, unless 1 - beta < alpha"

This seems very counterintuitive. I think it's because he's treating the studies as an aggregate: a relationship counts as "found" if any of the n studies claims it. So in Table 3 (vs. Table 1) he more or less straightforwardly substitutes beta^n for beta, and the chance of at least one false positive, 1-(1-alpha)^n, for alpha. This is a little misleading, because reproducibility is at the heart of science. Suggesting that doing more studies on an issue leaves us with less knowledge is a pretty radical statement.

Looking at this paper makes medicine seem like kind of a dismal field. Every result is statistical in nature-- nothing is really certain-- and often the companies and organizations funding the research have vested interests. I guess that's the downside of the field. There are probably a lot of upsides as well, like the chance to help people with their medical problems.
