Chart of the Decade: Why You Shouldn’t Trust Every Scientific Study You See – Mother Jones


27 bookmarks. First posted by ghaff 4 weeks ago.


Before pre-registration: 60% of published studies on heart disease treatment showed benefits. After pre-registration: 10%.
science  ScientificMethod  via:reddit 
yesterday by mcherm
Stop scientific research from gaming the system.
research  science 
25 days ago by raygrasso
Chart of the decade: Why you shouldn't trust every scientific study you see
from twitter_favs
25 days ago by romac
This chart is three years old, but it may be one of the greatest charts ever produced. Seriously. It comes via John Holbein. Let me explain.
Archive 
4 weeks ago by edgaron

The authors collected every significant clinical study of drugs and dietary supplements for the treatment or prevention of cardiovascular disease between 1974 and 2012. Then they displayed them on a scatterplot.

Prior to 2000, researchers could do just about anything they wanted. All they had to do was run the study, collect the data, and then look to see if they could pull something positive out of it. And they did! Out of 22 studies, 13 showed significant benefits. That’s 59 percent of all studies. Pretty good!

Then, in 2000, the rules changed. Researchers were required before the study started to say what they were looking for. They couldn’t just mine the data afterward looking for anything that happened to be positive. They had to report the results they said they were going to report.

And guess what? Out of 21 studies, only two showed significant benefits. That’s 10 percent of all studies. Ugh. And one of the studies even demonstrated harm, something that had never happened before 2000.
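The mechanism behind those pre-2000 numbers is the multiple-comparisons problem: under the null hypothesis a p-value is uniformly distributed, so every extra outcome a team checks after the fact is another roughly 5 percent chance of a spurious "significant" benefit. A minimal simulation sketches this; the endpoint and trial counts here are illustrative assumptions, not figures from the study:

```python
import random

random.seed(42)

ALPHA = 0.05          # conventional significance threshold
N_ENDPOINTS = 20      # hypothetical outcomes mined after the fact
N_TRIALS = 10_000     # simulated trials of a drug with no real effect

# Under the null, each endpoint's p-value is uniform on [0, 1], so each
# endpoint comes up "significant" with probability ALPHA by chance alone.
def mined_trial_is_positive(n_endpoints: int) -> bool:
    """A trial counts as positive if ANY mined endpoint clears ALPHA."""
    return any(random.random() < ALPHA for _ in range(n_endpoints))

false_positive_rate = sum(
    mined_trial_is_positive(N_ENDPOINTS) for _ in range(N_TRIALS)
) / N_TRIALS

print(f"Pre-registered single endpoint: ~{ALPHA:.0%} false positives")
print(f"Mining {N_ENDPOINTS} endpoints:       ~{false_positive_rate:.0%} false positives")
```

With 20 free endpoints the chance of at least one fluke is 1 − 0.95²⁰, about 64 percent, which is why pre-registering a single primary outcome collapses the "success" rate so dramatically.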
statistics  science  data  health 
4 weeks ago by jefframnani
Chart of the decade: Why you shouldn't trust every scientific study you see
from twitter_favs
4 weeks ago by dr3wster
Before 2000, researchers cheated outrageously. They tortured their data relentlessly until they found something—anything—that could be spun as a positive result, even if it had nothing to do with what they were looking for.
research  science  statistics  healthcare  health  life  idiots  scams  best 
4 weeks ago by hellsten
dataviz  Research  Science 
4 weeks ago by 1luke2