The failure of scientists & statistics

[Cover image: The Economist]

There has been a recent spate of news about bad science – results that can't be replicated – and as a young scientist, I've been reading these stories with much interest and concern.  As a Ph.D. student, I've seen the "sausage making" of science firsthand, and sometimes it is an ugly, broken process.  I want to highlight some of the problems here, but one should not walk away and give up on the scientific process, and one should not walk away thinking they can ignore major tenets of modern science that have been replicated and supported over the years (I'm thinking of things like evolution, climate change, etc.).

The first news item that caught my attention was a report that researchers were able to replicate only 6 out of 53 "landmark" cancer research results (11%!):

http://www.nature.com/nature/journal/v483/n7391/full/483531a.html

And because people don’t generally publish negative results, we always suffer from “publication bias”:

http://www.sciencemag.org/content/342/6154/68

"The oncologist, whom Begley declines to name, had an easy explanation. 'He said, "We did this experiment a dozen times, got this answer once, and that's the one we decided to publish."'"
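The arithmetic behind that quote is worth spelling out. Here is a minimal simulation (my own sketch, not from the article) of running a dozen null experiments and keeping only the "best" one: even with no real effect at all, roughly 46% of such batches (1 − 0.95¹²) produce at least one p < 0.05.

```python
# Sketch: how often does "run it a dozen times, publish the one that worked"
# yield a false positive, when there is truly no effect?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_batches, n_repeats, n_per_group = 10_000, 12, 20

false_positive_batches = 0
for _ in range(n_batches):
    for _ in range(n_repeats):
        a = rng.normal(size=n_per_group)  # "treatment" (no real effect)
        b = rng.normal(size=n_per_group)  # control
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positive_batches += 1
            break  # publish this run, discard the rest

print(false_positive_batches / n_batches)  # ~0.46, matching 1 - 0.95**12
```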

Recently, this topic made it to the cover of The Economist:

http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong

with more details here:

http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble

As The Economist article highlights, there may be many causes: from bad statistics, to cherry-picked results, to no one having the time or money to replicate research before building on it.  Unfortunately, the solutions are complex and will involve many players.

As I've come to realize, most life scientists are terrible at statistics, with only a weak grasp of even the most basic techniques and how to apply them.  Thus, many mistakes are not caught even during peer review:

http://www.nature.com/neuro/journal/v14/n9/full/nn.2886.html

"We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that 78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular neuroscience."
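The "incorrect procedure" here is concluding that two effects differ because one is statistically significant and the other is not, instead of testing the difference (the interaction) directly. A hedged sketch with made-up data:

```python
# Illustration of the interaction error described above. Both "drugs" have
# the same true effect (0.5); only sampling noise differs between them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
drug_a = rng.normal(0.5, 1.0, 25)
drug_b = rng.normal(0.5, 1.0, 25)

p_a = stats.ttest_1samp(drug_a, 0).pvalue  # may land just below .05
p_b = stats.ttest_1samp(drug_b, 0).pvalue  # may land just above .05

# Incorrect: "A works (p < .05), B doesn't (p > .05), so A and B differ."
# Correct: test the difference between the two effects directly.
p_diff = stats.ttest_ind(drug_a, drug_b).pvalue
print(f"A vs 0: p={p_a:.3f}, B vs 0: p={p_b:.3f}, A vs B: p={p_diff:.3f}")
```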

Here, the editors at Nature lament the misuse of error bars and more: http://www.nature.com/nature/journal/v492/n7428/full/492180a.html
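Since that editorial is paywalled, here is a quick illustration (my own, not taken from it) of why unlabeled error bars mislead: the SD, SEM, and 95% CI computed from the very same sample have very different widths, so a figure has to say which one it is plotting.

```python
# Three common "error bars" for one sample -- they are not interchangeable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(10, 2, 15)

sd = x.std(ddof=1)                                # spread of the data
sem = sd / np.sqrt(len(x))                        # precision of the mean
ci_half = stats.t.ppf(0.975, df=len(x) - 1) * sem # half-width of 95% CI

print(f"mean={x.mean():.2f}  SD={sd:.2f}  SEM={sem:.2f}  95% CI ±{ci_half:.2f}")
```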

The latest chapter is an article published in Nature highlighting how the p = 0.05 cutoff that most scientists treat as the line between acceptable and unacceptable is almost arbitrary – and too lenient – and offering a potential solution:

http://www.nature.com/news/weak-statistical-standards-implicated-in-scientific-irreproducibility-1.14131?WT.ec_id=NEWS-20131112
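One way to see why p = 0.05 is such a weak standard, in the spirit of (though not taken from) the linked article: the Sellke–Berger bound says a p-value of p can correspond to a Bayes factor against the null of at most 1/(−e·p·ln p). A quick calculation:

```python
# Upper bound on the evidence a given p-value can carry (Sellke-Berger bound).
import math

for p in (0.05, 0.005):
    bound = 1 / (-math.e * p * math.log(p))
    print(f"p = {p}: Bayes factor against H0 is at most {bound:.1f}")
# p = 0.05 caps the evidence near 2.5:1 -- hardly "definitive".
```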

It seems likely that over the next couple of years or decades, people will start moving away from traditional p-value testing toward more advanced techniques like Bayesian and information-theoretic approaches (e.g., AIC), partly because of the issues discussed in this post, and partly because our computers have become powerful enough for these new methods to be applied:

http://www.sciencemag.org/content/341/6144/343.1.full
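As a concrete taste of the information-theoretic alternative, here is a minimal AIC sketch (my own example, assuming Gaussian errors, not from the linked paper) comparing a "no effect" model against a "one mean" model: instead of a binary significance verdict, the AIC difference quantifies the relative support for each model.

```python
# Model comparison by AIC instead of a p-value threshold.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.4, 1.0, 50)  # data with a modest true effect

def gaussian_aic(residuals, k):
    # AIC = 2k - 2*ln(L), with the Gaussian likelihood profiled over sigma.
    n = len(residuals)
    rss = np.sum(residuals**2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * k - 2 * log_lik

aic_null = gaussian_aic(x - 0.0, k=1)       # mean fixed at 0; fit sigma only
aic_mean = gaussian_aic(x - x.mean(), k=2)  # fit mean and sigma
print(f"AIC null: {aic_null:.1f}, AIC mean: {aic_mean:.1f}")
# The lower AIC wins; the size of the gap quantifies relative support.
```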

For some more on this debate, courtesy of xkcd.com:

[xkcd comic: Bayesian vs. frequentist]

If you aren't quite sure what I'm talking about with all this statistics stuff, this video does a great job of explaining it:

http://www.economist.com/blogs/graphicdetail/2013/10/daily-chart-2

PS: sorry that many of the links are behind journal paywalls… a topic for another post on open access in science :)


4 Responses to The failure of scientists & statistics

  1. Paul says:

    Apparently, standards in animal studies are not very strict and may contribute additional bias:

    http://www.sciencemag.org/content/342/6161/922.full

  2. Paul says:

    I forgot to include another classic xkcd: http://www.xkcd.com/882/

  3. John says:

    Related story?

    http://www.nbcnews.com/science/journal-retracts-study-tying-genetically-modified-corn-rat-cancer-2D11673669

    The study was roundly discredited soon after it came out (http://dotearth.blogs.nytimes.com/2012/10/19/six-french-science-academies-dismiss-study-finding-gm-corn-harmed-rats/?_r=0), though I recall listening to NPR with some fellow graduate students last September when they ran a 'scientists-divided-on-GMO-safety'-type story, which was enough to start a hyperbolic conversation about the evils of GMOs and Monsanto. Interestingly, if you read the abstract of the study (http://www.sciencedirect.com/science/article/pii/S0278691512005637), it doesn't really pass the 'sniff test': no description of methods (# of treatments, replicates), results summarized without any mention of statistical significance (tumors observed in treatments "almost always more often than" in controls???), and an extremely forceful but vague closing statement ("These results can be explained by the non linear endocrine-disrupting effects of Roundup, but also by the overexpression of the transgene in the GMO and its metabolic consequences") that probably could never be adequately supported by a single study – all of which suggests the authors are, at a minimum, overplaying their hand.
