Tag Archives: p-value

Two tired misconceptions about null hypotheses

Comic: xkcd #892, by Randall Munroe

For some reason, people seem to love taking shots at null-hypothesis/significance-testing statistics, despite its central place in the logic of scientific inference.  This is part of a bigger pattern, I think:  it’s fun to be iconoclastic, and the more foundational the icon you’re clasting (yes, I know that’s not really a word), the more fun it is.  So the P-value takes more than its share of drubbing, as do decision rules associated with it.  The null hypothesis may be the most foundational of all, and sure enough, it also takes abuse.

I hear two complaints about null hypotheses – and I’ve been hearing the same two since I was a grad student.  That’s mumble-mumble years listening to the same strange but unkillable misconceptions, and when both popped their heads up again within a week, I gave myself permission to rant about them a little bit.  So here goes.


Statistics and significant digits

(My writing pet peeves – Part 1)

Image: completely fake “data”, but a real 1-way ANOVA; S. Heard.

I read a lot of manuscripts – student papers, theses, journal submissions, and the like.  You can’t do that without developing a list of pet peeves about writing, and yes, I’ve got a little list*.

Sitting atop my pet-peeve list these days: test statistics, P-values, and the like reported to ridiculous levels of precision – or, rather, pseudo-precision.  I’ve done it in the figure above: F(1,42) = 4.716253, P = 0.0355761.  I see numbers like these all the time – but, really?
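To see why that precision is empty, here’s a minimal sketch (Python, with numpy and scipy assumed; the data are made up, like the figure’s) contrasting what the software prints with what a reader actually needs:

```python
# A minimal sketch: run a 1-way ANOVA on fake data, then compare
# the software's pseudo-precise output with a sensibly rounded report.
# (numpy and scipy are assumed; the data are fabricated for illustration.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=22)  # fake "data"
group_b = rng.normal(loc=11.5, scale=2.0, size=22)  # 44 points -> df = 1, 42

f_stat, p_value = stats.f_oneway(group_a, group_b)  # 1-way ANOVA

print(f"F(1,42) = {f_stat:.6f}, P = {p_value:.7f}")  # pseudo-precision
print(f"F(1,42) = {f_stat:.1f}, P = {p_value:.3f}")  # what a reader needs
```

The extra digits carry no information a reader could use; nothing about the inference changes between P = 0.0355761 and P = 0.036.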

Is “nearly significant” ridiculous?

Graphic: Parasitoid emergence from aphids on peppers, as a function of soil fertilization. Analysis courtesy of Chandra Moffat (but data revisualized for clarity).

“Every time you say ‘trending towards significance’, a statistician somewhere trips and falls down.” This little joke came to me via Twitter last month. I won’t say who tweeted it, but they aren’t alone: similar swipes are very common. I’ve seen them from reviewers of papers, audiences of conference talks, faculty colleagues in lab meetings, and many others. The butt of the joke is usually someone who executes a statistical test, finds a P-value slightly greater than 0.05, and has the temerity to say something about the trend anyway. Sometimes the related sin is declaring a P-value much smaller than 0.05 “highly significant”. Either way, it’s a sin of committing statistics with nuance.
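To see how thin the line being policed really is, here’s a quick sketch (Python, scipy assumed; the 42 degrees of freedom are an arbitrary illustrative choice) of the two-tailed t statistics sitting just on either side of P = 0.05:

```python
# Sketch: the test statistics that produce P = 0.049 and P = 0.051
# are nearly indistinguishable, which is why an all-or-nothing verdict
# at 0.05 discards real information. (df = 42 is an arbitrary choice.)
from scipy import stats

def t_for(p, df):
    """Two-tailed critical t value for a given P, at df degrees of freedom."""
    return stats.t.ppf(1 - p / 2, df)

print(t_for(0.049, df=42))  # about 2.03
print(t_for(0.051, df=42))  # about 2.01
```

Two experiments yielding those two t values have produced essentially identical evidence; only a rigid threshold makes one “significant” and the other not.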

Why do people think the joke is funny?

In defence of the P-value

(graphic by Chen-Pan Liao via wikimedia.org)

The P-value (and by extension, the entire enterprise of hypothesis-testing in statistics) has been under assault lately. John Ioannidis’ famous “Why most published research findings are false” paper didn’t start the fire, but it threw quite a bit of gasoline on it. David Colquhoun’s recent “An investigation of the false discovery rate and the misinterpretation of P-values” raised the stakes by opening with a widely quoted and dramatic (but also dramatically silly) proclamation that “If you use P=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time.”* While I could go on citing examples of the pushback against P, it’s inconceivable that you’ve missed all this, and it’s well summarized by a recent commentary in Nature News. Even the webcomic xkcd has piled on.
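For context, the arithmetic behind a claim like Colquhoun’s is worth seeing, because it depends entirely on two assumed inputs: the fraction of tested hypotheses that are actually true, and the power of the tests. Here’s a back-of-envelope sketch (the specific numbers are illustrative assumptions in the spirit of Colquhoun’s argument, not a reproduction of his model):

```python
# Back-of-envelope false discovery rate: of all "significant" results,
# what fraction are false positives? The answer depends on the assumed
# prevalence of real effects and on power -- both illustrative here.
def false_discovery_rate(prevalence, power, alpha=0.05):
    true_positives = prevalence * power          # real effects, detected
    false_positives = (1 - prevalence) * alpha   # nulls crossing P < alpha
    return false_positives / (false_positives + true_positives)

print(false_discovery_rate(prevalence=0.10, power=0.80))  # ~0.36
print(false_discovery_rate(prevalence=0.50, power=0.80))  # ~0.06
```

If only 10% of tested effects are real, roughly a third of “discoveries” at P < 0.05 are false, which is where a “30% wrong” figure can come from; assume half of tested effects are real and the same arithmetic gives about 6%. The dramatic proclamation is really a claim about those assumed inputs, not about P-values themselves.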