Every now and again, you see a critique of a manuscript that brings you up short and makes you go “Huh”.
A student of mine defended her thesis a while ago, and one of her examiners commented on one of her chapters that “the Results section is too short”*. Huh, I said. Huh.
I’m quite used to seeing manuscripts that are too long. Occasionally, I see a manuscript that’s too short. But this complaint was more specific: that the Results section in particular was too short. I’d never heard that one, and I just couldn’t make sense of it. Or at least, not until I realized that it fits in with another phenomenon that I see and hear a lot: the suggestion that nobody should ever, ever do their statistics in Excel.
Image: This is what 1300 g (2.8 lb) of basil looks like.
Yesterday (as I write) I bought some basil at my local farmer’s market. Quite a lot of basil, actually – almost 3 pounds of it – because it was my annual pesto-making day*. My favourite vendor sells basil by the stem (at 50¢ each), and I started pulling stems from a large tub. Some stems were quite small, and some were huge, with at least a five-fold difference in size between smallest and largest (and no, I didn’t get to just pick out the huge ones). So how many stems did I need? Or to put it the other way around, given that I bought 49 stems, how many batches of pesto would I be making, and how many cups of walnuts would I need?
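The stems question can be sketched as a quick simulation. The 49 stems and ~1300 g total are from the post; everything else here is an assumption for illustration (stem masses drawn uniformly between 10 g and 50 g, giving the five-fold size spread the post describes):

```python
import random
import statistics

random.seed(1)

# Assumed numbers: the post's 49 stems weighed about 1300 g, so mean stem
# mass was roughly 26.5 g. A five-fold spread might look like stems
# uniform between 10 g and 50 g (mean 30 g) -- purely illustrative.
LOW, HIGH = 10.0, 50.0
TARGET = 1300.0  # grams of basil wanted (from the post)

def stems_needed(trials=10_000):
    """Simulate pulling stems from the tub until the target mass is reached."""
    counts = []
    for _ in range(trials):
        total, n = 0.0, 0
        while total < TARGET:
            total += random.uniform(LOW, HIGH)
            n += 1
        counts.append(n)
    return counts

counts = stems_needed()
print(statistics.mean(counts))   # roughly 44 stems, near 1300 / 30
print(min(counts), max(counts))  # the trial-to-trial spread is modest
```

Despite the five-fold variation among individual stems, the *total* of ~44 stems is quite predictable: the stem-to-stem noise averages out, which is the usual central-limit story.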
My undergrad students – like a lot of biology students – don’t like statistics.
Photo: Two giraffes by Vera Kratochvil, released to public domain, via publicdomainpictures.net. Two giraffes are definitely better than one.
Ecologists are perennially angst-ridden about sample size. A lot of our work is logistically difficult, involves observations on large spatial or temporal scales, or involves rare species or unique geographic features. And yet we know that replication is important, and we bend over backwards to achieve it.
Sometimes, I think, we bend too far backwards, and this can result in wasted effort.
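One way to see why replication effort can be wasted: the standard error of a mean shrinks only as 1/√n, so each added replicate buys less precision than the one before. A minimal sketch (the spread σ is arbitrary; only the relative gains matter):

```python
import math

sigma = 1.0  # arbitrary spread; relative gains are the point

def se(n):
    """Standard error of a sample mean: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

for n in (5, 10, 20, 40, 80):
    print(n, round(se(n), 3))

# Doubling n always cuts the SE by the same ~29% (1 - 1/sqrt(2)),
# but in absolute terms going from 40 to 80 replicates gains far
# less than going from 5 to 10 did.
print(round(1 - se(10) / se(5), 2))   # 0.29
print(round(1 - se(80) / se(40), 2))  # 0.29
```

So a fixed budget of sampling effort is often better spent elsewhere once n is moderately large.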
Comic: xkcd #892, by Randall Munroe
For some reason, people seem to love taking shots at null-hypothesis/significance-testing statistics, despite its central place in the logic of scientific inference. This is part of a bigger pattern, I think: it’s fun to be iconoclastic, and the more foundational the icon you’re clasting (yes, I know that’s not really a word), the more fun it is. So the P-value takes more than its share of drubbing, as do decision rules associated with it. The null hypothesis may be the most foundational of all, and sure enough, it also takes abuse.
I hear two complaints about null hypotheses – and I’ve been hearing the same two since I was a grad student. That’s mumble-mumble years listening to the same strange but unkillable misconceptions, and when both popped their heads up again within a week, I gave myself permission to rant about them a little bit. So here goes.
Warning: gets a bit wonkish near the end.
Have you ever noticed that the mayor of a small town is fairly often a bonehead? There’s a simple reason we’d expect that to be true – and that simple reason has implications for academic searches, the traits we analyze in ecology and systematics, and lots of other things, too (please add to my list in the Replies). The simple reason is this: it’s really hard to estimate extremes. It’s also really hard to understand why so many people act as if they’re unaware of this.
Let’s start with those mayors.
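The "it's really hard to estimate extremes" claim is easy to check with a quick simulation: across repeated samples, a sample maximum bounces around far more than a sample mean does. The distribution and sample size here are assumptions chosen just for illustration:

```python
import random
import statistics

random.seed(42)

def sampling_sd(stat, n=100, reps=2000):
    """SD of a statistic across repeated samples of size n (illustrative)."""
    return statistics.stdev(
        stat([random.gauss(0, 1) for _ in range(n)]) for _ in range(reps)
    )

sd_mean = sampling_sd(statistics.mean)  # about 0.10 for n=100, sigma=1
sd_max = sampling_sd(max)               # several times larger
print(sd_mean, sd_max)
```

The mean is pinned down by all 100 observations; the maximum depends on a single, unusual one, so its sampling distribution is much wider.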
Image: Excerpt from Heard et al. 1999, Mechanical abrasion and organic matter processing in an Iowa stream. Hydrobiologia 400:179-186.
Nearly every paper I’ve ever written includes a sentence something like this: “All statistical analyses were conducted in SAS version 8.02 (SAS Institute Inc., Cary, NC)”.* But I’m not quite sure why.
Why might any procedural detail get mentioned in the Methods? There are several answers to that, with the most common being:
Image: Plaque commemorating Fisher on Inverforth House. Peter O’Connor via flickr.com, CC BY-SA 2.0
Do you know Fisher’s method for combining P-values? If you do, move along; I’ve got nothing for you. If you don’t, though, you may be interested in what’s surely the most useful statistical test that – despite the fame of Fisher himself – nobody knows about.
Fisher’s method is the original meta-analysis. When I was a grad student, and nobody had heard of meta-analysis (or cell phones, or the internal-combustion engine), I had a supervisory committee member who liked to make strong statements. One of his favourites was “A bunch of weak tests don’t add up to a strong test!”
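Fisher's method itself is short enough to sketch in a few lines. It combines k independent P-values via X = −2 Σ ln(pᵢ), which under the joint null follows a chi-square distribution with 2k degrees of freedom; because the degrees of freedom are even, the chi-square tail probability has a closed form and no stats library is needed. The example P-values are invented for illustration:

```python
import math

def fisher_combine(pvalues):
    """Fisher's method: combine independent P-values into one.

    X = -2 * sum(ln p_i) is chi-square with 2k degrees of freedom under
    the joint null. For even df = 2k, the chi-square tail probability is
    exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!, so math alone suffices.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

# Three individually unconvincing tests, invented for the example:
print(fisher_combine([0.10, 0.06, 0.12]))  # about 0.025
```

Three tests that each fall short of the conventional 0.05 cutoff combine to a P-value of about 0.025 – which is exactly why that committee member's strong statement was wrong: a bunch of weak tests *can* add up to a strong one.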