Photo: This meeting will never end; courtesy Rylee Isitt.
Warning: I sat through a frustrating meeting last week. And now you’re going to hear about it.
We all hate meetings. And yet, at the same time, we love calling meetings. In academia, at least, they’re part of the very foundation of our organizations, which we insist are distinguished from other enterprises by our use of collegial governance. (I’ve argued elsewhere, heretically, that we try to be quite a lot more collegial than is good for us, but that’s not my point today.) In universities, we want to govern ourselves from the bottom up, with the faculty rather than administrators making the decisions. The way we know how to do that is by holding meetings – big ones, and lots of them.
My home department’s Fall 2018 seminar series wraps up soon, and I’m looking forward to next semester’s. We’ve got an interesting lineup of speakers with lots of variety, and I’m very grateful to our seminar organizers for that. Today’s question: who were those organizers? And who should they be?
This is a guest post by JC Cahill, of the Department of Biology at the University of Alberta.
Steve is an old friend from grad school, and just yesterday [as I write] he gave a well-received lecture on writing, here at the University of Alberta. The enthusiasm and interest expressed by our early career scientists seemed genuine, and even as an old prof myself I can’t help but believe Steve is having some success in humanizing science writing. But, also as an old prof, I can’t help but feel a bit disheartened by the seemingly endless cycle of writing challenges, delays, and strategic failures I see on a nearly daily basis. Choosing optimism rather than hopelessness, I wish to tell my writing story with the intent of encouragement.
When I was a graduate student, I was a bad writer.
Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars, by Deborah G. Mayo. Cambridge University Press, 2018.
If there’s one thing we can all agree on about statistics, it’s that there are very few things we all agree on about statistics. The “statistics wars” that Deborah Mayo would like to help us get beyond have been with us for a long time; the battlefield and the armies shift, but the wars have been raging from the very beginning. Is inference about confidence in a single result or about long-term error rates? Is the P-value essential to scientific inference or a disastrous red herring holding science back? Does model selection do something fundamentally different from null-hypothesis significance testing (NHST), and if so, what? If we use NHST, is the phrase “nearly significant” evidence of sophisticated statistical philosophy or evil wishful thinking? Is Bayesian inference irredeemably subjective or the only way to convert data into evidence? These issues and more seem to generate remarkable amounts of heat – sometimes (as with Basic and Applied Social Psychology’s banning of the P-value) enough heat to seem like scorched-earth warfare*.
Two weeks ago, I reported my run-in with a reviewer who wanted me to scrub common English contractions (like it’s, doesn’t, or we’re) from a manuscript. There’s a common belief that contractions mustn’t be used in scientific writing, although the genesis of this “rule” is unclear. So is the rationale. One that’s commonly suggested is that contractions make writing informal, and that that’s inappropriate – to which I say only “Harumph”. Another is much more important: the claim that they make writing less accessible to readers of English as an additional language (EAL).
I’ve been skeptical of that hard-for-EAL claim, but not being an EAL reader myself makes it hard for me to claim authority on the issue. So, I asked EAL readers of Scientist Sees Squirrel to weigh in – and they did. Today, poll results, and a couple of additional points raised by some folks who think about writing for EAL readers.
Image: Deadline, by geralt CC 0 via pixabay.com.
Warning: I’m a bit grumpy today.
I’m back tilting at one of my favourite windmills today: requests for manuscript reviews with unreasonably short deadlines. I’ve explained elsewhere that one should expect the process of peer review to take a while. Journals would love to compress the process by reducing the time the manuscript spends on the reviewer’s desk – and so they ask for reviews to be returned in 2 weeks, or in 10 days, or less. As a reviewer, I don’t play this game any more: I simply refuse all requests with deadlines shorter than 3 weeks.
I’ve asked a few editors and journal offices why they give such short deadlines, and they give two kinds of answers: one outcome-based, and one process-based.