From one big midterm to weekly quizzes: what I expected, and what actually happened

You can sometimes teach an old dog new tricks.  Last semester, I made a significant change to my teaching in one of my courses*: I dumped the traditional high-stakes midterm exam in favour of small weekly quizzes.  I know, it’s not a breathtakingly original idea.  I was persuaded to try it not because I’m a brilliant pedagogical experimentalist (I’m not), but because I was lucky enough to get an advance copy of Terry McGlynn’s new book, The Chicago Guide to College Science Teaching. You can read Terry’s book too, as soon as it’s released this summer; in the meantime, you can read a little bit about it here.

If The Chicago Guide has one theme, I’d say it’s using respect for your students to make navigating your courses easier for them and also for you.  Who wouldn’t want to do that? The book makes lots of suggestions I’m likely to adopt; but one I jumped on right away was that move from a big high-stakes midterm to small weekly quizzes. It’s not that I’d never thought of that, or seen it done – it’s that Terry does a wonderful job of selling the idea.  The Chicago Guide convinced me that the weekly quiz could have lots of advantages, both for my students and for me.  (It slices! It dices!)

It didn’t quite work out the way I expected.

So what did I expect** to happen? And what actually did***?

  1. What I expected: Students would be less stressed and anxious, without the worry that one bad day’s performance could ruin their grade. What happened: They were less stressed, all right – but that wasn’t entirely a good thing. Very quickly they did the basic arithmetic: if there are 10 quizzes, and together they’re worth 20% of the grade, then each one is worth 2%.  So they focused on the 2% number, not the 20%, and for each individual quiz they were so unworried that they chose not to spend much time preparing.  The result, of course? Underperformance on the quizzes – iterated, and hence cumulated, through the semester.
    This, by the way, brings up an important point.  If students were studying the material only because of the big scary exam, well, that suggests my course wasn’t very well designed: all the motivation was extrinsic to the material, not intrinsic.  So: can I teach in such a way that the students study simply because the material is fascinating, or obviously important?  I’d like to think so, but I’m not optimistic, partly because I’d be competing with all their other courses that feature heavy extrinsic motivation.  Plus, I know how my own brain works with respect to motivation, and I’m not sure why I should expect my students to be any more enlightened than I am.
  2. What I expected: Better learning (educational research suggests that reducing test anxiety leads to better learning outcomes). What happened: As I frequently tell students, I’m a lazy, lazy man; and that means questions from a quiz or a midterm often end up repeated on the final exam. If the move to smaller quizzes led to better learning, then performance on these repeated items should have gone up, compared to past years and compared to new questions on the final.  It did not, or at least not obviously – probably as a direct consequence of point (1).
  3. What I expected: Students reduce reliance on cramming because each week’s studying is more manageable. What happened: Every quiz day, I had a room full of students poring over lecture notes five minutes before the quiz – my students relied more on cramming. I suspect this was partly because point (1) led them to assign low priority to studying until the quiz was imminent, and partly because they (quite reasonably) expected cramming to be a more effective strategy when there’s less material to master.
  4. What I expected: I’d catch student misconceptions earlier. What happened: This worked! It was terrific (although it only requires that one do frequent quizzing, not that those frequent quizzes replace a bigger exam).
  5. What I expected: Students get grade feedback earlier, so they know how they’re handling the course material. What happened: They got feedback, but because they didn’t study hard (point 1), and they knew that they hadn’t, they (correctly) judged the feedback uninformative. They knew they’d study more and do better on the next quiz… until it actually happened, anyway. On top of that: a high-stakes midterm is a relevant dry run for the even-higher-stakes final exam; a series of weekly quizzes, at least in students’ minds, is not.
  6. What I expected: With enough graded items (12) that I could drop everyone’s lowest 2 scores, I wouldn’t have to worry about providing deferments, makeup exams, regrades, etc. What happened: I had to provide even more deferments and makeups, because everyone missed a quiz or two. As the students reasoned, quite correctly: not having a makeup would still penalize them for illness (etc.) – a student who was sick and missed a quiz has their grade based on their best 10 of 11 quizzes, while everyone else has their grade based on the best 10 of 12.  Sure, it’s a smaller penalty than missing a midterm without makeup; but it’s still a penalty, and that’s not fair.  So makeups are still needed.
  7. What I expected: Short quizzes on a couple of recent lectures would be easier to write and grade than a big midterm. What happened: Indeed they were! Much easier, which was a big win for me. However, it was somewhat harder to be consistent through the course.  (Not a bad tradeoff.)
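The fairness arithmetic in point 6 is easy to make concrete. Here’s a minimal sketch in Python (the scores and the `quiz_grade` helper are hypothetical illustrations, not anything from the actual course): under a best-10-of-12 policy, a student who writes all 12 quizzes and has two bad days gets both bad scores dropped, while a classmate with the same two bad days who was sick for one (good) quiz day gets only one dropped.

```python
def quiz_grade(scores, keep=10):
    """Grade = mean of the best `keep` scores; the rest are dropped."""
    best = sorted(scores, reverse=True)[:keep]
    return sum(best) / keep

# Two hypothetical students with identical performance on the quizzes
# they actually wrote: 80s on good days, two 50s on bad days.
wrote_all = [80] * 10 + [50, 50]   # wrote all 12 quizzes
missed_one = [80] * 9 + [50, 50]   # sick on one good day; only 11 scores

print(quiz_grade(wrote_all))   # 80.0 -- both bad days dropped
print(quiz_grade(missed_one))  # 77.0 -- only one bad day dropped
```

That three-point gap is exactly the residual penalty for illness that the students pointed out, and why makeups turned out to still be needed.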

In sum, I think the quizzes improved the course, but not as overwhelmingly as I expected.  Quizzes may be much better in theory, but some of that theory is lost when confronted by students showing normal human behaviour and not taking full advantage of the opportunity.  It’s definitely my job to provide my students with opportunity.  The extent to which it’s my job to force them to take advantage of opportunity is too big a topic for this post.

By the way, you may have noticed that some of my students’ responses to being offered weekly quizzes betray mutually contradictory beliefs.  For example, they believed that it wasn’t worth studying hard for a quiz because it didn’t count for much, but they also believed that it was super important to make up a quiz that they missed.  I can explain this: students are humans, and humans believe sets of contradictory things all the time.****

Now, don’t get the wrong idea here: I’m not complaining because the weekly-quizzes move wasn’t quite the panacea I’d been promised.  I didn’t really expect it to be.  If there were simple moves that dramatically improved learning while simultaneously making life easier for both students and instructors, I’m pretty sure we’d have adopted them long ago.  Instead, students (like all humans) are complex and each one is unique; and teaching well is therefore difficult and complicated.  Moving to weekly quizzes was a step forward for me, I think, and it’s one of many such steps forward I expect to take as a result of reading Terry’s excellent book. You can read his book too (well, you can soon), and take steps of your own.  Sure, they won’t likely be seven-league-boots steps; but the seven-league boots are a fable. And (as I say somewhere in Charles Darwin’s Barnacle and David Bowie’s Spider), small steps advance a journey too.

And now I’d like to hear from you: if you’ve made the move away from midterms and toward weekly quizzes, what was your experience like?  Do you have tips for mitigating some of the unintended consequences I experienced?

© Stephen Heard  April 21, 2020.  Thanks to Terry McGlynn for comments on an early draft of this post – and for writing such a thought-provoking book!

Image: Midterm exam room – stressful much?  © Prenn CC BY-SA 3.0 via Wikimedia.org


*^Which I won’t identify, because I suppose some of what’s to come might sound like criticism of the students.  It isn’t meant to be – I had an absolutely terrific group of students and I loved every minute with them.  They were, however, human just like me.

**^The expectations I outline here are mostly based on The Chicago Guide, but not entirely.  I hope you’ll forgive me for not giving precise page references here – but really, it’s a short book and a good one, and when it comes out you should read the whole thing anyway.

***^Anecdotal.  I didn’t keep or analyze precise data, and of course I don’t have knowledge of my students’ motivations, so don’t ascribe too gospel a level of truth to the outcomes I describe.

****^When my son was young and I spent a lot of time preparing him snacks, I was utterly convinced that a jam sandwich was junk food, basically candy, but that toast with jam was a nutritious snack.  I should patent that magic toaster that adds nutrients to bread by exposing it to heat.  And the funniest thing: I understand completely how foolish this belief was; but unless I stop and think it through deliberately, I still hold it. Humans are like that.

21 thoughts on “From one big midterm to weekly quizzes: what I expected, and what actually happened”

  1. Marco Mello

    Nice testimony! I have been considering using quizzes myself, and your post is really helpful.

    By the way, talking about intrinsic vs extrinsic motivation, what do you think about the old dilemma “undergrad vs grad courses”? Independently of the lecturer and their pedagogy, there is always a huge difference in intrinsic motivation between undergrad and grad students.

    Seriously, in over 20 years of teaching, I have tried different approaches myself after following new suggestions in the educational literature, in addition to discussing pedagogy with colleagues who have different perspectives. Yes, I’m talking about a myriad of passive and active learning techniques, from the chalkboard to YouTube, from field practices to games (naturally, always including books and papers).

    No matter what we do, fewer than 10% of the students in each undergrad course show real motivation to study the material for its own sake, and not only for the exams. In contrast, that percentage goes beyond 90% in grad courses. Surely, one reason might be the stronger specialization of grad courses, which leads to stronger filtering of interests even at admission. In other words, in grad courses, more students know why they got there.

    Assuming this huge difference is indeed consistent (are there any hard data on this?), what is wrong? Why do millennial and iGen undergrad students find most courses so uninteresting, despite efforts all over the world to modernize HE pedagogy? Shouldn’t we rethink not only pedagogy but undergrad courses as a whole, especially in the case of more academic disciplines, such as those in the natural sciences?

    1. ScientistSeesSquirrel Post author

      Marco – I agree with you that no matter what we do, many students are motivated only extrinsically. To some extent, I think this is inescapable. After all, we structure degree programs so that students have to take courses they aren’t a priori interested in – as we should! Doing otherwise would presume that students’ a priori motivation is a better judge of relevant curriculum than our experience. That would be a silly presumption (as evidence from my own past would demonstrate). And that’s not even considering students who are at university for reasons other than intrinsic motivation.

      So I think the question is less “why aren’t students intrinsically motivated” and more “how can we better reach students who aren’t intrinsically motivated”. Heck if I know…. and like you, I’ve tried a lot of stuff. We can reach *more* students by doing various things. We will never reach *all* students. I’ll keep trying anyway, though!

      1. Marco Mello

        Thank you, Steve, I totally agree. And, yes, it’s impossible to reach 100% of the students in a class. My goal has always been modest: going slightly beyond the mythical 10% threshold. I don’t know, maybe motivating 11% of the class would be asking too much?

  2. sleather2012

    At the risk of being regarded as an old fogey/reactionary, the problem of the majority of undergraduates performing only for tests, as Marco points out, has been around in the UK for at least 27 years. I give this date with such precision because it was in 1992 that I returned to academia (Imperial College, an Ivy League equivalent) after a ten-year spell in a research environment, and on being given my own module to run I made the assumption that final-year students at an elite institution would be as keen and motivated to learn as I was when I was in their position. I was very disappointed when I found out that they would only turn up to practical classes if they were going to be assessed. The following year I had to adjust my teaching accordingly. I blame this on our pre-university schooling which is entirely test based 😦

    1. Marco Mello

      Hi Simon, you have a good point! I taught my first undergrad class in 1998, still as an undergrad TA. Yes, here in the third world we resort to weird improvisation due to the lack of personnel. Here too we train students to perform for tests during the first 15 years of their school life. Then, suddenly, when they get to HE, we expect them to be creative and show initiative. Maybe the solution involves changing the whole system, and not only HE.

  3. Jan Murie

    Much of the student reaction that you describe strongly reflects the thrust of Kahneman’s Thinking Fast and Slow. We humans tend to think fast all too often, thus succumbing to flawed logic. I did try the weekly quiz approach once and was similarly dissatisfied. I abandoned it as I decided it just put too much emphasis on testing rather than on the material.

  4. mooerslab

    Hi Steve,
    I have been using weekly quizzes for years now in my third-year evolution class. Students seem generally appreciative, but that may be because I tell them they partially replace the final. That is not true, of course, because the quizzes test different sorts of material than a longer-form final, so I have had to replace the final with projects. Note, I drop the worst two (or missed) quizzes, with no make-ups or deferments. Without comparative data it seems impossible to gauge learning outcomes, but my repeat questions from the quizzes on the midterms do seem familiar to the students. And the marking is easier. So, I am sticking to quizzes.

    1. ScientistSeesSquirrel Post author

      I’m sticking with them too – but I’ve decided that “no make ups or deferments” just isn’t fair. Fortunately, with the stakes lower, I can worry less about whether the makeup is equivalent to the original!

  5. Daisy

    Hi Steve
    So refreshing to see this, makes me feel better knowing it is not just me…
    I’ve been teaching large classes with required courses for a long time now and I’ve tried every possible permutation of quiz/midterm/final/projects… My classes are the kind that are required for external assessment/certifications, and cover material that can literally kill people if students do not come out with a good working knowledge of terminology and applications so they are fairly structured. Here are some examples with motivations and results…
    1) Weekly in-class quizzes consisting of 5 questions only, overall 10 – 20% of final grade, with lowest 2 dropped for occasional non-attendance (because low stakes reinforcement etc.) – students hated everything about them, found it stressful, did poorly, and only paid attention to the formal exams because of the 2% problem you mention… no difference in marks from other years. Poor correlation between quiz marks and exam marks.
    2) Pre-lab quizzes designed to test comprehension of basic lab terms (because they were supposed to pre-read the labs before coming…) – students hated them, found it incredibly stressful, no difference in lab performance because most simply do not read the labs before coming… Excellent correlation between lab quiz mark and lab performance unfortunately.
    3) Pre-lab assignments (because the quizzes were too stressful) – most simply copied them from the first lab section of the week… no improvement in lab performance and no correlation with lab marks eventually.
    4) Weekly post-lecture online Moodle quizzes, lowest two dropped, no make-ups (switched from in-class because student numbers went above 150 and I couldn’t mark them anymore) – reasonable uptake, but because they are online they were basically open book, so no correlation between good quiz marks and good test/final marks. Basically everyone had good quiz marks, but even when questions were copied into tests exactly as written from quizzes many did not get them. Small boost in grades because of bonus-mark kind of aspect of open book quizzes. No major disadvantages, but no huge improvement either.
    5) Weekly non-graded and/or peer-graded end-of-lecture review questions (because no requirement for grading, low-stakes, reinforcement of lecture material at end of day) – most students simply left before doing them, or did not answer and simply waited for their partners to give them the answers. No difference in results at end of year…

    The best years for teaching/engagement have been the ones where I did literally everything on the board, with no ppts or handouts, and everyone simply learned along with me as I drew things, derived equations, and built every lecture from the ground up with a marker and whiteboard and had only two midterms and a final (with lab and lab marks always) – the most low-tech, low assessment versions had not only the best grades overall, but also the largest number of students engaged and enjoying the material, and my best evaluations out of any years. And best results on external assessments… Also best retention when I got the same students again in upper years…
    Not sure what this all says, but there is no magic that works for everything…

  6. Jeremy Fox

    Am slightly surprised to learn that your students see the quizzes as low stakes (in the sense that they only prepare for them by cramming for a few minutes before class begins). Here at Calgary, I’ve heard that in some courses with frequent low-stakes quizzes, students report feeling under constant pressure. Although, in those courses, the frequent quizzes are in addition to a high stakes midterm, rather than a replacement for the midterm, so perhaps that’s the difference.

    I second the remarks of Marco and others that, at many institutions, many of the students are going to attend class and study because they’re “forced” to by the course structure and the grading system. What I’d add is that at least some students *want* that. They *want* to have a schedule and deadlines imposed on them, because they know that they’d struggle to self-impose the discipline needed to stay on top of a full course load. When I polled students about their worries about my big intro biostats course moving online earlier this term, that was one of the most commonly-expressed worries. Variants on “I won’t be able to make myself keep up with the course if I don’t have to show up for class, and if I have fewer assignment deadlines to meet.” And that’s come to pass–I have more students really struggling badly than I usually do, unfortunately. And it’s mostly because, by their own admission, they find it hard to self-motivate and impose a schedule on themselves (to which: I find it hard too!). You see the same thing in the abysmal completion rates of fully-online open courses (MOOCs).

    This isn’t a failing on the part of these students; it’s not that the fairly rare students who are able to impose schedules on themselves are somehow “better” than others. And it’s not to say it’s some brute law of nature that things are like this. I dunno, maybe in some hypothetical world where kids were raised and taught differently before university, all students would be completely self-motivated (or maybe not!) And it’s not that other ways of trying to motivate and engage students are ineffective or worthless or whatever. I, and most instructors I know, use a wide range of techniques to keep students engaged and on track, some of which involve grades and some of which don’t.

    1. ScientistSeesSquirrel Post author

      I agree about the helpfulness of deadlines. Many of the students in my writing course are Honours students, and what they report liking the most is that my frequent deadlines for bits and pieces of a thesis “force” them to be on top of their Honours thesis well before that deadline.

      And there’s nothing particular to students about this. I know I’m the same. And I know most of us are – we know from granting agencies’ experience that when you remove submission deadlines, submission rates go way DOWN.

  7. Hannah Davis

    I’ve been reading a bit of the learning literature recently, just to satisfy my curiosity (it’s not my field, so I’m not going to pretend I understood most of it) – and one point that I took away from it is that people are REALLY bad at choosing effective learning/productivity strategies for themselves. For example, when people can choose their own deadlines, they tend to pick ones that actually make them less productive. They also tend to choose relatively ineffective studying strategies. I’m definitely guilty of this myself – turns out I got good grades DESPITE the strategies that I used, not because of them.

    And people often make those objectively terrible decisions for entirely sensible reasons. In the case of your students, they’ve learned over the years to prioritise high-stakes tests/assignments over low-stakes ones – a strategy that not only works, but is essential for success… until the rules change. I’m not sure how to get them to adjust their strategy here (especially when it still works for other classes), though it might help to just talk openly with them about it. I remember it took me a while to get used to not having ANY graded assignments until the final exam when I started studying in Germany. I was used to the Canadian system, so I initially thought ungraded = optional (and a poor use of my time), and I had to unlearn that idea.

    1. ScientistSeesSquirrel Post author

      Thanks – this analysis sounds spot on to me, and it’s reassuring to know that the learning literature backs it up! (If you have a citation or two I’d be grateful, but I suspect you’re summarizing broadly)

  8. Pingback: Friday links: t-shirts vs. coronavirus, cafeteria menus vs. scholarly papers, and more | Dynamic Ecology

  9. Andrea Phillott

    Hi Stephen, I have 3 quizzes spaced throughout the term instead of a mid-term or weekly quizzes and a final exam for first year ecology (majors and non-majors). Students seem less stressed before the quizzes when compared to the traditional mid-term, but there’s still enough material on each quiz that they can’t rely on a 10min before-class-cram to get them through. They also complete the quiz as an individual (80% of the grade for each quiz) and in a randomly assigned group (20% of the grade for each quiz) to reduce anxiety and provide an opportunity for peer-teaching. Students have given great feedback about this format.

    I rely heavily on my version of clicker questions in each class to detect and immediately address misconceptions. (I don’t have clickers, so every student gets A/B, C/D, and Yes/No cards printed on coloured paper at the beginning of each term and returns them at the end for re-use.) Each class starts with a few MCQs (higher Bloom’s levels) about previous content, and I ask additional questions throughout the class. I encourage them to talk to the person next to them and consult their class notes when answering. As well as identifying the correct response, we discuss why the other possibilities were incorrect. This takes time but has proven invaluable for learning.

  10. Kate

    I remember how nice this strategy was from a student perspective. I first took a weeder OChem class taught by a professor who was a wonderful teacher but who graded solely on high-stakes, memorization-heavy tests. I actually liked the material but did miserably in the class. I wound up re-taking it later with a Big Name Chemist who taught once in a blue moon. Every other week we had “take home quizzes” with extra credit practice problems. It was of course basically homework, but by calling it a quiz and actually grading it, it encouraged most of us students to put some effort into it. Unsurprisingly, since we were required to practice, the entire class did MUCH better on the tests than any other section.

  11. Pingback: I wish my students were motivated by love of subject. But I shouldn’t. | Scientist Sees Squirrel

  12. Jeff Offutt

    Interesting. My experience was somewhat different from his … and almost completely positive.
    – Students feel less stress.
    – Students perform better. Every semester!
    – Students perform better on the final exam.
    – Students comment that they are forced to keep up instead of cram, which they hated at first but appreciated by the end.
    – I have a much better idea of what the students are learning.
    On the downside:
    – I have more weekly work to make the quizzes.

    1. ScientistSeesSquirrel Post author

      These are all the things I hoped would happen! #1 definitely did, but it worked against all the rest. So, of course, the question is what I (and others) can do to push the system towards your outcomes. Any ideas?

  13. Pingback: What my online Entomology course look like | Scientist Sees Squirrel
