In teaching we regularly change our class structures and routines, and we implement new “interventions” in hopes of changing classroom dynamics or reaching more students. I know that most of the time I make these decisions based on anecdotal evidence, perhaps after glancing at a handful of exit tickets from my students or going on how I “felt” the class went. Recently, though, I’m finding myself a little more hesitant when making a claim about my class. I require that my students support their claims with evidence, so why wouldn’t I also support mine with evidence?
Does “it felt like it went better” constitute evidence that a particular intervention was successful? Increasingly, I would argue that it doesn’t. I propose that we reframe the way we think about our science classes so that they parallel a research lab.
With our overwhelmingly busy teaching schedules, I don’t necessarily suggest that every claim be backed by a full research study (although considering methodology and forms of data collection is important). Rather, I am suggesting that we be mindful of the data we can collect and use to support claims about interventions or the efficacy of lessons. What are the questions we are asking? Namely, is a particular intervention successful or not, and how do we define “success” in this particular context? How can we look at our classes from numerous angles in order to answer these questions? This can be as straightforward as analyzing student performance on one assessment question of interest. We can then disaggregate the data based upon conditions like student attendance or homework turn-in rate. Or we can disaggregate the data based upon student characteristics.
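For teachers comfortable with a bit of scripting, the disaggregation described above doesn’t need any special software. Here is a minimal sketch in Python using only the standard library; the records, the `attendance` condition, and the 85% threshold are all invented for illustration, not data from my classes.

```python
from statistics import mean

# Hypothetical records: each student's score (0-1) on one assessment
# question of interest, plus their attendance rate for the unit.
records = [
    {"score": 0.9, "attendance": 0.95},
    {"score": 0.4, "attendance": 0.60},
    {"score": 0.8, "attendance": 0.90},
    {"score": 0.5, "attendance": 0.70},
]

def disaggregate(records, condition, threshold):
    """Split records into two groups by a numeric condition and
    report the mean question score for each group."""
    meets = [r["score"] for r in records if r[condition] >= threshold]
    below = [r["score"] for r in records if r[condition] < threshold]
    return {"meets": mean(meets), "below": mean(below)}

print(disaggregate(records, "attendance", 0.85))
```

The same function could disaggregate by homework turn-in rate, or be adapted to group by a categorical student characteristic instead of a numeric threshold.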
A couple of years ago I had a hunch that my female non-native English speakers were performing particularly well in my physics classes (this was especially exciting since female non-native English speakers are traditionally underrepresented in the field of science). I sought to make more concrete claims on the basis of student growth on the pre- and post-assessments in the course. This particular group of students did in fact demonstrate learning gains that were greater than those of the other three subgroups I analyzed (female and male native English speakers and male non-native English speakers). [In a future blog post I’ll share the details of this study.]
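The post doesn’t specify which growth metric was used, but one common choice in physics education for comparing pre/post growth across subgroups is the normalized gain (the fraction of possible improvement a student actually realized). A sketch with invented scores:

```python
from statistics import mean

def normalized_gain(pre, post, max_score=100):
    """Normalized gain: improvement as a fraction of the
    improvement that was possible given the pre-test score."""
    return (post - pre) / (max_score - pre)

# Hypothetical (group, pre, post) scores; none of these are real data.
students = [
    ("female non-native", 30, 80),
    ("female non-native", 40, 85),
    ("male native", 35, 60),
    ("male native", 50, 70),
]

# Collect each subgroup's gains, then compare the subgroup means.
gains = {}
for group, pre, post in students:
    gains.setdefault(group, []).append(normalized_gain(pre, post))

for group, g in sorted(gains.items()):
    print(group, round(mean(g), 2))
```

Using the normalized gain rather than the raw score difference matters here: a subgroup that starts lower has more room to grow, and the normalized gain puts all subgroups on a comparable scale.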
For now I’d like to emphasize the importance of being both science teachers and researchers. When we make claims about our class “going well” or a particular lesson being effective at “reaching students,” I encourage us to back up these claims with evidence. This can be so powerful when we are communicating with administrators, colleagues, parents, and students! How are you collecting evidence and supporting claims about your own instruction?