This year, I've finally decided (more accurately, finally got around) to undertake a Postgraduate Certificate in Tertiary Teaching. In plain English, that means doing some training that actually prepares me to teach at university. "What?" I hear you say. "You mean you haven't got any qualification to teach at university?" Nope. And the same is true of most lecturers, in most universities. They are appointed because they are good at their research, and it is just assumed that if you stick them in front of a class of students, the students will somehow come out better off. And, as every student or ex-student (i.e. me) knows, sometimes that works, and sometimes it doesn't.
Anyway, as part of the PGCert (Tertiary Teaching), I'm thinking about ways that I can know whether or not (probably the latter) my students have understood what I have been teaching. A brief look at the literature shows that there are numerous ways of doing this in the context of physics. Very crudely, you can test your class before you teach something, and test them again afterwards. The improvement equals their learning.
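Just to make the crude approach concrete, here is a minimal sketch of the pre/post-test calculation, using entirely made-up marks out of 100 for five hypothetical students:

```python
# A minimal sketch of the crude "test before, test after" approach.
# The scores are hypothetical marks out of 100 for the same five students.
pre_scores = [40, 55, 30, 70, 50]
post_scores = [60, 70, 55, 85, 65]

# "The improvement equals their learning": the per-student gain, post minus pre.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]

# The class-average gain is then taken as a measure of how much was learned.
average_gain = sum(gains) / len(gains)
print(gains)
print(average_gain)
```

Of course, as the next paragraph argues, this single number hides a lot: when you give the second test matters as much as what it asks.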
Or does it? First of all, learning, if not exercised, diminishes over time. Test them a week after you taught it, and their performance may be good. Give them a similar test at the end of the semester, or in the next semester, or in two years' time, and the scores will be lower. However, the learning doesn't usually decay to zero - usually something sticks for good.
But the thing that grabbed me on reading this recent paper by Sayre and Heckler is the effect of 'interference' from a similar, but different, topic. After learning about the topic of interest (in this case electric fields), the students do well in a short test. But a couple of weeks later, they do poorly, not because of the passing of time, but because, at that time, they are being taught another topic that they are confusing with the first (in this case electric potential). Tested again later, once the interfering topic has been taken away, their performance has risen again.
So it's not all that easy to assess whether learning has taken place. With this example, have the students REALLY learned about electric fields properly? One could perhaps argue that, if the learning had been deep enough, their understanding would not have been confused by a related, but different, topic. One could perhaps test them yet again, on both electric fields and electric potential, at a still later date, and see if the confusion has remained. What I thought would be a very simple procedure suddenly gets rather complicated.
Sayre, E. C. and Heckler, A. F. (2009). Peaks and decays of student knowledge in an introductory E&M [electricity and magnetism] course. Physical Review Special Topics - Education Research, 5, 013101.