The University of Waikato - Te Whare Wānanga o Waikato
Faculty of Science and Engineering - Te Mātauranga Pūtaiao me te Pūkaha

December 2013 Archives

PhysicsStop wishes you a Happy Christmas and a peaceful 2014, as he ponders whether England will put up a better show than the West Indies. Enjoy the beach, the cricket, the weather, or all three.

Back early in the New Year

| | Comments (0)

Recently there's been a bit of discussion in our Faculty on how to get a reliable evaluation of people's teaching. The traditional approach is the appraisal. At the end of each paper the students answer various questions on the teacher's performance on a five-point Likert scale ('Always', 'Usually', 'Sometimes', 'Seldom', 'Never'). For example: "The teacher made it clear what they expected of me." The response 'Always' is given a score of 1, 'Usually' a score of 2, and so on down to 'Never', which is given a score of 5. Averaging the responses across questions and students gives some measure of teaching success, ranging in theory from 1.0 (perfect) through to 5.0 (which we really, really don't want to see happening).
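(If you want the arithmetic spelled out, here's a minimal sketch in Python of how one question's score gets computed for one paper. The responses below are invented, purely for illustration.)

```python
# Map the five Likert responses onto the appraisal scores (1 = best, 5 = worst).
SCORES = {'Always': 1, 'Usually': 2, 'Sometimes': 3, 'Seldom': 4, 'Never': 5}

# Hypothetical responses from one class to one question, e.g.
# "The teacher made it clear what they expected of me."
responses = ['Always', 'Always', 'Usually', 'Always', 'Sometimes', 'Usually']

# The paper's score for this question is simply the mean of the mapped values.
score = sum(SCORES[r] for r in responses) / len(responses)
print(f"Question score: {score:.2f}")   # 1.0 is perfect, 5.0 is the disaster case
```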

We've also got a general question - "Overall, this teacher was effective". This is also given a score on the same scale. 

A question that's been raised is: Does the "Overall, this teacher was effective" score correlate well with the average of the others? 

I've been teaching for several years now, and have a whole heap of data to draw from. So I've been analyzing it (from 2008 onwards), and, in the interests of transparency, I'm happy for people to see it. For my data, the answer to "does a single 'overall' question get a similar mark to the averaged response of the other questions?" is a clear yes. The graph below shows the two scores plotted against each other for the different papers that I have taught. For some papers I've had a perfect score - 1.0 from every student for every question. For a couple, the scores have been dismal (above 2 on average):
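(For anyone wanting to run the same check on their own appraisals, the comparison boils down to something like this Python sketch. The paper codes and scores here are placeholders, not my actual data.)

```python
import statistics

# Hypothetical per-paper results: the 'overall' question score and the
# average of the remaining questions. All numbers are made up.
papers = {
    'PAPER-A': {'overall': 1.2, 'other_avg': 1.3},
    'PAPER-B': {'overall': 1.0, 'other_avg': 1.0},
    'PAPER-C': {'overall': 2.1, 'other_avg': 1.9},
    'PAPER-D': {'overall': 1.5, 'other_avg': 1.6},
}

overall   = [p['overall'] for p in papers.values()]
other_avg = [p['other_avg'] for p in papers.values()]

# Pearson correlation between the single 'overall' score and the averaged others
# (statistics.correlation needs Python 3.10 or later).
r = statistics.correlation(overall, other_avg)
print(f"correlation = {r:.2f}")
```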

[Graph: the 'Overall, this teacher was effective' score plotted against the average of the other questions, one point per paper]

What does this mean? That's a good question. Maybe it's simply that a single question is as good as a multitude of questions if all we are going to do is take an average. More interesting is to look at each question in turn. The questions start with "The teacher..." and then carry on as in the chart below, which shows the responses I've had, averaged over papers and years.
[Chart: average score for each appraisal question, averaged over papers and years]
Remember, low scores are good. And what does this tell me? Probably not much that I don't already know. For example, anecdotally at any rate, the question "The teacher gave me helpful feedback" is a question for which many lecturers get their poorest scores (highest numbers). This may well be because students don't realize they are getting feedback. I have colleagues who, when they give oral feedback, will prefix what they say with "I am now giving you feedback on how you have done" so that it's recognized for what it is. 
 
So, another question. How much have I improved in recent years? Surely I am a better teacher now than I was in 2008. I really believe that I am. So my scores should be heading towards 1. Well, um, maybe not. Here they are. There are two lines - the blue line is the response to the question 'Overall, this teacher was effective', averaged over all the papers I taught in a given year; the red line is the average of the other questions, averaged over all the papers. The red line closely tracks the blue - this shows the same effect as seen on the first graph. The two correlate well.
 
[Graph: year-by-year scores - blue: 'Overall, this teacher was effective'; red: average of the other questions]
So what's happening? I did something well around 2010, but since then it's gone backwards (with a bit of a gain this year - though not all of this year's data has been returned to me yet). There are a couple of comments to make. In 2010 I started on a Post Graduate Certificate of Tertiary Teaching. I put a lot of effort into this. A couple of the major tasks were targeted at implementing and assessing a teaching intervention to improve student performance. I finished the PGCert in 2011. That seems to have helped with my scores, in 2010 at least. A quick perusal of my CV, however, will tell you that this came at the expense of research outputs. Not a lot of research was going on in my office or lab during that time. And what happened in 2012? I had a period of study leave (hooray for research outputs!) followed immediately by a period of parental leave. Unfortunately, I had the same amount of teaching to do, and it got squashed into the rest of the year. Same amount of material, less time to do it, poorer student opinions. It seems a logical explanation, anyway.
 
Does all this say anything about whether I am an effective teacher? Can one use a single number to describe it? These are questions that are being considered. Does my data help anyone to answer these questions? You decide.
 
| | Comments (1)

I absolutely have to put this on my blog. I found it in a presentation put together by Ako Aotearoa drawing from the Carl Wieman Science Education Initiative. First, the content of the lecture.

The Montillation of Traxoline.

It is very important that you learn about traxoline.  Traxoline is a new form of zionter. It is montilled in Ceristanna.  The Ceristannians gristerlate large quantities of fevon and then bracter it to quasel traxoline.  Traxoline may well be one of our most lukized snezlaus in the future, because of our zionter lescelidge.

Now, the assessment questions.

1.  What is traxoline?

2.  Where is traxoline montilled?

3.  How is traxoline quaselled?

4.  Why is it important to know about traxoline?

This is credited in the Ako Aotearoa presentation to Judy Lanier. With a bit of effort, I'm sure most students would get 'A' grades on this assignment. And what have they learnt in the process? Absolutely nothing. That's because the assessment is testing English sentence structure, not chemistry, geology, history or whatever we think traxoline falls under.

The point is: how many of the assessments we set for physics (or whatever) are actually testing something entirely different? A lot of the physics and engineering assessments I've seen are actually testing algebra. Their only use is to help students learn algebra.

| | Comments (1)

In the lab, my summer student has been working on a small device to keep a small piece of equipment at a stable temperature. It uses a Peltier device - in essence, a solid-state heat pump. Pass current through one way and heat is drawn from the top surface to the bottom; pass current through the other way and heat is drawn from the bottom to the top. Therefore, by putting the equipment on the top surface of the Peltier device, we can control how much we heat or cool it by how much electric current (and in which direction) we pass through.
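(As a rough illustration of the control idea - and it is only an illustration, not our actual rig or code - the decision comes down to choosing the size and sign of the drive current from the temperature error. The gain and current limit below are invented numbers.)

```python
# A crude proportional controller for a Peltier element: positive current
# pumps heat into the equipment, negative current pumps heat out of it.
GAIN_A_PER_K  = 0.5    # amps of drive per kelvin of temperature error (invented)
MAX_CURRENT_A = 2.0    # the device will only tolerate so much current (invented)

def peltier_current(setpoint_C, measured_C):
    error = setpoint_C - measured_C            # positive => too cold => heat it
    current = GAIN_A_PER_K * error
    return max(-MAX_CURRENT_A, min(MAX_CURRENT_A, current))

print(peltier_current(35.0, 33.2))   # too cold: positive current, heating
print(peltier_current(35.0, 36.5))   # too warm: negative current, cooling
```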

There are a few things we need to consider, however, to get this to work well. One is the thermal resistance between the Peltier device and the object. We need good thermal contact between the two, otherwise the flow of heat is going to be hampered. It would be rather like putting insulation around the radiators in your house: it would keep the radiators nice and warm but it wouldn't do much for the temperature inside your house. We need to ensure that the glue we use to hold our equipment to the Peltier has high thermal conductivity.

But we are also interested in knowing how quickly the equipment changes its temperature in response to heat input. This is quantified by its heat capacity - how much energy (heat) is required to raise its temperature by a given amount. Something with a low heat capacity will change its temperature quickly; something with a high heat capacity will change its temperature only slowly. A large lump of something, like the water in the university swimming pool, has a large heat capacity, and therefore takes a long time to heat up once it's been filled (and consequently remains very cold until January). Do we want our equipment to have a high or a low heat capacity? That's not entirely obvious. Our aim is for something that remains at a fairly stable temperature - that neither heats up nor cools down quickly - otherwise controlling the temperature becomes very difficult. That would suggest a high heat capacity. But we don't want it too high, or our Peltier device would never be able to bring the equipment up to the temperature we'd like. There's a bit of a balance to be struck here.
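(The balance shows up nicely in a lumped thermal model: the temperature changes at a rate set by the heat the Peltier pumps in, the leak back to the surroundings through the thermal resistance, and the heat capacity. The little simulation below is only a sketch with made-up numbers, not a model of our actual device.)

```python
# Lumped thermal model:  C * dT/dt = Q_peltier - (T - T_ambient) / R_th
# All values are invented, purely to show how heat capacity sets the timescale.
T_AMBIENT = 20.0    # deg C
R_TH      = 5.0     # K/W, thermal resistance to the surroundings
Q_PELTIER = 2.0     # W, constant heat pumped into the equipment

def temperature_after(C, t_end=600.0, dt=0.1):
    """Temperature (deg C) after t_end seconds for heat capacity C (J/K)."""
    T, t = T_AMBIENT, 0.0
    while t < t_end:
        dT_dt = (Q_PELTIER - (T - T_AMBIENT) / R_TH) / C
        T += dT_dt * dt
        t += dt
    return T

for C in (50.0, 500.0):    # small versus large heat capacity
    print(f"C = {C:5.0f} J/K -> T after 10 minutes = {temperature_after(C):.1f} deg C")
```

The small heat capacity has all but settled after ten minutes, while the large one has barely moved - quick to control, but also quick to be knocked about by any disturbance, which is exactly the balance described above.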

What struck me this week was the obvious parallel with nappies. Well, I guess it's obvious to any physicist who changes nappies on a regular basis. The perfect nappy needs to take urine away from the skin quickly, and also have a high capacity to hold it. The first task is the equivalent of thermal conductivity, but with water: the fluid needs to be able to flow quickly from the skin to the absorbing part of the nappy. The second task is the equivalent of heat capacity: we need the material to absorb lots of water while not getting very wet (equivalent to absorbing lots of heat without its temperature rising very much). The cloth nappies we use have two different material textures. The first part, which is in contact with the skin, sucks water away very quickly. The second part holds onto the water very well. Working together, they keep baby dry for longer - which sounds like a rather corny tag line for a nappy brand.

And, yes, I've taken a clean dry nappy to the bathroom with a measuring jug and slowly poured water in to see exactly how much one would hold. Could you expect a physicist to do anything else?

 

 

| | Comments (2)