The University of Waikato - Te Whare Wānanga o Waikato
Faculty of Science and Engineering - Te Mātauranga Pūtaiao me te Pūkaha
Physics Stop

October 2011 Archives

Here's a bit more on the NCEA Physics assessments that I heard about at the NZ Institute of Physics Conference last week. I alluded to it very briefly in a previous post.  This comes from my notes of the presentation given by David Lillis, a statistician at the NZ Qualifications Authority.

Unsurprisingly, NZQA throw lots of statistical analyses at the various exams. That's quite reassuring - they are looking at whether each exam is doing its job properly. That is, does it provide a fair and reasonable assessment of a candidate's performance in a particular area? They look for things like gender or culture bias in particular questions; redundancy between questions (i.e. two questions on the exam asking basically the same thing, which can be checked through correlations between candidates' scores on different questions); whether a question or exam succeeds in distinguishing 'not achieved', 'achieved', 'merit' and 'excellence'; the relative 'difficulty' of an exam (e.g. by looking at how candidates in one exam fared in other exams); and so on.

But the analysis that was most interesting for me was the Principal Component Analysis on the students' responses to the  physics assessment standards.

Principal Component Analysis is a pretty mathematical thing to work through but, explained in non-mathematical terms, it's about finding out how many 'dimensions', and which ones, are required to explain the variation in the results. So, an example: if you look at how the contestants do on 'Masterchef' or some equivalent competition, you might find that most of their performance is down to how well they can cook (the first, most important dimension), but part of their performance (the second dimension) might be attributable to their ability to select combinations of flavours that work, and a little bit (a third dimension) to their ability to present their dishes. PCA on an exam or assessment helps to draw out what the assessment really is doing. One would hope that it is really assessing what it is meant to be assessing.

Now, here's the thing with physics. For nearly all NCEA assessment standards EXCEPT physics, there is only ever one important dimension. That is, students who do well on (say) question 1 will also do well on question 2, and well on question 3, because, fundamentally, all aspects of all questions are addressing the same dimension. However, for physics assessments, there are usually two important dimensions. So, for example, if we take the standard 'Understanding waves', we see that the first dimension, the one that explains most of the variation in the students' scores, is how well the students understand waves. And that's good. But there is another dimension that's apparent, and that's to do with the quantitative or qualitative nature of questions - in other words, how much mathematical work is in a question. To get really good marks in the standard, you have to not only understand waves, but be able to undertake quantitative calculations too. So really there are two different aspects being examined at the same time.
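If you want to see the idea in action, here is a toy sketch (entirely made-up data, not real NCEA results): simulate students whose scores depend on two hidden abilities, then do PCA by eigen-decomposing the covariance matrix of their scores. Two eigenvalues stand clear of the rest, signalling a two-dimensional assessment.

```python
import numpy as np

# Toy illustration (invented data): students have two latent abilities -
# 'understanding waves' (the main dimension) and 'doing maths'.
rng = np.random.default_rng(42)
n_students = 200

physics = rng.normal(size=n_students)   # latent 'understands waves' ability
maths = rng.normal(size=n_students)     # latent mathematical ability
noise = lambda: rng.normal(scale=0.3, size=n_students)

# Three qualitative questions test physics only;
# three quantitative questions test physics AND maths.
scores = np.column_stack([physics + noise() for _ in range(3)] +
                         [physics + maths + noise() for _ in range(3)])

# PCA = eigen-decomposition of the covariance matrix of the scores.
cov = np.cov(scores, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # largest first

# The first two eigenvalues dominate: the assessment is measuring two
# distinct things, just as NZQA found for the physics standards.
print(np.round(eigvals, 2))
```

In a one-dimensional assessment only the first eigenvalue would stand out; here the second sits well above the noise floor set by the remaining four.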

My understanding from the talk was that nearly all physics assessments show this, but it is rare in another subject (even a science subject).

So, the conclusion that can be drawn from that is that physics exams don't just assess physics; they also assess maths to some extent. This probably won't come as a surprise to most physics teachers, but it came as a surprise to me to see it so clearly evidenced through a careful statistical analysis. Is this two-dimensional nature a good or bad thing? I'm not sure on that one. My gut feeling is that physics should be about physics, not mathematics, and our assessments should reflect that, but on the other hand there's no getting away from the fact that physics is a quantitative subject. It is good, though, to see that in NCEA at least 'understanding the physics' is the primary dimension, NOT 'doing maths'. I'd love to analyze some of the exams we set here to see if that remains the case at university.




| | Comments (0) | TrackBacks (0)

Yesterday being a warm, sunny Labour Day holiday (those words don't usually go together), we decided we needed to get out of the house and go somewhere interesting, and chose the Waitomo area. We didn't go into the show caves this time (we've done those a few times before) but chose to keep the bank account under a bit of restraint and do some of the amazing walks through the area.

The Ruakuri bush walk is a favourite - you see some fantastic limestone scenery, including a natural tunnel through which the river runs. And it's free.  If you get the timing right, you can also watch the intrepid 'blackwater' rafters jump into their rubber tubes and start their journey down the river. Must do that one day.

With some fairly simple application of physics, it was easy to tell that the jumping-in point must be pretty deep. That's because the water flows past very slowly, compared to the raging torrent it is further up the river. The width of the river hasn't changed much, the volume of water going down it won't have changed much, and so the natural conclusion is that the river must have got deeper.

This argument is based on 'continuity'. Specifically, in this case, the flow of water down the river is the same everywhere (since water doesn't get created or destroyed - at least not in any great amount over a short distance - and it doesn't squash). The flow rate is given by the product of the cross-sectional area of the river and the river's velocity (or, for the more mathematically inclined, the surface integral of the velocity over the cross-sectional area). Therefore, if the velocity goes down, the cross-sectional area must increase; if the width hasn't, then the depth must have.
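The arithmetic is only a few lines. As a rough sketch (the numbers below are invented for illustration, not measurements of the actual river):

```python
# Continuity: the flow rate Q = width * depth * velocity is the same
# at every point along the river (illustrative numbers only).

width = 5.0        # m, roughly constant along this stretch
v_upstream = 2.0   # m/s, the raging torrent
d_upstream = 0.4   # m, shallow and fast

Q = width * v_upstream * d_upstream   # m^3/s, conserved downstream

v_downstream = 0.2                    # m/s, slow drift at the jumping-in point
d_downstream = Q / (width * v_downstream)

print(f"Flow rate {Q} m^3/s, depth at the slow point: {d_downstream} m")
# A tenfold drop in speed means a tenfold increase in depth.
```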

Sure enough, my conclusion was quickly verified by a blackwater tour guide clad in wetsuit running off the bank at speed and bombing into the river.

The continuity equation raises its head in many forms throughout physics. Fluid flow is the obvious example, but we also see it in electromagnetism. Here, lines of electric and magnetic field don't just end in empty space - electric field lines have to end on charges, and magnetic field lines never end at all. Therefore, the strength of the field very much depends on the area to which it is confined. Squash it into a small region, and the field strength is going to be larger than if it is allowed to spread over a wide volume. 
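The same bookkeeping applies to a tube of field lines: because the lines don't end, the flux through the tube is fixed, so squeezing the tube raises the field strength. A minimal sketch (invented numbers):

```python
# Flux conservation for a tube of magnetic field lines:
# B1 * A1 = B2 * A2, since magnetic field lines never end.

B1 = 0.01    # T, field strength over a wide cross-section
A1 = 0.08    # m^2

flux = B1 * A1          # Wb, fixed along the tube

A2 = 0.002              # m^2, the tube squashed into a small region
B2 = flux / A2          # field strength rises as the area shrinks

print(f"B2 = {B2} T")   # 40x smaller area -> 40x stronger field
```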

The flow of heat through an object is another example. In this case, heat energy is the thing that doesn't get created or destroyed, i.e. is 'continuous'. The same mathematical equations apply to all these cases, meaning that solving one physics problem often means you've solved other physics problems.
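For the heat-flow case, a quick numerical sketch (illustrative numbers only, with the conductivity roughly that of copper): the power passing every cross-section of a bar is the same, so by Fourier's law the temperature gradient must steepen wherever the bar narrows.

```python
# Steady heat flow down a bar: the power P through every cross-section
# is the same, because heat energy is conserved ('continuous').
# Fourier's law: P = k * A * dT/dx.

k = 400.0    # W/(m K), thermal conductivity (roughly copper)
P = 20.0     # W, conserved along the bar

for A in (4e-4, 1e-4):        # wide then narrow cross-section, m^2
    dTdx = P / (k * A)        # temperature gradient needed to carry P
    print(f"A = {A} m^2 -> dT/dx = {dTdx:.0f} K/m")
# Quartering the area quadruples the gradient - same maths as the river.
```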

You can probably think of other examples yourself - some are quite fun and not so obviously physicsy - such as the flow of people down a street.


Can't resist telling you this...

On Tuesday, after the conference sessions had finished and before the conference dinner, I travelled with my students by train from Wellington to Lower Hutt to do a lightning visit to a couple of labs at Industrial Research Ltd. It was a very informative visit - I need to spend a bit more time there when I have the opportunity. Amongst other deeply technical things, which I won't bore you with, I got to see official NZ standard time - the clock that, by definition, is correct (at least in NZ). You feel kind of privileged to be able to set your watch from it, knowing that, for an instant at least, your watch is going to be right. Really right.

But that's not the point of the story. It's the trains. It was the first time I'd taken a suburban train in Wellington, and boy was it a retro experience. A rickety old carriage with nets for luggage racks, little cardboard tickets, that nostalgic rattle-rattle-rattle sound from underneath you. It was much like 1970s British Rail - took me right back to my childhood.

There's something deeply ironic about travelling to a place that is a world leader in technologies such as superconductivity and nanotechnology by a mode of transport that is... well... shall we say, not world-leading.



Well, the answer to that is, um... well.... it depends....

Now, I'm not suggesting what you've learned at school is not true. Take a point charge (e.g. a proton) and bring it close to another point charge (e.g. another proton) and the two will repel, with an inverse square law (let's not take them close enough to exhibit nuclear forces). If we double the distance between the charges, the force of repulsion quarters. That's Coulomb's Law.

Things get rather more complicated, however, if the charge is not point-like. Yesterday, at the NZ Institute of Physics conference, John Lekner gave a fascinating account of the force between two charged, conducting spheres. It is actually intensely complicated and far from obvious. This problem was tackled in the nineteenth century by two physics geniuses: James Clerk Maxwell and William Thomson (Lord Kelvin). Their work was all the more remarkable given that they didn't have symbolic algebra computer packages available to them to handle the rather intensive algebra generated by this problem. The problem is now being revisited because it has application in nanotechnology, where the conducting spheres in question are really, really small.

So, in broad terms, what happens? Why is it more complicated than Coulomb's law? To see this, look at the case of two conducting spheres in close proximity, but one with a charge (say positive) and one neutral. The two attract!  The first sphere, with the charge, creates an electric field that is felt by the second sphere. The electrons in the second sphere will be attracted towards the first one, since the electrons are mobile in a conductor, and therefore the second sphere will polarize - its negative charge will move towards the part of the sphere that is nearest the first one, giving a separation in the positive and negative charges in the second sphere. The net force between the two spheres is then no longer zero, because the attractive force between the positive charge in the first sphere and the (nearby)  negative on the second is greater than the repulsive force between the positive charge in the first and the (further away) positive on the second.

So, a charged sphere and a neutral sphere attract. Let's now make the second sphere slightly positive as well. If we only make it a little bit positive, the attractive force due to the polarization still wins out over the extra repulsive force due to more positive charge, and so the two still attract each other! Make it too positive, however, and the Coulomb interaction wins out and the two will repel, as we might expect.
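A crude way to see that crossover numerically is a leading-order 'charge plus induced dipole' model (my simplification for illustration, not Maxwell and Kelvin's full series): sphere 1's field induces a dipole in the second conducting sphere with polarisability 4πε₀b³, which adds an attractive 1/r⁵ term to the ordinary Coulomb 1/r² term. All the numbers below are invented.

```python
# Net force between a charged sphere (charge q1) and a second conducting
# sphere (radius b, charge q2), keeping only the leading-order
# induced-dipole term. Negative = attraction, positive = repulsion.

K = 8.99e9  # Coulomb constant, N m^2 / C^2

def net_force(q1, q2, b, r):
    coulomb = K * q1 * q2 / r**2             # ordinary Coulomb repulsion
    induced = -2 * K * q1**2 * b**3 / r**5   # attraction via polarization
    return coulomb + induced

q1, b, r = 1e-9, 0.01, 0.05   # C, m, m (invented values)

print(net_force(q1, 1e-12, b, r))  # slightly charged: still attracts
print(net_force(q1, 1e-10, b, r))  # more charged: Coulomb repulsion wins
```

For a small q2 the polarization term dominates and the net force is attractive; make q2 big enough and the ordinary repulsion takes over, just as described above.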

It's actually (much) more complicated than this, since the separation of charge in the second sphere sets up its own field that causes a separation in the first sphere, which influences the second, and so forth, creating, in mathematical terms, an infinite series of interaction terms. It's this series that Maxwell and Kelvin grappled with, with surprising success.

So, in summary, we have a problem that seems pretty simple: "I take a conducting sphere of radius a, charged with a charge q1, place it a distance x away from a second conducting sphere of radius b charged with a charge q2. What is the force between them?" But in practice the solution is really, really nasty and not at all obvious. 


It was a very interesting day at the NZ Institute of Physics conference. I learned about some of the physics experiments done at the South Pole; how to trap, observe and count atoms (and that high school physics teachers who tell their students that you can't see atoms need to update their knowledge); some results from heavy ion collisions at the Large Hadron Collider and what they tell us about that quark-gluon-soupy-thing; and why the cornea of the eye is transparent (yes, it's to let light through - but what is it about its structure that allows it to do that?). Plus a whole lot more, including some fascinating statistical stuff relating to the NCEA Physics assessments, presented by an NZQA statistician. (For you non-NZers, that's the series of assessments that school students do here.) Basically, physics is an odd subject when it comes to the way students perform at NCEA - this is put down to its very quantitative nature.

Proceedings were kicked off by a short opening address by Sir Paul Callaghan. He talked about how industry based on physical science is growing fast in New Zealand - so fast that in ten years' time it will be overtaking dairy and forestry. Wow! Physics really is important here. We are now entering what Sir Paul refers to as Stage Three of New Zealand Science. Stage One was back in Rutherford's time, and a bit later than that, when, if you were a Kiwi who wanted to do science, you left the country. Stage Two, spearheaded by people like Dan Walls, was when NZ said "let's have a go at doing this science thing ourselves". Stage Three (and this is probably a hopeless paraphrase of what Sir Paul said) is when we stop apologizing to the rest of the country for being scientists.

Looking forward to an equally varied day tomorrow.




I've been forwarded the following from one of our teaching development staff - it's a transcript of a very recent lecture by Prof Ian Chubb, Australia's Chief Scientist.

In the lecture, Prof Chubb comments on the lack of science understanding in the country as a whole and how this leads to the country being held back in the long-term. The climate change issue is a big example that he talks about - and is of course an extremely political beast in Australia. (It shouldn't be - it's an extremely scientific beast). However, rather than just moaning that this malaise is the fault of short-sighted vote-hungry governments or a failing education system, he throws down the challenge to scientists themselves - it is the responsibility of every scientist (which means me) to get out there and do something about the public's understanding of what science is and is not about, and what it does and does not do.

That's what initiatives like Cafe Scientifique and Sciblogs (for those not reading this post through that website already) are all about. It's no use us sitting in our offices and looking back to the good old days (if, indeed, they ever existed) when governments funded science and universities properly - that's not going to change anything. Scientists, says Prof Chubb, need to be advocates of science.

"...if science is not properly valued - part of the problem is that we have not been vigorous or vociferous enough in our protection of it or perhaps more importantly, in our communication of it. We need to be advocates."

I'm off to the NZ Institute of Physics Congress in Wellington next week where there will be communication of science between scientists and other scientists and also between scientists and non-scientists.  It should be an interesting conference and it will be nice to escape the university for a few days.




For some reason I have yet to discover, the flagpole at The University of Waikato on Monday was flying the flag of the Republic of Ireland, at half-mast.

However, observing this for the first time from the Faculty of Science and Engineering tearoom on Monday morning, it was hard to be sure just which flag it was. That was because, as the flag changed angle in the rather stiff breeze, the colours that I saw changed. The white stripe in the middle stayed white, but the green one by the flagpole looked decidedly blue at certain angles, and the orange one at the end took on a red colour. So no-one was quite sure whether it was the Irish flag (green-white-orange), French flag (blue-white-red), or Italian flag (green-white-red). I only knew for sure when I later walked past the building on which the pole sits on my way to a tutorial.

The flag was giving a nice demonstration of the complexity behind the observation of a colour. The colour the eye/brain sees depends on the relative intensities of the different wavelengths of visible light that arrive. What wavelengths arrive from the flag depend on both the illumination of the flag (what light source is illuminating it) and what its reflective characteristics are.

On a sunny day, the illumination would have varied depending on how the flag was oriented with regard to the sun. Was it facing the sun, or end-on to the sun? This would have changed as the wind changed, and the spectra of direct sunlight and of the blue sky are very different. Secondly, just what wavelengths were being reflected the most? The reflectivity of a surface is generally a function of wavelength, of the angle of incidence of the light, and of the angle of reflection. This complicated function can be described mathematically through what is called the Bidirectional Reflectance Distribution Function (BRDF).

Basically, the BRDF depends considerably on the material from which the surface is made. Some materials are very specular in their reflections (like a mirror), other materials are much more diffuse (incoming light is reflected over all directions). It's hard to say what the different portions of the flag would do without actually measuring them.
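The core of the effect - observed spectrum = illuminant spectrum times surface reflectance, wavelength by wavelength - can be sketched with three crude spectral bands. The numbers here are purely illustrative inventions, not measurements of the flag or of any real BRDF:

```python
# Perceived colour depends on illuminant * reflectance at each wavelength.
# Three crude bands: blue (~450 nm), green (~550 nm), red (~650 nm).

bands = ["blue", "green", "red"]

sunlight = [1.0, 1.0, 1.0]    # direct sun: roughly flat across the bands
blue_sky = [2.0, 1.0, 0.5]    # skylight: strongly weighted to the blue

# A greenish cloth that also reflects a fair amount of blue.
reflectance = [0.55, 0.60, 0.10]

def observed(illuminant):
    # Reflected spectrum: band-by-band product of source and surface.
    return [i * r for i, r in zip(illuminant, reflectance)]

def dominant(illuminant):
    spectrum = observed(illuminant)
    return bands[spectrum.index(max(spectrum))]

print(dominant(sunlight))   # the cloth looks green in direct sun
print(dominant(blue_sky))   # but decidedly blue under skylight
```

Same cloth, different illuminant, different dominant band - which is exactly what the flapping flag was doing as it turned between sun and sky.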

So, overall, it is no surprise to me that surfaces can look different colours in different orientations and lighting conditions. There's really no such thing as a 'colour' of an object  - it always depends on the conditions under which it is viewed.




Well, if I knew that I would be busy doing it. Perhaps you'd be better off asking Perlmutter, Schmidt and Riess, who have just won the 2011 prize for their discovery of the accelerating expansion of the universe. I love the story I've heard (whether it is true or not I don't know) that, on receiving the phone call from a woman with a Swedish accent, Brian Schmidt at the Australian National University assumed it was a prank by some of his graduate students.

It's worth a note that while I was an undergraduate, there was still a lot of debate in cosmology as to whether the universe would expand forever or start contracting again - the data available at the time suggested it was close to the critical point between those two options. Not any more.

There's been a bit of a theme (or two themes) to the Nobel Prizes for physics in the last few years (by which I mean since about 2000). You can split them into two roughly equal groups: (i) materials (e.g. graphene, giant magnetoresistance, superconductors...) and (ii) particle physics and astrophysics (expanding universe, symmetry breaking, microwave background...). Maybe that's three groups. If you want your prize, those two or three areas seem to be the places to be. No hope for me then.

The one that stands out as not fitting the themes is the 2005 prize to Glauber, Hall and Haensch for optics, including, in Hall and Haensch's case, the development of the optical 'frequency comb' for precision spectroscopy.

An interesting point is that the prizes in recent times are roughly evenly split between those in extremely practical things (often that have changed modern technology and made a lot of money, such as integrated circuits) and those in 'blue-sky' things, that have less obvious application. Physics covers a huge spectrum (if I can use that optical word) of research, and the practical stuff and blue-sky stuff both form a part of it. It's nice to see that both are being recognized in this way. Governments take note.







Probably every teacher's dream is to do nothing and still have the class engaged with learning. I experienced exactly that at the Scholarship Physics session I ran on Saturday.

I was doing an exercise with the students to help them present physics answers clearly. Every physics teacher probably knows what I mean here - often, in response to a written question, a student will produce a pile of equations sprawled across the paper, with equals signs where there shouldn't be equals signs (e.g. F = ma = a = F/m; that second equals sign should be an 'implies' sign, but equals has sneaked in instead, rendering it silly), few if any words of explanation, and an answer at the end that is nearly but not quite right. Then you feel, as a teacher, that it is your duty to decipher what has been written and work out where the student has gone wrong. And it ain't easy, I can tell you. My point is that an examiner has piles and piles of scripts to mark - just how much time is he or she going to spend trying to work out what your scribblings mean?

So, for a particular question, I went through it carefully on the whiteboard, identifying all the steps, then I got the students to write down their answer as if they were answering that question in an exam. Then, as the last step, I got the students to swap their paper with their neighbour and see if they could decipher what their neighbour had written. Was it clear? Did it use physics terms correctly? Did it miss things out? And so on. At this point the room erupted into conversation, as pairs of students started helping each other to write things so they could be understood. It was fantastic to watch and listen to. I didn't have to go round looking at what fifty students wrote (not that I could do that) - they were 'peer' assessing (see, for example, Eric Mazur's work), and, from the conversations I heard, doing it really well.

Of course, it doesn't mean I don't have a job to do - getting them to discuss the right questions and situations, and identifying where the class as a whole is falling down still needs to be done, but it does show just how effective a teacher can be by thinking about an activity carefully first, then just being quiet while the students do it.






On Saturday I ran a session for final-year school students who are soon to take the Scholarship Physics exam. There were just over fifty of them, and they were a lively group. I thought that overall the day (well, half-day really) went pretty well, and we managed to cover more than I have in previous years. I think that was down to a mix of the students being generally better prepared than in previous years (if you read examiners' comments on past exams, that is one thing repeated time and again - students aren't prepared for the exam), and me being more efficient in my teaching.

I know this latter one is true because I've been keeping evaluation questionnaires of what students think of the session. It's the fourth time I've run it in Hamilton, and over the last years I've continually had feedback that I was going through things too slowly. Each year I've sped things up and covered more, and still been told I'm going too slowly; finally, this year, I think I've hit it about right. Out of 50 completed questionnaires (that in itself is a good sign - nearly every student completed one), there were 5 who said I was too fast, 44 who said I went at the right speed, and 1 who said I was too slow. Seems like a happy balance to me.

The major (quite overwhelming) response is that though the students enjoyed the session and thought it was helpful, they wanted it to go for longer and do more stuff - e.g. a full day rather than just a half-day. It certainly could be done, but it's hard work when you're the only teacher for it (and have to give up a whole Saturday for it).

Anyway, my point here is that I wouldn't know any of these things if I didn't ask the students, and then take the time to go through their questionnaires carefully afterwards (with 50 questionnaires, and a lot of free comments, it is a couple of hours' work). But by doing so, I know how to improve things. And that's the next step - if you're not prepared to change anything for next time, there isn't a lot of point asking the students about it.

We are now in the last two weeks of B-semester at the University of Waikato, and I, like many other staff, am about to start giving out appraisal forms to my students. These are a great opportunity to learn what's happening in my courses and my teaching (though by no means the only opportunity), and it's always interesting to look at the results. I know from experience that I get more useful information back from the students if I brief them first on why they are being asked about this stuff, and they genuinely believe that I might change things as a result. In fact, on our paper outlines (the summary of a paper that students can look at before they enrol) we are now asked to outline how we've responded to previous student comments. That, I think, is a good thing - if we show students that we take what they think seriously, then they will be serious about what they comment.














