The University of Waikato - Te Whare Wānanga o Waikato
Faculty of Science and Engineering - Te Mātauranga Pūtaiao me te Pūkaha

December 2011 Archives

I've had a great reply to an older post about confidence in doing experimental work. Rather than leave it inconspicuously at the bottom of an aging post, I think it's worth a reply in a post of its own. Hope you don't mind the publicity, John. Here's the comment on the original post.

As a classic unconfident physics student, I would like to suggest that maybe you walk your students through HOW to check their work. My first year physics teacher would do the exact same thing to me (and yes, it can be infuriating.) One day, my teacher finally said, "well, lets take a look at your data." Going through each section of data, he asked if any of this data seemed abnormal. Then he told me to recheck ALL of my equations to make sure they were done correctly. I did this, and he told me to do it again. When I finished checking for a third time, he finally said, "If you've checked it three times and the math is right and the data doesn't seem skewed it's probably right. Now do this for all your work from now on." For some reason, just being given that process of how I should work (you know, being taught) gave me much more confidence in my own work AND I make fewer mistakes since my process is to recheck my work repeatedly. This may not work for everyone, as I imagine there are a lot of people just not willing to put in the work but some of your students might just be lost and in need of a little guidance.

As an expert in something, be it physics, sailing, gardening or whatever, it's very easy to pick up on clues that others aren't going to spot - and forget that you are doing this.  So asking the question of the teacher "How do you know that?" is a good way to go. There are some simple examples in experimental work I've talked about before. If a student has an answer that is around one thousand times too big or small, I'll send them away to check their units very carefully - have they used milliamps rather than amps, etc. It's experience that tells me that, and I do it in my own work too (yes, sometimes I'll make a unit mistake - in dealing with electrophysiology we usually quote membrane potentials in millivolts - and it's easy to let that 'milli' unit slip.)

Another example I remember is when I was looking at an experimental report written by a Master's student. He had a lovely straight line, suggesting he had done something correctly, but the gradient was very wrong. My intuition told me to look at the factor it was wrong by: it was sixteen times too small. And there's a clue. Sixteen is two to the power four. Factors of two often go astray through confusion between the radius and diameter of an object. Sure enough, in the equation he was trying to prove, there was a radius raised to the power four. Bingo - he'd measured the diameter, but written it down as the radius. I then went down to the lab and measured the piece of equipment myself, to verify that indeed, he'd measured the diameter, not the radius.

Arguably, a Master's student should be spotting a power of two hint, but I wouldn't expect a typical undergraduate to be able to do it.
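That sort of diagnostic is easy to mechanise. Here's a minimal sketch (the gradient values are invented for illustration) that checks whether a measured gradient differs from the expected one by a clean power of two - the classic radius-versus-diameter signature:

```python
import math

def suspect_power_of_two(measured, expected, tol=0.05):
    """If the discrepancy factor between expected and measured values is
    close to a whole power of two, return that power; otherwise None."""
    factor = expected / measured
    n = math.log2(factor)
    return round(n) if abs(n - round(n)) < tol else None

# Hypothetical gradients: the student's line is sixteen times too shallow.
print(suspect_power_of_two(measured=0.5, expected=8.0))  # 4
```

A result of 4 here would prompt exactly the question above: is there a radius to the fourth power somewhere, and was a diameter measured instead?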

It's hard to teach this kind of thing - experience is so important in thinking like an expert. If you're a student, make sure you make the most of the experience you get. One way, as the comment points out, is to discuss your work with the teacher - constructively - and see how they view it, and why.


With Christmas approaching (you can tell from the rapidly deteriorating weather) it's time to take some leave. Physics-stop will be a little quiet until January, when the LHC might have a bit more data and the swimming pool here might have warmed up a touch.

I wish you all a very happy Christmas and wonderful 2012.


Well, CERN was certainly twittering away last night, though, to be fair, I'm glad I didn't stay up for the press conference. Some things are worth trading your sleep for, such as an eclipse of the moon (occasionally) or some other astronomical event, an Ashes test, a Royal Wedding (just about), but, I'm afraid, not a seminar on particle physics - not even one about glimpses of the Higgs boson.

So, it transpires that the much-hyped seminar about the 'discovery' of the Higgs turned into something more like a status report on the search, though perhaps with some tantalising hints.

We have not yet found or disproved the Higgs - Rolf Heuer

At the Large Hadron Collider, there are two experiments, ATLAS and CMS, each with vast teams of people, looking for the Higgs. (It's worth emphasizing that this isn't the only thing that these teams are doing - the data from the two experiments is used for other things as well.)

ATLAS reports that, if it exists (oh, those words), the Higgs is now cornered in a patch of dense forest lying between 116 and 130 GeV in energy. In the coming months they'll be making further forays into this territory to try to flush it out. The CMS group agrees, though reckons the area of interest is between 115 and 127 GeV.

Who'll be the first to bag it? It appears that both groups have made reasonable progress here, though maybe ATLAS is slightly ahead. They're looking carefully at about 125 GeV, reporting a 2.4 sigma result, compared to CMS's 1.9 sigma result. 

What does 'sigma' mean here? It's a measure of how certain (in a statistical sense) you are about something. The experiments have collected a raft of data, and the question that has to be asked is: what is the probability of this dataset arising by chance alone, if there is no Higgs? What is the likelihood that the rustling you heard in the bush was just a bit of localized wind, rather than the moving beastie? A high sigma result tells us the probability that it was just a chance occurrence is very low; in other words, it gives you some confidence that you've actually found something. A low sigma result means there's a fair likelihood it was just chance, so nothing to get terribly excited about, though perhaps something to take a closer look at.

Now, 2-sigma is often used in science as a threshold for identifying some effect. If my PhD students were to identify something at a 2-sigma level, I'd be quite happy. I've seen lots of presentations at conferences where people proudly state that because group X and group Y have results that differ at a 2-sigma level, where group X is a control group and group Y has received some intervention, the intervention is 'proved' to be a success. (One really should be more careful making such blanket statements; there is more to it than that.) For important things, such as medical interventions of some form, a 3-sigma level is often used - more confidence is required that the result isn't due to chance. For particle physics, the standard is a 5-sigma result - which means that it's very unlikely indeed that the results are just a statistical fluctuation.

It's worth remembering that these thresholds are entirely arbitrary. There is no hard scientific reason why particle physics has to use a 5-sigma threshold. It just has, by convention, chosen to do so. As science goes, this is pretty high, denoting a very cautious approach to claiming any discoveries. To get there, ATLAS and CMS are going to need to collect more data.
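For the record, those sigma levels map onto probabilities through the tail of the Gaussian distribution. A quick sketch in Python, using the one-sided tail (the convention usually quoted in particle physics):

```python
import math

def sigma_to_p(n_sigma):
    """One-sided tail probability of a Gaussian beyond n_sigma: the chance
    of a fluctuation at least this large if there is no real signal."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (2, 3, 5):
    print(f"{n}-sigma: p = {sigma_to_p(n):.2e}")
```

So 2-sigma corresponds to roughly a 1-in-44 chance of a fluke, 3-sigma to about 1-in-740, and 5-sigma to about 1-in-3.5 million - which is why particle physicists can afford to sound so cautious.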

Assuming, of course, that this Higgs boson beastie actually exists.




There are some fantastic examples of momentum conservation in everyday life. This week I was attacking the leftovers from a tree removal we had a couple of weeks ago - turning the chainsaw-cut rings the tree surgeons left us into something that could be shoved into the fire come winter time (assuming no bees are in residence).

It's very satisfying to see that log-splitter fall under gravity, hit the target smack in the centre and move smoothly through the wood, sending two roughly equal pieces flying in opposite directions at similar speeds. This is a neat collision to analyze from a momentum viewpoint. Momentum is the product of an object's mass and its velocity - and of course it has a direction associated with it. Before the collision, we have the axe head travelling downwards. After the collision, we have the axe head stationary (in the chopping block) but two pieces of wood flying horizontally towards opposite ends of the garden.

Let's analyze this by looking at the momentum in the vertical and horizontal directions, before and after the collision. Let's start with the horizontal. Before the collision, there is no horizontal momentum (it's all vertical), and after the collision there is no net horizontal momentum either - the vector sum of the momenta of the two blocks is zero, since they move in opposite directions. In fact, if the axe falls off centre and I get two unequal pieces, it's clear that the smaller piece (lower mass) squirts sideways much faster (higher velocity) than the larger piece - again conserving momentum.

What about vertical momentum? The head of the log-splitter clearly has momentum before the collision but doesn't afterwards. What's happened here? Momentum isn't conserved in the vertical direction because there has been a vertical force acting on the log / axe system. This force is exerted on the log / axe system by the chopping block. Hit it hard with an axe, and it exerts a force back again; that's Newton's third law. So, when a physicist talks about momentum being conserved, what he or she means is that it is conserved in the absence of external forces. More mathematically, it can be said that momentum will be conserved when there is translational symmetry. In this system, there is horizontal translational symmetry - basically there's nothing stopping the two halves of the block as they move sideways, but there isn't vertical symmetry - there's a chopping block sitting on top of the ground.
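The off-centre case is a one-line calculation: with no net horizontal momentum before the strike, the two pieces' horizontal momenta must cancel afterwards. A sketch, with made-up masses and speeds:

```python
def sideways_speed(m_small, m_large, v_large):
    """Horizontal momentum is conserved (no horizontal external force),
    so m_small * v_small + m_large * v_large = 0."""
    return -m_large * v_large / m_small

# Hypothetical numbers: a 2.0 kg piece flies off at 1.5 m/s one way;
# the 0.5 kg piece must go four times as fast the other way.
print(sideways_speed(m_small=0.5, m_large=2.0, v_large=-1.5))  # 6.0
```

The lighter the off-cut, the faster it squirts - exactly what you see at the chopping block.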

This regard for the finer points of conservation laws in physics is obviously what motivated the cat, after I had turned my back for thirty seconds, to jump on top of the chopping block to survey the situation. He's obviously not read the proverb about curiosity and his species.


I've been talking today with a PhD student about some measurements he's made in the lab. In physics, like all sciences, when we measure something we don't just make one measurement - we measure it several times. That way you get a more accurate result. Now, with most physical measurements, we expect there to be a single answer (e.g. the mass of an electron is what it is; we don't expect it to be different tomorrow from what it is today, nor do we expect two different electrons to have different masses.) Of course, every time we measure the mass, we'll get a slightly different result, because of the uncertainties in our measurement, but we expect there to be a 'true' value.

My student's been measuring a physical property of a biological system. In this case, there won't be a true answer, since every specimen will be different. No two plants are identical, no two animals are identical. He's ended up with data that goes something like this. Don't worry what the measurement exactly is, or what the units are, it's not important. (It's very important to him, but not to this blog entry - that's what I mean.)

0.308  0.327  0.291  0.356  0.328  6.23  0.310  0.451  0.299  0.320

Now, casting your eye over this, there seems to be an obvious problem with one of the measurements. 6.23? I don't think so. It's nothing like the others. Something has most likely gone a bit awry when this measurement was made, and it probably doesn't belong in this set. Let's throw it away.

But then we have another problem. What about the 0.451 measurement? That's rather higher than the rest, but it's not outrageously higher (or is it?). Could it be a dodgy measurement? Or is it just that this particular specimen exhibited our property particularly strongly? Can we throw it away? Hard to tell. And if we did, what about the 0.356 measurement? That's a touch higher than typical, too.

Where do we draw the line between saying that one measurement is OK and another isn't?

I don't think there's a clear-cut response to this one. But it serves to point out to the student that when you are collecting data, just think before you write. If we see something that is obviously way out (the 6.23), then query it. What's gone wrong here? Can we repeat the measurement on the same specimen and get a better answer? In fact, in this experiment, it's difficult to do a second measurement on the same specimen. So we need to think carefully about how we know when we've got a good measurement - i.e. what would give us confidence that it's been done correctly?
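One common (though still somewhat arbitrary) way to draw that line is a robust outlier test based on the median absolute deviation, which - unlike the mean and standard deviation - isn't dragged around by the outlier itself. A sketch using the data above (the 3.5 cut-off is just a widely used convention, not physics):

```python
from statistics import median

def flag_outliers(data, cutoff=3.5):
    """Flag points whose modified z-score, based on the median absolute
    deviation (MAD), exceeds the cutoff."""
    med = median(data)
    mad = median(abs(x - med) for x in data)
    return [x for x in data if 0.6745 * abs(x - med) / mad > cutoff]

data = [0.308, 0.327, 0.291, 0.356, 0.328, 6.23, 0.310, 0.451, 0.299, 0.320]
print(flag_outliers(data))  # [6.23, 0.451]
```

Interestingly, this test flags the 0.451 as well as the 6.23 - which only underlines the point: the threshold is a judgment call, and a flagged point deserves a query, not automatic deletion.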

That's one of the subtleties of doing real science, on real systems - how you do a measurement, how you ensure it's robust and means what you think it means, is really important. That often gets glossed over in the way we teach the subject. So we have some thinking to do here.


I'm sure anyone who has ever used a top-loading washing machine will have seen this phenomenon occasionally: you lift the lid after the machine has been spinning and you find one item of clothing (such as your favourite, most expensive shirt) stretched across the diameter. It happened to me last night. What's happened is that, right at the beginning of the cycle, the shirt has been lying more or less across the diameter of the machine (OK, there's a spindle in the middle, so it can't be exactly across the diameter) and then, as the machine has begun to spin, instead of being squashed against the drum at a single point it's been squashed simultaneously at two opposite points. It's probably not doing the clothing much good when this happens.

It's interesting though to ask the question - if we put a piece of clothing at the centre (and, as I've already said, you can't, because there's the central spindle in the way), what force would the clothing experience? We'll use a rotating frame of reference - what you'd see and measure if you were in the machine with the clothing - so centrifugal force is the dominating factor here. The centrifugal force is directly proportional to the radius times the rotation rate squared, so the larger the radius, the greater the force. This means a bit of clothing right at the centre experiences no centrifugal force at all - so it will stay where it is. Of course, if it is slightly off centre, it will experience a force towards the outside of the drum, and off it moves. It's a bit like balancing a ball right on top of a hill - in theory you can balance it, but in practice any small displacement away from the top and it will roll down. It's an unstable equilibrium point.
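To put numbers on that radius dependence, here's a sketch - the drum radius and spin speed are invented, not from any particular machine:

```python
import math

def centrifugal_accel(radius_m, rpm):
    """Centrifugal acceleration omega^2 * r in the frame rotating with the drum."""
    omega = rpm * 2.0 * math.pi / 60.0   # convert rev/min to rad/s
    return omega**2 * radius_m

# At the spindle (r = 0) the acceleration is exactly zero; at an assumed
# 0.25 m drum radius and 800 rpm it's enormous - hence the flattened clothes.
for r in (0.0, 0.05, 0.25):
    print(f"r = {r} m: a = {centrifugal_accel(r, 800):.0f} m/s^2")
```

The linear growth with radius is exactly why the centre is an equilibrium point - and why it's an unstable one.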

Now, with the shirt that's pulled both ways, it's a bit like having two balls joined together by string at the top of the hill. One ball rolls one way, one the other. We end up with the string stretched out, and the two under tension. So although the centrifugal force at the centre of the machine is low, the shirt is still under tension at that point (and so could be damaged), as it gets pulled two ways at once.

We can contrast this to what happens with gravity as we head towards the centre of the earth, Jules Verne style. This is a fun question to ask students to see if they've grasped the idea of Gauss' Law. (If you don't know what that is, don't worry). What is the strength of the field at the centre of the earth? By symmetry, it's going to be zero. So what would you feel if you were there? (Apart from very warm?). Is it like the washing machine - would gravity pull you all ways at once leaving each arm and leg stretched outwards beyond your control?

In this case, no, because the physical system is different. The gravitational force will always be towards the centre of the earth (not away from the centre as in the washing machine), and, as you go downwards through the earth, it gets less. If the centre of your body were at the exact centre of the earth, your limbs would feel almost no force at all - it would be like weightlessness. Throw your shirt off here, and it wouldn't get pulled in all directions; in fact, the opposite - it would (very slowly) get squashed towards the centre, as the bits further out would experience a tiny bit of inwards gravity. Different system, different result.
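A sketch of that contrast, assuming a uniform-density earth (it isn't really, but it's the textbook Gauss' Law case): inside the sphere, only the mass below your radius pulls on you, so the field grows linearly from zero at the centre.

```python
def g_inside(r, R=6.371e6, g_surface=9.81):
    """Field inside a uniform-density sphere: only the mass within radius r
    contributes (Gauss' Law), giving g proportional to r."""
    return g_surface * r / R

print(g_inside(0.0))        # 0.0  - weightless at the centre
print(g_inside(6.371e6))    # 9.81 - the full value back at the surface
```

Compare the washing machine: there the force grows with radius and points outwards (unstable equilibrium at the centre); here it grows with radius but points inwards, so the centre is a stable one.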

Unfortunately, unlike the washing machine example, doing this experiment probably is confined to the realm of fiction.





I've been talking this afternoon with some colleagues from the Faculty of Education here. While FoE is probably best known for training school teachers, that's not all they do. In my particular case, I've been interested in how you go about doing research in education - with specific regard here to physics and engineering.

While many of the principles of doing research are the same in Education as they are in science (e.g. maximize your sample set to get better statistics, keep careful track of what you are doing, research things that people want to know about), there are some ways of doing things that don't often turn up in a run-of-the-mill physics experiment - such as use of qualitative data and ethical considerations. It's good to know about how to do a piece of research properly before actually doing it.

In my case I'm interested in doing a small piece of research on the way physics students see mathematics - I believe I've commented before on how many students seem to think that physics equals stuffing numbers into equations (possibly because of bad teaching), and therefore they think they are good or bad at physics depending on how good they believe they are at maths. To start with, I'll be interviewing some practising physicists about how they see the interplay between maths and physics working out (any volunteers here?) - then, once the students are back, I'll be looking at their opinions on this one. But it's been useful to get advice from the 'experts' in education research, to make sure the research has the best chance of being useful.

What do I mean about being useful? Probably two things here. First, does it tell me (and others) anything about how to improve the teaching that we offer? In other words, do we give our students the best possible chance of learning physics? Can we improve that learning? Secondly, can it be published? If the answer to the first is yes, then probably the answer to the second will be yes as well. Any reader who works in a tertiary education context will know that publishing is the most important thing they can do to keep their promotion chances high. (One can argue whether that should be the case, but at the moment it is the case.) So a well thought-out project in this line should achieve two things at once. How's that for efficiency?

Actually, it's surprised me a bit discovering just how much research in Engineering Education has gone on here in the last few years. There have been some neat little projects - particularly in electronics - for example, looking at the way students learn (or don't) really difficult concepts, or the way that putting fourth years and first years in the lab together influences the perspective of the first years. (My memory of the last one is that it doesn't - or at least, not in this case - but it was interesting to see it tried out.) And it's great - especially for the students - when the outcomes of these projects get fed into the way we teach.



