The shaking here in Hamilton is hardly on the scale of Seddon and Wellington, but it does mean my students aren't going to get anything meaningful out of their measurement of the gravitational constant this afternoon. The Cavendish experiment uses a sensitive torsional pendulum, whose motion is currently dominated more by ground movement than by gravitational attraction. I can't feel anything, but we can clearly see the effect on the equipment. Hope everyone is safe further south.
August 2013 Archives
I've commented before that there are a lot of skills our science graduates need to have that don't get explicitly taught at university. That's because they don't fit neatly into compartmentalized degree courses whose structure is dictated by technical knowledge. So things such as how to give a half-decent presentation, how to keep accurate and useful records of your experiments, or how to put together a paragraph of coherent scientific writing often don't get covered at all.
Another one of the skills to add to the list is how to keep control of your computer programmes. I'm not talking about learning a programming language or two, which probably most physics students will do in their studies. Rather, I mean how to organize computer software at the top level. I was talking to one of our PhD students this morning, and she was asking for help in keeping her computer programmes under control. Specifically, she has a set of programmes that she has written, and they were working yesterday, but today they're not. What she has done is change a few things in the code, to try something else out, and then tried to change them back to how they were. However, she's not changed them back to exactly what she had previously, and now she can't remember what they were supposed to be.
That's probably a problem that most early programmers have experienced. She's been taught how to write the language, but not how to manage the programmes. She's swimming in a sea of different versions of the same programme. She has problems with version control.
When working on a computer programme with a team of people, as I've done in my previous job, version control has to be done properly. Otherwise one ends up with a mess of different versions, and no-one knows exactly whose version does what and to what extent it has been verified and validated. Instead, there's a code custodian, who looks after the latest, approved version. Other people can work on copies of the code, to try out modifications and improvements, but when their modifications are tested and approved it's the custodian who formally amends the approved code, and updates the version number. And a written record is kept of exactly what the changes are (even 'trivial' ones) and why they've been made. The result: only a single programme is ever 'the current' programme, it should 'work', and there will be a formal record of all the tests carried out on it to give confidence that it really does work.
However, when you're writing a programme for your own use and probably your use only (as many PhD students will) it's tempting to think that you can keep track of where you are with it in your head. You don't need formal version numbers, and formal verification tests, and so forth. Unfortunately, most of these students will eventually discover that it is, in fact, a really good idea to do things properly. While we think we'll remember why we changed a few lines here and there, the reality is that in three months' time, we don't. I have trouble just remembering my passwords after coming back from holiday - remembering minor code changes is nigh on impossible. Looking after your codes in a formal way saves you a lot of that stress.
Unfortunately, the chances are that the students are left to find out this for themselves.
I'll be on holiday for a couple of weeks enjoying the last of a wonderful northern hemisphere summer, and forgetting my passwords and what I did to my computer codes this week. Back in time for a Waikato spring.
Well, today's big story is just perfect for PhysicsStop. Cricket meets physics. What more could I ask for?
In case you've just arrived from Alpha Centauri, there have been accusations flying that both English and Australian batsmen have been trying to defeat the 'Hot Spot' detector by putting silicone tape on their bats. The allegations have been vigorously denied from both sides.
Hot Spot is used as part of a decision review system in professional cricket. The idea is that it will provide evidence as to whether the ball has hit the bat or not when assessing possible dismissals. It uses thermal imaging (infra-red) technology to look for the heat left behind when the ball makes contact with a surface. As the cricket ball just skims the edge of the bat, friction between the two will generate a small amount of heat at the point of contact. The thermal imagers can detect this heat and therefore prove whether the ball hit the bat or not. At least, that is the intention.
So how might silicone tape (a fairly innocuous medical product) give the batsman an advantage? The allegation being made is that a batsman would put tape on the outside edge of the bat, which reduces or eliminates the 'hot spot' left by a ball grazing the edge. Presumably they'd leave the tape off the inside edge, so as to make sure that a fine edge on to their pads gets detected to counter any appeal for leg-before-wicket. (I admit that anyone who doesn't know cricket will not have a clue what I'm talking about at this point, but hopefully you can still follow the physics part.)
Presumably the thinking is that silicone tape reduces the frictional forces between bat and ball, and therefore reduces the heat generated during a collision between the two. Would it work? One would need to try it out to be sure. But a quick glance at some values for coefficients of friction (e.g. here) will show that there is a vast range of values depending on the two materials. Some combinations of surfaces have much more potential for friction (and therefore heating) than others. So it's plausible that a low-friction tape might have this effect. (Though one would think there might be more effective methods - e.g. spraying the edge of the bat with a lubricant spray. The thinking might be that applying tape to a bat is, bizarre as it might sound, actually legal in cricket.)
There's been some discussion on the blogs that it has to do with thermal conductivity, though I'm not convinced by this argument. To defeat Hot Spot in this manner, one would need a material that gets rid of the heat very quickly by spreading it to other areas, so that a noticeable hot spot doesn't persist. The problem is that the thermal diffusivities of everyday materials are too low for this to happen. Thermal diffusivity controls how quickly heat spreads out by conduction. Even very highly diffusive materials, with thermal diffusivities of around 100 mm²/s or so, would have a spot of heat spread out by only 10 mm in a second (the square root of the product of thermal diffusivity and time tells you roughly how far heat will spread in that time). The interval between Hot Spot frames is much shorter than a second, so there's no time for the heat to diffuse away.
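To put a number on that, here's a minimal Python sketch of the diffusion-length estimate. The 100 mm²/s diffusivity is the 'very highly diffusive' figure from above; the 10 ms frame interval is an assumption for illustration, not a Hot Spot specification:

```python
import math

def diffusion_length_mm(diffusivity_mm2_per_s, time_s):
    """Rough distance heat spreads by conduction: sqrt(D * t)."""
    return math.sqrt(diffusivity_mm2_per_s * time_s)

D = 100.0  # mm^2/s - near the top of the range for everyday materials

print(diffusion_length_mm(D, 1.0))   # after a full second: 10 mm
print(diffusion_length_mm(D, 0.01))  # after an assumed 10 ms frame interval: 1 mm
```

Even with a generous diffusivity, the heat only spreads a millimetre or so between frames - nowhere near enough to smear the spot away.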
But I can think of another mechanism by which the tape might fool Hot Spot. The amount of infra-red light emitted by a surface doesn't just depend on its temperature. Some surfaces are better emitters than others. A perfect emitter is called a 'black-body' in physics. However, be warned - an object that emits infra-red really well doesn't necessarily look black to the eye - and conversely, don't think that because something is white it doesn't emit infra-red well. Some materials have properties that are very dependent on wavelength. It is possible (I don't know) that silicone tape has a lower emissivity than wood, and therefore the effect, as viewed by an infra-red camera, would be reduced. Possibly it's a combination of reduced friction and reduced emissivity.
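To see how much emissivity could matter, here's a sketch using the Stefan-Boltzmann law for radiated power. The emissivity values below are pure guesses for illustration - I don't know the actual figures for willow or silicone tape:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(emissivity, temperature_k):
    """Power radiated per unit area: emissivity * sigma * T^4."""
    return emissivity * SIGMA * temperature_k**4

T = 300.0  # roughly ambient temperature, in kelvin
# Guessed emissivities: wood is typically high, ~0.9; suppose the tape
# were lower, say 0.5. These numbers are assumptions, not measurements.
p_wood = radiated_power(0.9, T)
p_tape = radiated_power(0.5, T)
print(p_wood / p_tape)  # the wood would emit ~1.8x as much infra-red
```

Since the camera sees emitted power rather than temperature directly, a lower-emissivity surface at the same temperature simply looks dimmer - which is all that matters for hiding a spot.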
Then again, possibly this is just a media propaganda stunt to try to get some interest back into the last two Ashes tests. (Again, non-cricketers won't have a clue about that sentence).
All this would make a great student project. I'm sure there'd be physics graduates queuing up to do a PhD in defeating cricket technology.
Yesterday afternoon I was engaged in a spot of DIY - putting up some shelves. Even for me, as someone who takes to DIY like a duck to mountaineering, it's a fairly simple task, and I'm pleased to say that I got there without the 'do' in DIY turning into 'destroy'. With the help of my trusty stud-finder (Karen - who has a knack of locating those invisible studs behind plasterboard walls just by tapping), I managed to locate two studs by drilling just three holes. The rest of the job took only four tools - a drill, a pencil, a screwdriver and the all-important spirit level.
I've always been fascinated by just how simple a tool the spirit level is. It does a fantastic job of getting things level (level enough for general domestic purposes, anyway), just by using a bubble of air in a liquid. The physical principle by which it works is hardly taxing - the bubble (the absence of liquid) rises to the highest point in its tube as the liquid sinks as low as possible to minimize its potential energy. A similarly simple method - the plumb line - gets things vertical, though a second bubble tube on the spirit level, turned through 90 degrees, can do the same task.
In fact, it is hard to imagine needing a complicated machine to find where 'vertical' is. If one assumes that 'up' is the opposite direction to the force of gravity, one simply has to measure the direction of the force of gravity, and hanging a weight on a string is the most obvious way to do it. Sure, one can get technical and enclose the thing in a pipe so that wind doesn't get to it, and so forth, but the basic weight-on-a-string is simple and effective.
There are some hiccups to think about, however. One needs to be sure what one actually means by 'vertical' and 'horizontal'. The force of gravity isn't precisely towards the centre of the earth at all places on the earth's surface. A weight on a string will be affected by the presence of nearby mountains, or large-scale variations in the geology underneath the surface. A quick estimate based on Newton's law of gravitation and the size of Mount Te Aroha, for example, suggests that houses in Te Aroha town might have their vertical distorted by a few thousandths of a degree. Not a great deal, but enough to be detectable with half-decent equipment.
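Here's roughly how that quick estimate goes, as a Python sketch. The mountain is crudely modelled as a sphere of rock, and the size and distance figures are assumptions chosen only to get the order of magnitude:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
g = 9.81       # local gravitational acceleration, m/s^2

# Crude, assumed model of the mountain: a sphere of typical crustal rock
rock_density = 2700.0  # kg/m^3
radius = 1500.0        # m - very rough effective size of the bulk
distance = 3000.0      # m - assumed distance from town to the mountain's bulk

mass = (4.0 / 3.0) * math.pi * radius**3 * rock_density
sideways_pull = G * mass / distance**2  # horizontal acceleration towards the mountain
deflection_deg = math.degrees(sideways_pull / g)
print(deflection_deg)  # of order a few thousandths of a degree
```

The answer is dominated by the guessed geometry, so treat it as an order-of-magnitude figure only - but it sits comfortably in the 'few thousandths of a degree' range.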
But is the vertical really out? If the definition of a vertical is "the direction of the acceleration due to gravity" then, no, it isn't. If one is putting up shelves in Te Aroha and wants them horizontal (so that a ball placed on the shelf stays on the shelf) one wants them at 90 degrees to the local force of gravity. If that means a few thousandths of a degree different from what you'd get in Hamilton, then so be it. It just depends on your definition of 'up'.
[And, of course, it is more than a few thousandths of a degree different from Hamilton anyway - being 44 km away on a sphere of 6400 km radius, that's about 0.4 of a degree due to location.]
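The bracketed figure is just arc length over radius, converted from radians to degrees - a one-liner to check, using the 44 km and 6400 km values above:

```python
import math

# Angle subtended at the earth's centre: arc length / radius, in radians
angle_deg = math.degrees(44.0 / 6400.0)
print(round(angle_deg, 2))  # 0.39 - about 0.4 of a degree
```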
I'm sure many people have had a conversation with a school-teacher friend that goes along these lines:
You: "How are you today?"
Teacher: "Uh. I'm in a bad mood. I've just had class 8C. Why do they have to be so difficult?"
You: "Is that just normal for year eights?"
Teacher: "No. Last year's lot were really good. Bad class this year for some reason."
Pretty well the same thing happens in the tea room here at university. We talk about 'bad' classes and 'good' classes, 'quiet' classes and 'noisy' classes. I've often wondered whether it's because occasionally there are some really outgoing students in a particular class who lead it one way or another. But I've thought that, once you've got a decent number of students in a class, statistics will take over and one year's class will be much like the previous year's.
Well, no. I've just been going through comments on the student appraisal forms for one of my classes. At the end of each paper, students get a formal opportunity to submit feedback on how the paper and teaching went. The comments are often fascinating. In this case, I had a lot of comments concerning the test I set. In the last couple of years, I've been trying out tests that you can talk in. Last year, the response was very positive. The majority of students loved it, though there were a few who really didn't like it. I'd say it was about three-quarters (very) positive, to about a quarter negative. This year however, it was the other way around. About three-quarters hated it, and about a quarter loved it. Why the difference? I think what I did was much the same. So was it the class dynamics that caused the issue?
These students have been together for a while (about two years), so there's been time for the relationships between students to really develop. And I'm wondering whether different classes develop in different ways, and form different whole-class expectations of the teaching they get. Maybe there really are 'easy' and 'difficult' classes out there.
I don't know. Someone's probably studied the effect. The question I now have to grapple with is the following: Do I keep the 'test-you-can-talk-in', or abandon it and go back to the traditional style for next time?