The University of Waikato - Te Whare Wānanga o Waikato
Faculty of Science and Engineering - Te Mātauranga Pūtaiao me te Pūkaha

April 2011 Archives

I'm beginning to wonder how I've ever found time to do the nearly 400 entries that this blog has accumulated over the last two and a half years.   It's Friday already and I've only done one entry this week, on top of not much last week either.

One of the highlights of electricity is the two-way switching circuit.  This is commonly used on staircases. The idea is that two switches (one at the bottom of the stairs and one at the top) can control the same light (e.g. at the top of the stairs). So you can turn it on by flicking the switch at the top of the stairs, then go down the stairs, and turn it off again with the switch at the bottom. This is often coupled into pairs, so there would be two switches at the top, one for the upstairs light, one for the downstairs light, and two at the bottom.

A simple 'truth table' for the switches would be: top switch 'up', bottom switch 'up', light on; top switch 'up', bottom switch 'down', light off; top switch 'down', bottom switch 'up', light off; top switch 'down', bottom switch 'down', light on.  So changing either switch toggles the light from on to off or from off to on.
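
This toggling behaviour is just an exclusive-OR in disguise: with the wiring described above, the light is on exactly when the two switches are in the same position (a logical XNOR). A minimal sketch of the truth table in Python:

```python
# Two-way switching: with the circuit described above, the light is on
# exactly when both switches are in the same position (logical XNOR),
# so flicking either switch toggles the light.

def light_on(top: str, bottom: str) -> bool:
    """Return True if the light is on for the given switch positions."""
    return top == bottom  # XNOR: on when the switch positions match

for top in ("up", "down"):
    for bottom in ("up", "down"):
        state = "on" if light_on(top, bottom) else "off"
        print(f"top {top:4s} | bottom {bottom:4s} | light {state}")
```

Swapping the two traveller wires on one switch would turn this into an XOR (light on when the switches differ); either way, each flick toggles the light.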

We have such a circuit in a bedroom. Two switches, one by the bed and one by the door, control the same light (for people who don't like getting out of bed to turn a light on and off). Except that this light has never worked. It didn't really bother me, because there was another light in the room (on a single switch) which worked just fine, though I have wondered about having an electrician look at it.

Anyway, I discovered a couple of weeks ago that actually it does work, but in a strange way. The light comes on when both switches are 'up', but only when both switches are 'up'. Any other combination and it's off.  Maybe a wiring fault, I'm not sure. I guess what I did when we moved into the house was that I flicked the switch by the door up and down (and got nothing), flicked the one by the bed up and down (and got nothing), changed the bulb, repeated (and still got nothing) and gave up. I must have missed the one combination out of the four that worked.

So, now I know how to turn the light on, but it is a bit of a mystery as to what's happening. If I feel inclined, I might have a look in the switches themselves, to see if that gives me some clues.

| | Comments (1) | TrackBacks (0)

Thinking back to last week's MasterChef (the chocolate tower of terror - re-live it here), there were a couple of nice examples of cooking being a branch of physics. I've heard it said that cookery is all about managing the flow of heat into (or, in this case, out of) an object, which, of course, requires some physics.

The first example was the tempering of the chocolate. This is a problem in phase transitions and crystallization. The idea is to get the chocolate to crystallize into the most useful form, and to do that you need to take it through the right sequence of temperatures. First, hot enough to melt all six of chocolate's crystalline phases, then lower, so that phase IV and phase V crystals grow. The latter are the ones with the best properties for building things out of. Then the temperature is raised again to remove the phase IV crystals, allowing the phase V crystals to grow.  Wikipedia does a nice job of explaining the various phases of chocolate. I hadn't a clue it was so complicated getting chocolate right.  I just eat it without a thought for the physics involved.

The second example was the cracking of the chocolate that had been cooled too quickly. This is quite likely a thermal expansion issue, similar to the way a glass can crack if boiling water is poured in. If the outside of the chocolate disc is much colder than the inside, there are stresses set up in the chocolate as the outside tries to contract, and suddenly our nicely tempered chocolate snaps. Oops.
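
For a rough feel for the numbers: the thermal stress in a constrained surface layer scales as Young's modulus times the expansion coefficient times the temperature difference. The material constants below are assumed, illustrative values only - I don't have measured figures for chocolate:

```python
# Order-of-magnitude thermal stress estimate: sigma ~ E * alpha * dT.
# The constants here are ASSUMED illustrative values, not measured
# properties of chocolate.

E = 0.5e9      # Young's modulus (Pa) - assumed
alpha = 1e-4   # linear thermal expansion coefficient (1/K) - assumed
dT = 15.0      # temperature difference, surface to core (K)

sigma = E * alpha * dT  # thermal stress (Pa)
print(f"thermal stress ~ {sigma / 1e6:.2f} MPa")
```

With numbers of this order the stress comes out at a fraction of a megapascal - plausibly enough to crack a brittle solid.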

The one thing I didn't quite grasp about the episode was what one did with the chocolate tower of terror once all the cupcakes had been eaten.  Do you expect your guests to eat the tower (in which case, how do you serve it?) or do you put it away in a cupboard for a later occasion?

| | Comments (0) | TrackBacks (0)

I've been away in sunny and calm Wellington for a few days (who says the weather is bad there?), which explains the lack of blog entries. Back at work and blogging next week, but in the meantime have a good Easter.

| | Comments (0) | TrackBacks (0)

I'm stuck at home at the moment with a horrible cold (yuk) and a cat with a burst abscess (double yuk). 

In between blowing my nose and mopping up bits of goo emanating from poor kittykat's wound, I've been reading a book I bought last week very cheaply from our university bookshop. It's having a monster sale - which I'm told has nothing to do with the fact that it's in administration and everything to do with it relocating soon to our new very flash library building.

'Assessment for Learning' by Paul Black and others describes a study done in UK secondary schools on Maths and Science teaching in which teachers were encouraged to use formative assessment tools as part of their teaching. Although a secondary-school study, I'm sure a lot of it will carry over to university level. I'm not halfway through it yet, but I'm already fascinated by results (Butler, 1988) that suggest that 'marking' a student's work by giving them both comments and a summative mark (e.g. 7/10) is no better than giving them just the mark, or even no response at all, whereas giving them just the comments, with no mark at all, leads to more learning.  They talk a bit about experiences where teachers thought that this was just not possible in the mark-based regime they worked in, but tried it anyway and found the howls of protest they expected from students, parents, colleagues and headteachers didn't materialize. Worth a shot in one of my papers, I'm thinking.

Another major point concerns students being able to respond to feedback. If you provide them with an assessment of where they are and where you want them to be, that is good, but there needs to be some mechanism available by which they can close that gap.  That might involve giving students opportunities to resubmit work or having a comprehensive discussion on an assessment within class. Peer learning can be really strong.

The authors summarize the steps of formative assessment as:

1. Data on the actual level of some measurable attribute

2. Data on the desirable level of that attribute

3. A mechanism for comparing the two levels and assessing the gap between them

4. A mechanism by which the information can be used to alter the gap.

My hunch is that not too many of us are very good on that last point. Certainly I'm not.  Anyway, I'm looking forward to reading the rest of the book - well worth the bargain basement price paid for it.

Black, P., Harrison, C., Lee, C., Marshall, B. & Wiliam, D. (2003) Assessment for Learning. Maidenhead, U.K.: Open University Press.

Butler, R. (1988)  Enhancing and undermining intrinsic motivation: the effects of task-involving and ego-involving evaluation on interest and performance. British Journal of Educational Psychology, 58:1-14.

| | Comments (0) | TrackBacks (0)

I guess a lot of you will have seen this video of the crash at JFK airport this week.

It's almost a perfect example of what I've recently covered in my dynamics class, concerning collisions that result in things spinning, because the forces don't act through the centre of mass. So, in this example, the impulse due to the collision of the wing of the A380 on the tail of the smaller plane sends the smaller plane spinning. (A full analysis of forces is complicated here due to friction between the wheels of the small plane and the ground).

I wouldn't have liked to be on the spinning plane.  I'm not sure what model it is, but I'm guessing it's about 25 or 30 metres long. That would mean people at the ends of the cabin would be around 10 m or so from the centre of mass (the axis of rotation).  The film shows it turns about 60 degrees in around a second, or approximately 1 radian per second. So a quick application of the formula for centripetal acceleration (angular velocity squared times the radius) gives us an acceleration, towards the axis of rotation, of around 10 metres per second squared.

That's the same as the acceleration due to gravity, or '1 g'. A roller-coaster ride. But what would be bigger would be the initial acceleration at the point of impact. This one's a touch harder to estimate, but the plane probably gets up to its 1 radian per second rotation rate in about a quarter of a second, giving an initial azimuthal acceleration (azimuthal being in the direction around the axis of rotation) of more like 40 metres per second squared, or 4 g.
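
The arithmetic above can be laid out explicitly; all the input numbers are the rough estimates from the video:

```python
import math

# Rough estimates from the video footage, as discussed above
theta = math.radians(60)  # rotation observed (rad)
t_turn = 1.0              # time taken for that rotation (s)
r = 10.0                  # passenger's distance from the axis of rotation (m)
t_spinup = 0.25           # estimated time to reach full rotation rate (s)

omega = theta / t_turn         # angular velocity, ~1 rad/s
a_centripetal = omega**2 * r   # ~10 m/s^2, about 1 g
alpha = omega / t_spinup       # angular acceleration during the impact
a_azimuthal = alpha * r        # ~40 m/s^2, about 4 g

print(f"angular velocity         ~ {omega:.2f} rad/s")
print(f"centripetal acceleration ~ {a_centripetal:.0f} m/s^2")
print(f"azimuthal acceleration   ~ {a_azimuthal:.0f} m/s^2")
```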

If you were standing up at the time, you wouldn't be afterwards.

| | Comments (0) | TrackBacks (0)

I've been marking a couple of student assignments today. I won't go into the details, but as part of it they had to process some data and plot some graphs. The graphs showed values that varied considerably - some thousands of times bigger than others.  I had expected (assumed = bad move) that the students would use logarithms to show their data, but, regrettably, they didn't, so I had to look at data points that I could barely distinguish above the axis.

Logarithms crop up a lot in physics as the inverse of the exponential, but they have a lot of neat uses as well. One, as I've said above, is in dealing with data that vary over a huge range. The logarithm, or 'log' for short, answers the question "to what power would I have to raise 10 to get the number concerned?" - or, in mathematical terms, if y = log x, then 10^y = x. So the log of 1 is 0, the log of 10 is 1, the log of 100 is 2, and the log of 1000000000000 is a mere 12. So we can put large and small numbers on the same graph.

For example, this semester I've been taking a third-year class on electromagnetic waves. These include radio, X-ray, visible light, microwaves etc. These wave phenomena can all be assigned a wavelength, and it turns out that there is a ferocious range of wavelengths that get used in physics. At one end of the spectrum (literally) there are radio waves, which could have wavelengths kilometres long, at the other there are gamma rays, which can have wavelengths of order ten to the power of minus twelve metres or shorter (that's 0.000 000 001 millimetres). To show a nice picture with these on you have to use a logarithmic scale. If you take logs of the wavelength, your axis only has to go from about -12 or so through to about 3.  Much more manageable.

Logs are also useful at finding relationships between quantities that you measure experimentally. Suppose you have two quantities, X and Y. You vary X, and measure Y. How does Y depend on X? A good suggestion is to plot a graph of the logarithm of Y against the logarithm of X. If Y is related to X by a power law, that is, Y  is proportional to X to the power N, then a plot of log Y against log X should be a straight line of gradient N. So we can find N.  Of course, not all relationships are power-law relationships, but in physics a lot are.
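
As a sketch (with made-up data), here's how the exponent drops out of a log-log fit:

```python
import numpy as np

# Recover the exponent N of a power law Y = c * X**N by fitting a straight
# line to log Y versus log X: the gradient of that line is N.
# The data here are synthetic - a true exponent of 2.5 with a little noise.

rng = np.random.default_rng(1)
X = np.linspace(1.0, 100.0, 20)
Y = 3.0 * X**2.5 * rng.normal(1.0, 0.02, X.size)  # power law plus 2% noise

N, log_c = np.polyfit(np.log10(X), np.log10(Y), 1)  # gradient, intercept
print(f"fitted exponent N ~ {N:.2f}")  # should come out close to 2.5
```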

A third, and somewhat dubious use, is to hide problems with your data.  Taking logs tends to make data points look 'closer' to each other than they actually are - so a noisy graph looks a lot smoother when you take logs.

As a final topical example of logs, I'll mention the earthquake local magnitude scale. Since earthquakes vary tremendously in energy, a logarithmic scale is a good one to use. The scale is logarithmic in that an increase of 1.0 means that the amplitude of vibration has increased ten times. In terms of energy, however, an increase in magnitude of 1.0 denotes roughly a thirty-fold increase in energy release. So a magnitude 6.0 earthquake is about thirty times more 'powerful' than one of magnitude 5.0.   Here, the number after the decimal point really does make a difference. So the Canterbury earthquakes (7.1, 6.3) are really small-fry compared to the Sendai earthquake (9.0).
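
The factor of 'about 30' per magnitude step comes from the energy released scaling roughly as ten to the power of 1.5 times the magnitude, so each whole step multiplies the energy by 10^1.5, about 31.6. A quick check:

```python
# Energy released scales roughly as 10**(1.5 * M), so each whole step in
# magnitude multiplies the energy released by 10**1.5, about 31.6 ('about 30').

def energy_ratio(m1: float, m2: float) -> float:
    """Approximate ratio of energy released: magnitude m1 versus magnitude m2."""
    return 10.0 ** (1.5 * (m1 - m2))

print(f"M6.0 vs M5.0: {energy_ratio(6.0, 5.0):.0f} times the energy")
print(f"Sendai (9.0) vs Christchurch (6.3): {energy_ratio(9.0, 6.3):.0f} times")
```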

| | Comments (0) | TrackBacks (0)

On Tuesday night, after Cafe Scientifique, I was listening to a radio interview with Motoko Kakubayashi, from the Science Media Centre of Japan. She was talking about some of the hysteria that is brewing regarding the Fukushima nuclear power complex.  (The link will download the interview from the NZ National Radio website - though I'm not sure how long it will remain up there for.)

It was a fascinating interview to listen to. One of her main themes was the lack of really clear science-based advice being received by the Japanese public. Getting news isn't a problem - rather, the problem is discerning the reliable, trustworthy, useful advice in amongst the sea of information that is accumulating on-line and through the media.

So I heard that there are hotels that are refusing to accommodate refugees from the Fukushima area, because other guests fear 'catching' radiation from them. People don't know whether to eat locally grown food or not. They hear a government spokesman say one thing in the morning, and then someone from another government agency somewhere says something else in the afternoon.

Motoko said that partly the difficulty was that the subject area was so specialized - she herself was a physicist but didn't know (as indeed I don't) the details of how the Fukushima complex is put together and operated. At university you learn about the general principles of nuclear fission and roughly how a nuclear power plant is put together, but it is a very specialized area so the details are left out completely. For example, I've been able to comment (here) on what a milliSievert means, but in terms of the details of what dose equivalent does exactly what to you in what way, I'm floundering. Not my area.

Another part to the problem is that there is simply so much stuff out there in cyberspace, so easy to access, that knowing what to believe is really difficult for the non-scientist.

So the Japanese Science Media Centre is doing its best, but it's a tricky job.

| | Comments (1) | TrackBacks (0)

Here's something I learned last night at Cafe Scientifique from one of our chemists, Chris Hendy.

Lake Rotorua produces a significant amount of methane. It just bubbles up to the surface from below. We could harvest it, and fuel a small power station; enough to provide energy to a small town. 

"But Oh No" - I hear you cry - "Not more fossil fuel burning giving more carbon dioxide".  Well, bear in mind that, kilogram for kilogram, methane is a considerably more potent greenhouse gas than carbon dioxide, so burning it rather than letting it escape would actually reduce global warming.

In case you're worried, I am not advocating putting a gas rig in the middle of the lake, I just thought it was worth a mention.  Burning methane (natural gas) is not a problem IF the methane would otherwise end up in the atmosphere anyway.
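
To put rough numbers on it (the global warming potential below is an assumed round figure; commonly quoted 100-year values for methane are in the 25-30 range):

```python
# Why burning methane that would escape anyway is a net win.
# GWP_CH4 is an ASSUMED round figure; 100-year global warming potentials
# commonly quoted for methane are around 25-30 times that of CO2,
# kilogram for kilogram.

GWP_CH4 = 28.0              # assumed 100-year global warming potential of CH4
M_CH4, M_CO2 = 16.0, 44.0   # molar masses (g/mol)

# Complete combustion: CH4 + 2 O2 -> CO2 + 2 H2O,
# so 1 kg of CH4 burns to M_CO2/M_CH4 kg of CO2.
kg_co2 = M_CO2 / M_CH4      # ~2.75 kg CO2 per kg CH4 burnt

print(f"let 1 kg of CH4 escape: ~{GWP_CH4:.0f} kg CO2-equivalent")
print(f"burn it instead:        ~{kg_co2:.2f} kg CO2")
print(f"net saving:             ~{GWP_CH4 - kg_co2:.0f} kg CO2-equivalent")
```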

| | Comments (0) | TrackBacks (0)

A week or so back I walked into the lecture room to give a lecture on electromagnetic waves, and was promptly asked: "Marcus, how much statistics do you use in your research?"  My initial reaction was to think "what has this got to do with electromagnetic waves?" and then, realizing that clearly it had nothing to do with EM waves, "what's the ulterior motive to this question?", but another student kindly spelled it out more transparently. "Why do we have to do the statistics paper next year?"

We require our engineering students to take a paper on statistics in their fourth year. Obviously some students don't relish this prospect.

The truth is that I don't use much statistics at all in my work, beyond mean, standard deviation, and occasional use of a normal distribution. Once I think I got as far as a t-test.   But that's the nature of the work I do; it's not statistically taxing.  But what is necessary is a fundamental understanding that statistics does matter.

Most physics students have some idea of this, but it's often full of misconceptions. A common one is that the 'error' in a measurement equals the ('student-measured value' minus the 'real answer looked up in a databook') divided by the 'real answer', times 100%. That's not an 'error'; that's the percentage difference between your measurement and a text-book value.  So, when I talk about uncertainty with my experimental physics class, I get them to think about what they would do if they didn't have a textbook to look up the 'right' answer in - i.e. if they were the very first people ever to do this experiment. That, of course, is the case for research. No 'right' answer to compare with.

I've found a task that's worked well is to give them a fictional set of data for the acceleration due to gravity on 'Planet Waikato'. I make it fictional so they have to lose the notion that acceleration due to gravity equals 9.81 m s-2, as the textbooks say it does for earth. They only have the data-set I give them to work on.  Then I tell them they are building a rocket to leave Planet Waikato, and they need to know the acceleration due to gravity to within 1% uncertainty so they can select the right amount of fuel.  Does the data given them allow them to know the acceleration due to gravity to within this uncertainty or not?   That tends to get them thinking about how we can analyze results of experiments, and what we can say with confidence (and how much confidence) and what we can't say with confidence.  That's basically what statistics is about.
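
A sketch of the kind of analysis I'm after from the students, using a made-up data set (everything here, including the numbers, is fictional):

```python
import math

# 'Planet Waikato' exercise: given only these (fictional) measurements of g,
# can we quote g to within 1%? Use the standard error of the mean as the
# uncertainty on the result.

data = [12.41, 12.55, 12.38, 12.62, 12.47, 12.50, 12.44, 12.58]  # m/s^2

n = len(data)
mean = sum(data) / n
s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample std dev
sem = s / math.sqrt(n)                                       # standard error of the mean

print(f"g = {mean:.3f} +/- {sem:.3f} m/s^2")
print(f"relative uncertainty = {100 * sem / mean:.2f}%")
print("precise enough for the rocket" if sem / mean < 0.01 else "take more data")
```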

Just how to do a t-test, ANOVA, chi-squared test, etc., and under what circumstances, I leave out completely. It's something you can look up, or consult a statistician about, when you need to. The key thing is knowing that you need to.

The question is then, do we need an entire paper in year 4 for our engineers (but not our physicists) to instruct them in the way of statistics?  Probably the best people to ask are our graduates, several years after graduation.

| | Comments (2) | TrackBacks (0)