The University of Waikato - Te Whare Wānanga o Waikato
Faculty of Science and Engineering - Te Mātauranga Pūtaiao me te Pūkaha

September 2011 Archives

This one's a bit old, but it's quite topical. Love the colour. From, of course.


It's one of those busy weeks - blogging's been pushed to one side a bit, and I'm writing this at home with a cat on my lap who wants to walk all over the keyboard. So any bizarre sllepingh msitkaes or random characters *&fh$f{ are probably not my doing.

I was talking with a student earlier this week about his choice of PhD topic. He wants to disappear off to Australia to research dark matter, and was asking about what would happen to his career opportunities if dark matter met with a sudden demise.  Would he be better off researching something more mainstream?

My view is it is unlikely to make any difference - at least, not long-term.  While it is true that some people use their PhD as a gateway into a particular area of work and make their entire career out of it, I suspect it's more common for someone's area of work to change, maybe several times, after their PhD. That's certainly the case with me. It would be interesting to find out what percentage of people with, say, a PhD acquired 25 years or more ago, are still working in the area that their PhD was in. I suspect it's quite low. For one thing, science research changes a lot in 25 years - new things come up and old areas fall away.

Having a PhD is a statement in itself, regardless of what topic it is in. What it says is that you are able to carry out quality independent research - and that's what employers are going to take note of. So, if your PhD doesn't open up career doors in your particular topic area, don't worry; it will certainly open doors in other areas.

In my opinion a student should go with what they are interested in - and enjoy the experience. So dark matter will be a great topic to research for a PhD, even if it turns out to have serious problems.




Forget the rugby - the two big stories of the week are both physical science and both Italian: the faster-than-light neutrinos arriving at Gran Sasso and the ludicrous prosecution of Italian seismologists over the 2009 L'Aquila earthquake. In some respects, the two are related, in that they both ask questions about what science is about, and what it does and does not do.

First the neutrinos. The full report is now available online - so you can have a read yourself. I can't add much to what's been said in the media already - this is an extraordinary result - utterly unexpected, and, IF VERIFIED (and that's a big IF) means we need to seriously re-think what we thought we understood about Einstein's Special Theory of Relativity.

Now the earthquake. The decision to prosecute scientists for failing to predict the quake (that is what it amounts to) has rightly attracted a massive reaction from geoscientists worldwide. Earthquakes are unpredictable - that's what makes them so nasty. We would like to be able to predict them - there's been a lot of effort to do so - but so far, no-one has a way of going beyond statistical statements - i.e. so much chance of a magnitude so-much or bigger in a region of so far over a time period of the next so-many months/years. One can't help wondering whether this prosecution is to hide failings elsewhere - who designed the collapsed buildings? Who gave consent for them? Who built them? If these scientists are found guilty of manslaughter, a possible result is the end of all earthquake research in Italy - a country very vulnerable to earthquakes - which is just what isn't needed.

How are the two connected? Well, they are both about the nature of science itself. Science is about providing a framework for investigating the world/universe in which we live. It helps us to make hypotheses, and build theories (N.B. not the same thing) and search for evidence to support or refute them. At no time would a scientist claim that they know everything about their subject. When new evidence appears, as the neutrinos have provided, we have to re-consider our theories. When we simply do not know something - e.g. when and where the next big earthquake will hit - we don't ever pretend we do. Unfortunately, however, the nature of science isn't well understood in some quarters, and the idea that we should deal in absolute, undeniable, never-to-change truth is forced upon us. I'm sorry, but science isn't like that, and it never will be, no matter how much politicians may wish it to be so.






It's great to hear that NZ is an integral part of the Australasian bid for a giant radio-telescope network. The Square Kilometre Array promises to produce some great images of southern skies in the radio frequency band. Radio waves are part of the electromagnetic spectrum, just like light waves, and can be used to provide images of what's 'out there' that provide information that visible images don't.

It will (if the bid is successful) comprise many small dishes, scattered over vast distances across Australasia. There's a simple physics reason for having to do this: diffraction. When you have an aperture through which your waves are captured, the waves diffract (bend), and that limits the resolution you can achieve. Very approximately, your angular resolution in radians is the wavelength divided by the aperture. For visible light, wavelengths are really small (about 0.6 microns), so there's not much diffraction - a rough estimate for diffraction caused by the pupil of your eye (say 6 mm across) would be one ten-thousandth of a radian, or a bit less than one hundredth of a degree. That's pretty small. More likely, your sight will be much worse than this - limited by the ability of the eye to focus. Apertures (the width of the objective lens or mirror) of visible telescopes don't have to be particularly large to give really spectacular images of the planets and distant galaxies, etc. The main mirror of the Hubble Space Telescope is only 2.4 m across - that's plenty to go on.

But radio waves are much, much, much longer in wavelength. At say 100 MHz frequency, radio waves are about 3 m long. Compare that to the 0.6 microns of visible light - a cool five million times bigger. That means to get the same resolution as for the visible light, you need an aperture five million times larger. So, to get the same resolution as the unaided human eye has for light, that would require an aperture about 30 km across. That's why we need to go across continents to get really good images at the long wavelengths.
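These numbers are easy to check with the rough wavelength-over-aperture rule above. A quick sketch (the 6 mm pupil and 100 MHz frequency are the figures used in the text):

```python
def angular_resolution(wavelength, aperture):
    """Diffraction-limited angular resolution in radians (roughly wavelength / aperture)."""
    return wavelength / aperture

# Visible light (0.6 microns) through the eye's pupil (6 mm):
eye = angular_resolution(0.6e-6, 6e-3)      # about 1e-4 rad, i.e. one ten-thousandth of a radian

# Radio waves at 100 MHz: wavelength = c / f = 3 m
c = 3.0e8                                   # speed of light, m/s
wavelength_radio = c / 100e6                # 3 m

# Aperture needed to match the eye's resolution at radio wavelengths:
aperture_needed = wavelength_radio / eye    # about 30,000 m, i.e. 30 km
```

The five-million ratio between the two wavelengths drops straight out of the arithmetic, which is why the dishes must be spread across a continent.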

Effectively, two telescopes placed a distance apart (if they are suitably linked) provide a synthetic aperture of that size - and can produce an image of similar resolution to a single dish of that aperture. Scatter lots of networked telescopes across a continent and you're talking a pretty decent radio telescope.




At the end of last week I talked about watching the use of multiple choice scratch cards in an electronics tutorial. Well, on Monday afternoon, I tried them out myself, with two different groups of students. It was a first-year physics class - I have two tutorial groups of these, the first on Mondays from 3-4 pm; the second immediately following, 4-5 pm. 

Now, my experience with the first group was similar to what I saw last week - more talking than usual as students worked out the correct answer and discussed it with each other. And that's good. They all participated - they all got to think through things themselves - and certainly for some of them learning was happening.

However, I was a bit surprised by how the second group took it. They had a different response; they seemed to take it as a very serious individual exercise - hardly any talking or sharing of responses. They were after marks that were better than anyone else's, even though what they got counted for nothing as far as the final course grade goes.

Interestingly, when I asked the two groups whether they wanted the same kind of thing next week, the first group were rather 'yeah-nah-yeah-whatever' about it, while the second were all for it.

Not sure what to make of it all - except I'll do it again next time.


There have been recent murmurings that Cold Dark Matter (CDM) is in trouble. Dark matter is stuff that is hypothesized to make up a fair chunk (23%-ish) of what is in the universe (as opposed to normal matter - the stuff we 'see' and experiment with - which may make up only 5% of what's in the universe). The remaining 72%-ish would be 'Dark Energy' - more bizarre still.

The key word here is 'hypothesized'. No-one has seen dark matter - that's one of the problems with it - being dark it is almost by definition undetectable and so very very hard to research.  The reason it is believed to exist is that, if you look at how galaxies move, there just isn't enough visible matter to account for it. Well, not according to our best current theories, anyway.  It's hoped that the Large Hadron Collider will give some evidence for its existence, but, so far, nothing. (Interestingly, the LHC hasn't given us anything startling at all yet, but that might come with time.)

Now there is some evidence which goes against the current CDM hypothesis. Some computer simulations suggest that, if CDM were true, there would be many more dwarf galaxies close to the Milky Way than there actually are. So we have the hypothesis, we have a prediction from the hypothesis, and we have data that can test this prediction - and the hypothesis doesn't stack up. That's a neat example of how science works.

Whether this means that CDM is completely off track, or whether it just means a modification of the CDM hypothesis is needed, remains to be seen. If it's the former, I wouldn't want to suggest that massive research time over the last 30 years or so has been 'wasted'. It's simply science doing its job - testing things so as to determine what's what with the world and the universe.




Recently I mentioned the use of scratch-cards to provide instant feedback for students.  Yesterday, a colleague of mine tried it out on a class of 2nd year electronics students, and I sat in to watch.

The most obvious impact was that students talked to each other. We were in a packed room, set up linearly so students didn't face each other - not exactly conducive to conversation - but lots of talking happened. Students instructing students - that's got to be a good thing.

Interestingly, I noticed that students seemed to fall into two types - the first type was absolutely intent on getting the right answer first time - it didn't matter how slowly they progressed - they wanted a card with no mistakes on it. The second type was happy to hazard an intelligent guess and scratch away when they didn't know - and it didn't bother them that they took four scratches to hit on the correct choice. Does this mean anything? I don't know - it was just an unscientific bit of observation.

I'll give scratchies a go in one of my classes very soon.


A photon walks into a hotel and checks in. "Do you want a hand with your luggage?" asks the receptionist. "No thanks", replies the photon, "I'm travelling light".

Thanks to my friend Julie for that one. But it got me thinking about the quantum nature of things that may not immediately appear quantum-like. There's a neat little rule that says that the classical behaviour of a system can be 'derived' from the quantum behaviour by taking the limit of Planck's constant going to zero. An example comes from the Heisenberg uncertainty principle. It tells us that we can't know position and momentum simultaneously to arbitrary accuracy - the uncertainty in our knowledge of position times the uncertainty in our knowledge of momentum must be greater than Planck's constant divided by 4 pi. So, if we know the position really well (its uncertainty is low) we can't know much about the momentum (its uncertainty is high). Now, let's assume Planck's constant is given the value zero rather than its real value of 6.626 times 10 to the power minus 34 joule seconds. That tells us we CAN know position and momentum to arbitrary accuracy (since the product of their uncertainties now only needs to be greater than or equal to zero), which is the result we are intuitively familiar with in 'normal' things. (E.g. the car is travelling through this intersection at 50 km/h - we know both position and speed.) It works for other things too - for example, the energy of a photon is given by its frequency times Planck's constant. If we assume Planck's constant is zero then photons would carry no energy at all (i.e. be irrelevant) - and light could be described perfectly by a wave. That's the classical result.

 The reason that we don't experience many quantum effects in everyday life can be put down to Planck's constant being very very small. To get quantum behaviour showing, you need to look at very small length scales. If you want some theoretical physics fun, have a think about what would happen with everyday things if, say, Planck's constant were 1 Js. Life would be a bit different - we'd see quantum effects everywhere.
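To put some numbers on just how small Planck's constant is, here's a quick sketch of the uncertainty principle at two very different scales (the electron and car figures are illustrative examples, not from the original post):

```python
import math

h = 6.626e-34  # Planck's constant, in joule seconds

def min_momentum_uncertainty(dx):
    """Smallest momentum uncertainty the Heisenberg principle allows:
    dp >= h / (4 * pi * dx)."""
    return h / (4 * math.pi * dx)

# An electron confined to roughly an atom's width (0.1 nm):
m_electron = 9.11e-31                       # kg
dp_e = min_momentum_uncertainty(1e-10)
dv_e = dp_e / m_electron                    # velocity uncertainty: hundreds of km/s - huge!

# A 1000 kg car whose position is known to within 1 mm:
dp_car = min_momentum_uncertainty(1e-3)
dv_car = dp_car / 1000.0                    # around 1e-35 m/s - utterly unmeasurable
```

Same principle in both cases; it's only because h is so tiny that the car's quantum fuzziness is completely invisible, while the electron's dominates its behaviour.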

And finally, thanks to one of my third year students...

Heisenberg and Schrodinger are driving in a car and are pulled over by a traffic cop. "Excuse me sir", the policeman says to Heisenberg, "but do you know how fast you were travelling?"  "No", replies Heisenberg, "but I can tell you exactly where I was".

The officer is not impressed. "Open the boot", he demands. After a look in, he walks round to Schrodinger. "Did you know there's a dead cat in the boot?", he asks.    "I do now..." replies Schrodinger.

By which, of course, I mean rugby balls.

To be precise, a rugby ball is a prolate ellipsoid - that is, something like a 3D version of an ellipse, whose cross-section perpendicular to its long axis is a circle. (A flying saucer would be an oblate ellipsoid.)

Rugby balls behave awkwardly. In one sense that's obvious - their awkward shape will lead to awkward movement, on the ground at least. But what about in the air? Why is it hard to get a rugby ball to spin nicely and stay in one orientation (like the professionals can)? Or, put another way, why is it so easy to make a rugby ball tumble all over the place in flight, making it more difficult for the opposing full-back to catch?

Once in the air, the rugby ball undergoes torque-free movement. (We'll neglect things like the Magnus effect.) That means there are no external torques trying to create extra spin on the ball. The behaviour of an object in this situation can be described by Euler's equations (after the Swiss mathematician Leonhard Euler). I won't write them out in a blog, but they are not that difficult to interpret if you know about rotational inertia and angular velocities.

These equations can be solved fairly simply in some cases. It turns out that, if you have a rugby ball spinning on its long axis (as you try to achieve when you pass to one of your teammates), the rotation is stable. That means, once you have established it rotating in this manner, it will remain doing so. If the ball is perturbed slightly off this spin (perhaps by a piece of flying mud) it will stay close to this motion.

However, the same is not true if you spin the ball about a short axis (so it tumbles end on end). In this case, the motion is  critically stable - meaning that any perturbation is not corrected. The ball can easily lose this manner of motion, and start moving in another way. If you don't start it exactly in this way of movement, it's unlikely to stay there. Find yourself a rugby ball and try it.

Perhaps more interesting is the case when you have an object with three axes of different lengths (e.g. a paperback book). You'll find that if you try to spin the book about its long axis, or its short axis, these are stable, but about the middle axis the rotation is unstable. Try it. Take a book (a cuboid with three different length axes), hold it with its cover the right way up and facing you, and grip its bottom two corners. Now flip it in the air - try to get it to spin 360 degrees and catch it again, by its bottom two corners. See what happens. It's very difficult to do - the book will tumble around, because rotation about this middle axis is unstable. Any small perturbation from it will be magnified as the book spins. However, try the same experiment spinning about one of the other axes of the book, and you'll find the motion stable.

Incidentally, if you take an object with three identical axes (e.g. a soccer ball), its rotations are critically stable in all directions, which is one reason why a soccer ball in flight can spin (and bend) in all kinds of wacky ways.
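You can see the book-flipping instability directly by integrating Euler's torque-free equations numerically. A minimal sketch (the moments of inertia 1, 2, 3 and the spin rates are just illustrative values, not a real book or ball):

```python
def euler_deriv(w, I):
    """Torque-free Euler equations:
    I1*dw1/dt = (I2 - I3)*w2*w3, and cyclic permutations."""
    (I1, I2, I3), (w1, w2, w3) = I, w
    return ((I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w3 * w1 / I2,
            (I1 - I2) * w1 * w2 / I3)

def rk4_step(w, I, dt):
    """One fourth-order Runge-Kutta step."""
    def nudge(k, s):
        return tuple(wi + s * ki for wi, ki in zip(w, k))
    k1 = euler_deriv(w, I)
    k2 = euler_deriv(nudge(k1, dt / 2), I)
    k3 = euler_deriv(nudge(k2, dt / 2), I)
    k4 = euler_deriv(nudge(k3, dt), I)
    return tuple(wi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for wi, a, b, c, d in zip(w, k1, k2, k3, k4))

def max_wobble(w0, spin_axis, I=(1.0, 2.0, 3.0), dt=1e-3, steps=20000):
    """Integrate for `steps` and track the largest angular velocity
    component seen off the intended spin axis."""
    w = w0
    worst = 0.0
    for _ in range(steps):
        w = rk4_step(w, I, dt)
        off = [abs(c) for i, c in enumerate(w) if i != spin_axis]
        worst = max(worst, *off)
    return worst

# Spin about the axis of largest inertia (index 2, I3 = 3.0): the tiny
# perturbation just precesses and stays small.
stable = max_wobble((0.01, 0.01, 10.0), spin_axis=2)

# Spin about the intermediate axis (index 1, I2 = 2.0): the perturbation
# grows exponentially and the 'book' tumbles.
unstable = max_wobble((0.01, 10.0, 0.01), spin_axis=1)
```

Running it, the stable case keeps its off-axis spin at the level of the initial perturbation, while the intermediate-axis case blows up to rotation rates comparable with the main spin - exactly the tumbling you see when you flip a book about its middle axis.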


So it's Friday afternoon and I, like half the people at university today, am more focused on tonight's opening game than on work. So I'm not feeling too inspired to put in a long blog entry today.

But I will mention that yesterday I tried a 'delayed mark' approach when I returned a piece of work to students. (It was a formal write-up of one of their lab experiments). That's one of the teaching strategies suggested by Phil Race. The point is that if you give students written feedback on their work AND a mark at the same time, the mark negates all the feedback and you might as well not have bothered. It's better to give them the feedback, then, later, give them the mark, when they've had time to read the feedback.

There were a few squeaks (rather than howls) of protest, but I think it was mostly because my poor students had never come across this approach before.

The question, though, as always, is "will it mean they do better in their next piece of work?". They start on another similar piece of work next week, so I'll get to find out.  I guess if there's one thing that doing a Postgraduate Certificate in Tertiary Teaching has inspired me to do, it's to try things out.



I've been following with a bit of interest the "slow-motion crisis" of the European debt.  One of its consequences is some unhealthy shifts in exchange rates - for example the soaring Swiss Franc. That hurts Swiss exporters. 

In the last couple of days, Switzerland has decided that this isn't acceptable and is taking drastic measures to reduce the value of its currency, by selling it in "unlimited amounts". Basically, they'll flog off as many Swiss francs as it takes for the rate to get back to 1.2 francs to a Euro.

With economics, there are some obvious (that is, if you are a physicist) links to thermodynamics. There's a branch of economics called 'thermoeconomics' which overlaps the two ideas. For example, if you pump heat into an object it increases its temperature. That's a bit like what is happening to Switzerland - people want Swiss francs rather than euros, so money (heat) gets pushed its way and its exchange rate (temperature) therefore rises. Now, it's difficult to get heat to flow naturally to a hot body. To do that, you need a hotter body. (Second law of thermodynamics.) Likewise, it's difficult to get people to buy Swiss products, because they are now very expensive. What happens is that people in Switzerland, with their 'hot' francs, will find it easy to import stuff from overseas. So money leaves the Swiss economy, which doesn't do their industry much good.

What the Swiss National Bank is doing with its policy is similar to throwing all the doors and windows of a house open to reduce its temperature. You could think of the situation as a house with heat pumps going full blast (investors converting their euros into 'safe' Swiss francs) with all the doors and windows open (the national bank trying to get rid of Swiss francs to make them cheaper). Doesn't sound very clever, does it? One would think, in these terms, the best thing to do would be to turn the heat pumps off and close the doors, but it's probably not as easy as that.

I may have got the economics a bit wrong - I'm not an economist - but the physics is certainly right.






I love this article I came across on the BBC website this weekend.

As someone who's travelled on a lot of planes, I can fully understand the motivation to study methods of boarding a plane. Traditionally, boarding is done in this sequence:

1. Those needing special assistance (e.g. those for whom walking is difficult)

2. Families with young children

3. Then, by row number, in blocks (smaller or larger depending on the mood of the person with the microphone), from the back of the plane forward.

And business and first class passengers can board 'at their leisure...' (I hate that phrase).

The 'back-to-forward' makes some sense, in that you shouldn't have people obstructing the aisle at the front of the plane while they throw things in the overhead lockers, while there is a queue of people behind them wanting to get to the back.

But is it the best way of doing it?   According to Jason Steffen, an astrophysicist and computer modeller, the answer is no. He's used computer simulations of different boarding patterns (e.g. window seats first, then middle seats, then aisle seats) to see what might work better. You can read the original article here. One interesting result is that a completely random way of boarding (which seems to happen anyway with half the airlines - passengers just ignore the instructions given to them to wait until their row is called) is better than a back-to-front method!
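To get a feel for how such simulations work, here's a deliberately crude toy model - not Steffen's actual model, and the rows, stow times and block sizes are all made-up parameters. Each passenger walks down a single aisle, blocks it while stowing luggage at their row, then sits:

```python
import random

def board_time(sequence, rows=30, stow_time=3):
    """Return the number of ticks until everyone is seated.
    `sequence` gives each passenger's target row (0 = front), in boarding order.
    One aisle cell per row; a stowing passenger blocks the aisle beside their row."""
    aisle = [None] * rows               # aisle[r]: id of passenger standing beside row r
    queue = list(sequence)
    target, stow = {}, {}
    next_id, seated, t = 0, 0, 0
    while seated < len(sequence):
        t += 1
        for r in range(rows - 1, -1, -1):     # back of plane first, so walkers cascade
            p = aisle[r]
            if p is None:
                continue
            if r == target[p]:                # at their row: stow luggage, then sit
                stow[p] -= 1
                if stow[p] == 0:
                    aisle[r] = None
                    seated += 1
            elif r + 1 < rows and aisle[r + 1] is None:
                aisle[r + 1], aisle[r] = p, None   # walk one row towards the back
        if queue and aisle[0] is None:        # next passenger steps aboard
            target[next_id] = queue.pop(0)
            stow[next_id] = stow_time
            aisle[0] = next_id
            next_id += 1
    return t

def back_to_front(rows=30, per_row=6, block=5):
    """Board in blocks of rows, rearmost block first (shuffled within each block).
    Assumes rows is a multiple of block."""
    seq = []
    for start in range(rows - block, -1, -block):
        chunk = [r for r in range(start, start + block) for _ in range(per_row)]
        random.shuffle(chunk)
        seq += chunk
    return seq

def random_order(rows=30, per_row=6):
    """Everyone boards in a completely random order."""
    seq = [r for r in range(rows) for _ in range(per_row)]
    random.shuffle(seq)
    return seq
```

Comparing `board_time(back_to_front())` with `board_time(random_order())` over many shuffles lets you see which strategy tends to win in this toy world - in models of this flavour, random boarding often beats back-to-front blocks, because stowing happens in parallel all along the aisle instead of everyone clumping at the same few rows. Of course, as argued below, a toy like this still needs validating against real passengers before you'd trust it.
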

Now, some of the various methods have been trialled with volunteer 'passengers' in a plane mock-up in Hollywood for a TV programme. It sounds like a great piece of entertainment, but I have a few questions about this from a scientific perspective (and so do those posting comments on the BBC article).

Computer modelling is a really great tool. It enables you to study problems that would otherwise be unreachable. For example, construct a computer model of the brain, and you can study phenomena without having to go to the expense of, or grapple with the ethics of, sticking things into animals. Build a climate model, and you can see how changing the carbon dioxide output of the world would influence the earth's climate in 50 years' time. A computer model of the movement of pesticide droplets over a crop field can help determine under what weather conditions a farmer should be spraying his crop, what kind of nozzles to use, etc.

But computer models on their own don't mean a lot. They need validation. That is, they need to be tested against real situations. That can be done in a number of ways - e.g. test each part of the model separately, or test the whole lot at once - and for some models it's harder than others. Any computer model is only as good as the processes it accounts for, and, in practice, there will be many, many other processes going on that the computer model ignores. The art of getting a good model is in capturing the most important processes, and ignoring the insignificant ones.

I reckon boarding a plane has many processes going on that make it a very complicated thing to study and validate well. A Hollywood mock-up probably won't get it right. Does it account for factors such as passenger compliance (e.g. those who don't understand the English in which the instructions are given)? Passengers trying to board with three oversize suitcases and clogging everything up? Families wanting to board together, not in separate waves? Stressed passengers who just want to get on board? Cabin crew running up and down the aisle to fetch that seat-belt extension for the man in 32A? The guy in the window seat who turns up late? And so forth.   I would hazard a guess that if this 'validation' exercise were done at a real airport, with real passengers on a real plane with real cabin staff, there might be a different outcome.

Maybe some airline will pick this up, try it, and some other way of boarding (perhaps the budget airline pile-yourself-on method) will become the norm.  But it will need a proper validation study first.


We have a problem brewing in the lab. Recently, we (by which I mean a PhD student or two and a researcher) moved into a new lab. As part of our research we are recording electrophysiological signals (electricity produced by living cells). These are pretty small, often in the microvolt region (a millionth of a volt) - we'd consider a millivolt (a thousandth of a volt) a large signal. Getting good data in this situation isn't easy - it needs some sensitive electronic equipment that is set up very carefully. For example, the way you earth something can make a big difference to the amount of electrical noise on your signal, and has to be done with some thought. But, set it up correctly and it is possible to get decent recordings, which we are now getting.

Now, here's the trouble. Another staff member is about to work with voltages on a vastly different scale - simulating lightning strikes on pieces of electronic equipment. And this experiment is due to be set up in the lab next door to ours. The voltages experienced with lightning are in the kilovolt range (a thousand volts), a cool billion or so times bigger than the voltages we are trying to measure. Needless to say that having that experiment sitting near our electronics doesn't give us a good feeling. It really is a problem of enormous magnitude - specifically about nine orders of magnitude. (An order of magnitude being a factor of ten.)

The most obvious solution is to keep the two labs apart - perhaps a more practical one, given the demand for lab space, is to put a Faraday shield around the lightning work.

On the more light-hearted side, however, it does illustrate that electrical phenomena occur on a vast range of scales.


