The University of Waikato - Te Whare Wānanga o Waikato
Faculty of Science and Engineering - Te Mātauranga Pūtaiao me te Pūkaha

March 2011 Archives

D'oh. Missed the exploding meteor last night. From the news reports it sounds like a pretty impressive sight.  (N.B. I like the comment on the article that says "Faster than a plane = definitely over 10000 km an hour." I don't know how many planes this guy has travelled in, but doing 10000 km an hour would certainly make a trip from Auckland to London a lot less stressful.)

Actually, last night was spent trying to teach our most adorable catty-puss some finer points of mouse-catching etiquette:

1. When one is a cat it is perfectly acceptable to catch small rodents. However, all felines should note the following:

2. All games with one's catch should be undertaken outside. Taking one's toy into the house to play with is considered bad manners.

3. After use, it is polite to kill one's mouse. Bringing it inside in a deceased form and presenting it to one's owner as a gift is acceptable; leaving it half-alive scrabbling about the kitchen floor for one's owner to discover later is poor form.

Somehow I suspect the message hasn't got through.

Back to astronomy. A couple of nights ago I glanced out of the window to see Orion (upside down of course) looking back at me. At least, I think it was Orion; it was hard to tell because I didn't have my glasses on. What I actually saw was a few glowing splodges in the dark sky, roughly making out the shape of the constellation, with a brighter splodge up and to the right, which I assumed to be Sirius.  My eyes are certainly getting worse as I get older. (Though Waikato drivers should be pleased to hear that on putting my glasses on the stars return to their point-like form.)

Each splodge can be considered the 'point-spread function' for my eye. This optics terminology describes how a point source of light, like a star, is mapped imperfectly onto a sensor element (my retina). The broader the point-spread function, the worse the eyesight. It's a useful thing to know, because you can then model how any picture would appear to your optics. What you'd do is a convolution of the unblurry image with the point-spread function, that is, apply the point-spread function at every point. 

The reverse process 'de-convolution' is possibly more useful - if you can work out exactly how an image is being blurred, you can work out what to change about its optics to make the image sharp (i.e. design an appropriate 'lens' for it.) 

For those who know about such things, convolutions are easy to do numerically with Fourier Transforms.
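For the curious, here's a minimal numerical sketch of that convolution (my own illustration, not from any real optics data), using an assumed Gaussian point-spread function and numpy's FFT routines:

```python
import numpy as np

def convolve_fft(image, psf):
    """Blur an image with a point-spread function via the convolution theorem:
    a convolution in real space is a multiplication in Fourier space."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))

# A point source (one bright pixel, like a star)...
image = np.zeros((64, 64))
image[32, 32] = 1.0

# ...and a hypothetical Gaussian point-spread function (bad eyesight),
# centred in the frame, width sigma = 3 pixels.
y, x = np.indices((64, 64))
psf = np.exp(-((x - 32)**2 + (y - 32)**2) / (2 * 3.0**2))
psf /= psf.sum()  # normalise so total light is conserved

# fftshift undoes the wrap-around inherent in circular convolution.
blurred = np.fft.fftshift(convolve_fft(image, psf))
```

The single bright pixel comes out as a splodge whose width is set by the point-spread function; the total amount of light is unchanged, just spread about.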


One of the benefits of me undertaking a teaching qualification is that I am now a lot more conscious of the kinds of thought processes my students are using. (The best way to do that is to talk to them). This year I've noticed how 'compartmentalized'  students' learning appears to be. What I mean by that is that students appear to find it difficult to drag a concept from one area of physics/maths and apply it to another.

An example I had recently was that, having had students analyze a mechanical problem, they obtained a quadratic equation.  Now, all of them, I'm sure, if they were just given a quadratic equation and asked to solve it in a maths class, could do it without a problem. But no-one in the class (to my knowledge) spotted that what they had was a quadratic equation, and so could solve it. The mathematical concept didn't carry across to the physical problem.
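To make the point concrete, here's a hypothetical example of the sort of thing I mean (the numbers are made up): a mechanics problem that quietly turns into a quadratic.

```python
import math

# A made-up mechanics problem that hides a quadratic: a ball thrown
# upwards at v0 = 5 m/s from a ledge h = 10 m high. When does it land?
# Position: y(t) = h + v0*t - 0.5*g*t**2 = 0, which is just
# a*t**2 + b*t + c = 0 in disguise.
g, v0, h = 9.8, 5.0, 10.0
a, b, c = -0.5 * g, v0, h

disc = b**2 - 4 * a * c
t1 = (-b + math.sqrt(disc)) / (2 * a)
t2 = (-b - math.sqrt(disc)) / (2 * a)
t_land = max(t1, t2)  # the physically meaningful (positive) root
```

The physics sets up the equation; after that it's nothing more than the quadratic formula from maths class.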

If I had more time, I'd like to analyze this a bit more.  (I'm sure others have gone down this road with their research, so there's probably a lot written about it.) First of all, I'd really ascertain whether this is a problem, or just something I have a 'feeling' about.  It's incredible how many scientists lecture based on 'feelings' of what their students might or might not be experiencing, without actually finding out for sure. A scientist would never do their research based on 'feeling' - they'd always look to do a properly controlled experiment - but it doesn't happen in teaching as often as it should.  (I got that from Eric Mazur).

Then perhaps I might look at what's going on. For example, is it a result of our semesterized teaching, where everything is arranged in neat little packages called 'papers' - you do the paper, sit the exam, and then promptly forget everything in it? I've often thought that NCEA encourages compartmentalized learning; a concept from one assessment standard absolutely can't be used in a problem in another assessment standard.  The trouble is that real physics isn't like this - the spider's web of concepts is tightly linked together. That's one thing I like about the approach of the  NZQA physics scholarship exam - a student needs to pull together concepts from across physics to make sense of a problem - just like real physics.

However, applying Mazur's rule here, I shouldn't speculate about whether semesterized teaching / NCEA encourages this - I should actually go and find it out. So maybe I should just stop writing on this blog entry, fire up the literature search engines and see what I can find - and, if the answer's 'not much', then actually think about doing some proper research on it.


Here's an example of how easy it is to see things that don't exist. It's from a real piece of research (mine). By way of background, I've been doing some work with computer models of neurons in the cortex (NB this isn't artificial neural networks, which were all the rage in the 1980s/90s). Broadly speaking, I've been looking at the cross-over between two different models - (a) a very detailed model of neurons including explicit modelling of ionic currents across membranes that lead to action potentials (a neuron 'firing'), and (b) a more statistical approach in which we only consider firing rates, rather than modelling every firing event.

What I've been trying to do is to take output from the simpler firing rate model (b) and reconstruct a pattern of firing events that is consistent with it - i.e. reverse engineer the detail. Then see how this compares with model (a), when the detail was in there to start with.

What I was hoping to see from the computer output was sequences of firing events that were strongly correlated in space. I'll show you a picture of what I got (which is just one small piece of a much bigger simulation). Time is along the x-axis (horizontal), position (i.e. different neurons) along the y-axis (vertical), and colour gives the neuron voltage. A neuron firing is therefore shown as a small, bright, vertical bar.


Now, at first glance this looks promising. There's a chunk of neurons in the middle that appear to be firing pretty much together. And at the bottom, neurons appear to be 'pairing', so a neuron fires with its neighbour. So, having plotted this, I was quite happy.

But then came the let-down. I actually analyzed the full output (much longer than the 0.5 seconds shown here, and more neurons too) in a systematic, mathematical manner, independent of my gut feeling.  The result? Any correlations I'm seeing are purely imagined (or, if they exist, are very small indeed). I am 'seeing things' in the picture. (NB Yes, there certainly are correlations in time - each neuron fires at a fairly constant rate, but there aren't any correlations between neighbouring neurons, which is what I wanted to see.)
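For what it's worth, here's a toy sketch (much simplified from the actual analysis) of the kind of systematic check I mean - comparing a measured correlation between spike trains with what the eye thinks it sees:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy version of the systematic check: two 'neurons' firing at a steady
# rate but independently of each other -- binned spike counts over time.
n_bins = 5000
neuron_a = rng.poisson(0.2, n_bins)   # independent Poisson firing
neuron_b = rng.poisson(0.2, n_bins)
neuron_c = neuron_a.copy()            # a perfectly correlated partner

def corr(x, y):
    """Pearson correlation coefficient between two spike-count series."""
    return np.corrcoef(x, y)[0, 1]

# Independent neurons: the correlation is tiny, whatever patterns
# the eye insists it can see in a raw plot.
r_independent = corr(neuron_a, neuron_b)

# Truly paired neurons: the correlation is unmistakably large.
r_paired = corr(neuron_a, neuron_c)
```

The numbers don't care what the picture looks like, which is exactly the point.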

It's often the case that we can see what we want to see in data, when it's just presented to us in raw form. We're great at 'seeing' patterns that just don't exist. Analyze it systematically, and often these disappear. Frustrating in this case - I was hoping that they'd be real, but that's not the result I get.



Hopefully, now that 20 March has passed, people feel a bit more confident in assessing for themselves the abilities of Ken Ring (and others) to predict major earthquakes. No doubt the offenders will try to wriggle out of it by claiming that a M5.1 aftershock in Christchurch was 'close enough', but I'm afraid Mr Ring that a 5.1 aftershock is hardly one "for the history books".

Last night (during the quarter/half time breaks in the netball - which by the way was much, much better than last week - well done Ms Williams, Langman etc.) I was reading through an article I've been preparing with a couple of colleagues on Cafe Scientifique. To be fair, though, it's the colleagues, Alison and Kathrin, who have done most of the work here. One thing that we mention is that when scientists don't listen to people's concerns, the trust people have in scientists goes down. We draw from the UK's House of Lords Select Committee on Science and Technology, which reported that "suppressing uncertainty is bound to diminish public trust and respect". What's needed is genuine, open discussion between the science community and the public, in a way that the public feels they own. Otherwise it's fuel for the conspiracy theorists. So well done GNS for getting out there and trying to do just that in the run-up to last weekend - presenting what we do know about earthquakes and what we simply don't know.

It's a tricky tightrope to walk because science (and there's no better example than understanding earthquakes) is full of uncertainty. As much as we (I mean scientists) would like to, we can't predict exactly when and where earthquakes will strike. We can talk in statistical terms, and talk about the probability of one of size X or greater hitting near somewhere in the next Y years, but pinning a date to it is cloud-cuckoo land, which, I'm afraid, is where Mr Ring lives. There is no conspiracy to hide this information from anyone.

An example of where scientists fell off this tightrope was the run-up to the 'switch on' of the Large Hadron Collider a couple of years ago. There were many, many people who were genuinely scared of what was going on in Geneva, and we physicists didn't really do our job properly in reassuring our communities. (I did, eventually, but only AFTER the thing had switched on, and then promptly fallen over.)

Cafe scientifique, amongst other events, is a way of getting that genuine dialogue going between experts and non-experts. People can feel that they can interact with scientists equitably - the latter don't have some intellectual monopoly on knowledge that they wield for their own advantage. My experience has been that some topics create considerable discussion, some less so, but people do feel that they can air their views and genuine concerns about some aspect of science. That's got to be a good thing, hasn't it?


I don't have any clearer picture than any of you on what is happening in the Japanese nuclear power stations at the moment. I've only got the official statements to go on, the same as anyone else.  But one thing I can talk about a bit is the nature of radioactive risk and to try to untangle the plethora of different measures of radiation and their units.

Radiation can cause damage to cells in your body, and this in turn can cause nasty effects such as cancer or genetic issues with any children you have in the future. The biology of this I'm not so sure about, but in terms of the physics, each radioactive particle carries energy and that energy can cause damage when it hits you. It's rather like a cricket ball carrying kinetic energy that will damage any car that gets in the way. Now, you can't escape radiation. We are continually bombarded by cosmic rays (and, if you live for example in a granite region, radiation from rocks as well) - and so lots of these miniature cricket balls are hitting you every second. Each one of these has the potential to cause damage to a cell that leads to something nasty.

This means that the more that hit you, the greater the risk. The risk is cumulative. A dose equivalent (I'll mention what that means in a moment) of 1 millisievert a day for 10 days gives the same level of risk as 10 millisieverts in a single day.  It also means there is no such thing as a 'safe' level of radiation. Any radiation at all gives a chance of an adverse effect on you.  It's like playing a backwards lottery.  Imagine a zillion tickets, of which a zillion minus one are winning tickets. Just one is a losing ticket. Each time you are exposed to a radioactive particle you enter the lottery, staking your life.  Chances are that you'll win, and live - but play the lottery enough times, and your chances of at some point buying that losing ticket become greater overall.

So what about those units?  An easy measure of radioactivity is 'activity'. This is a measure of the number of radioactive decays (when an atom changes its status, emitting an alpha, beta or gamma particle - or sometimes more than one) every second.  It's measurable with a Geiger counter.  The Système International unit is the becquerel (after Henri Becquerel, the discoverer of radioactivity), but often the old curie unit is used.

Activity doesn't correlate well with risk to you. A further measure, that is relatively easy to do in a lab, is exposure (measured in roentgens). This is a measure of the amount of ionization of air that a radioactive material causes - ionization being when a molecule has electrons stripped from it by the particle.

Then we bring in Dose.   The dose is a measure of the energy absorbed (e.g. by you). This is measured in grays. It's a physics-based measure, not a biological-based measure, so still isn't the best measure of your risk factor. A better measure is the dose equivalent. The unit is the sievert, and it is a reasonable measure of the chance of causing cell damage that leads to cancer etc.

Remember, it's a cumulative risk, so you want to add up the total dose equivalent you have had over your life. To give you some idea, a typical individual gets about 2 millisieverts per year. I was hearing on the radio today that they are talking about 200 mSv per hour at places within the Fukushima complex - that's basically a lifetime's worth in an hour. You would not want to be there for long. But, even so, if you were not exposed for too long, it would still not be a terribly large risk.
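The arithmetic behind 'a lifetime's worth in an hour' is straightforward (the 80-year lifetime here is my assumption, purely for illustration):

```python
# Rough check of the comparison in the text: background dose of about
# 2 mSv per year versus a reported 200 mSv per hour.
background_per_year = 2.0          # mSv, typical background dose
lifetime_years = 80                # an assumed lifetime, for illustration
lifetime_dose = background_per_year * lifetime_years   # total background mSv

reported_rate = 200.0              # mSv per hour, from the radio report
hours_for_lifetime_dose = lifetime_dose / reported_rate
# Under an hour at 200 mSv/h matches ~80 years of background exposure.
```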

Radiation safety decisions really come down to 'risk'.  I'll close by saying that we are pretty bad at estimating risk generally speaking, and when presented with a figure we might scream in fright, but be prepared to take much greater risks every day, for example when driving to work or crossing the road.  (E.g. do a quick estimate of your chance of dying in a car crash this year - given that about 400 New Zealanders die a year out of a population of 4 million - the calculation is an easy one.  Would you be prepared to enter a backwards lottery with those chances?  Most people do.)
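That quick estimate, and the 'backwards lottery' version of it, comes out like this (the 50-year horizon is just for illustration):

```python
# The 'backwards lottery' estimate from the text: roughly 400 road
# deaths a year in a population of 4 million.
p_lose_per_year = 400 / 4_000_000          # 1 in 10,000 per year

def cumulative_risk(p, n):
    """Chance of at least one loss over n repeated plays of the lottery:
    1 minus the chance of winning every single time."""
    return 1 - (1 - p) ** n

# Risk accumulates: over an (assumed) 50 years of driving...
risk_50_years = cumulative_risk(p_lose_per_year, 50)
# ...just under half a percent. Small each year, but it adds up.
```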


On Sunday I went to the Magic - Mystics netball game at Mystery Creek in Hamilton. It was a game that both teams did their very best to lose; in the end, the Magic were more dedicated to this cause than the Mystics, and cunningly let them sneak a winning goal in the last few seconds.

Halfway through the game the rather lacklustre and bored crowd decided to get some chanting going.  "Magic, Magic..."  (or was it "Tragic, Tragic"? - hard to tell). If you've attended sports events at large arenas you'll probably have experienced how difficult it is to get a decent chant going. I reckon that part of the reason is that in a big arena it takes a significant fraction of a second for sound to travel from one end to the other. So for people at the back of the two stands at either end of the court, maybe 60 metres from each other, it's going to be about a fifth of a second before they hear each other.  That's 60 metres divided by about 330 metres a second, the speed of sound in air.

At a rugby or football game, with over a hundred metres separating the crowds at either end of the stadium, it's more like a third of a second delay. Try singing in time when you keep hearing a delayed copy of what you've just sung.  It's why a crowd singing a national anthem, if they're unguided by a heavily miked singer, is just going to get into a real mess. Rather like the Magic mid-courters.
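The delay calculation is just distance divided by speed; as a quick sketch (the 110 m figure for a rugby stadium is my assumption):

```python
# Sound delay across a stadium: time = distance / speed.
SPEED_OF_SOUND = 330.0   # m/s in air, roughly

def delay(distance_m):
    """Seconds for sound to cross the given distance."""
    return distance_m / SPEED_OF_SOUND

netball_delay = delay(60)    # opposite stands at the netball, ~1/5 s
rugby_delay = delay(110)     # assumed stadium length, ~1/3 s
```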


Last week I came face to face with another physics misconception among some of my students. I do think that, as I get more experienced at teaching, I'm getting better at picking up on where students are having problems. But it's a very difficult thing to do.

Last week it was circular motion. The students were looking at a fairly simple problem (well, I thought so - they are third-year mechanical engineering students, after all). In a nutshell, it's this: imagine you have a T-shirt in a spin dryer (top loading). If you know the coefficient of friction between the T-shirt and the wall of the drum, and the drum's radius, you should be able to work out the minimum rotation speed for the T-shirt to stick to the wall of the drum, rather than slide down to the bottom. So, what is it?

As with all force problems, a good way to start is by drawing a free-body diagram, where we put in arrows for every force acting on the body. The total of these arrows (added as vectors), or the net force, is just the mass times acceleration, by Newton's 2nd law. So, I put a sketch on the board and asked the students to tell me what forces are acting on the T-shirt.  There is of course gravity on the T-shirt. That's balanced by the frictional force of the wall on the shirt (otherwise it would slide down). Then there's the inward force exerted by the reaction of the wall on the T-shirt. Now, that's all the forces there are. I could have just carried on with the analysis, but instead I asked the class whether that was it: "Have we got all the forces?"  I expected a 'yes' answer, but a student gave me the response "There's the centripetal force as well."

There's clearly a misconception about circular motion buried in that comment.   Yes, there is centripetal force, but it's the reaction of the wall on the T-shirt that provides this force. We've got it already. The centripetal force isn't an 'extra' force - that would be counting it twice. This is a tricky concept to present clearly. The best I can manage now is this: the fact that there exists a force towards the centre of rotation CAUSES the object to move in the circle; it's not that an object moving in a circle causes it to have a centripetal force.

Centripetal force is not a 'magic' force, that mysteriously appears when you turn corners. There is always a very physical force that provides the centripetal force, whether it's the tension in a string (for whirling a mass-on-a-string) around, friction from the road on your car tyres (for driving a car around a circle), the lift force from an aeroplane's wings (for an aeroplane banking) or gravitational attraction (for the earth orbiting the sun).
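For completeness, here's the spin-dryer answer worked through in a short script (the drum radius and friction coefficient are made-up numbers, purely for illustration):

```python
import math

# The spin-dryer problem: the wall's reaction N supplies the centripetal
# force, N = m * w**2 * r, and friction mu * N must hold the shirt up
# against gravity, mu * N >= m * g. The mass cancels, leaving
#     w_min = sqrt(g / (mu * r)).
def min_angular_speed(mu, r, g=9.8):
    """Minimum angular speed (rad/s) for the shirt to stick to the drum."""
    return math.sqrt(g / (mu * r))

# Illustrative numbers (assumed, not from the original problem):
w = min_angular_speed(mu=0.5, r=0.25)   # rad/s
rpm = w * 60 / (2 * math.pi)            # same thing in revolutions per minute
```

Note that the T-shirt's mass drops out entirely - the answer depends only on the friction coefficient, the radius, and g.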

I'm not sure I did a great job during class of explaining this, but at least I know now to look out for this misunderstanding with students in first year. And it also demonstrates that if you don't ask questions of the students, you really don't have a clue what they are thinking.


I vaguely remember the following conversation from back when I was a PhD student.

Student A: What's a Bessel function?

Student B (waving his arms about): It's a wavy thing - goes like this, doesn't it?

Me: Sounds vaguely familiar - I think we did it in third-year.

Student A: But what IS it?

Me: Something to do with waves on a membrane, isn't it?

Student A: Does it have a formula?

Me and student B: Don't know...
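With the benefit of hindsight (and a reference book), the answer we were all groping for: yes, it does have a formula. J0, for instance, is given by a power series, and it is indeed the 'wavy thing' that describes waves on a circular membrane. A quick sketch:

```python
import math

# The Bessel function J_0 has a perfectly good formula -- a power series:
#     J_0(x) = sum over m of (-1)^m * (x/2)^(2m) / (m!)^2
def bessel_j0(x, terms=30):
    """J_0(x) from its power series (plenty of terms for small-ish x)."""
    return sum((-1)**m * (x / 2)**(2 * m) / math.factorial(m)**2
               for m in range(terms))

# J_0 starts at 1 and oscillates with slowly decaying amplitude.
j0_at_zero = bessel_j0(0.0)          # exactly 1
first_root = 2.404825557695773       # the well-known first zero of J_0
```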




This afternoon I've been discussing with a PhD student a question that is really at the heart of the scientific method. He's measuring something in the lab that is a bit variable. Every time he takes a reading of Y it is a little bit different. Essentially, he wants to know: if he does X to his experiment, do his results Y change?  (In other words, does changing X produce changes in Y?  And, deeper, HOW does it cause Y to change?)

His question to me was 'how many measurements should I  take?' It is a very interesting one. Whatever we measure, we'll find there is always variation in it. Some things exhibit really large variation, in others the variation can be so small it is below your ability to measure it. There is a huge raft of statistical techniques that can be called on to help you describe these variations and what they mean. But, essentially, it boils down to this: The more measurements you take, the better you will know the average (mean) result. More specifically, if you want to halve your uncertainty (i.e. be twice as certain about something), you need to take four times the number of measurements. This means that the number of measurements you require to show something can often be very large indeed.
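You can see that square-root rule in a quick simulation (a made-up measurement with Gaussian scatter, not my student's actual data):

```python
import numpy as np

rng = np.random.default_rng(0)

# The rule quoted above: the uncertainty in a mean falls as 1/sqrt(N),
# so halving your uncertainty takes four times as many measurements.
def scatter_of_mean(n_measurements, n_trials=2000, sigma=1.0):
    """Standard deviation of the sample mean, estimated by simply
    repeating the whole 'experiment' many times."""
    samples = rng.normal(0.0, sigma, size=(n_trials, n_measurements))
    return samples.mean(axis=1).std()

s_25 = scatter_of_mean(25)     # about sigma/5
s_100 = scatter_of_mean(100)   # four times the data, half the scatter
```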

Just how many measurements you should take is governed by the size of the effect you are trying to demonstrate.  If the change that X is likely to have on Y is very pronounced, you will only need a few measurements before you can see it. But if the change is really small, a lot of measurements are called for.

In some cases we might have a good idea of what the size of the effect is likely to be (e.g. someone has done a theoretical calculation on it) and so we can design an experiment accordingly. But often we have only a vague idea. So we'll do a few trial measurements, and see what results are turning up. And then we'll do a few more. Hopefully we'll then get some clue as to the magnitude of the effect we have (or don't have, as the case may be), and this will inform a decision on how many measurements to take when we (by which I mean my student) do this 'for real'.

After all that, however, we still won't be able to say anything for certain. Statistics in the physics context is about quantifying your uncertainty. Uncertainty will never go away - even if your effect is very pronounced, you could always say that there is a smidgen of a chance that this is just due to statistical variation, and that, if you did the experiments again, you'd get a different result.  But we can quantify that smidgen, and, if it's small enough (again, how small is small enough? 5% and 1% are just arbitrary; nothing is absolutely proven beyond any doubt whatsoever) we can say with some confidence that X does have an effect on Y.



Over the summer a lot of the engineering students here have been out on work placements. At the end of the placement, they write a report on it, which is then assessed. These reports get shared around the staff in order to do this, so I've got a few to do, which are beginning to land either in my email inbox or in hardcopy form on my desk.

Now, the students get the chance, if they wish, to submit a draft report for comment first, before they submit their final one. After looking at these for a few years now, I can pretty well write half my comments before looking at the report, since they are almost always lacking in one area - namely 'Reflection and Review'.

The point of this section is to get students to think about their own development as they went through the placement. Reports are generally full of technical stuff, such as how to programme a laser-differential centrifugal widget-driver, which is all fine, but doesn't really tell me whether the placement was a success or not. I mean this: our engineering degree is there to get students to a position where they can enter an engineering workplace. The summer work placements are intended to be steps towards this goal.  So, in the student's opinion, did they come out of the placement closer to this end goal than before they went in?  What caused this learning? What hindered it? What has the student identified that still needs to be done?  This is all reflection, and it indicates that a student is able to think critically about the skills they have in relation to the skills they need.

Unfortunately, 'reflecting' isn't something that the average engineering or physics student does naturally. (Nor does writing in whole sentences without resorting to text language, but that's another issue.)  I know this myself - for my Postgraduate Certificate in Tertiary Teaching, I need to keep a 'reflective teaching journal'. That might be easy-peasy for an arty-type, but physicists and engineers tend to shy away from using the 'I' word - I mean, what is personal about physics? But I do know that reflecting helps me to recognize how I am doing (in my case, in teaching) relative to where I want to be, and how I might get to where I want to be.

So, the students usually find the 'Reflection' section of their placement reports difficult to write, and, likewise, I suspect my reflective diary is going to be difficult for me to write. The latter might help me understand the problems my students have with the former, though, and that's got to be a good thing. (Is that a reflection?)
