
I overheard the following conversation at the best coffee outlet on campus yesterday:

"Well, winter's nearly over. We're past the shortest day so it's getting warmer. And we've had eleven frosts so far this year, and the record for Hamilton is twelve, so there can only be one more to come." - Anonymous

Where do I begin?

Well, first let's point out that the shortest day does not equal the coldest day. Not by a long shot. In fact, I believe that statistically speaking the coldest week of the year for New Zealand is the last week of July (i.e. now). Why the difference? While it's true that it's the sun that provides the heat input to the earth, and that input is at a minimum on the shortest day, there's a lot of thermal inertia in the earth, and particularly in the sea. And there is a lot of sea surrounding New Zealand. Temperatures are slow to change. While the sun remains low in the sky, the sea temperatures are slowly cooling, and that is going to influence the temperature in Hamilton. Conversely, the sea temperature in December is still pretty nippy; it's late summer before the sea temperature hits its maximum. Seasonal temperature variation is more about the cumulative heat put in over an extended period of time than about the heat input from the sun on any particular day.
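
If you like to see that lag drop out of something concrete, here's a toy calculation: treat the temperature as relaxing towards the solar input with some thermal time constant. Just a sketch - the time constant below is invented to give a lag of around five weeks, and isn't fitted to any real data.

```python
import numpy as np

# Toy model: temperature T relaxes towards the solar forcing S with a
# thermal time constant tau (the 'thermal inertia'). Illustrative only.
tau = 40.0                                # time constant, days (invented)
days = np.arange(0.0, 730.0)              # two years, daily steps
S = -np.cos(2 * np.pi * days / 365.25)    # solar input, minimum at day 0 (the solstice)

T = np.zeros_like(days)
for i in range(1, len(days)):             # Euler integration of dT/dt = (S - T)/tau
    T[i] = T[i - 1] + (S[i - 1] - T[i - 1]) / tau

coldest = 365 + np.argmin(T[365:])        # search the second year, past the start-up transient
print(f"coldest day lags the solstice by about {coldest - 365.25:.0f} days")
```

The minimum of T lands several weeks after the minimum of S, for exactly the reason above: the temperature responds to accumulated heat input, not to the input on the day.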

And then the second point. I've always found it amusing that Hamiltonians count frosts, and think that minus 4 Celsius (as it has been a couple of mornings recently) is cold. It is only cold because in New Zealand it is (near enough) compulsory to live in poorly heated, uninsulated, single-glazed detached houses. Europeans find this concept laughable, and, I think, Canadians probably sink their heads in their hands in despair. Anyway, let's leave that aside. If there have only ever been twelve frosts in a single winter in Hamilton (I doubt this, but don't have the statistics at hand), and we've had eleven so far, does that mean there is at most one more to come?

Um, no. Probability doesn't work like that. Our weather systems don't have a memory (not in that sense, anyway), and they certainly aren't intelligent enough to record the number of frosts a particular place has in a year and act accordingly in the weeks ahead. I'd say we are in for a few more frosts yet. That's simply based on the MetService statistics. Go to http://www.metservice.com/towns-cities/hamilton and look at the historical data tab. You'll see that the mean minimum temperature for August is -2 C, and for September it's 0 C, suggesting there can easily be some more negative temperatures coming for 2014. Enjoy.

I remember several years ago playing a board game with a few friends. We'd had a long run of throws of the dice without seeing a 'six'. One of my friends asked me what the probability was that the next throw would be a 'six'. "One sixth," I answered - "same as for any other throw." This sparked an intense discussion on whether that was right or not. It is. The dice does not have a memory. It doesn't remember which side it has landed on in the past. Each throw is equally likely to show 1, 2, 3, 4, 5, or 6. The probability of a 'six' is one sixth. What was perhaps most interesting is that a friend of mine who was doing a maths degree at the time refused to back me up.
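
A quick simulation makes the point. Here's a sketch in Python that estimates the probability of a 'six' on throws that immediately follow a run of five non-sixes:

```python
import random

# Do dice remember? Estimate P(six) on throws that immediately follow
# a run of five consecutive non-sixes.
random.seed(1)
followers = sixes = 0
run = 0                        # current run of non-sixes
for _ in range(1_000_000):
    throw = random.randint(1, 6)
    if run >= 5:               # this throw follows a six-less streak
        followers += 1
        sixes += (throw == 6)
    run = 0 if throw == 6 else run + 1

print(f"P(six | after 5 non-sixes) ~ {sixes / followers:.4f}   (1/6 = {1/6:.4f})")
```

The answer comes out at one sixth, streak or no streak.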

So is winter nearly over? While it's true that today feels rather spring-like, and the days are now noticeably longer than they were a month ago, winter still has plenty of teeth left.

| | Comments (0)

I've been following the weather with interest this week. First of all, I was very glad when the wind and rain disappeared late last weekend. We were at a wedding in Whakatane on Saturday afternoon/evening, and boy, did it rain. With the wedding in a garden in something a bit more substantial than a marquee (think marquee with hard walls and floor), a portaloo outside, and a four-minute walk up a long, dark, mud-and-puddle-infested driveway in a storm separating you from the car, it was certainly a memorable wedding reception.

Now, with beautiful clear skies, light winds, and frosty mornings, you'd be forgiven for thinking there's a big fat high-pressure system sitting over us. But there isn't. For the last few days, we (by which I mean those of us at this end of the country) have been in or around a saddle-point, in terms of pressure. There have been lows to the north and south, highs to the east and west, with the saddle somewhere in the middle, over us. I note today that things have rotated a bit, so the lows now lie east and west, with a high to the north and another approximately south. Here's a picture I've stolen from the metservice website this morning (www.metservice.com, 18 July 2014, 11am); it's the forecast for noon today. Note how NZ is sandwiched between the two lows, but isn't really covered by either.

[Image: MetService pressure map - forecast for noon, 18 July 2014, showing NZ between two lows]
You can see the strength of the wind on this plot by the feathers on the arrow symbols. The more feathers, the stronger the winds. (The arrows point in the direction of the wind.) Note how the wind blows clockwise around the low pressures (and anticlockwise, less strongly, around the highs). Have a look just around Cape Reinga (for non-NZ dwellers, and I know there's a few of you out there, that's the northern-most tip of the North Island). There's a point where the wind (anthropomorphising) doesn't know what to do. It's in what's mathematically termed a saddle point: a point where locally there is no gradient in pressure, but which is neither a high nor a low. Winds are light. In two dimensions (which is what we have on the earth's surface) with a single variable such as pressure, there are just those three possibilities where the gradient of pressure is zero - the maximum of a high, the minimum of a low, or a saddle.
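
For those who like to see the mathematics, the three cases are told apart by the second derivatives of the pressure (the Hessian matrix) at the stationary point: both curvatures negative gives a high, both positive a low, and mixed signs a saddle. Here's a little sketch on the toy field p = x^2 - y^2, which has a saddle at the origin:

```python
import numpy as np

# Classify a stationary point of p(x, y) from the eigenvalues of its
# Hessian, estimated by finite differences. Toy field: p = x^2 - y^2.
def hessian(p, x, y, h=1e-5):
    pxx = (p(x + h, y) - 2 * p(x, y) + p(x - h, y)) / h**2
    pyy = (p(x, y + h) - 2 * p(x, y) + p(x, y - h)) / h**2
    pxy = (p(x + h, y + h) - p(x + h, y - h)
           - p(x - h, y + h) + p(x - h, y - h)) / (4 * h**2)
    return np.array([[pxx, pxy], [pxy, pyy]])

p = lambda x, y: x**2 - y**2                 # stationary point at the origin
eigs = np.linalg.eigvalsh(hessian(p, 0.0, 0.0))
kind = ("low" if all(eigs > 0) else
        "high" if all(eigs < 0) else "saddle")
print(eigs, "->", kind)                      # mixed signs: a saddle
```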
 
In terms of terrain, a mountain pass is a saddle point. It's where one goes from valley to valley (low to low), between two mountains. On top of the pass, you are at a point where the gradient is zero, but it's neither a peak nor a trough. It's called a 'saddle' because the shape looks rather like a saddle for a horse - low on both flanks, but high at the front and back. A marble placed on top of a saddle should, if it were placed exactly at the equilibrium point with no vibrations, stay there.
 
Saddle-points crop up in all kinds of dynamical systems (e.g. brain dynamics) where there's more than one variable involved. Such a point is termed an unstable equilibrium - any deviation from the equilibrium point will cause the system to move away from it. However, the change may not be terribly rapid. When there are lots of variables involved, such local equilibria may have very complicated dynamics associated with them - the range of possibilities gets very large and the dynamics can become very rich indeed.
| | Comments (0)

A common technique in physics is 'modelling'. This is about constructing a description of a physical phenomenon in terms of physical principles. Often these can be encapsulated in mathematical equations. For example, it's common to model the suspension system of a car as a small mass connected by springs to a much larger mass. Here, the large mass represents the car body and the small mass the wheel. One spring represents the 'spring' in the suspension system (which on a car is usually a coil spring, though it can take other forms on trucks or motorbikes); the other represents the tyre (which has springiness itself) and connects the wheel to the road. We can then add in some damping effect (the shock absorber). What we've done is to reduce the actual system to a stylized system that maintains the essential characteristics of the original but is simpler and more suitable for making mathematical calculations.

That's great. We can now work on the much simpler stylized system, and make predictions on how it behaves. Transferring those predictions back to the real situation helps us to design suspension systems for real cars.
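
To give a flavour of what this looks like in practice, here's a minimal sketch of the two-mass suspension model described above, driven over a step in the road. All the parameter values are invented for illustration - they're not taken from any real car.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Quarter-car sketch: body mass M on a suspension spring k_s and damper c,
# wheel mass m on a tyre spring k_t that follows the road surface.
M, m = 300.0, 40.0             # kg: body (per wheel) and wheel (invented values)
k_s, k_t, c = 2.0e4, 2.0e5, 1.5e3

def rhs(t, y):
    xb, vb, xw, vw = y                        # body/wheel positions and velocities
    road = 0.05 if t > 0.5 else 0.0           # a 5 cm step in the road at t = 0.5 s
    f_s = k_s * (xw - xb) + c * (vw - vb)     # suspension force on the body
    f_t = k_t * (road - xw)                   # tyre force on the wheel
    return [vb, f_s / M, vw, (f_t - f_s) / m]

sol = solve_ivp(rhs, (0.0, 3.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
print(f"peak body displacement: {1e3 * max(sol.y[0]):.1f} mm")
```

From here one can play with the stiffness and damping and watch how the body motion changes - which is exactly the kind of question a suspension designer asks.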

There are, however, some drawbacks. We have to be sure that our stylized system really does capture the essential features of the actual system; otherwise we can get predictions completely wrong. On the other hand, we don't want to make our model too complicated, otherwise there is no advantage in using the model. "A model should be as simple as possible, but not simpler," as Einstein might have said.

There's another trap for modellers, which is going outside the realm of applicability of the model. What do I mean by that? Well, some simplifications work really well, but only in certain regimes. For example, Newton's laws are a great simplification of relativistic mechanics. They are much easier to work with. However, if you use them when things are moving close to the speed of light, your answers will be incorrect. They may not even be close to what actually happens. We say that Newton's laws apply when velocities are much less than the velocity of light. When that's the case (e.g. traffic going down a road) they work just fine - you'd be silly to use relativity to improve car safety - but when that's not the case (e.g. the physics of black holes) you'll get things very wrong indeed.

A trap for a modeller is to forget where the realm of applicability actually is. In the rush to make approximations and simplifications, just where the boundary lies between reasonable and not reasonable can be forgotten. I've been reminded of this this week, while working with some models of the electrical behaviour of the brain. Rather than go into the detail of what that problem was, here's a (rather simpler!) example I came across some time ago.

I was puzzling over some predictions made in a scientific paper, using a model. It didn't quite seem right to me, though I struggled for a while to put my finger on exactly what I didn't like about what the authors had done. Then I saw it. There were some complicated equations in the model, and to simplify them, they'd made a common mathematical approximation: 1/(1+x) is approximately equal to 1-x. That's a pretty reasonable approximation so long as x is a small number (rather less than 1). We can see how large x is allowed to get by looking at the plot here. The continuous blue line shows y = 1/(1+x); the dotted line shows 1-x. (The inset shows the same at very small x.)

[Image: approximation_test.jpg - y = 1/(1+x) (solid blue) and y = 1-x (dotted), with an inset at very small x]

We can see that for very small x (smaller than 0.1 or so) there's not much difference, but when x gets above 0.5 there's a considerable difference between the two. When x gets larger still (above 1), the approximation 1-x goes negative, whereas the unapproximated 1/(1+x) stays positive. It's then a completely invalid approximation.
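
You don't even need the plot - a few lines of code show how quickly the approximation falls apart:

```python
# Compare 1/(1+x) with its small-x approximation 1-x at a few values.
for x in (0.01, 0.1, 0.5, 1.0, 2.0):
    exact, approx = 1 / (1 + x), 1 - x
    err = abs(approx - exact) / exact * 100
    print(f"x = {x:4}:  1/(1+x) = {exact:6.3f},  1-x = {approx:6.3f},  error = {err:5.1f}%")
```

By x = 1 the 'approximation' is out by 100%, and by x = 2 it doesn't even have the right sign.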

However, in this paper, the authors had made calculations and predictions using a large x. What they got was just, simply, wrong, because they were using the model outside the region where it was valid. 

This kind of thing can be really quite subtle, particularly when the system being modelled is complicated (e.g. the brain) and we are desperate to make simplifications and approximations. There's a lot we can do that might actually go beyond what is reasonable, and a good modeller has to look out for this. 

| | Comments (0)

Being a father of an active, outdoor-loving two-year-old, I am well acquainted with the bath. Almost every night: fill with suitable volume of warm water, check water temperature, place two-year-old in it, retreat to safe distance. He's not the only thing that ends up wet as he carries out various vigorous experiments with fluid flow. 

One that he's just caught on to is how the water spirals down the plug-hole. Often the bath is full of little plastic fish (from a magnetic fishing game), and if one of these gets near the plug hole it gets a life of its own. It typically ends up nose-down over the hole, spinning at a great rate as it gets driven round by the exiting water. 

The physics of rotating water is a little tricky. There are two key concepts thrown in together: first, the idea of circular motion - which involves a rotating piece of water having a force on it towards the centre (the centripetal force); second, viscosity - by which a piece of water can have a shear force on it due to a velocity gradient in the water. Although viscosity has quite a technical definition, colloquially one might think of it as 'gloopiness'. [Treacle is more viscous than water. Glass is often cited as the ultimate in viscosity - though the old story that the windows of very old buildings are thicker at the bottom because the glass has flowed over the centuries is a myth; the uneven thickness comes from how the glass was made.] In rotational motion there's a subtle interplay between these two forces which can result in the characteristic water-down-plughole motion.

In terms of mathematics, we can construct some equations to describe what is going on and solve them. We find, for a sample of rotating fluid, that two steady solutions are possible. 

The first solution is what you'd get if all the fluid rotated at the same angular rate - the velocity of the fluid is proportional to the radius. This is what you'd get if you put a cup of water on a turntable and rotated it - all the water rotates as if it were a solid.

The second solution has the velocity inversely proportional to the radius - so the closer the fluid is to the centre, the faster it is moving. This is like the plughole situation, where a long way from the plug hole the fluid circulates slowly, but close in it rotates very quickly. Coupled with this is a steep pressure gradient - low pressure on the inside (because the water is disappearing down the hole) but higher pressure further away from the hole. Obviously this solution can't apply arbitrarily close to the rotation axis, because then the velocity would be infinite. This is where vortices often occur. (Wikipedia has a nice entry and animations on this, showing the two forms of flow I've described above.)
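
If you'd like to play with the numbers, here's a sketch of the two flows stitched together - what's known as a Rankine vortex, with solid-body rotation in a small core and the 1/r flow outside it. The core size and circulation below are made up for illustration:

```python
import math

# Rankine vortex: v proportional to r inside a core of radius a,
# v proportional to 1/r outside it. Parameters are illustrative.
def rankine_speed(r, a=0.02, gamma=0.01):
    """Azimuthal speed (m/s) at radius r (m); gamma = circulation (m^2/s)."""
    omega = gamma / (2 * math.pi * a**2)      # angular rate of the solid-body core
    return omega * r if r < a else gamma / (2 * math.pi * r)

for r in (0.005, 0.02, 0.08, 0.32):
    print(f"r = {r:5.3f} m: v = {rankine_speed(r):.4f} m/s")
```

The speed rises linearly out to the core radius and then falls away as 1/r - just the two steady solutions above, joined at the core.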

A Couette viscometer exploits these effects as a way of measuring the viscosity of a fluid. Two coaxial cylinders are used, with the fluid between them. The outer cylinder is rotated while the inner one is held stationary, and the torque required to hold it there lets us calculate the viscosity of the liquid.
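
As a sketch of how that calculation goes, here's the standard laminar result for the torque between coaxial cylinders, rearranged to give the viscosity. The numbers below are invented, though chosen to come out water-like:

```python
import math

# Couette viscometer: inner cylinder (radius R1) held fixed, outer (R2)
# spun at Omega. Laminar theory gives the torque
#   T = 4*pi*mu*Omega*L * R1^2 * R2^2 / (R2^2 - R1^2),
# which we invert for the viscosity mu. Example numbers are made up.
R1, R2, L = 0.030, 0.035, 0.10     # m
Omega = 10.0                       # rad/s
T = 4.3e-5                         # measured torque, N m (invented)

mu = T * (R2**2 - R1**2) / (4 * math.pi * Omega * L * R1**2 * R2**2)
print(f"viscosity ~ {mu:.2e} Pa s")   # ~1e-3 Pa s, i.e. water-like
```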

 

| | Comments (0)

Long story cut short: I'm currently writing a paper on a piece of work I presented at the (fairly) recent conference on Threshold Concepts, that was hosted here at Waikato. In order to do this, I'm needing to learn a new language, namely that of qualitative research. 

Qualitative Research is not something that comes naturally to a physicist. The most obvious problem is that it requires a paradigm shift - from the positivist approach that underlies most of science, and particularly physics, to the (ahem) social constructivism that is common-place in the qualitative literature. I need help. 

So, yesterday, under cover of darkness*, with heavy coat and thick scarf wrapped around my face, I sneaked into the library, passed by the familiar 'Q' section and headed across the corridor** to raid the 'H' section***. I knew my target - I'd already searched the on-line library catalogue in the safety of my office - so it was a quick mission. Get in there, grab the books, get them issued (at the self-service kiosk, certainly not the front desk, lest I be recognized for what I was - a scientist carrying subversive literature) and get out of there before any of my colleagues, or worse still, any of my students, spotted me. Catching a positivist (or p******ist, as they're referred to in the social science literature) raiding the 'H' section would be sure to inflame cross-disciplinary tensions, so discretion was absolutely paramount. Mission safely accomplished, I returned to the safety of EF-link block.

However, my mission has hardly begun. The next step is to decode the language. The words might be English, but they're written in some kind of secret code known only to practitioners of social constructivism. Fortunately, my wife Karen has come across such writing before and is familiar with teasing out some of the hidden meanings in the language. With some tuition, and hard work, I've begun to make a little sense of this writing. It is a hard and frightening task - there is so much that is just utterly alien to me. I feel that there must be some underpinning concept behind it that I just haven't grasped, that makes it so troublesome - if I get it - if I discover what that secret code really is - it will all fall into place and at last I'll be able to see what qualitative research really is about.

But one thing I do know is that I'm dealing with a threshold concept here. And there's the deep irony. In order for me to support my paper on threshold concepts, I need to get into the qualitative research literature, and this in itself is a threshold concept to me. The introductory chapter of one of the books, which explicitly states that it is for people with no familiarity with qualitative research, is still intractable to me. Why? The words are English, the sentences aren't long, but somehow it appears to draw from hidden knowledge that I am not familiar with. I just don't get it - the sense isn't there - the concepts are so troubling. I'm sure, then, that the very notion of 'qualitative research' is, to someone trained as a scientist, a threshold concept in itself. And I'm not yet over that threshold. Not completely, anyway.

So, to close, I'm still grappling with this stuff. But perhaps the greatest impact is that I now have some idea of what my students are going through when they complain "I just don't get it" about something I feel is blatantly obvious.

 

*OK, so it was actually about 9.30 in the morning, but that doesn't sound as dramatic. 

**One might actually say 'crawled under the barbed wire laid along the border': this metaphor has echoes of Glen S. Aikenhead (1996), Science Education: Border Crossing into the Subculture of Science, Studies in Science Education, 27:1, 1-52.

***The Q and H refer to US Library of Congress coding of books - Q is (broadly) where physics lives, H is where the qualitative research methods books hang out, looking menacing.

| | Comments (0)

I've just been at a great lecture by Peter Leijen as part of our schools-focused Osborne Physics and Engineering Day. He's an ex-student of ours who did electronic engineering here at Waikato and graduated just a couple of years ago. He now works in the automotive electronics industry - an incredibly fast-growing industry. So many of a car's systems are now driven by electronics, not mechanics. Being a car 'mechanic' now means being a car 'electronic engineer' just as much as it does being a mechanic.

One interesting piece of electronics is the ignition timing system. The mechanism that produces the spark in the cylinders from a 12 volt battery is really old, standard technology - one uses a step-up transformer and kills the current to the primary coil by opening a switch. The sudden drop in current creates a sudden reduction in magnetic flux in the transformer, and the collapsing flux lines cutting the secondary coil create a huge voltage, enough for the spark plug to spark. That really is easy to do. The tricky thing is getting it to spark at the right time.
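
To get a feel for the numbers, here's a back-of-the-envelope estimate. The inductance, current, collapse time and turns ratio are typical-textbook guesses, not values for any particular car:

```python
# Order-of-magnitude estimate of the spark voltage from the collapsing flux.
L_primary = 8e-3       # primary inductance, H (guess)
I = 4.0                # primary current before the switch opens, A (guess)
dt = 1e-4              # time for the current to collapse, s (guess)
turns_ratio = 100      # secondary turns / primary turns (guess)

v_primary = L_primary * I / dt           # back-EMF across the primary coil
v_secondary = turns_ratio * v_primary    # stepped up by the transformer
print(f"primary spike ~ {v_primary:.0f} V, spark voltage ~ {v_secondary / 1e3:.0f} kV")
```

A few hundred volts across the primary, stepped up to tens of kilovolts at the plug - plenty to jump the gap.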

One needs the fuel/air mix in the cylinder to be ignited at the optimum time, so that the resulting explosion drives the piston downwards. Ignite too early, while the compression is still going on, and you'll simply stop the piston rather than increasing the speed of its motion. Ignite too late, and you won't get the full benefit of the explosion. It's rather like pushing a child on a swing - to get the amplitude of the motion to build, you need to push at the optimum time, just after they've started swinging away from you.

All this is complicated by the fact that the explosion isn't instantaneous. It takes a small amount of time to happen. That means, at very high revolution rates, one has to be careful about exactly when the ignition happens. It has to be earlier than at lower rates, particularly if the throttle setting is low, because the explosion takes up a significant proportion of the period of the engine's rotation. This is called 'ignition advance'.

On newer cars, this is done electronically. A computer simply 'looks up' the correct angle of advance for the rpm and the throttle setting of the car, and applies the outcome. The result: a well running, efficient engine, using all the power available to it. Or so you might think.
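
A minimal sketch of what that 'look-up' might involve: a small two-dimensional advance map indexed by rpm and throttle, with interpolation between the stored points. The map values here are invented purely for illustration - a real engine map is far denser and tuned on a dynamometer:

```python
import numpy as np

# A toy ignition-advance map: degrees before top dead centre, indexed by
# engine speed (rows) and throttle fraction (columns). Values invented.
rpm_axis = np.array([1000.0, 3000.0, 5000.0, 7000.0])
throttle_axis = np.array([0.2, 0.5, 1.0])
advance_map = np.array([
    [12.0, 10.0,  8.0],
    [22.0, 18.0, 15.0],
    [30.0, 26.0, 22.0],
    [36.0, 32.0, 28.0],
])

def advance(rpm, throttle):
    """Bilinear look-up: interpolate along throttle, then along rpm."""
    per_rpm = [np.interp(throttle, throttle_axis, row) for row in advance_map]
    return float(np.interp(rpm, rpm_axis, per_rpm))

print(f"advance at 4200 rpm, 60% throttle: {advance(4200.0, 0.6):.1f} degrees")
```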

But here's the revelation from Peter: car manufacturers can deliberately stuff up the timing. Why would they want to do that? Well, there's a market for selling different versions of the otherwise-same car. The high-end models have performance and features (and a price tag) that the low-end models don't have. There's status in buying the high-end model (if you're the kind of person who cares about that - and the fact that these things sell says yes, there are such people), but, alternatively, if that extra couple of horsepower doesn't bother you, you can get the lower-spec model for a lower price. Now, the manufacturers have worked out that making lots of different versions of the otherwise-same car is inefficient. It's far easier to have a production line that fires out identical cars. So how do you achieve the low-end to high-end specification spectrum? Easy. You build everything high-end, and then, to produce the low-end cars, deliberately disable or tinker with the features so they don't work or don't perform so well. That is, you make the car worse.

Ignition timing is one example, says Peter. There are in fact companies who will take your low-end car and un-stuff-up your electronics for you - in effect reprogramme it to do what it should be doing. In other words, turn your low-end car back into a high-end one (which is how it started out life) without you having to pay the premium that the manufacturer would place on it for not stuffing it up in the first place. 

Who said free market economics resulted in the best outcome for consumers?

 

| | Comments (0)

A couple of weeks ago saw the University of Waikato Open Day. (Actually, two days.) There were some fantastic displays set up across the whole university, with some exciting lectures and activities. With a dual audience of would-be students and members of the public, our displays were meant to be eye-catching and fun, and I thought they were. There were some good whizz-bang displays, and some really great whizz-whizz bang-bang displays and activities. I think nearly everyone had a good time there. (Naturally the Van de Graaff generator was its usual hit...)

However, when the feedback on the day(s) began to roll in, it became apparent that some displays were not as fun as I had thought. At least, the audience didn't think so. Too boring. I wonder whether this is because people have come to expect that whizz-bang-interactive-touch-it-yourself excitement is the normal, basic thing in science displays now. Whizz-bang just doesn't cut it - you need to be double-whizz double-bang or you don't get a look-in.

Is this down to 'interactivity inflation'? When I was very young, the most exciting place in the world to visit was the Natural History Museum in London. Back in the late seventies it wasn't really interactive - that came in slowly - there was lots of stuff in cabinets just to look at. But what it did have was a fossil skeleton of a diplodocus in the main entrance hall (yes, some entrance hall). It didn't move, it didn't grunt (or whatever diplodoci did), it just stood there looking, well, wow! - dinosaur-ish. What more could you want? Further into the museum one found the whale skeleton suspended from the ceiling - again, wow! - with the pickled contents of its stomach in a large glass jar. At that time, a large jar of krill in formaldehyde was indeed exciting stuff. I loved it.

Nowadays a jar of long-dead krill is simply silly. Yuck. Have we come to expect too much from our scientific displays? Or is it an example of the current generation's requirement for things that can be instantly double-clicked, shared, downloaded, posted or liked? Whatever the cause, it certainly takes a lot of time and thought to put together something double-whizz double-bang.

And, finally, is WWBB what recruits future students anyway? Sure, it gets their attention. But does it maintain it over several years? I suspect not.   

P.S. I've just looked at the Natural History Museum Website.  (Obviously a thing that didn't exist when I was 8.) The first thing on it: "Download the UK Tree Identification App." What happened to taking the time to carefully learn different leaf and fruit shapes, bark texture, canopy shape etc? Who cares about that - just let the app do the work...

| | Comments (0)

A few days ago I was updating one of the lectures I do for my Experimental Physics course. I was putting in a bit more about safety and managing hazards, which are things that are associated with doing experiments for real. When I was a student, we didn't learn anything about this - my first introduction to the ideas behind hazard management came only when I joined an employer. Before then, I simply hadn't thought about the issues involved. 

One of the things that gets bandied around in Health and Safety discussions is Heinrich's pyramid, dating back to 1931. The basic idea is that accidents don't just happen out of the blue. For every fatal accident there are several non-fatal but major accidents; for every major accident there are several minor accidents; and for every minor accident there's a whole heap of incidents (things that could have been accidents if circumstances had been different). The implication is then that by addressing the minor things that crop up frequently, we make the workplace a safer place. I've seen various versions of the pyramid on-line, but here's one:

[Image: Heinrich safety pyramid diagram]

Diagram taken from http://www.enerpipeinc.com/HowWeDoIt/Pages/safety.aspx 

That all seems to make some sense. However, searching around for a good picture to include in my lecture notes, I came across this article by Fred Manuele:

http://www.asse.org/professionalsafety/pastissues/056/10/052_061_f2manuele_1011z.pdf

It calls into question the whole basis of this pyramid and its implications for health and safety in the workplace. Specifically, Manuele reports that:

1. No-one can trace Heinrich's original data.

2. If the data does exist, the extent to which figures from the 1920s and '30s apply in today's workplace is dubious.

3. The pyramid idea is counter-productive to ensuring a safe working environment, since it over-emphasizes the importance of minor non-compliance issues (not wearing one's lab coat) and focuses attention away from the major, systemic failings in senior management - and even in government regulators and legislators - whence the really big events tend to come. [Think Pike River, where MBIE's own investigation points the finger at itself (in the form of its predecessor, the Ministry of Economic Development) for carrying out its regulatory function in a 'light-handed and perfunctory way'.]

There are some lovely statistics included on what a focus on reducing small incidents actually does. Here are some US figures on the reduction in accident-related insurance claims between 1997 and 2003 (“State of the Line,” National Council on Compensation Insurance, 2005, Boca Raton, FL; as cited by Manuele):

Less than $2,000: down 37%

$2,000 - $10,000: down 23%

$10,000 - $50,000: down 11%

Above $50,000: down 7%

See the issue here? Focusing attention on small incidents and small accidents does wonders for reducing small incidents and small accidents, but does very little to reduce the big accidents. That's because, as Manuele describes, they have different underlying causes.

The paper's worth a read, and it cuts against much of what I've been taught over several years about health and safety. One notable feature is that it actually draws on hard data, rather than myth - which is how Manuele labels Heinrich's work.

And the consequence for my experimental physics students? I shan't be including that pyramid in their lectures.

| | Comments (2)

At afternoon tea yesterday we were discussing a problem regarding racing slot-cars (electric toy racing cars). A very practical problem indeed! Basically, what we want to know is this: how do we optimize the size of the electric motor and the gear ratio (it only has one gear) to achieve the best time over a given distance from a stationary start?

There are lots of issues that come in here. First, let's think about the motors. A more powerful motor gives us more torque (and more force for a given gear ratio), but comes at the cost of more mass. That means more inertia and more friction. But given that the motor is not the total weight of the car, it is logical to think that stuffing in the most powerful motor we can will do the trick.

Electric motors have an interesting torque against rotation-rate characteristic. They provide maximum torque at zero rotation rate (zero rpm), completely unlike petrol engines, which need a few thousand rpm to give their best torque; electric motors therefore give the best acceleration from a standing start. As the rotation rate increases, the torque decreases, roughly linearly, until a point is reached where the motor can provide no more torque. For a given gear ratio, the car therefore has a maximum speed - it's impossible to accelerate the car (on a flat surface) beyond this point.

Now, the gear ratio. A low gear leads to a high torque at the wheels, and therefore a high force on the car and high acceleration. That sounds great, but remember that a low gear ratio means that the engine rotates faster for a given speed of the car. Since the engine has a maximum rotation rate (where torque goes to zero) that means in a low gear the car has good acceleration from a stationary start, but a lower top-speed. Will that win the race? That depends on how long the race is. It's clear (pretty much) that, to win the race over a straight, flat track, one needs the most powerful engine and a low gear (best acceleration, for a short race) or a high gear (best maximum velocity, for a long race). The length of the race matters for choosing the best gear. Think about racing a bicycle. If the race is a short distance (e.g. a BMX track), you want a good acceleration - if it's a long race (a pursuit race at a velodrome), you want to get up to a high speed and hence a huge gear.  

One can throw some equations together, make some assumptions, and analyze this mathematically. It turns out to be quite interesting and not entirely straightforward. We get a second-order differential equation in time with a solution that's quite a complicated function of the gear-ratio. If we maximize to find the 'best' gear, it turns out (from my simple analysis, anyway) that the best gear ratio grows as the square-root of the time of the race. For tiny race times, you want a tiny gear (=massive acceleration), for long race times a high gear.   If one quadruples the time of the race, the optimum gear doubles. Quite interesting, and I'd say not at all obvious. 
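
For anyone who wants to play along, here's a sketch of that analysis, under the same assumptions (a linear torque curve, flat track, no air resistance). The car parameters are invented - it's the scaling that matters:

```python
import numpy as np

# Slot-car gearing sketch. Motor torque falls linearly from tau0 at zero
# speed to zero at w0. Gear G = wheel revolutions per motor revolution.
tau0, w0 = 5e-3, 3000.0    # stall torque (N m), no-load speed (rad/s) - invented
m, r = 0.10, 0.01          # car mass (kg), wheel radius (m) - invented

def distance(G, t):
    """Distance covered in a race of time t: m dv/dt = (tau0/(G r)) (1 - v/v_max)
    integrates to a saturating exponential."""
    v_max = G * r * w0                      # top speed for this gear
    T = m * (G * r)**2 * w0 / tau0          # acceleration time constant
    return v_max * (t - T * (1.0 - np.exp(-t / T)))

gears = np.linspace(0.01, 2.0, 2000)
for t in (1.0, 4.0, 16.0):
    best = gears[np.argmax(distance(gears, t))]
    print(f"race time {t:4.0f} s: optimum gear ~ {best:.3f}")
```

Sure enough, each quadrupling of the race time roughly doubles the optimum gear - the square-root scaling again.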

The next step is to relax some of the assumptions (like zero air resistance, and a flat surface) and see how that changes things. 

What it means in practice is that when you're designing your car to beat the opposition, you need to think about the time-scales for the track you're racing on. Different tracks will have different optimum gears.

| | Comments (0)

No, nothing to do with carrots and vitamin A I'm afraid. 

With dark evenings and mornings with us now :(, Benjamin's become interested in the dark. It's dark after he's finished tea, and he likes to be taken outside to see the dark, the moon, and stars before his bath. "See dark" has become a predictable request after he's finished stuffing himself full of dinner. It's usually accompanied by a hopeful "Moon?" (pronounced "Moo"), to which Daddy has had to reply that the moon is now a morning moon, and it will be way past bedtime before it rises.

I haven't yet explained that his request is an oxymoron. How can one see the dark? Given that dark is the lack of light, what we are really doing is not seeing. But there's plenty of precedent for treating the lack of something as an entity in itself, so 'seeing the dark' is quite a reasonable way of looking at it.

One can talk about cold, for example. "Feel how cold it is this morning." It is heat, a form of energy, that is the physical entity here. Cold is really the lack of heat, but we're happy to talk about it as if it were a thing in itself. Another example: in 1928 Paul Dirac interpreted the lack of electrons in the negative-energy states that arise from his description of relativistic quantum mechanics as anti-electrons, or positrons. In fact, this was a prediction of the existence of anti-matter - the discovery of the positron didn't come until later (1932).

In semiconductor physics, we have 'holes'. These are the lack of electrons in a valence band - a 'band' being a broad region of energy states where electrons can exist. If we take an electron out of the band we leave a 'hole'. This enables nearby electrons to move into the hole, leaving another hole behind them. In this way holes can move through a material. It's rather like one of those slidy puzzles - move the pieces one space at a time to create the picture. Holes are a little bit tricky to teach at first. Taking an electron out of a material leaves it positively charged, so we say a hole has a positive charge. That's a bit confusing - some students will start off thinking that holes are protons. Holes will accelerate if an electric field is applied (because they have positive charge), and so we can attribute a mass to the hole. That's another conceptual jump. How can the lack of something have a mass? And because they are the lack of an electron, holes tend to move to the highest available energy states, not the lowest. Once the idea is grasped, though, we can start talking about holes as real things, and that is pretty well what solid-state physics textbooks do. It works to treat them as positively charged particles. It's easy then to forget that we are talking about things that are really the lack of something, rather than things in themselves.
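
The slidy-puzzle picture is easy to animate in a few lines. Each electron hop in one direction shifts the vacancy - the hole - one site the other way:

```python
# One-dimensional slidy-puzzle picture of a hole: electrons 'e' hop into
# the vacancy '_', and the vacancy drifts the opposite way. Illustrative only.
band = list("eeee_eee")
hole = band.index("_")
for step in range(3):
    print("".join(band), f"  hole at site {hole}")
    band[hole], band[hole - 1] = band[hole - 1], band[hole]   # electron hops right
    hole -= 1                                                  # hole moves left
print("".join(band), f"  hole at site {hole}")
```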

A more recent example is being developed in relation to the mechanics of materials as part of a Marsden-funded project by my colleague Ilanko. He's working with negative masses and stiffnesses as a way of facilitating the analysis of the vibrational states and resonances of a structure (e.g. a building). By treating the lack of something as a real thing, we often find our physics comes just a bit easier.

So seeing the dark is not such a silly request, after all.

| | Comments (2)