
A few weeks ago I commented on a task our second year software-engineering students are doing: building robots to follow a white line with the Lego 'Mindstorm' kit. It's been entertaining watching their various attempts and their design selections. Most groups have pretty-well optimized their robot now, and there's some final tweaking going on, ahead of our Engineering Design Show at the end of semester.

There's a fine line between an excellently-performing robot and a disaster. To be fair to the students, they haven't come across control theory yet, so for them to identify what's going on when the robot veers off sideways and accelerates into the wall is often not easy. There's been one common problem that the groups have been tackling, namely instability in their tracking. 

Most groups are using two sensors to look for the white line. Crudely speaking, if the robot veers off to the right, the left-hand sensor will cross the line, the robot will 'realize' this, and turn to the left. Conversely, if it goes too far to the left, the right-hand sensor crosses the line and the robot will respond by turning right. But getting the control system stable isn't as straightforward as that. 
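
Here's a minimal sketch, in Python, of that two-sensor logic. The sensor and motor functions are hypothetical placeholders rather than the real Mindstorms API, and the speed numbers are made up purely for illustration.

```python
# Minimal sketch of the two-sensor scheme described above.
# read_left(), read_right() and set_motors() are hypothetical placeholders,
# not the real Mindstorms API; the numbers are illustrative.

THRESHOLD = 50   # reflected-light reading that counts as "sensor over the white line"

def control_step(read_left, read_right, set_motors):
    """One pass of the control loop: steer away from whichever sensor sees the line."""
    left_on_line = read_left() > THRESHOLD
    right_on_line = read_right() > THRESHOLD

    if left_on_line and not right_on_line:
        set_motors(left=20, right=60)    # veered right: turn back to the left
    elif right_on_line and not left_on_line:
        set_motors(left=60, right=20)    # veered left: turn back to the right
    else:
        set_motors(left=60, right=60)    # neither sensor on the line: carry straight on

# The difference between the two motor speeds sets how hard the robot turns -
# which is exactly the tuning problem described next.
```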

If the robot doesn't turn hard enough, what happens is that it fails to get round corners. It goes off the line completely, so that neither sensor now sees the line, and then it's doomed. However, if it turns too hard, it can over-adjust, so that it now veers off the line on the other side. What can happen now is an oscillation: the robot drifts off to the left - so it then corrects and moves hard to the right - but it goes too far right and now it needs to turn hard back left, and so on. 

We can end up with the robot either wiggling its way along a perfectly straight line or, worse, progressively over-correcting until the corrections become so large it loses the line completely. The former is an example of a 'limit cycle', or attractor - a systems-theory term for a stable but possibly rather complicated oscillation. 

More amusing this morning was the poor robot that ended up going in ever-decreasing circles. Just what was happening with it I'm not sure, but it veered off the line, did a large-diameter circle, and continued in a circular orbit, gradually speeding up and reducing its diameter. It ended up spinning on the spot with the left wheel on maximum forward speed and the right wheel on maximum reverse speed. This is another (and more entertaining) example of a limit cycle - once it had got into the spinning state, that was where it was going to stay. 

Preventing these things is a bit easier when you know some control theory (see here for example) and can apply the negative feedback in a sensible manner - but we teach them that later. For now, it's about the design process (and the entertainment value). 

 

| | Comments (0)

In the last few weeks holes have been popping up all over Cambridge. They are being dug by 'ditch-witches' - pieces of machinery designed for making small-diameter tunnels for cabling - as part of the installation of fibre-optic cables for the much-vaunted ultra-fast broadband. A ditch-witch is about the ultimate in machinery-obsessed-toddler heaven. We've been avidly following their movement around the Cambridge streets, or at least the youngest member of our family has. They went down our street about seven weeks ago, and since then have been tracking southwards. I'm tempted to slip a GPS locator beacon on them and then write a ditch-witch locator app to help all those stressed parents cope with constant demands to find them.

So, Sunday saw Benjamin and me get on the bicycle and go on a ditch-witch hunt. (We're going on a ditch-witch hunt... We're going to catch a BIG one... we're not scared...). And, much to my relief, we found them, resting quietly on Thompson Street. 

But this entry isn't about ditch-witches or diggers or cranes or other large pieces of machinery, it's about what we saw on the way. On the front lawn of one house, there was a teenage boy practising 'barrel walking'. He was standing on a barrel, and rolling it forward and backwards around the garden. He was obviously reasonably skilled at this since he had some pretty good control of where he was going. 

An interesting observation is that to get the barrel to roll forwards, the rider has to walk backwards. That must feel a little disconcerting. To get the barrel (and you) moving forward at, say, 2 km/h, you have to walk backwards at 2 km/h. That's because the bottom of the barrel, in contact with the ground, is instantaneously stationary, so if the centre is travelling at 2 km/h forwards, the top of the barrel must be going 4 km/h forwards relative to the ground. In order for you to go at the pace of the centre, 2 km/h forwards (and stay on top), you therefore need to go 2 km/h backwards with respect to the top of the barrel. In terms of mathematics: your speed relative to the ground = 2v - v = v, where the 2v is the speed of the top of the barrel, the '-v' is the speed of you relative to the barrel, and the 'v' is the speed of the centre. Got it?
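
If you prefer it spelled out numerically, here's the same bookkeeping in a few lines of Python (the 2 km/h figure is just the example from above):

```python
# The rolling-barrel arithmetic above, with v the forward speed of the barrel's centre.
v = 2.0                      # km/h, forward speed of the centre (and of you)

v_bottom = 0.0               # contact point is instantaneously stationary
v_top = 2 * v                # top of the barrel moves at twice the centre's speed
v_you_rel_barrel = -v        # you walk backwards at v relative to the top of the barrel
v_you_rel_ground = v_top + v_you_rel_barrel   # 2v - v = v

print(v_you_rel_ground)      # 2.0 km/h: you keep pace with the centre and stay on top
```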

That kind of relationship crops up quite a bit in physics. I've talked about a case before - when a satellite in orbit loses energy because it hits air molecules, it speeds up. Uh! How does that work? It's because, as it loses energy, it drops to a lower orbit, one with less potential energy. But lower orbits have higher orbital speeds. It turns out that the loss in potential energy is exactly double the gain in kinetic energy. That is, if the satellite loses 100 J of energy, that's made up of a gain of 100 J of kinetic energy and a loss of 200 J of potential energy. It's another '2 - 1 = 1' sum. 
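
For a circular orbit the sums come straight out of the standard energy formulas. Here's a quick numerical check - the masses and radii below are illustrative assumptions, not any particular satellite:

```python
# For a circular orbit of radius r: KE = GMm/(2r), PE = -GMm/r, total E = -GMm/(2r).
# Dropping to a lower orbit loses twice as much PE as total energy, so KE goes up.

G = 6.674e-11      # gravitational constant, N m^2 kg^-2
M = 5.97e24        # mass of the Earth, kg
m = 100.0          # satellite mass, kg (illustrative)

def energies(r):
    ke = G * M * m / (2 * r)
    pe = -G * M * m / r
    return ke, pe, ke + pe

ke1, pe1, e1 = energies(7.00e6)   # higher orbit, radius in metres
ke2, pe2, e2 = energies(6.95e6)   # slightly lower orbit

print(e2 - e1)     # total energy lost (negative)
print(pe2 - pe1)   # PE change: twice the total energy lost
print(ke2 - ke1)   # KE change: a gain equal in size to the total energy lost
```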

There's also the neat but confusing case of a parallel-plate capacitor at constant voltage. Let's say a capacitor consists of two large flat plates, a distance of 1 cm apart. The plates are maintained at a constant voltage of, say, 12 V by a power supply (e.g. a battery). This means that the plates have opposite charge, and so attract each other. (To hold them at a constant distance, you have to fix them in place somehow.) Now, consider pulling those plates apart. Since they attract each other, it is clear that you have to do work on the system to do this. One might therefore expect that the energy stored in the capacitor has gone up. But no. Do the calculation, and you'll see that the energy goes down. (Energy stored = capacitance times voltage squared, divided by two. The voltage stays the same, and since the capacitance is inversely proportional to plate separation, increasing the separation will decrease the stored energy.) Uh! Where does the energy go then? In this case, you have to consider the power supply. What happens is that you are putting energy back into the battery, by causing a current to flow backwards through it. It turns out in this case that the work you need to do is exactly half the energy that goes to the power supply. The other half comes from the loss in energy stored in the capacitor. So, if we put in 10 J of energy, we lose 10 J of stored energy in the capacitor, and we gain 20 J of energy in the power supply. So, again, we have the '2 - 1 = 1' sum. 
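
Here's that energy bookkeeping written out, again with illustrative numbers (an idealised parallel-plate capacitor, plate separation doubled from 1 cm to 2 cm at a fixed 12 V):

```python
# Constant-voltage capacitor: pull the plates from 1 cm to 2 cm apart.
eps0 = 8.854e-12        # permittivity of free space, F/m
A = 1.0                 # plate area, m^2 (illustrative)
V = 12.0                # volts, held fixed by the power supply

def capacitance(d):
    return eps0 * A / d                 # ideal parallel-plate formula

d1, d2 = 0.01, 0.02
U1 = 0.5 * capacitance(d1) * V**2       # stored energy before
U2 = 0.5 * capacitance(d2) * V**2       # stored energy after (smaller!)

charge_returned = (capacitance(d1) - capacitance(d2)) * V   # charge pushed back through the supply
energy_to_supply = charge_returned * V
work_by_you = energy_to_supply - (U1 - U2)                  # energy conservation

print(U1 - U2, energy_to_supply, work_by_you)
# The work you do equals the loss of stored energy, and each is half of what
# ends up in the power supply - the same '2 - 1 = 1' sum.
```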

So, for every kilojoule of energy burned by the ditch-witch, does the toddler also burn a kilojoule, meaning 2 kilojoules of heat end up in the air? (As neat as it would be if that were true, I don't think the actual figures will come close.) 


| | Comments (0)

The 'Science' news hitting the media at the weekend was Giulio Ruffini and Alvaro Pascual-Leone's demonstration of 'telepathy'. There's been a lot of media coverage on this - for example the neat little interview of Ruffini on the BBC's 'Today' programme.

Their article on this can be read here. It's not a long one, and, for a piece of science, I reckon it's pretty clearly described. 

But, I'm afraid, you can forget The Chrysalids - the messages sent from India to France are of a rather more humble nature. But the science behind it is great. 

Essentially, the work has linked together two existing technologies, via the internet. The first is long-established - namely monitoring of the electroencephalogram (EEG). If you place electrodes on the surface of your scalp, you can detect electrical signals that originate from the electrical behaviour of the neurons in the cortex of your brain. The signals aren't large, just a few microvolts, but they are fairly easy to pick up. I get students doing it in the lab. Different kinds of brain activity lead to different signal patterns. A 'thinking' brain has lots of small amplitude, fast activity, whereas someone in deep sleep shows an EEG pattern that has a large, approximately 1 Hz cycle to it. The two patterns are very different. EEG is routinely used for monitoring sleep patterns and as a tool for an anaesthetist to monitor the depth of anaesthesia in their patient - one wants to make sure the patient is well anaesthetised, but on the other hand one doesn't want to head into Michael Jackson territory. The EEG can help. 

So the EEG is a way of 'reading' the state of the brain. To go from an EEG recording to working out what the subject is thinking about is a long, long way off, if indeed it's possible at all, but one can certainly say something about the brain state. 

If EEG is about reading the state of a brain, then the other technology, transcranial magnetic stimulation (TMS), does the reverse. This is rather newer, and our understanding of it is much poorer (I'm involved with a TMS research project at the moment).  In TMS, pulses of magnetic field are applied to the brain. The effect depends on what area of the brain the pulse is applied to, and in what orientation. At a simple level you can make an arm 'twitch' by applying the pulse to the correct part of the motor cortex. I've seen this done at the University of Otago (on a brave summer student of mine). In Ruffini's work, they used the magnetic pulse to 'create' the perception of a flash of light by stimulating the visual cortex. The subject 'sees' the light, even though there's no such flash on the retina, since the sensory circuits in the cortex that usually interpret what's going on on the retina are activated remotely. 

So what did the experiment do? The person in India sending the message imagined a particular activity (hand or foot movement), and their EEG changed depending on whether they imagined the hand or the foot. A computer interpreted the EEG, decided which it was, and communicated with the computer in France. The French hardware system then zapped the human receiver in such a way as to either trigger the flash or not. The receiver then reported orally whether they'd seen a flash. In this way the 'message' (a string of 1s (hands) and 0s (feet)) was sent from one person to the other without using the senses of the receiver. 
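
To make the information flow concrete, here's a schematic sketch of the chain in Python. This is emphatically not the authors' software - the EEG classifier and the TMS step are reduced to trivial stand-ins - it just shows how a bit gets from one head to the other.

```python
# Schematic of the EEG-to-TMS chain described above (toy stand-ins, not the real system).

def sender_imagines(bit):
    # The sender encodes each bit as imagined movement: 1 -> hand, 0 -> foot.
    return "hand" if bit == 1 else "foot"

def eeg_classifier(imagined_movement):
    # In the experiment a computer classifies the sender's EEG pattern;
    # here we simply assume it classifies correctly.
    return 1 if imagined_movement == "hand" else 0

def tms_receiver(bit):
    # The French end delivers a TMS pulse that does (1) or does not (0)
    # produce a perceived flash; the receiver reports what they 'saw'.
    return 1 if bit == 1 else 0

message = [1, 0, 1, 1, 0]
received = [tms_receiver(eeg_classifier(sender_imagines(b))) for b in message]
print(received)   # [1, 0, 1, 1, 0]
```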

In that sense this is telepathic. The receiving person had no communication with the transmitting person in a visual, oral, or any other way. True, one might ask, why didn't they just phone/Skype/email each other to send the message, and of course you wouldn't want to communicate with your family members overseas with an EEG/TMS system. But that's not the point. The point is that it is a great demonstration of science. 

Will it lead to small telepathic headsets? Rather than fuss with phones and email, we could just have a conversation with anyone in the world just by thinking about it. (You'd want to be sure you'd switched it off afterwards!)  Don't get excited - we're not in Chrysalids territory yet. That's a long, long, long, long way off. But it is good science. 


| | Comments (0)

I've been reviewing some papers for an engineering education conference this week. I can't go into detail about any of them, because I've been given them in confidence to look at, but they have provoked some thoughts about the nature of university. Students come to university, to study physics or engineering or whatever, and what is it they expect to get? Is it the same as what we (the academic staff) expect to give them? It's an important question: remember students give three years or more of their lives to getting a degree, and pay multi-thousands (and sometimes multi-tens-of-thousands) of dollars for the privilege. That's a huge commitment - so it's no surprise that students can feel very aggrieved when they don't receive what they are expecting to receive.  

An example of the tensions: As an academic, I try to get my students to a position where they can think through science for themselves, to be able to do their own learning, to reach their own conclusions. So I encourage students to go and find their own resources for supporting their papers, rather than spoon-feeding them what I think they should be doing. But (I know this from appraisals) spoon-feeding is what some students want and expect. "If I wanted to find my own resources, I could do that. If I wanted to do the learning myself, I could sign up to a MOOC for free. But I'm paying you to educate me, to give me resources that a MOOC or textbook can't. So why are you not identifying and giving me the resources I need, telling me the answers to the questions, making my life easy for me?"

We might say in return "we are giving you what you need - you just don't recognize it yet." But it's a fair question, and I reckon maybe around 25% of students would sympathize to some extent with it. When I was an undergraduate, that kind of question never occurred to me. However, I was very much aware that I was receiving an excellent education courtesy of the taxpayer - and, in fact, since it was in the days that the student grant still existed (just) in the UK, the taxpayer was actually paying me to be educated at their expense. I expected to have to work hard at university. I expected to have to work through things myself and to take charge of my own learning. But now the whole landscape of education funding has changed. While the government still funds a considerable part of education, so do the students themselves. So what do they think they are going to get for their money, and does it align with what we think we should be giving them? How much effort (how many hours a week) do they expect to put into their education? We might say 'full-time', but a student may reply "Yeah, right - I have to do paid work full-time to get the money so I can live - where am I going to get the time to do 40 hours a week on my study too!" What do they expect their lecturers to do for them? 

Do we really know the answers to these questions? I would suggest that if we don't know what our students expect, we can't help them settle into university, we can't give them value for their money, and the drop-out rates at first year will remain stubbornly high.  


| | Comments (0)

Question: What does a rare-earth magnet do to a USB stick? 

Answer: (Having accidentally carried out this experiment by having both in my desk drawer at the same time): It sticks to it.

I was rather relieved to discover that the data on my USB stick seems to be perfectly intact, despite the casing of the stick being stuck to the magnet for a while. 

Actually, it shouldn't have surprised me. The mechanism for flash memory (see here, for example - it's a bit techy but you might get the idea) needs very substantial electric fields (of order ten volts over a tiny length scale) to change anything. Once electrons have been put on the floating-gate, they won't shift for a long time - unless the right electric fields are applied to bring them back again. While waving a magnet at it will certainly create electric fields (that's Faraday's law), it isn't going to come close to changing anything in a flash memory.
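
A rough back-of-the-envelope comparison makes the point. The numbers below are order-of-magnitude assumptions on my part (a typical programming voltage, an oxide thickness of tens of nanometres, a strong magnet waved by hand), not specifications of any particular device:

```python
# Order-of-magnitude comparison: field needed to reprogram a floating gate
# versus the field induced by waving a magnet past the stick. All numbers
# are rough assumptions, not device specifications.

# Programming field: ~10 V across an oxide a few tens of nanometres thick.
V_program = 10.0            # volts
oxide_thickness = 10e-9     # metres
E_needed = V_program / oxide_thickness            # ~1e9 V/m

# Induced field (Faraday's law, E ~ (dB/dt) * L): ~1 T change in ~0.1 s,
# over the ~1 cm scale of the memory chip.
dB_dt = 1.0 / 0.1           # tesla per second
L = 1e-2                    # metres
E_induced = dB_dt * L                             # ~0.1 V/m

print(f"needed ~{E_needed:.0e} V/m, induced ~{E_induced:.0e} V/m")
# The waved magnet falls short by roughly ten orders of magnitude.
```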


| | Comments (0)

I was at a conference on 'brain research' last week in sunny Queenstown. There were some great talks, but I was particularly taken by one on the final morning by Jason Kerr, a kiwi now at the Max Planck Institute in Germany. He was talking about the vision of rats, and described a very interesting series of experiments on working out just what a rat is seeing and how it uses this information. The experiments involved tracking what each eye was doing in different situations. It turns out that rats use their eyes in a very different way to humans. Rather than both eyes moving together (in phase), as a human's usually do (left eye moves left, right eye also moves left), a rat's eyes can move either in phase or out of phase (left eye moves left, right eye moves right), depending on the situation. While a decent amount of their field of view is usually covered by both eyes, they don't appear to use stereo vision as such. Each eye might be looking at something different. But, maybe most interestingly, they have excellent coverage of the area above and behind their heads. 

Jason showed a neat video of what happens when a rat in a large enclosure is presented with moving images in its field of view. Mess around with what a rat sees on the horizontal plane, and the rat doesn't bat an eyelid. But change anything above it, and the rat dives for cover. Why? It's one of those conclusions that are obvious in hindsight: What is the largest predator of rats? Birds of prey. Looking out for a threat from above seems to be the major role of the rat's eyes. 

Anyway, so what has this got to do with physics? Jason came up with a wonderful quote in his talk, relating to the power of a multi-disciplinary team for tackling a tricky problem. If I'd had a Twitter account, I'd have tweeted it there and then. Instead, I hurriedly scribbled it down. Speaking to an audience primarily of biologists, he said:

Always hang out with physicists - they've already solved everything for you - they just don't know what to do with it yet.

So there! 


| | Comments (0)

In the last few weeks I've been working with some second-year software engineering students on a design project. Their particular task is to build (with Lego - but the high-tech variety) a robot that can follow a white line on a bench. It's fun to watch them play with different ideas and concepts - there's the occasional disaster when the robot roars off at high speed in an unexpected direction and falls off the bench top. 

To produce something that approximately works isn't that difficult. We can use a couple of lights and detectors, sitting either side of the white line. If the robot is going straight, neither gets much reflection. But if one records a high amount of reflected light (and so is on top of the line) we need to turn the robot one way - if it's the other that records a high amount, we need to turn the other way. Indeed, many, many years ago I made something very similar using analogue electronics (a few LEDs, photocells, transistors etc. and a couple of motors to turn the wheels). It approximately worked, but there were a lot of conditions that would fool it - give it shadows and sharp corners to deal with and it was lousy. 

The Lego robots that the students have can be programmed - and as such there is a huge array of different options for their control. The exercise is just as much in the development of the software as the hardware. Indeed, since these students are software engineering students, that is the bit they are most familiar with. 

One thing we're trying to get them to think about is different concepts. It's easy to think of one solution and just go with that. But is that the best solution? In engineering we can't afford to develop just the first idea that comes into our heads. We don't really have much idea about what is 'best' until we think through other possibilities and assess them against relevant criteria. Too often we can be constrained by traditional thinking - "it has to be done that way" - without really considering novel options. Two light sensors might work. But would three (or even four) be better? How are they best placed? What about sensors that aren't rigidly mounted but can move (actively search for the line)? The possibilities are almost endless. 

But the hardware is only half the problem. How should the robot best respond to the input signals? Simply turning one way or the other is easy to implement, but can lead to excessive oscillation. There are smarter control systems available (e.g. proportional-integral-derivative (PID) control), but at a cost of increased complexity. Is it worth pursuing them?
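
As a rough illustration of what a smarter controller looks like, here's a minimal PID sketch. It assumes the two light readings can be combined into a single signed error (positive when the robot has drifted one way, negative the other); the gains and the motor interface are placeholders, not a worked design.

```python
# Minimal PID controller sketch for the line follower. The error signal,
# gains and motor interface are illustrative placeholders.

KP, KI, KD = 1.0, 0.1, 0.05   # proportional, integral and derivative gains
DT = 0.02                     # control-loop time step, seconds

def make_pid():
    integral, previous_error = 0.0, 0.0

    def step(error):
        nonlocal integral, previous_error
        integral += error * DT
        derivative = (error - previous_error) / DT
        previous_error = error
        return KP * error + KI * integral + KD * derivative

    return step

pid = make_pid()

def motor_speeds(left_light, right_light, base_speed=60.0):
    # Positive error: the left sensor sees the line, so the robot has veered right.
    error = left_light - right_light
    correction = pid(error)
    return base_speed - correction, base_speed + correction   # (left, right) motor speeds
```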

These are questions that the students need to think about with their project. We can get them to do that (rather than just thinking up one solution that might work and considering nothing else) by setting the assessment tasks appropriately. So they are not just judged on how well their robots can follow the white line, but on what concepts they thought about, and whether they selected one appropriately using reasonable specifications and design criteria (i.e. how well they followed the established process for engineering design). In fact, following the design process well should ensure that the end result actually does do a good job of following the line accurately, repeatedly and quickly. 

There are still several weeks until the end of semester, when these line-following robots need to be perfected. They'll be tested at the Engineering Design Show where we'll find out how to build a good line-follower.

 

| | Comments (0)

I'm at the International Union of Pure and Applied Biophysics Congress in Brisbane this week. Besides being a nice escape from the winter, I'm learning a lot - mostly molecular biology. The 'physics' content in some of the talks and posters is rather hard to spot - the 'bio' is rather more evident. I wonder, though, if a biologist would complain that it's the 'bio' bit that is missing and the physics dominates. It's easy to take what we know for granted and forget how tricky it can be for someone not in that field.

An example came in the first talk, on Sunday, by Nobel Laureate Brian Kobilka. His title was 'Structural insights into G protein coupled receptor signalling' though that was pretty irrelevant, since I failed to gain any insight into anything. He started by putting up a rather complicated diagram and saying "I don't need to show this to such an audience as you'll already be familiar with this..." and then went on from there. Well, I was not familiar with it, and was completely lost right from the start. That's not how to do a plenary talk at a conference (or in any other forum). My thought is "how often do I do this with my students?"

Then, in one of the parallel sessions on Monday, a speaker started: "I don't need to cover this introductory material since you had such an excellent introduction by Brian Kobilka on Sunday..." That was like rubbing salt into the wound. Yes, I do need an introduction. What are GPCRs and why is their signalling so important? As much as I hate to say it - hooray for Wikipedia - it gives me more learning than the world's expert in the field does. (Is this why students turn to Wikipedia so often?)

Contrast this with Carl Wieman, who I heard talk at the New Zealand Institute of Physics conference several years ago. Carl is a Nobel Laureate who can give a lecture.

 

| | Comments (0)

I overheard the following conversation at the best coffee outlet on campus yesterday:

"Well, winter's nearly over. We're past the shortest day so it's getting warmer. And we've had eleven frosts so far this year, and the record for Hamilton is twelve, so there can only be one more to come." - Anonymous

Where do I begin?

Well, first let's point out that the shortest day does not equal the coldest day. Not by a long shot. In fact, I believe that statistically speaking the coldest week of the year for New Zealand is the last week of July (i.e. now). Why the difference? While it's true that it's the sun that provides the heat input to the earth, and that's at a minimum on the shortest day, there's a lot of thermal inertia in the earth, and particularly in the sea. And there is a lot of sea surrounding New Zealand. Temperatures are slow to change. While the sun remains low in the sky, the sea temperatures are slowly cooling, and that is going to influence the temperature in Hamilton. Conversely, the sea temperature in December is still pretty nippy. It's late summer before the sea temperature hits its maximum. Seasonal temperature variation is more about the cumulative heat put in over an extended period of time, as opposed to the heat input from the sun on a particular day.
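
A toy model shows the lag nicely: treat the local temperature as relaxing towards a sinusoidal solar forcing with some thermal time constant. The time constant below is an assumption of mine, standing in for the inertia of sea and ground; the point is only that the coldest time comes well after the solstice.

```python
# Toy model of the seasonal lag: temperature relaxes towards a sinusoidal
# solar forcing with time constant tau. Numbers are illustrative only.
import math

tau = 45.0       # days - assumed thermal time constant of sea + ground
dt = 1.0         # time step, days
T = 0.0          # temperature anomaly, arbitrary units

coldest_day, coldest_T = None, float("inf")
for day in range(3 * 365):                        # run a few years to settle
    forcing = -math.cos(2 * math.pi * day / 365)  # minimum at day 0 (the solstice)
    T += dt * (forcing - T) / tau                 # relax towards the forcing
    if day >= 2 * 365 and T < coldest_T:          # record the final year's minimum
        coldest_day, coldest_T = day - 2 * 365, T

print(coldest_day)   # about five to six weeks after the solstice - late July, not 21 June
```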

And then the second point. I've always found it amusing that Hamiltonians count frosts, and think that  minus 4 Celsius (as it has been a couple of mornings recently) is cold. It is only cold because in New Zealand it is (near enough) compulsory to live in poorly heated, uninsulated, single-glazed detached houses. Europeans find this concept laughable, and, I think, Canadians probably sink their heads in their hands in despair.  Anyway, let's leave that aside. So if there have only ever been twelve frosts in a single winter in Hamilton (I doubt this, but don't have statistics on this at hand), and we've had eleven so far, then does that mean there is only at most one more to come?

Um, no. Probability doesn't work like that. Our weather systems don't have a memory (not in that sense anyway), and they certainly aren't intelligent enough to record the number of frosts a particular place has had in a year and act accordingly in the weeks ahead. I'd say we would be in for a few more frosts yet. That's simply based on the metservice statistics. Go to http://www.metservice.com/towns-cities/hamilton and look at the historical data tab. You'll see that the mean minimum temperature for August is -2 C, and for September it's 0 C, suggesting there can easily be some more negative temperatures coming for 2014. Enjoy. 

I remember several years ago playing a board game with a few friends. We'd had a long run of throws of the dice without seeing a 'six'. One of my friends asked me what the probability was that the next throw would be a 'six'. "One sixth" I answered - "same as for any other throw." This sparked an intense discussion on whether that was right or not. It is. The dice does not have a memory. It doesn't remember what side it has landed up on in the past. Each throw is equally likely to show 1, 2, 3, 4, 5, or 6. The probability of a 'six' is one sixth. What was perhaps most interesting is that a friend of mine who was doing a maths degree at the time refused to back me up. 
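
If in doubt, simulate it. The sketch below rolls a virtual die a couple of hundred thousand times and looks only at throws that come immediately after a run of ten or more throws without a six:

```python
# A fair die has no memory: the chance of a six immediately after a long
# six-less run is still 1/6.
import random

random.seed(1)
drought = 10                 # how many six-less throws count as a "long run"
run_without_six = 0
opportunities = sixes_after_drought = 0

for _ in range(200_000):
    roll = random.randint(1, 6)
    if run_without_six >= drought:   # this throw follows a long six-less run
        opportunities += 1
        if roll == 6:
            sixes_after_drought += 1
    run_without_six = 0 if roll == 6 else run_without_six + 1

print(sixes_after_drought / opportunities)   # close to 1/6, i.e. about 0.167
```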

So is winter nearly over? While it's true that today feels rather spring-like, and the days are now noticeably longer than they were a month ago, winter still has plenty of teeth left.  


| | Comments (0)

I've been following the weather with interest this week. First of all, I was very glad when the wind and rain disappeared late last weekend. We were at a wedding in Whakatane on Saturday afternoon/evening, and boy, did it rain. With the wedding in a garden in something that was a bit more substantial than a marquee (think marquee with hard walls and floor), with a portaloo outside, and a four-minute walk up a long, dark, mud-and-puddle-infested driveway in a storm separating you from the car, it was certainly a memorable wedding reception. 

Now, with beautiful clear skies, light winds, and frosty mornings, you'd be forgiven for thinking there's a big fat high pressure system sitting over us. But there isn't. For the last few days, we (by which I mean at this end of the country) have been in or around a saddle-point, in terms of pressure. There have been lows to the north and south, highs to the east and west, and the saddle somewhere in the middle, over us. I note today that things have rotated a bit, so the lows now lie east and west, with a high to the north and another approximately south. Here's a picture I've stolen from the metservice website this morning (www.metservice.com, 18 July 2014, 11am); it's the forecast for noon today. Note how NZ is sandwiched between two lows, but isn't really covered by either. 

[Image: metservice.com surface pressure forecast map for noon, 18 July 2014]

 
You can see the strength of the wind on this plot by the feathers on the arrow symbols. The more feathers, the stronger the winds. (The arrows point in the direction of the wind.) Note how the wind blows clockwise around the low pressures (and anticlockwise, less strongly, around the highs). Have a look just around Cape Reinga (for non-NZ dwellers, and I know there's a few of you out there, that's the northern-most tip of the North Island). There's a point where the wind (anthropomorphising) doesn't know what to do. It's at what's mathematically termed a saddle point. It's a point where locally there is no gradient in pressure, but which is neither a high nor a low. Winds are light. In two dimensions (which is what we have on the earth's surface) with a single variable such as pressure, there are three possibilities where the gradient of pressure is zero - the maximum of a high, the minimum of a low, or a saddle. 
 
In terms of terrain, a mountain pass is a saddle point. It's where one goes from valley to valley (low to low), between two mountains. On top of the pass, you are at a point where the gradient is zero. But it's neither a peak nor a trough. It's called a 'saddle' because the shape looks rather like a saddle for a horse - which is low on both flanks, but high at the front and back. A marble placed on top of a saddle should, if it were placed exactly at the equilibrium point with no vibrations, stay there. 
 
Saddle-points crop up in all kinds of dynamical systems (e.g. brain dynamics) where there's more than one variable involved. Such a point is termed an unstable equilibrium - any deviation from the equilibrium point will cause the system to move away from it. However, the change may not be terribly rapid. When there are lots of variables involved, such local equilibria may have very complicated dynamics associated with them - the range of possibilities gets very large and the dynamics can become very rich indeed. 
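
For the mathematically inclined, the simplest saddle you can write down is p(x, y) = x^2 - y^2: the gradient vanishes at the origin, but the point is a maximum along one direction and a minimum along the other, which is exactly why it's an unstable equilibrium.

```python
# The canonical saddle: p(x, y) = x**2 - y**2.
# The gradient is zero at the origin, but the origin is neither a high nor a low.

def p(x, y):
    return x**2 - y**2

def grad_p(x, y):
    return 2 * x, -2 * y

print(grad_p(0.0, 0.0))            # (0.0, -0.0): a stationary point
print(p(0.1, 0.0), p(0.0, 0.1))    # 0.01 (uphill along x), -0.01 (downhill along y)
# A 'marble' nudged along x rolls back; nudged along y it rolls away - an
# unstable equilibrium, like the light winds at the saddle near Cape Reinga.
```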
| | Comments (0)