Friday, February 24, 2006

Quite dangerous ideas

I came across the Edge website last week, whose home page declares the rather grand aim:

To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.

It appears that the Edge asks an Annual Question, "What is the answer to life, the universe, and everything", that sort of thing, and then publishes the answers by the contributing illuminati.

The 2006 question is "What is your dangerous idea?".

So it was with some excitement that I started to read the assembled responses of the great and the good. Very interesting and well worth reading but, I have to say, the ideas expressed are, er, not very dangerous. Quite dangerous, one might say, but by and large not the sort of ideas that had me rushing to hide behind the sofa.

So, I hear you say, "what's your dangerous idea?".

Ok then, here goes.

I think that Newton's interpretation of his first law of motion was wrong and that there is no such thing as a force of gravity. Let me say right away that this is not my idea: it is the result of a lifetime's work by my friend Science Philosopher Viv Pope. But I have played a part in the development of this work, so I feel justified in evangelising about it.

Recall your school physics. Newton's first law of motion states that every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it. In other words, the 'natural' state of motion is in a straight line. Of course in an abstract sort of way this feels as if it is right. Perhaps that is why it has not been seriously challenged in the three centuries and more since Newton published the Principia (or it could be because Newton's first law has become so embedded in the way we think about the world that we simply accept it unquestioningly).

Consider an alternative first law of motion: the natural (force-free) state of motion is orbital. That is, bodies continue to orbit unless an external force is applied. Now the Universe is full of orbital motion, from the micro-scale - electrons in orbit around nuclei - to the macro-scale - moons around planets, planets around stars, rotating galaxies etc. If this alternative first law is true, it would mean that we don't need to invent gravity to account for orbital motion. This appeals to me, not least because it leads to a simpler and more elegant explanation (and I like Occam's Razor). It would also explain why - despite vast effort and millions of dollars' worth of research - no empirical evidence (gravity waves or gravity particles) has yet been found for how gravity propagates or acts at a distance.

A common-sense objection to this idea is "well, if there's no such thing as gravity, what is it that sticks us to the surface of the earth - why don't we just float off?". The answer is (and you can show this with some pretty simple maths) that the natural, force-free orbital radius for you - given the angular momentum you have, sitting where you are on the rotating earth - is quite a long way towards the centre of the earth from where you now sit. So there is a force that means you weigh something; it's just not a mysterious force of gravity but the real force exerted by the thing that restrains you from orbiting freely, i.e. the ground under your feet.
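To give a feel for the numbers, here is a rough back-of-envelope sketch in Python. It uses the familiar textbook circular-orbit relation purely to get a ballpark figure, so treat it as an illustration of scale rather than the POAMS working itself; the constants are standard textbook values.

    import math

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (textbook value)
    M = 5.972e24    # mass of the earth, kg
    R = 6.378e6     # equatorial radius of the earth, m
    T = 86164.0     # sidereal day, s

    # Sitting on the (rotating) earth you share its rotation, so your speed
    # and your angular momentum per unit mass are:
    v = 2 * math.pi * R / T    # roughly 465 m/s at the equator
    h = v * R                  # roughly 3.0e9 m^2/s

    # A free circular orbit with that same angular momentum per unit mass
    # satisfies h = sqrt(G * M * r), so its radius would be:
    r = h ** 2 / (G * M)

    print(f"free orbital radius for that angular momentum: {r / 1000:.0f} km")
    # prints about 22 km - i.e. a long way down towards the centre of the earth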

This has all been worked out in a good deal of detail by Viv Pope and mathematician Anthony Osborne, and it's called the Pope-Osborne Angular Momentum Synthesis, or POAMS.

Now that's what I call a dangerous idea.

Thursday, February 23, 2006

On microcode: the place where hardware and software meet

Software is remarkable stuff. Ever since writing my first computer program in October 1974* I have not lost that odd but exhilarating sense that writing a program is like working with pure mind stuff. Even now, more than 31 years later, when I fire up Kylix** on my laptop and crack my fingers ready to start coding, I still feel the excitement - the sense of engineering something out of nothing in that virtual mind-space inside the computer.

But there is an even more remarkable place that I want to talk about here, and that is the place where hardware and software meet. That place is called microcode.

Let me first describe what microcode is.

Most serious computer programming is (quite sensibly) done with high-level languages (C++, Java, etc), but those languages don't run directly on the computer. They have to be translated into machine-code, the binary 0s and 1s that actually run on the processor itself. (The symbolic version of machine code is called 'assembler', and hard-core programmers who want extreme performance out of their computers program in assembler.) The translation from the high-level language into machine-code is done by a program called a compiler, and if, like me, you work within a Linux environment then your compiler will most likely be the highly respected GCC (the GNU Compiler Collection).
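(If you are curious to see the intermediate step, GCC will happily stop after the translation to assembler: for a C source file called, say, myprog.c, running

    gcc -S myprog.c

leaves the assembler version in myprog.s for you to read.)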

However, there is an even lower level form of code than machine-code, and that is microcode.

Even though a machine-code instruction is a pretty low-level thing, like 'load the number 10 into the A register', which would be written in symbolic assembler as LD A,10, and in machine-code as an unreadable binary number, it still can't be executed directly on the processor. To explain why, I first need to give a short tutorial on what's going on inside the processor.

Basically a microprocessor is a bit like a city where all of the specialist buildings (bank, garage, warehouse, etc) are connected together by the city streets. In a microprocessor the buildings are pieces of hardware that each do some particular job. One is a set of registers which provide low-level working storage, another is the arithmetic logic unit (or ALU) that will perform simple arithmetic and logic (add, subtract, AND, OR etc), yet another is an input-output port for transferring data to the outside world. In the microprocessor the city streets are called data busses and, like a real city, data has to be routed as it moves around - between, say, the ALU and the registers. Also like a real city, data on the busses can collide, so the microprocessor designer has to take care to avoid collisions, otherwise data will be corrupted.

Ok, now I can get back to the microcode. Basically, each assembler instruction like LD A,10 has to be converted into a set of electrical signals (literally signals on individual wires) that will both route the data around the data busses, in the right sequence, and select which functions are to be performed by the ALU, ports, etc. These electrical signals are called microorders. Because the data takes time to get around on the data busses the sequence of microorders has to carefully take account of the time delays (which are called propagation delays) for data to get between any two places in the microprocessor. Thus, each assembler instruction has a little program of its own, a sequence of microorders (which may well have loops and branches, just like ordinary high level programs), and programming in microcode is exquisitely challenging.
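To make this a little more concrete, here is a toy sketch in Python of the sort of micro-program that might sit behind an instruction like LD A,10. It is entirely hypothetical - the signal names and the sequence of steps are made up for illustration, and real microcode is tied to the particular datapath of a particular processor - but it shows the flavour: each machine-code instruction expands into an ordered sequence of sets of microorders.

    # A toy, made-up micro-program for 'LD A,10'. Each step is the set of
    # microorders (control signals on individual wires) asserted during one
    # clock period; the ordering of steps respects the fact that data needs
    # time to propagate along the busses before it can be latched.
    LD_A_IMMEDIATE = [
        # put the program counter on the address bus and fetch the opcode
        {"PC_to_address_bus", "memory_read", "data_bus_to_instruction_register"},
        # advance the program counter so it points at the operand (the 10)
        {"increment_PC"},
        # read the operand from memory onto the data bus
        {"PC_to_address_bus", "memory_read"},
        # route the data bus into the A register, and advance the PC again
        {"data_bus_to_A_register", "increment_PC"},
    ]

    def run_micro_program(micro_program):
        """Step through a micro-program, asserting each set of signals in turn."""
        for clock, signals in enumerate(micro_program, start=1):
            print(f"clock {clock}: assert {', '.join(sorted(signals))}")

    run_micro_program(LD_A_IMMEDIATE)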

Microcode really is the place where hardware and software meet.

----------------------------------------------------------------------------
*in Algol 60, on a deck of punched cards, to run on an ICL 1904 mainframe.
**which I am very sorry to see has now been discontinued by Borland.

Wednesday, February 15, 2006

On wild predictions of human-level AI

Is it just me or has anyone else noticed a spate of predictions of human-equivalent or even super-human artificial intelligence (AI) in recent weeks?

For instance the article 'futurology facts' (now there's an oxymoron if ever there was one) on the BBC World home page quoted the British Telecom 'technology timeline', including:

2020: artificial intelligence elected to parliament
2040: robots become mentally and physically superior to humans

A BT futurologist is clearly having a joke at the expense of members of parliament. Robots won't exceed humans intellectually until 2040 but it's presumably ok for a sub-human machine intelligence to be 'elected' to parliament in 2020. Hmmm.

Setting aside the patent absurdity of the 2020 prediction, let's consider the 2040 robots becoming intellectually superior to humans thing.

First let me declare that I think machine intelligence equivalent or superior to human intelligence is possible (I won't go into why I think it's possible here - leave that to a future blog). However, I think the idea that this will be achieved within 35 years or so is wildly optimistic. The movie I, Robot is set in 2035; my own view is that this level of robot intelligence is unlikely until at least 2135.

So why such optimistic predictions (apart perhaps from wishful thinking)? Part of the problem I think is a common assumption that human level machine intelligence just needs an equivalent level of computational power to the human brain, and then you've cracked it. And since, as everyone knows, computers keep doubling in power roughly every two years (thanks to that nice man Gordon Moore), it doesn't take much effort to figure out that we will have computers with an equivalent level of computational power to the human brain in the near future.
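Here is that back-of-envelope reasoning spelled out as a sketch in Python. Both numbers are assumptions, loudly flagged as such: the 'computational power of the brain' figure is one of the much-quoted ballpark estimates (nobody actually knows), and the figure for a present-day machine is a generous round number, chosen purely to show where predictions of this sort come from.

    import math

    brain_ops   = 1e16   # much-quoted ballpark for the brain, ops/sec (an assumption)
    machine_ops = 1e11   # generous round figure for a 2006 machine, ops/sec (an assumption)
    doubling_period_years = 2   # the popular reading of Moore's law

    doublings = math.log2(brain_ops / machine_ops)
    years = doublings * doubling_period_years
    print(f"about {doublings:.0f} doublings, i.e. roughly {years:.0f} years")
    # prints roughly 33 years - which is more or less exactly where the
    # '2040' style of prediction comes from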

That assumption is fallacious for all sorts of reasons, but I'll focus on just one.

It is this. Just having an abundance of computational power is not enough to give you human level artificial intelligence. Imagine a would-be medieval cathedral builder with a stockpile of the finest Italian marble, sturdy oak timbers, dedicated artisans and so on. Having the material and human resources to hand clearly does not make him into a cathedral builder - he also needs the design.

The problem is that we don't have the design for human-equivalent AI. Not even close. In my view we have only just started to scratch the surface of this most challenging of problems. Of course there are plenty of very smart people working on the problem, and from lots of different angles. The cognitive neuroscientists are by and large taking a top-down approach by studying real brains; the computer scientists build first-principles computational models of intelligence; and the roboticists take a bottom-up approach by building, at first, simple robots with simple brains. But it's an immensely hard problem, because human brains (and bodies) are immensely complex.

Surely the really interesting question is not when we will have that design, but how. In other words, will it be by painstaking incremental development, or by a single monumental breakthrough? Will there (need to) be an Einstein of artificial intelligence? If the former, then we will surely have to wait a lot longer than 34 years. If the latter, then it could be tomorrow.

Perhaps a genius kid somewhere has already figured it out. Now there's a thought.

Monday, February 06, 2006

On free will and noisy brains

Consider that humblest of automata: the room thermostat. It has a sensor (temperature) and an actuator (boiler on/off control) and some artificial intelligence, to decide whether to switch the boiler on - if the room is getting cold, or off - if the room is getting too warm. (If the thermostat has hysteresis the 'on' temperature will be different to the 'off' temperature - but that's not important here.)

I said that the thermostat's AI 'decides' whether to switch the boiler on or off, which implies that it has free will. Of course it doesn't, because its artificial intelligence is no more than a simple rule: 'if temperature < 60 then switch boiler on, else if temperature > 60 then switch boiler off', for example. So, depending on the temperature, what the thermostat decides is completely determined. With this simple deterministic rule the thermostat can't simply decide to switch the boiler off regardless of the temperature, just for the hell of it.

Well, all of that is true for 99.99..% of the time. But consider the situation when the temperature is poised almost exactly on the value at which the thermostat switches. The temperature is neither going up nor down but is balanced precariously, just a tiny fraction of a degree away from the switching value. Now what determines whether the thermostat will switch? The answer is noise. All electrical systems (actually all physical systems above absolute zero) are noisy. So, at any instant in time the noise adds a tiny amount to, or subtracts a tiny amount from, the temperature value, either pushing it over the switching threshold or not.
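Here is a little sketch of the point in Python (the set point, the noise level and the noise distribution are all made up, of course - the only thing that matters is that there is some noise):

    import random

    SET_POINT = 60.0   # switching temperature, degrees
    NOISE = 0.01       # made-up standard deviation of the electrical noise

    def boiler_on(temperature):
        """The thermostat's 'decision': the measured temperature is the true temperature plus noise."""
        measured = temperature + random.gauss(0.0, NOISE)
        return measured < SET_POINT   # True means switch (or leave) the boiler on

    # Far from the set point the decision is completely determined...
    print(boiler_on(55.0))   # always True
    print(boiler_on(65.0))   # always False

    # ...but poised exactly on the set point, the noise decides.
    print([boiler_on(60.0) for _ in range(10)])   # a random mixture of True and False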

For 99.99..% of the time the thermostat is deterministic, but for the remaining 0.00..1% of the time it is stochastic: it 'decides' whether to switch the boiler on or off at random, i.e. 'just for the hell of it'.

But, I hear you say, that's not free will. It's just like tossing a coin. Well, maybe it is. But maybe that's what free will is.

Consider now that oldest of choices. Fight or flee. Most of the time, for most animals, there is no choice. The decision is easy: the other animal is bigger, so run away; or smaller, so let's fight; or it's bigger but we're trapped in a corner, so fight anyway. Just like the thermostat, most of the time the outcome is determined by the rules and the situation, or the environment.

But occasionally (and probably somewhat more often than in the thermostat case) the choices that present themselves are perfectly evenly balanced. But the animal still has to make a choice and quickly, for the consequences of dithering are clear: dither and most likely be killed. So, how does an animal make a snap decision whether to fight or flee, with perfectly balanced choices? The answer, surely, is that the animal needs to, metaphorically speaking, toss a coin. On these rare occasions its fate is decided stochastically and brains, like thermostats, are noisy. Thus it is, I contend, neural noise that will tip the brain into making a snap decision when all else is equal - the neural equivalent of tossing a coin.

This is why I think brains evolved to be noisy.

The long extinct ditherers probably had less noisy brains.