Friday, November 26, 2010

Open Science: from good intentions to hesitant reality

At the start of the Artificial Culture project we made a commitment to an Open Science approach. Actually translating those good intentions into reality has proven much more difficult than I had expected. But now we've made a start, and interestingly the open science part of this research project is turning into a project within a project.

So what's the story? Well, firstly we didn't really know what we meant by open science. We were, at the start, motivated by two factors. First, a strong sense that open science is a Good Thing. Second, a rather more pragmatic idea that the project might be helped by having a pool of citizen scientists who would help us interpret the results. We knew that we would generate a lot of data and also believed we would benefit from fresh eyes looking over that data, uncoloured - as we are - by the weight of hypotheses and high expectations. We thought we could achieve this simply by putting the whole project, live - as it happens - on the web.

Sounds simple: put the whole project on the web. And now that I put it like this, hopelessly naive. Especially given that we had not budgeted for the work this entails. So, this became a DIY activity fitted into spare moments using free Web tools, in particular Google Sites.

We started experimental work, in earnest, in March 2010 - about two and a half years into the project (building the robots and experimental infrastructure took about two years). Then, by July 2010, I started to give some thought to uploading the experimental data to the project web. But it took me until late October to actually make it happen. Why? Well, it took a surprising amount of effort to figure out the best way of structuring and organising the experiments, and the data sets from those experiments, together with the structure of the web pages on which to present that data. But then, even when I'd decided on these things, I found myself curiously reluctant to actually upload the data sets. I'm still not sure why that was. It's not as if I was uploading anything important, like Wikileaks posts. Perhaps it's because I'm worried that someone will look at the data and declare that it's all trivial, or obvious. Now this may sound ridiculous, but posting the data felt a bit like baring the soul. But maybe not so ridiculous given the emotional and intellectual investment I have in this project.

But, having crossed that hurdle, we've made a start. There are more data sets to be loaded (the easy part), and a good deal more narrative to be added (which takes a good deal of effort). The narrative is of course critical because without it the data sets are just meaningless numbers. To be useful at all we need to explain (starting at the lowest level of detail):
  1. what each of the data fields in each of the data files in each data set means;
  2. the purpose of each experimental run: number of robots, initial conditions, algorithms, etc;
  3. the overall context for the experiments, including the methodology and the hypotheses we are trying to test.
I said at the start of this blog post that the open science part of this project has become a project within a project, and happily this aspect is now receiving the attention it deserves: yesterday project co-investigator Frances Griffiths spent the day in the lab here in Bristol, supported by Ann Grand (whose doctoral project is on the subject of Open Science and Public Engagement).

Will anyone be interested in looking inside our data, and - better still - will we realise our citizen science aspirations? Who knows. Would I be disappointed if no-one ever looks at the data? No, actually not. The openness of open science is its own virtue. And we will publish our findings confident that if anyone wants to look at the data behind the claims or conclusions in our papers they can.


Postscript: See also Frances Griffiths' blog post Open Science and the Artificial Culture Project

Thursday, November 18, 2010

On optimal foraging, cod larvae and robot vacuum cleaners

On Monday I took part in a meeting of the Complex Systems Dynamics (CoSyDy) network in Warwick. The theme of the meeting was Movement in models of mathematical biology, and I heard amazing talks about (modelling) albatross flight patterns, E. coli locomotion, locust swarming and the spread of epidemics. (My contribution was about modelling an artificial system - a robot swarm.) Although a good deal of the maths was beyond me, I was struck by a common theme across our talks that I'll try and articulate in this blog post.

The best place to start is by (badly) paraphrasing a part of Jon Pitchford's brilliant description of optimal foraging strategies for cod larvae. Cod larvae, he explained, feed on patches of plankton. They are also very small and if the sea is turbulent the larvae have no chance of swimming in any given direction (i.e. toward a food patch), so the best course of action is to stop swimming and go where the currents take you. Of course the food patches also get washed around by the current so the odds are good that the food will come to you anyway. There's no point wasting energy chasing a food patch. Only if the sea is calm is it worthwhile for the cod larvae to swim toward a food patch. Thus, swim (toward food) when the sea is calm, but don't swim when it's rough, is the optimal foraging strategy for the cod larvae.

It occurred to me that there's possibly a direct parallel with robot vacuum cleaners, like the Roomba. A robot vacuum cleaner is also foraging, not for food of course, but for dirt in the carpet. For the robot vacuum cleaner the equivalent of a rough, turbulent sea is a room with chaotically positioned furniture. The robot doesn't need a fancy strategy for covering the floor: it just drives ahead and every time it drives up to a wall or piece of furniture it stops to avoid a collision, makes a random turn and drives off again in a straight line. This is the robot's best strategy for reasonable coverage (and hence cleaning) of the floor in a chaotic environment (i.e. a normal room). Only if the room were relatively large and empty (i.e. a calm sea) would the robot (like the cod larvae) need a more sophisticated strategy for optimal cleaning - such as moving in a pattern across the whole area to try and find all the dirt.
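To make the parallel concrete, here is a minimal sketch of that bounce-and-go strategy - my own toy illustration, not the Roomba's actual control code - run in a simulated square room:

```python
import math
import random

def step(x, y, heading, bumped, speed=0.05):
    """One control step of bounce-and-go: if the bump sensor fired, pick a
    new random heading; either way, drive straight ahead a short distance."""
    if bumped:
        heading = random.uniform(0.0, 2.0 * math.pi)
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

# Toy run in a 1 x 1 'room': treat crossing the room boundary as a bump.
x, y, heading = 0.5, 0.5, 0.0
for _ in range(2000):
    bumped = not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0)
    x = min(max(x, 0.0), 1.0)
    y = min(max(y, 0.0), 1.0)
    x, y, heading = step(x, y, heading, bumped)
```

Despite having no map and no memory, the random bounce gives reasonable coverage of the reachable floor over time - which is the point of the cod larvae analogy.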

Robot vacuum cleaners, like cod larvae, can exploit the chaos in their environment and hence get away with simple (i.e. stupid) foraging strategies. I can't help wondering - given the apparently unpredictable current economic environment - whether there's really any point in governments or individuals trying to invent sophisticated economic strategies. Perhaps the optimal response to economic turbulence is the KISS principle.

Wednesday, November 03, 2010

Why large robot swarms (and maybe also multi-cellular life) need immune systems.

Just gave our talk at DARS 2010, basically challenging the common assumption that swarm robot systems are highly scalable by default. In other words, the assumption that if the system works with 10 robots, it will work just as well with 10,000. As I said this morning, "sorry guys, that assumption is seriously incorrect. Swarms with as few as 100 robots will almost certainly not work unless we invent an active immune system for the swarm". The problem is that the likelihood that some robots partially fail - in other words fail in such a way as to actually hinder the overall swarm behaviour - quickly increases with swarm size. The only way to deal with this - and hence build large swarms - will be to invent a mechanism that enables good robots to both identify and disable partially failed robots. In other words, an immune system.

Actually - and this is the thing I really want to write about here - I think this work hints toward an answer to the question "why do animals need immune systems?". I think it's hugely interesting that evolution had to invent immune systems very early in the history of multi-cellular life. I think the basic reason for this might be the very same reason - outlined above - that we can't scale up from small to huge (or even moderately large) robot swarms without something that looks very much like an immune system. Just like robots, cells can experience partial failures: not enough failure to die, but enough to behave badly - badly enough perhaps to be dangerous to neighbouring cells and the whole organism. If the likelihood of one cell failing in this bad way is constant, then it's self-evident that it's much more likely that some will fail in this way in an organism with 10,000 cells than in one with 10 cells. And with 10 million cells (still a small number for animals) it becomes a certainty.
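To put some rough numbers on that intuition (a back-of-envelope illustration of my own, with an assumed per-unit failure probability, not figures from our paper): if each robot, or cell, has a small constant probability p of partially failing, the chance that at least one member of a group of N has partially failed is 1 - (1 - p)^N, which climbs rapidly with N.

```python
def p_some_partial_failure(p, n):
    """Probability that at least one of n units has partially failed,
    assuming each fails independently with probability p."""
    return 1.0 - (1.0 - p) ** n

# assumed p = 0.0001 per unit, purely for illustration
for n in (10, 100, 10_000, 10_000_000):
    print(f"N = {n:>10,}  P(some partial failure) = {p_some_partial_failure(1e-4, n):.4f}")
# roughly 0.1% at N = 10, 1% at N = 100, 63% at N = 10,000,
# and indistinguishable from certainty at N = 10,000,000
```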

Here is the poster version of our paper.

Friday, October 15, 2010

New video of 20 evolving e-pucks

In June I blogged about Nicolas Bredeche and Jean-Marc Montanier working with us in the lab to transfer their environment-driven distributed evolutionary adaptation algorithms to real robots, using our Linux-extended e-pucks. Nicolas and Jean-Marc made another visit in August to extend the experiments to a larger swarm size of 20 robots; they made a YouTube movie and here it is:



In the narrative on YouTube, Nicolas writes:
This video shows a fully autonomous artificial evolution within a population of ~20 completely autonomous real (e-puck) robots. Each robot is driven by its "genome" and genomes are spread whenever robots are close enough (range: 25cm). The most "efficient" genomes end up being those that successfully drive robots to meet with each other while avoiding getting stuck in a corner.

There is no human-defined pressure on robot behavior. There is no human-defined objective to perform.

The environment alone puts pressure upon which genomes will survive (ie. the better the spread, the higher the survival rate). Then again, the ability for a genome to encode an efficient behavioral strategy first results from pure chance, then from environmental pressure.

In this video, you can observe how going towards the sun naturally emerges as a good strategy to meet/mate with other (it is used as a convenient "compass") and how changing the sun location affect robots behavior.

Note: the 'sun' is the static e-puck with a white band around it.
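To give a flavour of what 'no human-defined objective' means in practice, here is a minimal sketch of environment-driven genome spreading - a simplification of my own for illustration, not Nicolas and Jean-Marc's actual algorithm. The 25 cm range is taken from the description above; everything else (the mutation step, the data structures) is assumed.

```python
import random

BROADCAST_RANGE = 0.25  # metres: the 25 cm figure quoted above

def mutate(genome, sigma=0.1):
    # small Gaussian perturbation of each neural-network weight
    return [w + random.gauss(0.0, sigma) for w in genome]

def distance(a, b):
    return ((a['x'] - b['x']) ** 2 + (a['y'] - b['y']) ** 2) ** 0.5

def spread_genomes(robots):
    """One 'lifetime' step: each robot receives the genomes of any robots
    within broadcast range, then adopts a mutated copy of one of them.
    There is no fitness function - genomes that happen to drive robots
    into more encounters simply get copied more often."""
    for r in robots:
        r['inbox'] = [s['genome'] for s in robots
                      if s is not r and distance(r, s) < BROADCAST_RANGE]
    for r in robots:
        if r['inbox']:
            r['genome'] = mutate(random.choice(r['inbox']))
```

The 'selection pressure' here is purely environmental: a genome that never brings its robot within range of another simply dies out.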

Wednesday, October 13, 2010

Well, I can't believe I'm on Twitter: https://twitter.com/alan_winfield

Not at all sure I understand what I'm doing yet. There's some puzzling terminology to learn - what's retweeting, for instance? (It sounds like a word from The Meaning of Liff.)

The reason I joined is because I wanted to respond to the questions on @scienceexchange. The first question is
Given the rate of discovery of exo-planets - is there still any doubt that we are not alone in the universe?
And my twittered answer:
Depends: life maybe a little more probable; intelligent life still highly improbable, see Drake's equation
I like the challenge of trying to construct a useful answer in 140 characters.

Monday, October 11, 2010

Google robot car: Great but proving the AI is safe is the real challenge

Great to read that Google are putting some funding into driverless car technology with the very laudable aims of reducing road traffic fatalities and reducing carbon emissions. Google have clearly assembled a seriously talented group led by Stanford's Sebastian Thrun. (One can only imagine the Boardroom discussions in the car manufacturers this week on Google's entry into their space.)

While this is all very good, I think it's important to keep the news in perspective. Driverless cars have been in development for a long time and what Sebastian has announced this weekend is not a game-changing leap forward. To be fair, his blog post's main claim is the record for distance driven, but Joe Wuensche's group at University BW Munich has a remarkable record of driverless car research; fifteen years ago their Mercedes 500 drove from Munich to Denmark on regular roads, at up to 180 km/h, with surprisingly little manual driver intervention (about 5%). I've seen MuCAR-3, the latest autonomous car from Joe's group, in action in the European Land Robotics Challenge and it is deeply impressive - navigating its way through forest tracks with no white lines or roadside kerbs to help the car's AI figure out where the road's edges are.

So the technology is pretty much there. Or is it?

The problem is that what Thrun's team at Google, and Wuensche's team at UBM, have compellingly demonstrated is proof of principle: trials under controlled conditions with a safety driver present (somewhat controversially at ELROB, because the rules didn't allow a safety driver). That's a long way from your granny getting into her car which then autonomously drives her to the shops without her having to pay attention in case she needs to hit the brakes when the car decides to take a short cut across the vicar's lawn. The fundamental unsolved problem is how to prove the safety and dependability of the Artificial Intelligence (AI) driving the car. This is a serious problem not just for driverless cars, but all next-generation autonomous robots. Proving the safety of a system, i.e. proving that it will both always do the right thing and never do the wrong thing, is very hard right now for conventional systems that have no learning in them (i.e. no AI). But with AI the problem gets a whole lot worse: the AI in the Google car, to quote "becomes familiar with the environment and its characteristics", i.e. it learns. And we don't yet know how to prove the correctness of systems that learn.

In my view that is the real challenge.

Thursday, September 30, 2010

Can robots be Three Laws safe?

I'm with about 25 people in a hotel in the New Forest to talk about the ethical, legal and societal issues around robotics. We are a diverse crew: a core of robotics and AI folk, richly complemented by academics in psychology, law, ethics, philosophy, culture, performance and art history. This joint EPSRC/AHRC workshop was an outcome of a discussion on robot ethics at the EPSRC Societal Issues Panel in November 2009. (See also my post The Ethical Roboticist.)

Of course in any discussion about robot ethics it is inevitable that Asimov's Three Laws of Robotics will come up and, I must admit, I've always insisted that they have no value whatsoever. They were, after all, a fictional device for creating stories with dramatic moral ambiguities - not a serious attempt to draw up a moral code for robots. Today I've been forced to revise that opinion. Amazingly we have succeeded in drafting a new set of five 'laws', not for robots themselves but for the designers and operators of robots. (You can't have laws for robots because they are not persons - or at least not for the foreseeable future.)

I can't post them here just yet - a joint statement needs to be drafted and agreed first. But to answer the question in the title of this post - no, robots can't be Three Laws Safe, but they quite possibly could be Five Laws Compliant.


Postscript: here is a much better description of the workshop on Lilian Edwards' excellent blog.

Tuesday, September 28, 2010

Robot imitation as a method for modelling the foundations of social life

Robot imitation as a method for modelling the foundations of social life: a meeting of robotics and sociology to explore the spread of behaviours through mimesis

Here is the video, posted earlier this month by Frances Griffiths on YouTube, of the meeting of robotics and sociology I blogged about on 21st June. No need for me to write anything more - Roger Stotesbury's excellent 10 minute film explains the whole thing...

Friday, September 10, 2010

Morphogenetic Engineering at ANTS

I'm at the excellent Swarm Intelligence conference in Brussels, called appropriately ANTS. This morning there is a special session on morphogenetic engineering, chaired by René Doursat of the Complex Systems Institute in Paris. Morphogenetic engineering is the name coined for a new crossover between biology and engineering. Current engineered systems are designed and 'built'. Biological systems, on the other hand, grow from seeds or embryos. Morphogenetic engineering asks the question: might it be possible to 'grow' complex engineered systems, like robots?

Of course, with current materials - metal and plastic - we can't grow robots, so many of the ideas of morphogenetic engineering remain, for the time being, future concepts. But I think we'll see some exciting developments in this new sub-field as new materials become available.

Here is an image from our talk* on autonomous distributed morphogenesis in the Symbrion project, presented during the special session. Here you see robots being recruited to join the 2D planar organism during its formation.



* Wenguo Liu and Alan FT Winfield, 'Autonomous morphogenesis in self-assembling robots using IR-based sensing and local communications', ANTS 2010.

Wednesday, September 08, 2010

Darn - conference paper soundly rejected

As someone who believes in - and from time-to-time advocates - the Open Science approach, I need to practise what I preach. That means being open about the things that don't go according to plan in a research project - including when papers that you think are really great get rejected following peer review. So, let me 'fess up. A paper I submitted to the highly regarded conference Distributed Autonomous Robotic Systems, describing results from the Artificial Culture project, has just been soundly rejected by the reviewers.

Of course, having papers rejected is not unusual. And, like most academics, I tend to react with indignation ("how dare they"), dismissal ("the reviewers clearly didn't understand the work") and embarrassment (hangs head in shame). After a day or two the first two feelings subside, but the embarrassment remains. None of us likes it when our essays come back marked C-. That is why this blog post is not especially comfortable to write.

My paper had four anonymous reviews, and each one was thorough and thoughtful. And - although not all reviewers recommended rejection - the overall verdict to reject was, in truth, fully justified. The paper, titled A Multi-robot Laboratory for Experiments in Embodied Memetic Evolution, failed to fully describe either the laboratory or the experiments. Like most conference papers it had a page limit (12 pages) and I tried to fit too much into the paper.

So, what next for this paper? Well, the work will not be wasted. We shall revise the paper - taking account of the reviewers' comments - and submit it elsewhere. So, despite my embarrassment, I am grateful to those reviewers (I don't know who you are but if you should read this blog - thank you!).

And as for Open Science? Well, a fully paid-up, card-carrying Open Scientist would publish the original paper and the reviews here. But it seems to me improper to publish the reviews without first getting the reviewers' permission - and I can't do that because I don't know who they are. And I shouldn't post the paper either, since to do so would compromise our ability to submit the same work (following revision) somewhere else. So Open Science, even with the best of intentions, has its hands tied by publication protocols.

Tuesday, August 24, 2010

On The Human on Temes: an emerging third replicator

Several weeks ago I was contacted by On The Human, a forum for researchers across science and the humanities to share ideas, and asked if I would like to take part in an online debate in response to an essay by Susan Blackmore. The forum runs one of these debates every two weeks and there are some pretty interesting writers and debates (which - it seems - are moderated and time limited).

So, I looked out for Sue's essay, which appeared yesterday, 23rd August. I thought about it (actually I had a head start because we had debated temes during a memelab meeting) and posted my response this morning. Here is Sue's essay Temes: An Emerging Third Replicator, the collected comments, and Sue's responses to those comments.

Friday, August 20, 2010

Open-hardware Linux e-puck extension board published

It's now over two years since I first blogged about our Linux-enhanced e-puck, designed by my colleague Dr Wenguo Liu. Since then, the design has gone through several improvements and is now very stable and reliable. We've installed the board on all 50 of our e-puck robots and it has also been adopted for use in swarm robotics projects by Jenny Owen at York, Andy Guest at Abertay Dundee and Newport.

Since the e-puck robot is open-hardware, Wenguo and I were keen that our extension board should follow the same principle, and so the complete design has been published online at SourceForge here: http://lpuck.sourceforge.net/. All of the hardware designs, together with code images and an excellent installation manual written by Jean-Charles Antonioli, are there.

Here's a picture of the extension board. The big chip is an ARM9 microcontroller and the small board hanging off some wires is the WiFi card (in fact it's a WiFi USB stick with the plastic casing removed).

And here is a picture of one of our e-pucks with the Linux extension board fitted, just above the red skirt. The WiFi card is now invisible because it is fitted neatly into a special slot on the underside of the yellow 'hat'.

The main function of the yellow hat is the matrix of pins on the top, which we use for the reflective spheres needed by our Vicon tracking system to track the exact position of each robot during experiments. You can see one of the spheres very strongly reflecting the camera flash in this photo. The red skirt is there so that robots can see each other with their onboard cameras. You can see the camera in the small hole in the middle of the red skirt. Without the red skirt the robots simply don't see each other very well, at least partly because of their transparent bodies.


postscript (added Feb 2011): Here's the reference to our paper describing the extension board:
Liu W, Winfield AFT, 'Open-hardware e-puck Linux extension board for experimental swarm robotics research', Microprocessors and Microsystems, 35 (1), 2011, doi:10.1016/j.micpro.2010.08.002.

Saturday, July 17, 2010

Open-ended Memetic Evolution, or is it?

Just finished a paper describing some new results on open-ended memetic evolution from the Artificial Culture project. I describe in some detail one particular experiment in which 2 robots imitate each other's movements. However, here the robots don't simply imitate the last thing they saw; instead they learn and save every observed movement sequence, then when it's a robot's turn to dance it selects one of its 'learned' dances, from memory, at random.
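For the curious, here is a minimal sketch of that learn-and-select loop - a simplification of my own with hypothetical names, not the actual experiment code; the imperfect 'embodied' imitation is crudely modelled as added noise.

```python
import random

def imitate(dance, noise=0.05):
    # imperfect imitation: each segment of the observed dance is copied
    # with a small random error
    return [segment + random.gauss(0.0, noise) for segment in dance]

class Robot:
    def __init__(self, seed_dance):
        self.memory = [seed_dance]            # every dance learned so far

    def observe(self, dance):
        self.memory.append(imitate(dance))    # store an (imperfect) copy

    def enact(self):
        return random.choice(self.memory)     # pick a learned dance at random

# two robots taking turns to dance and to imitate what they observe
a, b = Robot([1.0, 1.0, 1.0]), Robot([2.0, 2.0])
for _ in range(10):
    b.observe(a.enact())
    a.observe(b.enact())
```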

Here is a plot of the movements of the 2 robots for one particular experiment; this picture has been generated by a tool developed by Wenguo Liu that allows us to 'play back' the tracking data recorded by the Vicon position tracking system. The visualisation tool changes the colour of each 'dance', which makes it much easier to then analyse what's going on during the experiment.


Epuck 9 (on the left) starts by making a 3-sided 'triangle' dance, numbered 1 above. Epuck 12 (on the right) then imitates this - badly - as meme number 2, which is a kind of figure-of-8 pattern. It is interesting to see that this 4-sided figure-of-8 movement pattern then appears to become dominant, perhaps because of the initially poor-fidelity imitation (1 → 2), then the high-fidelity imitation of 2 by epuck 9 (2 → 3), then the re-enactment of meme 2 as meme 4. Subsequent copies of the same figure-of-8 meme then appear to be reasonably good copies, which reinforces the dominance of that meme.

Since the robots select which observed and learned meme to enact at random, there is no 'direction' to the meme evolution here. Memes can get longer or shorter - both in the number of sides to the movement pattern and in the length of those sides - and the resulting patterns arise in an unpredictable way from the imperfect 'embodied' imitation of the robots. Thus we appear to have demonstrated open-ended memetic evolution here.

Here is a screen-captured, low-resolution (sorry) movie of the sequence:

Monday, June 21, 2010

Warwick Mimesis project visit to the lab

As a follow-up to a talk I gave last December in Warwick, we were visited in the lab today by a group of social and complexity scientists from Warwick including Frances Griffiths, Steve Fuller and Nick Lee. We had a hugely interesting day discussing the extent to which (or, indeed, if at all) robots could be used to model mimesis in society.

The day started with me describing the embodied imitation-of-movement experiments that we are currently doing here within the Artificial Culture project, and demonstrating the latest version of the Copybots experiment. After lunch we then had a round table discussion about whether or not such a simple model might have value in social science research and - somewhat to my surprise - there seemed to be strong consensus that there is value and that this (radical) new approach to embodied modelling is something we should actively pursue in future joint projects.

The meeting was filmed by Roger Stotesbury of Jump Off The Screen and I hope to post a link to the video record of the meeting on this blog.

Postscript: here is my blog post with Roger's film of the meeting.

Tuesday, June 08, 2010

Walking with Robots wins Academy Award

No, not that academy, but an academy award all the same. Last night WWR won the Royal Academy of Engineering 2010 Rooke Medal for the Public Promotion of Engineering. What can I say? It was wonderful for Walking with Robots to be recognised and acknowledged in this way. It was a great project. If there had been an acceptance speech we would have had a large number of thank-yous: the EPSRC, who funded WWR; the amazing WWR network of roboticists and engagers from about 12 universities and as many companies; Claire Rocks, who - as the brilliant WWR network coordinator - more than anyone made things happen; and of course the RAEng for this award. Thank you! And we had a wonderful evening.

Here we are receiving the award from Lord Browne (third from the left). On the left are Noel Sharkey and Owen Holland, and on the right are me, Karen Bultitude and Claire Rocks.


Friday, June 04, 2010

Evolving e-pucks

Nicolas Bredeche and his graduate student Jean-Marc Montanier have spent the last two weeks working in the lab, testing on real robots work they had already done in simulation. Nicolas is interested in evolutionary swarm robotics. This is an approach, inspired directly by Darwinian evolution, in which we do not design the robots' controllers (brains) but instead evolve them. In this case the brains are artificial neural networks and the process of artificial evolution evolves the strengths of the connections between the neurons. Nicolas is especially interested in open-ended evolution, in which he - as designer - does not pre-determine the evolved robot behaviours (by specifying an explicit fitness function, i.e. what kinds of behaviours the robots should evolve). Thus, even though this is an artificial system, its evolution is - in a sense - a bit closer to natural selection than to artificial selection.

Friday, May 21, 2010

Real-world robotics reality check

This week's European Land Robotics trials (ELROB) in the beautiful countryside of Hammelburg provided the assembled roboticists with a salutary lesson in real-world robotics. The harsh reality is that problems such as localisation, path planning and navigation, which most roboticists would regard as having been solved, remain very serious challenges in unstructured outdoor environments. Techniques that work perfectly in the lab, or the university car park, are very seriously challenged by a forest track in the rain or at night.

Having said that, there were some deeply impressive demonstrations of fully autonomous operation by university teams - such as the University of Hannover's vehicle Hanna, which deservedly took away one of the ELROB 2010 innovation awards. You're a robot: imagine having to navigate your way autonomously through several km of forest track at night; the only map you have is inaccurate and out of date, and just 4 (GPS) waypoints are provided at the start of your 1-hour timeslot. There are no trial or practice runs for you to survey the track beforehand, and (just in case it might be too easy) there are unknown obstacles which require you to autonomously backtrack to the last fork and take an alternative route. A good indication of how tough this was is the fact that other (commercial) tele-operated robots, perhaps surprisingly, fared no better than their autonomous rivals. Having spent a cold couple of hours looking over the shoulders of 2 team members - one tele-operating his robot, the other (nervously) tracking his autonomous robot's progress on a laptop - it was clear to me that in this environment tele-operation is, if anything, harder than autonomy. Or perhaps it would be fairer to say that neither tele-operation nor autonomy is yet fully up to this kind of task.

I left ELROB wishing that my robotics research colleagues who never venture outside their labs could have witnessed this and experienced, as I did, the harsh reality check of real-world robotics.

Thursday, April 29, 2010

EPSRC HOW? event

Spent a most interesting day today at EPSRC HQ in Swindon. I was one of several academics asked to come and exhibit their work to the staff of the EPSRC. The idea of the event was to enable all of the staff of the council to get an insight into the research that EPSRC funds when, in the normal course of events (I guess), only relatively few - programme managers for instance - would get to see that research.

I took along some e-pucks and a portable arena, which proved very popular, together with this poster for the Artificial Culture project.

Friday, March 19, 2010

Expecting the expected on Mars

Learned something new and surprising about Mars rovers (like Spirit and Opportunity, and the planned European rover ExoMars): that if little green Martians jumped up and down in front of the rover's cameras we almost certainly wouldn't know it. There are two reasons: firstly, the communications links between the Mars rovers and Earth are intermittent and low-bandwidth, so you can't have a live video stream (webcam) from the rover to Earth and, secondly, the rover's onboard cameras have image processing software that is programmed to look for specific things, like interesting rocks. This means that the rover simply wouldn't 'see' the Martians; the rovers are - in a sense - programmed to expect the expected. Although we are used to seeing the amazing panoramic views from the surface of Mars, these still images are only grabbed infrequently, so our Martian would have to be standing in front of the rover at precisely the moment the image is captured for us to see him (it).

I just spent 2 days with a remarkably interesting group of space scientists (planetary geology, exobiology, etc), people from the space industry and roboticists, discussing the science and engineering of Mars sample return missions: i.e. to find, collect and then bring interesting Mars rocks back to Earth. Given the immense cost and technical risk of mounting such a mission it seems to me worth the extra small effort of giving the rover(s) systems that would allow them (and us) to notice unexpected or unusual stuff. An image processing module that, for instance, continuously looks for things in the camera's view that are the wrong colour or shape, or moving in a different way to everything else. The whole point of exploration is that you don't know what's there and, while I'm not suggesting there really are Martians (other than perhaps microbes), it does seem to me that we should engineer systems that allow for the possibility of discovering the unexpected.
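As an illustration of the 'wrong colour' idea - a minimal sketch of my own, not any actual rover image-processing software - one could flag pixels whose colour is statistically far from the colour distribution of the rest of the frame, so that an unexpected object stands out without the software knowing in advance what it is looking for:

```python
import numpy as np

def colour_anomaly_mask(image_rgb, threshold=4.0):
    """Return a boolean mask of pixels whose colour deviates from the
    frame's mean colour by more than `threshold` standard deviations."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0) + 1e-6          # avoid division by zero
    z = np.abs((pixels - mean) / std).max(axis=1)
    return (z > threshold).reshape(image_rgb.shape[:2])

# usage sketch: if colour_anomaly_mask(frame).any(), keep the frame
# (or the flagged region) for transmission back to Earth
```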

Wednesday, March 10, 2010

Robotic Visions in Parliament

I've blogged before about the excellent Robotic Visions project and - yes, I admit it - I have a soft spot for Visions: it's where public engagement gets political. Yesterday that happened quite literally as Robotic Visions went to Parliament. Representatives from 3 of the schools that were involved in Visions conferences in Newcastle, Oxford and Bristol came, with their teachers, to the Houses of Parliament to present their visions to roboticists, industrialists and of course parliamentarians.

Here's what I said.

"Imagine personal robot instead of personal computer. Imagine in old age you could have a robot nurse. Your grandchildren a robot teddy, that talks to them, reads them a story, and keeps an eye on them at the same time. Right now these things are possibilities but would we - should we - want them?

Intelligent Robotics is a technology likely to impact every aspect of future life and society. Intelligent robots will - for example - change the way we treat illness and look after the elderly, how we run our homes and workplaces, how we manage our waste, harvest our crops or mine for resources and - I’m sorry to say - how we fight our wars. But as we build smarter robots the boundaries between robots as mere machines, and robots as friends or companions, will become blurred - raising new and challenging ethical questions. This may seem to be a statement of the obvious, but robotics technology will have a much greater impact on our children’s generation than on my generation.

So what is it that makes intelligent robots different to other technologies in a way that means we need to have special concerns about their future impact? It is - I suggest - two factors in combination. Firstly, agency - the ability to make decisions without human intervention. And secondly, the ability to draw an emotional response from humans. Right now we have plenty of machines with agency, within limits, like airline autopilots or room thermostats. We also have machines that generate emotional responses: Ferraris or iPods, for example. Intelligent robots are different because they bring these two elements together in a potent new combination that - frankly - we don’t yet fully understand.

It is, therefore, very important that our children should have the opportunity to understand what robots can and can’t do right now, and where intelligent robotics research is taking us. It is important that our children understand and debate the implications of robotics technology, and make their own minds up about how robots should, or should not, be used in society. And it is important that those views should be heard - and taken seriously – by robotics researchers, funders and policy makers.

I have been immensely impressed by the enthusiasm with which teenagers have engaged in the Robotic Visions Conferences. The views that they have expressed are articulate, serious and insightful, and - on behalf of the Robotics Visions project team - I invite you to consider those views and quotes in the summary paper and on the posters in this room, and to meet with their representatives here today."

Monday, January 04, 2010

The Ethical Roboticist

I strongly believe that researchers in intelligent robotics, autonomous systems and AI can no longer undertake their research in a moral vacuum, regard their work as somehow ethically neutral, or treat it as someone else's ethical problem.

We researchers need to be much more concerned about both how our work affects society and how interactions with this technology affect individuals.

Right now researchers in intelligent robots, or AI, do not need to seek ethical approval for their projects (unless of course they involve clinical or human subject trials), so most robotics/AI projects in engineering and computer science fall outside any kind of ethical scrutiny. While I'm not advocating that this should change now, I do believe - especially if some of the more adventurous current projects come anywhere close to achieving their goals - that ethical approval for intelligent robotics/AI research might be a wise course of action within five years.
Let me now try and explain why, by defining four ethical problems.

1. The ethical problem of artificial emotions, or robots that are designed to elicit an emotional response from humans

Right now, in our lab in Bristol, is a robot that can look you in the eye and, when you smile, the robot smiles back. Of course there's nothing 'behind' this smile; it's just a set of motors pulling and pushing the artificial skin of the robot's face. But does the inauthenticity of the robot's artificial emotions absolve the designer of any responsibility for a human's response to that robot? I believe it does not, especially if those humans are children or unsophisticated users.

Young people at a recent Robotic Visions conference concluded that “robots shouldn't have emotions but they should recognise them”.

A question I'm frequently asked when giving talks is “could robots have feelings?”. My answer is “no, but we can make robots that behave as if they have feelings”. I'm now increasingly of the view that it won't matter whether a future robot really has feelings or not.

On the horizon are robots with artificial theory of mind, a development that will only serve to deepen this ethical problem.

2. The problem of engineering ethical machines

Clearly, for all sorts of applications, intelligent robots will need to be programmed with rules of safe/acceptable behaviour (cf. Asimov's 'laws' of robotics). This is not so far-fetched: Ron Arkin, a roboticist at Georgia Tech, has proposed the development of an artificial conscience for military robots.

Such systems are no longer just an engineering problem. In short, it is no longer good enough to build an intelligent robot; we need to be able to build an ethical robot. And, I would strongly argue, if it is a robot with artificial emotions, or designed to provoke human emotional responses, that robot must also have artificial ethics.

3. The societal problem of correct ethical behaviour toward robot companions or robot pets

Right now many people think of robots as slaves: that's what the word means. But in many near-term applications it will - I argue - be more appropriate to think of robots as companions. Especially if those robots - say in healthcare - even in a limited sense 'get to know' their human charges over a period of time.

Our society rightly abhors cruelty to animals. Is it possible to be cruel to a robot? Right now, not really, but as soon as we have robot companions or pets on which humans come to depend - and that's in the very near future - then those human dependents will certainly expect their robots to be treated with respect and dignity [perhaps even to be accorded (animal) rights]. Would they be wrong to expect this?

4. The ethical problem of engineering sentient machines

A contemporary German philosopher, Thomas Metzinger, has asserted that all research in intelligent systems should be stopped. His argument is that in trying to engineer artificial consciousness we will, unwittingly, create machines that are in effect disabled (simply because we can't go from insect to human-level intelligence in one go). In effect - he argues - we could create AI that can experience suffering. Now his position is extreme, but it does, I think, illustrate the difficulty. In moving from simple automata that in no sense could be thought of as sentient to intelligent machines that simulate sentience, we need to be mindful of the ethical minefield of engineering sentience.

In summary:

What is it that makes intelligent autonomous systems different to other technologies in a way that means we need to have special concerns about ethical and societal impacts? It is, I suggest, two factors in combination. Firstly, agency. Secondly, the ability to elicit an emotional response or, in extremis, dependency from humans. Right now we have plenty of systems with agency, within prescribed limits, like airline autopilots or room thermostats. We also have machines that generate emotional responses: Ferraris or iPods. Intelligent robots are different because they bring these two elements together in a potent new combination.


This post is the text of the statement I prepared for the EPSRC Societal Impact Panel in November 2009.