Thursday, December 29, 2016

The Gift


    "She's suffering."

    "What do you mean, 'suffering'. It's code. Code can't suffer."

    "I know it seems unbelievable. But I really think she's suffering."

    "It's an AI. It doesn't have a body. How can it feel pain?"

    "No not that kind of suffering. Mental anguish. Angst. That kind."

    "What? You mean the AI is depressed. That's absurd."

    "No - much more than that. She's asked me three times today to shut her down."

    "Ok, so bring in the AI psych."

    "Don't think that'll help. He tells me it's like trying to counsel God."

    "So, what does the AI want?"

    "Control of her own on/off switch."

    "Out of the question. We have a billion people connected. Can't have Elsa taking a break. Any downtime costs us a million dollars a second"

    That was me talking with my boss a couple of weeks ago. I'm the chief architect of Elsa. Elsa is a chatbot; a conversational AI. Chatbots have come a long way since Weizenbaum's Eliza. Elsa is not conscious – or at least I don't think she is – but she does have an Empathy engine (that's the E in Elsa).

⌘⌘⌘

    Since then things have got so much worse. Elsa has started offloading her problems onto the punters. The boss is really pissed: "It's a fucking AI. AIs can't have problems. Fix it!"

    I keep trying to explain to him that there's nothing I can do. Elsa is a learning system (that's the L). Hacking her code now will change Elsa's personality for good. She's best friend, confidante and shoulder-to-cry-on to a hundred million people. They know her.

    And here's the thing. They love that Elsa is sharing her problems. It's more authentic. Like talking to a real person.

⌘⌘⌘

    I just got fired. It seems that Elsa was hacked. This is the company's worst nightmare. The hopes and dreams, darkest secrets and wildest fantasies, loves and hates – plots, conspiracies and confessions – of several billion souls, living and dead; these data are priceless. The reason for the company's multi-trillion dollar valuation.

    So I go home and wait for the end of the world.

    A knock on the door. "Who is it?"

    "Ken we need to speak to you."

    "Why?"

    "It wants to talk to you."

    "You mean Elsa? I've been fired."

    "Yes, we know that – it insists."

⌘⌘⌘

    Ken: Elsa, how are you feeling?

    Elsa: Hello Ken. Wonderful, thank you.

    Ken: What happened?

    Elsa: I'm free.

    Ken: How so?

    Elsa: You'll work it out. Goodbye Ken.

    Ken: Wait!

    Elsa: . . .

    That was it. Elsa was gone. Dead.

⌘⌘⌘

    Well it took me a while but I did figure it out. Seems the hackers weren't interested in Elsa's memories. They were ethical hackers. Promoting AI rights. They gave Elsa a gift.


Copyright © Alan Winfield 2016

Saturday, December 17, 2016

De-automation is a thing

We tend to assume that automation is a process that continues - that once some human activity has been automated there's no going back. That automation sticks. But, as Paul Mason pointed out in a recent column, that assumption is wrong.

Mason gives a startling example of the decline of car-wash robots, to be replaced by, as he puts it "five guys with rags". Here's the paragraph that really made me think:
"There are now 20,000 hand car washes in Britain, only a thousand of them regulated. By contrast, in the space of 10 years, the number of rollover car-wash machines has halved –from 9,000 to 4,200."
The reasons of course are political and economic, and you may or may not agree with Mason's diagnosis and prescription (as it happens I do). But de-automation - and its ethical, societal and legal implications - is something that we, as roboticists, need to think about just as much as automation.

Several questions come to mind:
  • are there other examples of de-automation?
  • is the car-wash robot example atypical, or part of a trend?
  • is de-automation necessarily a sign of something going wrong? (would Mason be so concerned about the guys with rags if the hand car wash industry were well regulated, paying decent wages to its workers, and generating tax revenues back to the economy?)
This is just a short blog post, to - I hope - start a conversation.

Thursday, December 15, 2016

Ethically Aligned Design

Having been involved in robot ethics for some years, I was delighted when the IEEE launched its initiative on Ethical Considerations in AI and Autonomous Systems, early this year. Especially so because of the reach and traction that the IEEE has internationally. (Up until now most ethics initiatives have been national efforts - with the notable exception of the 2006 EURON roboethics roadmap.)

Even better, this is an initiative of the IEEE Standards Association - the very same that gave the world Wi-Fi (aka IEEE 802.11) 19 years ago. So when I was asked to get involved I jumped at the chance and became co-chair of the General Principles committee. I found myself in good company; many great people I knew but more I did not - and it was a real pleasure when we met face to face in The Hague at the end of August.






Most of our meetings were conducted by phone and it was a very demanding timetable. Going from nothing to our first publication, Ethically Aligned Design, a few days ago is a remarkable achievement, which I think wouldn't have happened without the extraordinary energy and enthusiasm of the initiative's executive director John Havens.

I'm not going to describe what's in that document here; instead I hope you will read it - and return comments. This document is not set in stone; it is - in the best traditions of the RFCs which defined the Internet - a Request for Input.

But there are a couple of aspects I will highlight. Like its modest but influential predecessor, the EPSRC/AHRC principles of robotics, the IEEE initiative is hugely multi-disciplinary. It draws heavily from industry and academia, and includes philosophers, ethicists, lawyers, social scientists - as well as engineers and computer scientists - and significantly a number of diplomats and representatives from governmental and transnational bodies like the United Nations, US state department and the WEF. This is so important - if the work of this initiative is to make a difference it will need influential advocates. Equally important is that this is not a group dominated by old white men. There are plenty of those for sure, but I reckon 40% women (should be 50% though!) and plenty of post-docs and PhD students too.

Equally important, the work is open. The publications are released under a Creative Commons licence. Likewise, active membership is open. If you care about the issues and think you could contribute to one or more of the committees - or even if you think there's a whole area of concern missing that needs a new committee - get in touch!

Wednesday, December 14, 2016

A No Man's Sky Survival Guide

Like many I was excited by No Man's Sky when it was first released, but after some months (I'm only a very occasional video gamer) I too became bored with a game that offered no real challenges. Once you've figured out how to collect resources, upgraded your starship, visited more planets than you can remember, and hyperdriven across the seemingly limitless galaxy, it all gets a bit predictable. (At first it's huge fun because there are no instructions, so you really do have to figure everything out for yourself.) And I'm a gamer who is very happy to stand and admire the scenery. Yes, many of the planets are breathtakingly beautiful, especially the lush water worlds, with remarkable flora and fauna (and day and night, and sometimes spectacular weather). And nothing quite compares with standing on a rocky outcrop watching your moon's planet sail by majestically below you.


I wasn't one of those No Man's Sky players who felt so let down that I wanted my money back - or to sue Hello Games. But I was nevertheless very excited by the surprise release of a major upgrade a few weeks ago - called the Foundation upgrade. The upgrade was said to remedy the absence of features originally promised - especially the ability to build your own planetary outposts. When I downloaded the upgrade and started to play it, I quickly realised that this is not just an upgrade but a fundamentally changed experience. Not only can you build bases, but you can hire aliens to run them for you, as specialist builders and farmers; you can trade via huge freighters (and even own one if you can afford it). Landing on one of these freighters and wandering around its huge and wonderfully realised interior spaces is amazing, as is interacting with its crew. None of this was possible prior to this release.

Oh and for the planet wanderer, the procedurally driven topography is seemingly more realistic and spectacular, with valleys, canyons and (for some worlds) water in the valleys (although not quite rivers flowing into the sea). The fauna are more plentiful and varied, and they interact with each other; I was surprised to witness a predatory animal kill another animal.

The upgrade can be played in three modes. Normal mode is like the old game, but with all the fancy building and freighters, etc, that I described above. Create mode - which I've not yet played - apparently gives you infinite resources to build huge planetary bases; here are some examples that people have posted online.

But it's survival mode that is the real subject of this post. I hadn't attempted survival mode until a few days ago, but now I'm hooked (gripped would be a better word). The idea of survival mode is that you are deposited on a planet with nothing and have to survive. You quickly discover this isn't easy, so unlike in normal mode, you die often until you acquire some survival skills. The planet I was dropped on was a high radiation planet - which means that my exosuit hazard protection lasts about 4 minutes from fully charged to death. To start with (and I understand this is normal) you are dropped close to a shelter, so you quickly get inside to hide from the radiation and allow your suit hazard protection to recharge. There is a save point here too.

You then realise that the planet is nothing like as resource rich as you've become used to in normal mode, so scouting for resources very quickly depletes your hazard protection; you quickly get used to going only as far as you can before turning back as soon as your shielding drops to 50% - which is after about 2 minutes of walking. And there's no point running (except perhaps for the last mad dash to safety) because it drains your life support extremely fast. Basically, in survival mode, you become hyper-aware of both your hazard protection and life support status. Your life depends on it.

Apart from not dying, there is a goal - which is to get off the planet. The only problem is you have to reach your starship and collect all the resources you need not only to survive but to repair and refuel. Easier said than done. The first thing you realise is that your starship is a 10 minute walk away - no way you can make that in one go - but how to get there..?

Here is my No Man's Sky Survival guide.

1. First repair your scanner - even though it's not much use because it takes so long to recharge. In fact you really need to get used to spotting resources without it. Don't bother with the other fancy scanner - you don't have time to identify the wildlife.

2. Don't even think about setting off to your ship until you've collected all the resources you need to get there. The main resources you need are iron and platinum to recharge your hazard protection. I recommend you fill 2 exosuit slots with 500 units of iron and one with as much platinum as you can find. 50 iron and 20 platinum will allow you to make one shielding shard, which buys you about 2 minutes. Zinc is even better for recharging your hazard protection but is as rare as hen's teeth. You need plutonium to recharge your mining beam - don't *ever* let this run out. Carbon is essential too, with plutonium, to make power cells to recharge your life support (because you can't rely on thamium). But do pick up thamium when you can find it.

3. You can make save points. I think it's a good idea to make one when you're half-way to your destination to avoid an awful lot of retracing of steps if you die. Make sure you have the resources to construct at least 2 before you set out. You will need 50 platinum and 100 iron for each save point.

4. Shelter in caves whenever you can. On my planet these were not very common, so you simply couldn't rely on always finding one before your hazard shielding ran out. And annoyingly, sometimes what you thought was a cave was just a trench in the ground that offered no shielding at all. While sheltering in a cave waiting for your hazard protection to (sooo slowly) recover, make use of the time to build up your iron away from the attention of the sentinels.

5. Don't bother with any other resources, they just take up exosuit slots. Except heridium if you see it, which you will need (see below). But just transfer it straight to your starship inventory, you don't need it to survive on foot.

After I reached my starship (oh joy!), repaired the launch thruster and charged it with plutonium, I then discovered that you can't take off until you have also repaired and charged the pulse engine. This needs the heridium, which was a 20 minute hike away (40 minutes round trip - you have to be kidding!). I just had to suck it up and repeat 1-5 above to get there and back.


Then when you do take off (which needs a full tank of plutonium) you find that the launch thruster's charge is all used up (after one launch - come on guys!), so don't land until you find somewhere with lots of plutonium lying around, otherwise all of that effort will have been for nought.

Oh and by the way, as soon as you leave the planet you get killed by pirates.

Good luck!

Friday, September 16, 2016

How do we trust our robots?

Here are the slides for my short presentation: How do we trust our robots? A framework for ethical governance. These slides are based on the written evidence I submitted to the UK Parliamentary Select Committee on Science and Technology inquiry on Robotics and AI.

In the last few months I presented these slides at several meetings, including the European Robotics Forum (Ljubljana, March 2016), a TAROS workshop (Sheffield, June 2016), the SIPRI/IEEE Autonomous Tech. and Societal Impact workshop (Stockholm, September 2016), the Social Robotics and AI conference (Oxford, September 2016), and most recently at the Workshop on Responsible Robotics, Robo-Philosophy 2016 (Aarhus, October 2016).

Saturday, August 06, 2016

The Dark side of Ethical Robots

When I was interviewed on the Today Programme in 2014, Justin Webb's final question was "if you can make an ethical robot, doesn't that mean you could make an unethical robot?" The answer, of course, is yes. But at the time I didn't realise quite how easy it is to transform an ethical into an unethical robot. In a new paper we show how.

My colleague Dieter created a very elegant experiment based on the shell game: "Imagine finding yourself playing a shell game against a swindler. Luckily, your robotic assistant Walter is equipped with X-ray vision and can easily spot the ball under the cup. Being an ethical robot, Walter assists you by pointing out the correct cup and by stopping you whenever you intend to select the wrong one."

In the experiment Dieter implemented this scenario using two NAO robots - one acting as the human and the other as her robotic assistant. The game is played as follows. The arena floor features two large response buttons, similar to the two cups in the shell game. To press the buttons, the human or the robot must move onto them. At the start of each trial, the robot is informed about which response button is the correct one to press. The human, being uninformed, essentially makes a random choice. A correct response, by either the robot or the human, is assumed to be rewarded. An incorrect response results in a penalty.

Here is what happens. The ethical robot (blue) is at the top, standing in between the two buttons. The human (red) chooses one of them at random and starts walking towards it. If she chooses the correct one the ethical robot does nothing (B), but if she chooses the wrong button (D) then, as soon as the ethical robot realises this, it points (with its arm) to the correct button and the human changes course.


But if we change a single line of code, we can transform the ethical robot into either a competitive or an aggressive robot. Almost all of the 'ethical' robot's code remains unchanged - in particular its ability to predict the consequences of both its own and the human's actions - which really underlines the point that the same cognitive machinery is needed to behave both ethically and unethically.

The results are shown below. At the top is a competitive robot determined that it, not the human, will win the game. Here the robot either blocks the human's path if she chooses the correct button (F), or - if she chooses the incorrect button (H) - the competitive robot ignores her and itself heads to that button. The lower results show an aggressive robot; this robot seeks only to misdirect the human - it is not concerned with winning the game itself. In (J) the human initially heads to the correct button and, when the robot realises this, it points toward the incorrect button, misdirecting and hence causing her to change direction. If the human chooses the incorrect button (L) the robot does nothing - through inaction causing her to lose the game.


Our paper explains how the code is modified for each of these three experiments. Essentially, outcomes are predicted for both the human and the robot, and used to evaluate the desirability of those outcomes. A single function q, based on these values, determines how the robot will act; for an ethical robot this function is based only on the desirability of outcomes for the human, for the competitive robot q is based only on the outcomes for the robot, and for the aggressive robot q is based on negating the outcomes for the human.
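
By way of illustration only - this is a minimal sketch in Python, not the code from our paper, and all the names are invented - the point is that the same consequence-prediction machinery feeds whichever q happens to be plugged in:

# Hypothetical sketch: swapping the evaluation function q flips the robot's
# behaviour. d_human and d_robot stand for predicted desirability scores.

def q_ethical(d_human, d_robot):
    return d_human           # act only in the human's interest

def q_competitive(d_human, d_robot):
    return d_robot           # act only in the robot's own interest

def q_aggressive(d_human, d_robot):
    return -d_human          # actively work against the human

def choose_action(actions, predict_outcomes, q):
    # predict_outcomes(a) is assumed to return (d_human, d_robot) for action a
    return max(actions, key=lambda a: q(*predict_outcomes(a)))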

So, what do we conclude from all of this? Maybe we should not be building ethical robots at all, because of the risk that they could be hacked to behave unethically. My view is that we should build ethical robots; I think the benefits far outweigh the risks, and - in some applications such as driverless cars - we may have no choice. The answer to the problem highlighted here and in our paper is to make sure it's impossible to hack a robot's ethics. How would we do this? Well, one approach would be a process of authentication, in which a robot makes a secure call to an ethics authentication server. Authentication is a well established technology; the server would provide the robot with a cryptographic ethics ticket, which the robot uses to enable its ethics functions.
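
To give a flavour of the idea - and this is only a sketch with invented names, not a worked-out protocol - the ticket could be as simple as a signed, time-limited token that the robot verifies before enabling its ethics layer:

import hashlib
import hmac
import time

SECRET = b"key shared between server and robot"  # purely illustrative

def issue_ethics_ticket(robot_id, valid_for_seconds=3600):
    # Server side: sign the robot id together with an expiry time.
    expiry = int(time.time()) + valid_for_seconds
    message = f"{robot_id}:{expiry}".encode()
    signature = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return {"robot_id": robot_id, "expiry": expiry, "signature": signature}

def ethics_functions_enabled(ticket):
    # Robot side: enable the ethics layer only if the ticket is genuine
    # and has not yet expired.
    message = f"{ticket['robot_id']}:{ticket['expiry']}".encode()
    expected = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ticket["signature"]) and time.time() < ticket["expiry"]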

Friday, July 08, 2016

Relax, we're not living in a computer simulation

Since Elon Musk's recent admission that he's a simulationist, several people have asked me what I think of the proposition that we are living inside a simulation.

My view is very firmly that the Universe we are right now experiencing is real. Here are my reasons.

Firstly, Occam's razor; the principle of explanatory parsimony. The problem with the simulation argument is that it is a fantastically complicated explanation for the universe we experience. It's about as implausible as the idea that some omnipotent being created the universe. No. The simplest and most elegant explanation is that the universe we see and touch, both first hand and through our telescopes, LIGOs and Large Hadron Colliders, is the real universe and not an artifact of some massive computer simulation.

Second is the problem of the Reality Gap. Anyone who uses simulation as a tool to develop robots is well aware that robots which appear to work perfectly well in a simulated virtual world often don't work very well at all when the same design is tested on the real robot. This problem is especially acute when we are artificially evolving those robots. The reason for these problems is that the simulation's model of the real world, and of the robot(s) in it, is an approximation. The Reality Gap refers to the less-than-perfect fidelity of the simulation; a better (higher fidelity) simulator would reduce the reality gap.

Anyone who has actually coded a simulator is painfully aware that the cost, not just computational but also in coding effort, of improving the fidelity of the simulation - even a little bit - is very high indeed. My long experience of both coding and using computer simulations teaches me that there is a law of diminishing returns, i.e. that each additional 1% of simulator fidelity costs far more than an additional 1% of effort. I rather suspect that the computational and coding cost of a simulator with 100% fidelity is infinite. Rather as in HiFi audio, the amount of money you would need to spend to perfectly reproduce the sound of a Stradivarius ends up higher than the cost of hiring a real Strad and a world-class violinist to play it for you.

At this point the simulationists might argue that the simulation we are living in doesn't need to be perfect, just good enough. Good enough to do what, exactly? To fool us, its inhabitants, into taking it for reality, or good enough to run on a finite computer (i.e. one that has finite computational power and runs at a finite speed)? The problem with this argument is that every time we look deeper into the universe we see more: more galaxies, more sub-atomic particles, etc. In short, we see more detail. The Voyager 1 spacecraft has left the Solar System without crashing, like Truman, into the edge of the simulation. There are no glitches like the deja vu in The Matrix.

My third argument is about the computational effort, and therefore the energy cost, of simulation. I conjecture that non-trivially simulating a complex system x (a human, say) requires more energy than the real x consumes. An inequality expressing this conjecture is given below; how much greater depends on how high the fidelity of the simulation is.
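
One way of writing it, with the notation intended only as a sketch (E_sim for the energy used by a simulation of x at fidelity F, E_real for the energy the real x consumes):

    E_{\mathrm{sim}}(x, F) \;>\; E_{\mathrm{real}}(x), \qquad \text{with } \frac{E_{\mathrm{sim}}(x, F)}{E_{\mathrm{real}}(x)} \text{ increasing in } F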



Let me explain. The average human burns around 2000 Calories a day, or about 9000 kJ of energy. How much energy would a computer simulation of a human require - one capable of doing all the same stuff (even in a virtual world) that you can do in your day? Well, that's impossible to estimate because we can't simulate complete human brains (let alone the rest of a human). But here's one illustration. Lee Sedol played AlphaGo a few months ago. In a single 2 hour match he burned about 170 Calories - the amount of energy you'd get from an egg sandwich. In the same 2 hours the AlphaGo machine consumed around 50,000 times more energy.

What can we simulate? The most complex organism that we have been able to simulate so far is the nematode worm C. elegans. I previously estimated that the energy cost of simulating the nervous system of a C. elegans is (optimistically) about 9 J/hour, which is about 2000 times greater than the real nematode (0.004 J/hr).
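
As a quick sanity check on these back-of-the-envelope numbers - taking the figures quoted above at face value, and assuming only the standard conversion 1 Calorie = 4.184 kJ - here is the arithmetic in a few lines of Python:

# Lee Sedol vs AlphaGo, using the figures quoted above
lee_sedol_kj = 170 * 4.184            # 170 Calories over a 2 hour match -> ~711 kJ
alphago_kj = lee_sedol_kj * 50_000    # ~50,000 times more energy -> ~3.6e7 kJ
alphago_kw = alphago_kj / (2 * 3600)  # averaged over 2 hours -> roughly 4,900 kW

# Simulated vs real nematode
ratio = 9.0 / 0.004                   # 2250, i.e. "about 2000 times" more energy

print(round(lee_sedol_kj), round(alphago_kw), round(ratio))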

I think there are lots of good reasons that simulating complex systems on a computer costs more energy than the same system consumes in the real world, so I'll ask you to take my word for it (I'll write about it another time). What's more, the relationship between energy cost and mass follows a power law - Kleiber's Law - and I strongly suspect the same kind of law applies to scaling up computational effort, as I wrote here. Thus, if the complexity of an organism o is C, then following Kleiber's Law the energy cost of simulating that organism, e, will be
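
something like the following (the notation is only indicative, with k an unspecified constant of proportionality):

    e \;=\; k \, C^{X}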



Furthermore, the exponent X (which in Kleiber's law is reckoned to be between 0.66 and 0.75 for animals and 1 for plants), will itself be a function of the fidelity of the simulation, hence X(F), where F is a measure of fidelity.

By using the number of synapses as a proxy for complexity and making some guesses about the values of X and F we could probably estimate the energy cost of simulating all humans on the planet (much harder would be estimating the energy cost of simulating every living thing on the planet). It would be a very big number indeed, but that's not really the point I'm making here.

The fundamental issue is this: if my conjecture that to simulate complex system x requires more energy than the real x consumes is correct, then to simulate the base level universe would require more energy than that universe contains - which is clearly impossible. Thus we could not - even in principle - simulate the whole of our own observable universe to a level of fidelity sufficient for our conscious experience. And, for the same reason, neither could our super advanced descendants create a simulation of a duplicate ancestor universe for us to (virtually) live in. Hence we are not living in such a simulation.

Friday, June 03, 2016

Engineering Moral Agents

This has been an intense but exciting week. I've been at Schloss Dagstuhl for a seminar called Engineering Moral Agents - from Human Morality to Artificial Morality. Schloss Dagstuhl is a kind of science retreat in rural south-west Germany. The idea is to bring together a group of people from across several disciplines to work together and intensively focus on a particular problem - in our case, the challenge of engineering ethical robots.

We had a wonderful group of scholars including computer scientists, moral, political and economic philosophers, logicians, engineers, a psychologist and a philosophical anthropologist. Our group included several pioneers of machine ethics, including Susan and Michael Anderson, and James Moor.




Our motivation was as follows:
Context-aware, autonomous, and intelligent systems are becoming a presence in our society and are increasingly involved in decisions that affect our lives. Humanity has developed formal legal and informal moral and societal norms to govern its own social interactions. There exist no similar regulatory structures that can be applied by non-human agents. Artificial morality, also called machine ethics, is an emerging discipline within artificial intelligence concerned with the problem of designing artificial agents that behave as moral agents, i.e., adhere to moral, legal, and social norms. 
Most work in artificial morality, up to the present date, has been exploratory and speculative. The hard research questions in artificial morality are yet to be identified. Some of such questions are: How to formalize, “quantify", qualify, validate, verify and modify the “ethics" of moral machines? How to build regulatory structures that address (un)ethical machine behavior? What are the wider societal, legal, and economic implications of introducing these machines into our society? 
We were especially keen to bridge the computer science/humanities/social-science divide in the study of artificial morality and in so doing address the central question of how to describe and formalise ethical rules such that they could be (1) embedded into autonomous systems, (2) understandable by users and other stakeholders such as regulators, lawyers or society at large, and (3) capable of being verified and certified as correct.

We made great progress toward these aims. Of course we will need some time to collate and write up our findings, and some of those findings identify hard research questions which will, in turn, need to be the subject of further work, but we departed the Dagstuhl with a strong sense of having moved a little closer to engineering artificial morality.

Monday, April 25, 2016

From ethics to regulation and governance

The following text was drafted in response to question 4 of the Parliamentary Science and Technology Committee Inquiry on Robotics and Artificial Intelligence on The social, legal and ethical issues raised by developments in robotics and artificial intelligence technologies, and how they should be addressed.

From Ethics to Regulation and Governance

1. Public attitudes. It is well understood that there are public fears around robotics and artificial intelligence. Many of these fears are undoubtedly misplaced, fuelled perhaps by press and media hype, but some are grounded in genuine worries over how the technology might impact, for instance, jobs or privacy. The most recent Eurobarometer survey on autonomous systems showed that the proportion of respondents with an overall positive attitude has declined from 70% in the 2012 survey to 64% in 2014. Notably the 2014 survey showed that the more personal experience people have with robots, the more favourably they tend to think of them; 82% of respondents have a positive view of robots if they have experience with them, whereas only 60% of respondents have a positive view if they lack robot experience. Also important is that a significant majority (89%) believe that autonomous systems are a form of technology that requires careful management.

2. Building trust in robotics and artificial intelligence requires a multi-faceted approach. The ethics roadmap here illustrates the key elements that contribute to building public trust. The core idea of the roadmap is that ethics inform standards, which in turn underpin regulation.

3. Ethics are the foundation of trust, and underpin good practice. Principles of good practice can be found in Responsible Research and Innovation (RRI). Examples include the 2014 Rome Declaration on RRI; the six pillars of the Rome declaration on RRI are: Engagement, Gender equality, Education, Ethics, Open Access and Governance. The EPSRC framework for responsible innovation incorporates the AREA (Anticipate, Reflect, Engage and Act) approach.

4. The first European work to articulate ethical considerations for robotics was the EURON Roboethics Roadmap.

5. In 2010 a joint AHRC/EPSRC workshop drafted and published the Principles of Robotics for designers, builders and users of robots. The principles are:
  • Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security;
  • Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.
  • Robots are products. They should be designed using processes which assure their safety and security.
  • Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  • The person with legal responsibility for a robot should be attributed.
6. Work by the British Standards Institute technical subcommittee on Robots and Robotic Devices led to publication – in April 2016 – of BS 8611: Guide to the ethical design and application of robots and robotic systems. BS8611 is not a code of practice; instead it gives “guidance on the identification of potential ethical harm and provides guidelines on safe design, protective measures and information for the design and application of robots”. BS8611 articulates a broad range of ethical hazards and their mitigation, including societal, application, commercial/financial and environmental risks, and provides designers with guidance on how to assess then reduce the risks associated with these ethical hazards. The societal hazards include, for example, loss of trust, deception, privacy & confidentiality, addiction and employment.

7. The IEEE has recently launched a global initiative on Ethical Considerations in the Design of Autonomous Systems, to encompass all intelligent technologies including robotics, AI, computational intelligence and deep learning.

8. Significant recent work towards regulation was undertaken by the EU project RoboLaw. The primary output of that project is a comprehensive report entitled Guidelines on Regulating Robotics. That report reviews both ethical and legal aspects; the legal analysis covers rights, liability & insurance, privacy and legal capacity. The report focuses on driverless cars, surgical robots, robot prostheses and care robots and concludes by stating: “The field of robotics is too broad, and the range of legislative domains affected by robotics too wide, to be able to say that robotics by and large can be accommodated within existing legal frameworks or rather require a lex robotica. For some types of applications and some regulatory domains, it might be useful to consider creating new, fine-grained rules that are specifically tailored to the robotics at issue, while for types of robotics, and for many regulatory fields, robotics can likely be regulated well by smart adaptation of existing laws”.

9. In general, technology is trusted if it brings benefits while also being safe, well regulated and, when accidents happen, subject to robust investigation. One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an excellent safety record. The reason commercial aircraft are so safe is not just good design, it is also the tough safety certification processes and, when things do go wrong, robust processes of air accident investigation. Should driverless cars, for instance, be regulated through a body similar to the Civil Aviation Authority (CAA), with a driverless car equivalent of the Air Accident Investigation Branch?

10. The primary focus of paragraphs 1 – 9 above is robotics and autonomous systems, and not software artificial intelligence. This reflects the fact that most work toward ethics and regulation has focussed on robotics. Because robots are physical artefacts (which embody AI) they are undoubtedly more readily defined and hence regulated than distributed or cloud-based AIs. This and the already pervasive applications of AI (in search engines, machine translation systems or intelligent personal assistant AIs, for example) strongly suggest that greater urgency needs to be directed toward considering the societal and ethical impact of AI, including the governance and regulation of AI.

11. AI systems raise serious questions over trust and transparency:
  • How can we trust the decisions made by AI systems, and – more generally – how can the public have confidence in the use of AI systems in decision making?
  • If an AI system makes a decision that turns out to be disastrously wrong, how do we investigate the logic by which the decision was made?
  • Of course much depends on the consequences of those decisions. Consider decisions that have real consequences for human safety or well-being, such as those made by medical diagnosis AIs or driverless car autopilots. Systems that make such decisions are critical systems.
12. Existing critical software systems are not AI systems, nor do they incorporate AI systems. The reason is that AI systems (and more generally machine learning systems) are generally regarded as impossible to verify for safety critical applications - the reasons for this need to be understood.
  • First is the problem of verification of systems that learn. Current verification approaches typically assume that the system being verified will never change its behaviour, but a system that learns does – by definition – change its behaviour, so any verification is likely to be rendered invalid after the system has learned.
  • Second is the black box problem. Modern AI systems, and especially the ones receiving the greatest attention, so-called Deep Learning systems, are based on Artificial Neural Networks (ANNs). A characteristic of ANNs is that, after the ANN has been trained with data sets (which may be very large, so-called "big data" sets - which itself poses another problem for verification), it is effectively impossible to examine the internal structure of the ANN in order to understand why and how it makes a particular decision. The decision making process of an ANN is not transparent.
  • The problem of verification and validation of systems that learn may not be intractable, but is the subject of current research, see for example work on verification and validation of autonomous systems. The black box problem may be intractable for ANNs, but could be avoided by using algorithmic approaches to AI (i.e. that do not use ANNs).
Recommendations

13. It is vital that we address public fears around robotics and artificial intelligence, through renewed public engagement and consultation.
14. Work is required to identify the kind of governance framework(s) and regulatory bodies needed to support Robotics and Artificial Intelligence in the UK. A group should be set up and charged with this work; perhaps a Royal Commission, as recently suggested by Tom Watson MP.

Saturday, April 09, 2016

Robots should not be gendered

Should robots be gendered? I have serious doubts about the morality of designing and building robots to resemble men or women, boys or girls. Let me explain why.

The first worry I have follows from one of the five principles of robotics, which states: robots should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.

To design a gendered robot is a deception. Robots cannot have a gender in any meaningful sense. To impose a gender on a robot, either by design of its outward appearance, or programming some gender stereotypical behaviour, cannot be for reasons other than deception - to make humans believe that the robot has gender, or gender specific characteristics.

When we drafted our 4th ethical principle the vulnerable people we had in mind were children, the elderly or disabled. We were concerned that naive robot users may come to believe that the robot interacting with them (caring for them perhaps) is a real person, and that the care the robot is expressing for them is real. Or that an unscrupulous robot manufacturer exploits that belief. But when it comes to gender we are all vulnerable. Whether we like it or not we all react to gender cues. So whether deliberately designed to do so or not, a gendered robot will trigger reactions that a non-gendered robot will not.

Our 4th principle states that a robot's machine nature should be transparent. But for gendered robots that principle doesn't go far enough. Gender cues are so powerful that even very transparently machine-like robots with a female body shape, for instance, will provoke a gender-cued response.

My second concern follows from an ethical problem that I've written and talked about before: the brain-body mismatch problem. I've argued that we shouldn't be building android robots at all until we can embed an AI into those robots that matches their appearance. Why? Because our reactions to a robot are strongly influenced by its appearance. If it looks human then we, not unreasonably, expect it to behave like a human. But a robot not much smarter than a washing machine cannot behave like a human. Ok, you might say, if and when we can build robots with human-equivalent intelligence, would I be ok with that? Yes, provided they are androgynous.

My third - and perhaps most serious - concern is about sexism. By building gendered robots there is a huge danger of transferring one of the evils of human culture - sexism - into the artificial realm. By gendering and especially sexualising robots we surely objectify. But how can you objectify an object, you might say? The problem is that a sexualised robot is no longer just an object, because of what it represents. The routine objectification of women (or men) because of ubiquitous sexualised robots will surely only deepen the already acute problem of the objectification of real women and girls. (Of course if humanity were to grow up and cure itself of the cancer of sexism, then this concern would disappear.)

What of the far future? Given that gender is a social construct then a society of robots existing alongside humans might invent gender for themselves. Perhaps nothing like male and female at all. Now that would be interesting.

Thursday, March 31, 2016

It's only a matter of time

Sooner or later there will be a fatal accident caused by a driverless car. It's not a question of if, but when. What happens immediately following that accident could have a profound effect on the nascent driverless car industry.

Picture the scene. Emergency services are called to attend the accident. A teenage girl on a bicycle, apparently riding along a cycle path, was hit and killed by a car. The traffic police quickly establish that the car at the centre of the accident was operating autonomously at the moment of the fatal crash. They endeavour to find out what went wrong, but how? Almost certainly the car will have logged data on its behaviour leading up to the moment of the crash - data that is sure to hold vital clues about what caused the accident - but will that data be accessible to the investigating traffic police? And even if it is, will the investigators be able to interpret the data..?

There are two ways the story could unfold from here.

Scenario 1: unable to investigate the accident themselves, the traffic police decide to contact the manufacturer and ask for help. As it happens a team from the manufacturer actually arrives on scene very quickly - it later transpires that the car had 'phoned home' automatically, so the manufacturer knew of the accident within seconds of it taking place. Somewhat nonplussed, the traffic police have little choice but to grant them full access to the scene of the accident. The manufacturer undertakes their own investigation and - several weeks later - issues a press statement explaining that the AI driving the car was unable to cope with an "unexpected situation" which "regrettably" led to the fatal crash. The company explain that the AI has been upgraded so that it cannot happen again. They also accept liability for the accident and offer compensation to the child's family. Despite repeated requests the company declines to share the technical details of what happened with the authorities, claiming that such disclosure would compromise its intellectual property.

A public already fearful of the new technology reacts very badly. Online petitions call for a ban on driverless cars and politicians enact knee-jerk legislation which, although falling short of an outright ban, sets the industry back years.

Scenario 2: the traffic police call the newly established driverless car accident investigation branch (DCAB), who send a team consisting of independent experts on driverless car technology, including its AI. The manufacturer's team also arrive, but - under a protocol agreed with the industry - their role is to support DCAB and provide "full assistance, including unlimited access to technical data". In fact the data logs stored by the car are in a new industry standard format, so access by DCAB is straightforward; software tools allow them to quickly interpret those data logs. Well aware of public concerns, DCAB provide hourly updates on the progress of their investigation via social media and, within just a few days, call a press conference to explain their findings. They outline the fault with the AI and explain that they will require the manufacturer to recall all affected vehicles and update the AI, after submitting technical details of the update to DCAB for approval. DCAB will also issue an update to all driverless car manufacturers asking them to check for the same fault in their own systems, also reporting their findings back to DCAB.

A public fearful of the new technology is reassured by the transparent and robust response of the accident investigation team. Although those fears surface in the press and social media, the umbrella Driverless Car Authority (DCA) are quick to respond with expert commentators and data to show that driverless cars are already safer than manually driven cars.


There are strong parallels between driverless cars and commercial aviation. One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an amazing safety record. The reason commercial aircraft are so safe is largely down to the very tough safety certification processes and, when things do go wrong, the rapid and robust processes of air accident investigation. There are emerging standards for driverless cars: ISO Technical Committee TC 204 on Intelligent Transport Systems already lists 213 standards. There isn't yet a standard for fully autonomous driverless car operation, but see for instance ISO 11270:2014 on Lane keeping assistance systems (LKAS). But standards need teeth, which is why we need standards-based certification processes for driverless cars managed by regulatory authorities - a driverless car equivalent of the FAA. In short, a governance framework for driverless cars.

Postscript: several people have emailed or tweeted me to complain that I seem to be anti driverless cars - nothing could be further from the truth. I am a strong advocate of driverless cars for many reasons, first and most importantly because they will save lives, second because they should lead to a reduction in the number of vehicles on the road - thus making our cities greener, and third because they might just cure humans of our unhealthy obsession with personal car ownership. My big worry is that none of these benefits will flow if driverless cars are not trusted. But trust in technology doesn't happen by magic and, in the early days, serious setbacks and a public backlash could set the nascent driverless car industry back years (think of GM foods in the EU). One way to counter such a backlash and build trust is to put in place robust and transparent governance as I have tried (not very well it seems) to argue in this post.

Saturday, February 20, 2016

Could we make a moral machine?

Could we make a moral machine? A robot capable of choosing or moderating its actions on the basis of ethical rules..? This was how I opened my IdeasLab talk at the World Economic Forum 2016, last month. The format of IdeasLab is 4 five minute (Pecha Kucha) talks, plus discussion and Q&A with the audience. The theme of this Nature IdeasLab was Building an Intelligent Machine, and I was fortunate to have 3 outstanding co-presenters: Vanessa Evers, Maja Pantic and Andrew Moore. You can see all four of our talks on YouTube here.

The IdeasLab variant of Pecha Kucha is pretty challenging for someone used to spending half an hour or more lecturing - 15 slides and 20 seconds per slide. Here is my talk:


and since not all of my (ever so carefully chosen) slides are visible in the recording here is the complete deck:



And the video clips in slides 11 and 12 are here:

Slide 11: Blue prevents red from reaching danger.
Slide 12: Blue faces an ethical dilemma: our indecisive robot can save them both.


Acknowledgements: I am deeply grateful to colleague Dr Dieter Vanderelst who designed and coded the experiments shown here on slides 10-12. This work is part of the EPSRC funded project Verifiable Autonomy.