Saturday, December 23, 2017

A Round Up of Robotics and AI Ethics

This blogpost is a round up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. The principles are presented here (in full or abridged) with notes and references but without commentary. If there are any (prominent) ones I've missed, please let me know.

Asimov's three laws of Robotics (1950)
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 
I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the idea that robots (and by extension AIs) should be governed by principles, and secondly that many subsequent sets of principles have been drafted as a direct response. The three laws first appeared in Asimov's short story Runaround [1]. This Wikipedia article provides a very good account of the three laws and their many (fictional) extensions.

Murphy and Woods' three laws of Responsible Robotics (2009)
  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. 
  2. A robot must respond to humans as appropriate for their roles. 
  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. 
These were proposed in Robin Murphy and David Woods' paper Beyond Asimov: The Three Laws of Responsible Robotics [2].

EPSRC Principles of Robotics (2010)
  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. 
  2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. 
  3. Robots are products. They should be designed using processes which assure their safety and security. 
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. 
  5. The person with legal responsibility for a robot should be attributed. 
These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.

Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)

I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
An account of the development of the Asilomar principles can be found here.

The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. 
See the ACM announcement of these principles here. The principles form part of the ACM's updated code of ethics.

Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)
  1. Contribution to humanity Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. 
  2. Abidance of laws and regulations Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
  3. Respect for the privacy of others Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
  4. Fairness Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. 
  5. Security As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. 
  6. Act with integrity Members of the JSAI are to acknowledge the significant impact which AI can have on society. 
  7. Accountability and Social Responsibility Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. 
  8. Communication with society and self-development Members of the JSAI must aim to improve and enhance society’s understanding of AI.
  9. Abidance of ethics guidelines by AI AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.
An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).

Draft principles of The Future Society's Science, Law and Society Initiative (Oct 2017)
  1. AI should advance the well-being of humanity, its societies, and its natural environment. 
  2. AI should be transparent.
  3. Manufacturers and operators of AI should be accountable.
  4. AI’s effectiveness should be measurable in the real-world applications for which it is intended.
  5. Operators of AI systems should have appropriate competencies.
  6. The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.
This article by Nicolas Economou explains the 6 principles with a full commentary on each one.

Montréal Declaration for Responsible AI draft principles (Nov 2017)
  1. Well-being The development of AI should ultimately promote the well-being of all sentient creatures.
  2. Autonomy The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
  3. Justice The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental / physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
  4. Privacy The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
  5. Knowledge The development of AI should promote critical thinking and protect us from propaganda and manipulation.
  6. Democracy The development of AI should promote informed participation in public life, cooperation and democratic debate.
  7. Responsibility The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.
The Montréal Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).

IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)
  1. How can we ensure that A/IS do not infringe human rights?
  2. Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
  3. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable?
  4. How can we ensure that A/IS are transparent?
  5. How can we extend the benefits and minimize the risks of AI/AS technology being misused?
These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.

A short article, Why Principles Matter, co-authored with IEEE General Principles co-chair Mark Halverson, explains the link between principles and standards, together with further commentary and references.

UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)
  1. Demand That AI Systems Are Transparent
  2. Equip AI Systems With an “Ethical Black Box”
  3. Make AI Serve People and Planet 
  4. Adopt a Human-In-Command Approach
  5. Ensure a Genderless, Unbiased AI
  6. Share the Benefits of AI Systems
  7. Secure a Just Transition and Ensuring Support for Fundamental Freedoms and Rights
  8. Establish Global Governance Mechanisms
  9. Ban the Attribution of Responsibility to Robots
  10. Ban AI Arms Race
Drafted by UNI Global Union's Future World of Work, these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency, and application of AI”.


References
[1] Asimov, Isaac (1950): Runaround, in I, Robot (The Isaac Asimov Collection ed.), Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin; Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 24 (4): 14–20.
[3] Boden, Margaret, et al. (2017): Principles of robotics: regulating robots in the real world. Connection Science, 29 (2): 124–129.
[4] Prescott, Tony; Szollosy, Michael (eds.) (2017): Ethical Principles of Robotics. Connection Science, 29 (2) and 29 (3).

Wednesday, October 11, 2017

Some reflections on ERL Emergency 2017

It has now been a couple of weeks since the ERL Emergency robotics competition in Piombino, Italy, so I've had time to wind down and reflect a little on the event. The competition and associated events were a great success. A total of 16 teams from 9 countries were organised into 8 multi-domain (air, land and sea) groups for the competition - see the ERL Emergency programme here for details.

From a technical point of view, what interested me most was to see how teams had improved their performance since euRathlon 2015. Of course a precise comparison is not possible, for several reasons: first, not all teams participated in both the 2015 and 2017 competitions, and of those that did, both personnel and robots had been refreshed; and second, since this is an outdoor competition, conditions (weather, wind and especially sea) were inevitably different.

However, the fact that euRathlon 2015 and ERL Emergency 2017 were held at the same location, with updated but broadly similar competition scenarios, means that general scenario (task) level comparisons are possible. In fact we also carried forward some of the functional benchmarks from the 2015 competition, which will allow detailed analysis across both competitions (but not in this blog post).

Instead I will here give a few general (and rather subjective) comments comparing 2015 and 2017 performance.

Communications continued to be a problem for teams, with all but one team in 2017 (as in 2015) choosing to communicate with their land robots via WiFi. Now any communications engineer (as I used to be) will tell you that WiFi is a hopelessly bad choice for outdoor communications. The image above shows the waypoint positions for ground robots, from the start position W1 to W6 in front of the building. The control tent was located near W1, close to the trees above the beach - about 112m from the front of the building - and with no line of sight to W6 because of the uneven terrain. To make matters worse, the land robots needed to enter the building and locate the machine room - about 10m from the entrance and again with no line of sight. Despite the obvious drawbacks of WiFi, the teams that used it came up with workarounds, including using robots as mobile repeaters and ingenious systems in which robots deployed a succession of fixed repeaters. There was clear progress in communications from 2015 to 2017: in 2015, when it became clear that no team could communicate successfully with the machine room (room #3), we relocated it from the rear of the building to the front (room #1), whereas in 2017 no such relocation was necessary - the teams that reached the machine room at the rear of the building were able to communicate with their robots. See the floorplan here.

Human-robot interfaces were critical to success. In 2015 we saw some interfaces that made it extremely difficult for teams to remotely tele-operate their robots, with operators struggling with postage-stamp sized windows showing the live video feed from the robot's cameras (especially difficult because the bright sunshine on most days meant that light levels inside the control tent were very high). In 2017 we saw not only much improved HRIs but also integration between autonomous and tele-operated functions so that, for instance, operators were able to drag and drop the next waypoint, monitor the robot's autonomous progress to that waypoint, then - once at the waypoint - make use of smart machine vision to identify objects of potential interest (OPIs).
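
To illustrate the kind of shared-autonomy workflow described above, here is a minimal, hypothetical sketch in Python. None of this is any team's actual software; the class and method names (go_to, reached, detect_candidates and so on) are assumptions made purely for illustration.

# A minimal, hypothetical sketch of the shared-autonomy pattern described
# above: the operator drops a waypoint, the robot navigates to it
# autonomously, then onboard vision proposes objects of potential
# interest (OPIs) for the operator to confirm. All class and method
# names here are illustrative, not taken from any team's software.

from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float  # metres, in the map frame
    y: float

class SharedAutonomyController:
    def __init__(self, nav, vision, ui):
        self.nav = nav        # autonomous navigation stack (assumed interface)
        self.vision = vision  # onboard object detector (assumed interface)
        self.ui = ui          # operator interface (assumed interface)

    def run_leg(self, waypoint: Waypoint):
        # The operator drags and drops the next waypoint; the robot drives itself.
        self.nav.go_to(waypoint)
        while not self.nav.reached(waypoint):
            self.ui.show_progress(self.nav.pose(), self.nav.video_frame())
        # At the waypoint, machine vision proposes OPIs; the operator
        # stays in the loop to accept or reject each candidate.
        for candidate in self.vision.detect_candidates():
            if self.ui.operator_confirms(candidate):
                self.ui.log_opi(candidate, self.nav.pose())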

Effective human-human communication was also a critical success factor, underlining the fact that ERL Emergency tests not just robots but human-robot teams or - to be more accurate - human-human-robot-robot teams. Given that a team's aerial, underwater and land robot operators were typically in separate control tents, establishing exactly how and when these operators would communicate with each other was very important. In this regard we (the judges) didn't mind how intra-team communication was organised - they could use WiFi, mobile phones, or even a runner. In 2015 the weaker teams clearly had not thought about this at all, and suffered as a result. Again in 2017 we saw a big improvement, with very effective intra-team communication in the most successful teams.

The full results listings are shown here for euRathlon 2015, and here for ERL Emergency 2017.

Here are a few images from the 2017 competition:




Tuesday, August 15, 2017

The case for an Ethical Black Box

Last month we presented our paper The Case for an Ethical Black Box at Towards Autonomous Robotic Systems (TAROS 2017), University of Surrey. The paper makes a very simple proposition: all robots should be fitted, as standard, with the equivalent of an aircraft Flight Data Recorder. We argue that without such a device - which we call an ethical black box - it will be impossible to properly investigate robot accidents. Ian Sample covered our paper in the Guardian here.

Here is the paper abstract:
This paper proposes that robots and autonomous systems should be equipped with the equivalent of a Flight Data Recorder to continuously record sensor and relevant internal status data. We call this an ethical black box. We argue that an ethical black box will be critical to the process of discovering why and how a robot caused an accident, and thus an essential part of establishing accountability and responsibility. We also argue that without the transparency afforded by an ethical black box, robots and autonomous systems are unlikely to win public trust.
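
To make the proposal a little more concrete, here is a minimal sketch of the core of such a recorder in Python. This is not the design from the paper; the field names, log format and interface are assumptions made purely for illustration, and a real ethical black box would also need to be tamper-evident and physically robust.

# A minimal, illustrative sketch of an "ethical black box" style recorder:
# it continuously timestamps sensor data and relevant internal status
# (e.g. the decision the control system has just taken) into an
# append-only log. Field names and the interface are assumptions,
# not the design from the paper.

import json
import time

class EthicalBlackBox:
    def __init__(self, path: str):
        # Append-only, line-buffered log file; a real system would need
        # tamper-evidence and physical robustness, like a flight data recorder.
        self.log = open(path, "a", buffering=1)

    def record(self, sensors: dict, internal_status: dict):
        entry = {
            "t": time.time(),          # timestamp of this record
            "sensors": sensors,        # e.g. battery, range sensors, camera summary
            "status": internal_status, # e.g. current goal, last decision, errors
        }
        self.log.write(json.dumps(entry) + "\n")

# Hypothetical usage inside a robot's control loop:
# ebb = EthicalBlackBox("/var/log/robot_ebb.jsonl")
# ebb.record({"battery_v": 24.1, "laser_min_m": 0.62},
#            {"goal": "dock", "last_decision": "turn_left"})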
And here are the presentation slides from TAROS:



The full paper can be downloaded from here. Comments and feedback welcome.


The full paper reference:

Winfield A.F.T., Jirotka M. (2017) The Case for an Ethical Black Box. In: Gao Y., Fallah S., Jin Y., Lekakou C. (eds) Towards Autonomous Robotic Systems. TAROS 2017. Lecture Notes in Computer Science, vol 10454. Springer, Cham.

Related blog posts:
The infrastructure of life 2 - Transparency

Friday, July 14, 2017

Three stories about Robot Stories

Here are the slides I gave yesterday morning as a member of the panel Sci-Fi Dreams: How visions of the future are shaping the development of intelligent technology, at the Centre for the Future of Intelligence 2017 conference. I presented three short stories about robot stories.




Slide 2:
The FP7 TRUCE Project invited a number of scientists - mostly within the field of Artificial Life - to suggest ideas for short stories. Those ideas were then sent to a panel of writers, each of whom chose one to develop into a story. I submitted an idea called The feeling of what it is like to be a robot and was delighted when Lucy Caldwell contacted me. Following a visit to the lab, Lucy drafted a beautiful story called The Familiar which - following some iteration - appeared in the collected volume Beta Life.

Slide 3:
More recently the EU Human Brain Project Foresight Lab brought three Sci Fi writers - Allen Ashley, Jule Owen and Stephen Oram - to visit the lab. Inspired by what they saw they then wrote three wonderful short stories, which were read at the 2016 Bristol Literature Festival. The readings were followed by a panel discussion which included myself and BRL colleagues Antonia Tzemanaki and Marta Palau Franco. The three stories are published in the volume Versions of the Future. Stephen Oram went on to publish a collection called Eating Robots.

Slide 4:
My first two stories were about people telling stories about robots. Now I turn to the possibility of robots themselves telling stories. Some years ago I speculated on the idea of robots telling each other stories (directly inspired by a conversation with Richard Gregory). That idea has now turned into a current project, with the aim of building an embodied computational model of storytelling. For a full description see this paper, currently in press.

Wednesday, June 21, 2017

CogX: Emerging ethical principles, toolkits and standards for AI

Here are the slides I presented at the CogX session on Precision Ethics this afternoon. My intention with these slides was to give a 10 minute helicopter overview of emerging ethical principles, toolkits and ethical standards for AI, including Responsible Research and Innovation.

Wednesday, March 08, 2017

Does AI pose a threat to society?

Last week I had the pleasure of debating the question "does AI pose a threat to society?" with friends and colleagues Christian List, Maja Pantic and Samantha Payne. The event was organised by the British Academy and brilliantly chaired by the Royal Society's director of science policy Claire Craig.

Here is my opening statement:

One Friday afternoon in 2009 I was called by a science journalist at, I recall, the Sunday Times. He asked me if I knew that there was to be a meeting of the AAAI to discuss robot ethics. I said no, I didn't know of the meeting. He then asked “are you surprised they are meeting to discuss robot ethics” and my answer was no. We talked some more and agreed it was actually a rather dull story: a case of scientists behaving responsibly. I really didn't expect the story to appear, but checked the Sunday paper anyway, and there in the science section was the headline Scientists fear revolt of killer robots. (I then spent the next couple of days on the radio explaining that no, scientists do not fear a revolt of killer robots.)

So, fears of future super intelligence - robots taking over the world - are greatly exaggerated: the threat of an out-of-control super intelligence is a fantasy - interesting for a pub conversation perhaps. It's true we should be careful and innovate responsibly, but that's equally true for any new area of science and technology. The benefits of robotics and AI are so significant, the potential so great, that we should be optimistic rather than fearful. Of course robots and intelligent systems must be engineered to very high standards of safety, for exactly the same reasons that we need our washing machines, cars and airplanes to be safe. If robots are not safe people will not trust them. To reach its full potential, what robotics and AI needs is a dose of good old-fashioned (and rather dull) safety engineering.

In 2011 I was invited to join a British Standards Institute working group on robot ethics, which drafted a new standard BS 8611 Guide to the ethical design of robots and robotic systems, published in April 2016. I believe this to be the world’s first standard on ethical robots.

Also in 2016 the very well regarded IEEE Standards Association - the same organization that gave us WiFi - launched a Global Initiative on Ethical Considerations in AI and Autonomous Systems. The purpose of this Initiative is to ensure every technologist is educated and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems; in a nutshell, to ensure ethics are baked in. In December we published Ethically Aligned Design: A Vision for Prioritizing Human Well-being with AI and Autonomous Systems. Within that initiative I'm also leading a new standard on transparency in autonomous systems, based on the simple principle that it should always be possible to find out why an AI or robot made a particular decision.

We need to agree ethical principles, because they are needed to underpin standards - ways of assessing and mitigating the ethical risks of robotics and AI. But standards need teeth and in turn underpin regulation. Why do we need regulation? Think of passenger airplanes: the reason we trust them is that they belong to a highly regulated industry with an amazing safety record, and robust, transparent processes of air accident investigation when things do go wrong. Take one example of a robot that we read a lot about in the news - the driverless car. I think there's a strong case for a driverless car equivalent of the CAA, with a driverless car accident investigation branch. Without this it's hard to see how driverless car technology will win public trust.

Does AI pose a threat to society? No. But we do need to worry about the down-to-earth questions raised by present-day, rather unintelligent AIs; the ones that are deciding our loan applications, piloting our driverless cars or controlling our central heating. Are those AIs respecting our rights, freedoms and privacy? Are they safe? When AIs make bad decisions, can we find out why? And I worry too about the wider societal and economic impacts of AI. I worry about jobs of course, but actually I think there is a bigger question: how can we ensure that the wealth created by robotics and AI is shared by all in society?

Thank you.

This image was used to advertise the BA's series of events on the theme Robotics, AI and Society. The reason I reproduce it here is that one of the many interesting questions to the panel was about the way AI tends to be visualised in the media. This kind of human face coalescing (or perhaps emerging) from the atomic parts of the AI seems to have become a trope for AI. Is it a helpful visualisation of the human face of AI, or does it mislead, giving the impression that AI has human characteristics?

Wednesday, February 15, 2017

Thoughts on the EU's draft report on robotics

A few weeks ago I was asked to write a short op-ed on the European Parliament Law Committee's recommendations on civil law rules for robotics.

In the end the piece didn't get published, so I am posting it here.

It is a great shame that most reports of the European Parliament’s Committee for Legal Affairs’ vote last week on its Draft Report on Civil Law Rules on Robotics headlined on ‘personhood’ for robots, because the report has much else to commend it. Most important among its several recommendations is a proposed code of ethical conduct for roboticists, which explicitly asks designers to research and innovate responsibly. Some may wonder why such an invitation even needs to be made but, given that engineering and computer science education rarely includes classes on ethics (it should), it is really important that robotics engineers reflect on their ethical responsibilities to society – especially given how disruptive robot technologies are. This is not new – great frameworks for responsible research and innovation already exist. One such is the 2014 Rome Declaration on RRI, and in 2015 the Foundation for Responsible Robotics was launched.

Within the report’s draft Code of Conduct is a call for robotics funding proposals to include a risk assessment. This too is a very good idea and guidance already exists in British Standard BS 8611, published in April 2016. BS 8611 sets out a comprehensive set of ethical risks and offers guidance on how to mitigate them. It is very good also to see that the Code stresses that humans, not robots, are the responsible agents; this is something we regarded as fundamental when we drafted the Principles of Robotics in 2010.

For me, transparency (or the lack of it) is an increasing worry in both robots and AI systems. Labour's industry spokesperson Chi Onwurah is right to say, “Algorithms are part of our world, so they are subject to regulation, but because they are not transparent, it’s difficult to regulate them effectively” (and don't forget that it is algorithms that make intelligent robots intelligent). So it is very good to see the draft Code call for robotics engineers to “guarantee transparency … and right of access to information by all stakeholders”, and then, in the draft ‘Licence for Designers’, that you should ensure “maximal transparency” and - even more welcome - that “you should develop tracing tools that … facilitate accounting and explanation of robotic behaviour… for experts, operators and users”. Within the IEEE Standards Association Global Initiative on Ethics in AI and Autonomous Systems, launched in 2016, we are working on a new standard on Transparency in Autonomous Systems.

This brings me to standards and regulation. I am absolutely convinced that regulation, together with transparency and public engagement, builds public trust. Why is it that we trust our tech? Not just because it's cool and convenient, but also because it's safe (and we assume that the disgracefully maligned experts will take care of assuring that safety). One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an amazing safety record. The reason commercial aircraft are so safe is not just good design; it is also the tough safety certification processes and, when things do go wrong, robust processes of air accident investigation. So the Report's call for a European Agency for Robotics and AI to recommend standards and a regulatory framework is, as far as I'm concerned, not a moment too soon. We urgently need standards for safety certification of a wide range of robots, from drones and driverless cars to robots for care and assisted living.

Like many of my robotics colleagues I am deeply worried by the potential for robotics and AI to increase levels of economic inequality in the world. Winnie Byanyima, executive director of Oxfam, writes for the WEF: “We need fundamental change to our economic model. Governments must stop hiding behind ideas of market forces and technological change. They … need to steer the direction of technological development”. I think she is right - we need a serious public conversation about technological unemployment and how we ensure that the wealth created by AI and Autonomous Systems is shared by all. A Universal Basic Income may or may not be the best way to do this - but it is very encouraging to see this question raised in the draft Report.

I cannot close the piece without at least mentioning artificial personhood. My own view is that personhood is the solution to a problem that doesn't exist. I can understand why, in the context of liability, the Report raises this question for discussion, but - as the report itself later asserts in the Code of Conduct - humans, not robots, are the responsible agents. Robots are, and should remain, artefacts.

Friday, January 06, 2017

The infrastructure of life 2 - Transparency

Part 2: Autonomous Systems and Transparency

In my previous post I argued that a wide range of AI and Autonomous Systems (from now on I will just use the term AS as shorthand for both) should be regarded as Safety Critical. I include both autonomous software AI systems and hard (embodied) AIs such as robots, drones and driverless cars. Many will be surprised that I include in the soft AI category apparently harmless systems such as search engines. Of course no-one is seriously inconvenienced when Amazon makes a silly book recommendation, but consider very large groups of people. If a truth such as global warming is - because of accidental or willful manipulation - presented as false, and that falsehood believed by a very large number of people, then serious harm to the planet (and we humans who depend on it) could surely result.

I argued that the tools barely exist to properly assure the safety of AS, let alone the standards and regulation needed to build public trust, and that political pressure is needed to ensure our policymakers fully understand the public safety risks of unregulated AS.

In this post I will outline the case that transparency is a foundational requirement for building public trust in AS based on the radical proposition that it should always be possible to find out why an AS made a particular decision.

Transparency is not one thing. Clearly your elderly relative doesn't require the same level of understanding of her care robot as the engineer who repairs it, nor would you expect the same appreciation as your doctor of the reasons a medical diagnosis AI recommends a particular course of treatment. Broadly (and please understand this is a work in progress) I believe there are five distinct groups of stakeholders, and that AS must be transparent to each, in different ways and for different reasons. These stakeholders are: (1) users, (2) safety certification agencies, (3) accident investigators, (4) lawyers or expert witnesses and (5) wider society.
  1. For users, transparency is important because it builds trust in the system, by providing a simple way for the user to understand what the system is doing and why.
  2. For safety certification of an AS, transparency is important because it exposes the system's processes for independent certification against safety standards.
  3. If accidents occur, AS will need to be transparent to an accident investigator; the internal processes that led to the accident need to be traceable. 
  4. Following an accident lawyers or other expert witnesses, who may be required to give evidence, require transparency to inform their evidence. And 
  5. for disruptive technologies, such as driverless cars, a certain level of transparency to wider society is needed in order to build public confidence in the technology.
Of course the way in which transparency is provided is likely to be very different for each group. If we take a care robot as an example, transparency means the user can understand what the robot might do in different circumstances; if the robot should do anything unexpected she should be able to ask the robot 'why did you just do that?' and receive an intelligible reply. Safety certification agencies will need access to technical details of how the AS works, together with verified test results. Accident investigators will need access to data logs of exactly what happened prior to and during an accident, most likely provided by something akin to an aircraft flight data recorder (and it should be illegal to operate an AS without such a system). And wider society would need accessible, documentary-type science communication to explain the AS and how it works.

In IEEE Standards Association project P7001, we aim to develop a standard that sets out measurable, testable levels of transparency in each of these categories (and perhaps new categories yet to be determined), so that Autonomous Systems can be objectively assessed and levels of compliance determined. It is our aim that P7001 will also articulate levels of transparency in a range that defines minimum levels up to the highest achievable standards of acceptance. The standard will provide designers of AS with a toolkit for self-assessing transparency, and recommendations for how to address shortcomings or transparency hazards.
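
Purely by way of illustration (P7001 is still being drafted, and nothing below is the standard's actual content), a self-assessment toolkit of the kind described might record an assessed transparency level per stakeholder group and flag shortfalls, along these lines:

# Purely illustrative sketch of a transparency self-assessment record of
# the kind P7001 is intended to enable. The stakeholder groups come from
# this post; the 0-5 scale, field names and reporting logic are
# assumptions, not content from the standard.

from dataclasses import dataclass, field

STAKEHOLDERS = ["users", "certifiers", "accident_investigators",
                "expert_witnesses", "wider_society"]

@dataclass
class TransparencyAssessment:
    system_name: str
    # Assessed transparency level per stakeholder group, e.g. 0 (none) to 5.
    levels: dict = field(default_factory=dict)

    def set_level(self, stakeholder: str, level: int, evidence: str):
        assert stakeholder in STAKEHOLDERS and 0 <= level <= 5
        self.levels[stakeholder] = {"level": level, "evidence": evidence}

    def shortfalls(self, minimum: int = 1):
        # Flag stakeholder groups falling below a chosen minimum level -
        # candidate "transparency hazards" for the designer to address.
        return [s for s in STAKEHOLDERS
                if self.levels.get(s, {"level": 0})["level"] < minimum]

# Hypothetical usage:
# a = TransparencyAssessment("care robot v2")
# a.set_level("users", 3, "robot answers 'why did you do that?' queries")
# print(a.shortfalls(minimum=2))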

Of course transparency on its own is not enough. Public trust in technology, as in government, requires both transparency and accountability. Transparency is needed so that we can understand who is responsible for the way Autonomous Systems work and - equally importantly - don't work.


Thanks: I'm very grateful to colleagues in the IEEE global initiative on ethical considerations in Autonomous Systems for supporting P7001, especially John Havens and Kay Firth-Butterfield. I'm equally grateful to colleagues at the Dagstuhl on Engineering Moral Machines, especially Michael Fisher, Marija Slavkovik and Christian List for discussions on transparency.

Related blog posts:
The Infrastructure of Life 1 - Safety
Ethically Aligned Design
How do we trust our Robots?
It's only a matter of time

Sunday, January 01, 2017

The infrastructure of life 1 - Safety

Part 1: Autonomous Systems and Safety

We all rely on machines. All aspects of modern life, from transport to energy, work to welfare, play to politics, depend on a complex infrastructure of physical and virtual systems. How many of us understand how all of this stuff works? Very few, I suspect. But it doesn't matter, does it? We trust the good men and women (the disgracefully maligned experts) who build, manage and maintain the infrastructure of life. If something goes wrong they will know why. And (we hope) make sure it doesn't happen again.

All well and good you might think. But the infrastructure of life is increasingly autonomous - many decisions are now made not by a human but by the systems themselves. When you search for a restaurant near you the recommendation isn't made by a human, but by an algorithm. Many financial decisions are not made by people but by algorithms; and I don't just mean city investments - it's possible that your loan application will be decided by an AI. Machine legal advice is already available; a trend that is likely to increase. And of course if you take a ride in a driverless car, it is algorithms that decide when the car turns, brakes and so on. I could go on.

These are not trivial decisions. They affect lives. The real world impacts are human and economic, even political (search engine results may well influence how someone votes). In engineering terms these systems are safety critical. Examples of safety critical systems that we all rely on from time to time include aircraft autopilots or train braking systems. But - and this may surprise you - the difficult engineering techniques used to prove the safety of such systems are not applied to search engines, automated trading systems, medical diagnosis AIs, assistive living robots, delivery drones, or (I'll wager) driverless car autopilots.

Why is this? Well, it's partly because the field of AI and autonomous systems is moving so fast. But I suspect it has much more to do with an incompatibility between the way we have traditionally designed safety critical systems and the design of modern AI systems. There is, I believe, one key problem: learning. There is a very good reason that current safety critical systems (like aircraft autopilots) don't learn. Current safety assurance approaches assume that the system being certified will never change, but a system that learns does - by definition - change its behaviour, so any certification is rendered invalid after the system has learned.

And as if that were not bad enough, the particular method of learning which has caused such excitement - and rapid progress - in the last few years is based on Artificial Neural Networks (more often these days referred to as Deep Learning). A characteristic of ANNs is that, after the ANN has been trained with datasets, it is effectively impossible to examine its internal structure in order to understand why and how it makes a particular decision. The decision making process of an ANN is opaque. AlphaGo's moves were beautiful but puzzling. We call this the black box problem.

Does this mean we cannot assure the safety of learning autonomous/AI systems at all? No it doesn't. The problem of safety assurance of systems that learn is hard but not intractable, and is the subject of current research*. The black box problem may be intractable for ANNs, but could be avoided by using approaches to AI that do not use ANNs.

But - here's the rub. This involves slowing down the juggernaut of autonomous systems and AI development. It means taking a much more cautious and incremental approach, and it almost certainly involves regulation (that, for instance, makes it illegal to run a driverless car unless the car's autopilot has been certified as safe - and that would require standards that don't yet exist). Yet the commercial and political pressure is to be more permissive, not less; no country wants to be left behind in the race to cash in on these new technologies.

This is why work toward AI/Autonomous Systems standards is so vital, together with the political pressure to ensure our policymakers fully understand the public safety risks of unregulated AI.

In my next blog post I will describe one current standards initiative, towards introducing transparency in AI and Autonomous Systems based on the simple principle that it should always be possible to find out why an AI/AS system made a particular decision.

The next few years of swimming against the tide are going to be hard work. As Luke Muehlhauser writes in his excellent essay on transparency in safety-critical systems, “...there is often a tension between AI capability and AI transparency. Many of AI’s most powerful methods are also among its least transparent”.

*some, but nowhere near enough. See for instance Verifiable Autonomy.

Related blog posts:
Ethically Aligned Design
How do we trust our Robots?
It's only a matter of time