A Response to the EU Proposal on Electronic Person Status for Robots, January 2017
On 12th January 2017, the European Parliament Committee on Legal Affairs voted by 17 votes to 2 to approve a draft report, published in May 2016 by Luxembourg MEP Mady Delvaux, with recommendations to the Commission on Civil Law Rules on Robotics.
The report called for the establishment of ‘electronic person’ status for robots and, implicitly, for the incorporation of Asimov’s Laws into European law governing robotics. It has now been put forward for a forthcoming vote by the whole European Parliament.
On 13th January, GWS Robotics were approached by a journalist for comment.
That same afternoon, our copywriter Philip Graves and programmer Tom Bellew held an intensive discussion on the points raised, and together drafted responses to the journalist's questions.
The following day, the text of our responses was edited and selectively amended by our creative director, David Graves.
Here we publish the full text of David's edit of our responses.
1. Do robots need legal status, such as 'electronic persons'?
In our opinion they do not, for several reasons.
The first is that they are essentially digital processors in a non-living shell, not conscious beings or animals able to experience physical pain or emotional distress. This means that there is no basis for granting them rights equivalent to animal rights or human rights.
The second is that the granting of ‘electronic person’ status to robots carries serious ethical risks, as it would diminish the responsibilities of the humans who program and operate them.
In my view robots should remain the responsibility of those who program and operate them. This is a necessary ethical safeguard to deter and prevent irresponsible programming or operation that might allow robots to take actions harmful to human or other sentient life.
Typical modern robots run multiple levels of program, starting with the programming with which they are pre-configured by the original developers and continuing with custom programming added by secondary programmers and/or end users. This layering raises questions about how legal responsibility for a robot’s behaviours and actions should be divided between the original developer, the programmers and the operator.
There is a case for saying that the original developer should be responsible for creating a framework that limits the potential for further custom programming to cause robots to behave harmfully.
Secondary responsibility would rest with the programmer who customises what the robot does, should he or she cause the robot to do something harmful. The actions of the operator are also important: in future, robots may be ‘trained’ to perform certain tasks, and a trainer might be held responsible for the results of bad training.
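To make the idea of such a framework concrete, the hedged sketch below imagines a developer-supplied gateway through which all later custom programming must act. The names (SAFE_ACTIONS, RobotActionGateway, the hardware object) are hypothetical placeholders for illustration, not any real robot platform's API.

```python
# Minimal sketch of a developer-supplied framework that limits what later
# custom programming can make a robot do. SAFE_ACTIONS, RobotActionGateway and
# the hardware object are hypothetical placeholders, not a real platform's API.

SAFE_ACTIONS = {"speak", "wave", "navigate_to", "display_message"}  # whitelisted by the original developer

class RobotActionGateway:
    """All later custom programming must request actions through this gateway."""

    def __init__(self, hardware):
        self._hardware = hardware  # low-level interface supplied by the developer

    def request_action(self, action: str, **params):
        if action not in SAFE_ACTIONS:
            # Custom code cannot reach the hardware directly, so unsafe
            # requests are simply refused.
            raise PermissionError(f"Action '{action}' is not permitted by the safety framework.")
        return self._hardware.execute(action, **params)
```

Under this kind of design, the developer's responsibility is the whitelist and the gateway; the secondary programmer's responsibility is what they choose to request through it.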
2. Is a 'kill switch' necessary? Could the likes of Pepper be a threat?
A ‘kill switch’ is a rather sensationalist way of describing an ‘off switch’. Since robots are machines, just like vacuum cleaners, industrial machinery and cars, there must be a way to switch them off quickly whether in an emergency or in the course of normal use.
We don’t need to scare people into thinking that motor cars need a ‘kill switch’ to prevent them from causing death on the roads, and talk of a ‘kill switch’ for the current generation of robots is rather over the top.
If we were talking about autonomous military robots, or robots designed to physically coerce, disable or kill, then such terminology would become more relevant.
Pepper and other social robots are no more of a threat than any other machine with limited mobility, limited autonomy and intelligence, and limited physical ability to cause harm. The average dog would probably be more dangerous.
Machines will only be as dangerous as they are designed to be in the first place. The responsibility for keeping them safe will be in the hands of the designers, programmers and operators.
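As a purely illustrative aside, the sketch below shows how unremarkable an ‘off switch’ is in engineering terms: a control loop that checks an emergency-stop signal on every cycle and halts the motors when it is pressed. The DemoRobot class and its methods are hypothetical stand-ins, not the API of Pepper or any real robot.

```python
# Minimal sketch of an emergency-stop check inside a robot control loop.
# DemoRobot is a hypothetical stand-in, not a real robot's API.
import time

class DemoRobot:
    """Stand-in robot that 'presses' its own stop button after a few cycles."""
    def __init__(self):
        self.cycles = 0
    def estop_pressed(self):
        return self.cycles >= 5
    def read_sensors(self):
        return {"obstacle_distance_m": 1.0}
    def plan_motion(self, sensors):
        return {"forward_speed": 0.2}
    def drive_motors(self, command):
        self.cycles += 1
    def stop_motors(self):
        print("Motors stopped.")

def control_loop(robot, period=0.02):
    while True:
        if robot.estop_pressed():      # operator (or safety system) hit the off switch
            robot.stop_motors()        # cut motion immediately
            break
        sensors = robot.read_sensors()
        command = robot.plan_motion(sensors)
        robot.drive_motors(command)
        time.sleep(period)             # run at a fixed control rate

control_loop(DemoRobot())
```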
3. The report said AI could surpass human intellect within a few decades. What implications does this have?
In our view, in terms of raw processing power, microchips have already surpassed human abilities. We saw the chess computer Deep Blue defeat grandmaster Garry Kasparov twenty years ago, in 1997. That, however, is a reflection of the sophisticated development of artificial intelligence within narrowly defined, structured contexts such as a game of chess, which has a mathematically limited range of possibilities for each move. Chess computers search through the possible continuations and evaluate which moves offer the greatest chance of leading to an ultimate victory, and this can be programmed using logic alone.
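As an illustration of that kind of purely logical look-ahead, here is a minimal sketch of minimax search with alpha-beta pruning applied to a toy take-away game (take 1 or 2 counters from a pile; whoever takes the last counter wins). The game and function names are hypothetical examples for illustration, not anything drawn from Deep Blue or a real chess engine.

```python
# Minimal sketch of minimax search with alpha-beta pruning over a toy game:
# players alternately take 1 or 2 counters; taking the last counter wins.

def minimax(pile, maximising, alpha=float("-inf"), beta=float("inf")):
    """Return +1 if the maximising player can force a win from here, -1 otherwise."""
    if pile == 0:
        # The player who just moved took the last counter and won.
        return -1 if maximising else 1
    if maximising:
        best = float("-inf")
        for take in (1, 2):
            if take <= pile:
                best = max(best, minimax(pile - take, False, alpha, beta))
                alpha = max(alpha, best)
                if alpha >= beta:   # remaining moves cannot change the outcome
                    break
        return best
    else:
        best = float("inf")
        for take in (1, 2):
            if take <= pile:
                best = min(best, minimax(pile - take, True, alpha, beta))
                beta = min(beta, best)
                if alpha >= beta:
                    break
        return best

def best_move(pile):
    """Choose the move that search says gives the best guaranteed result."""
    return max((t for t in (1, 2) if t <= pile),
               key=lambda t: minimax(pile - t, False))

print(best_move(7))  # with 7 counters left, taking 1 leaves the opponent a losing position
```

The point of the example is that nothing here requires consciousness: the machine simply enumerates possibilities and scores them.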
Human intelligence is more multi-faceted, going beyond logic, and is applied to very much more open-ended contexts than games of chess or other conventional applications for artificial intelligence.
While it seems likely that artificial intelligence programming will become ever more sophisticated as ways are found to artificially replicate brain structures, and microchip processing power will continue to increase, there is a case for thinking that artificial intelligence can only ever be as good as the intelligence that goes into its design.
If an artificially intelligent machine of the future (such as a robot) is programmed in such a way that it acquires improved judgement from experiences held in memory, it will be behaving much as conscious animals do when it comes to learning behaviour.
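As a hedged illustration of what ‘improved judgement from experiences held in memory’ can mean in practice, the sketch below uses a simple tabular Q-learning update. The states, actions and rewards are invented placeholders, not a description of any particular robot.

```python
# Minimal sketch of learning from stored experience using tabular Q-learning.
# States, actions and rewards are hypothetical placeholders.
from collections import defaultdict

class ExperienceLearner:
    def __init__(self, learning_rate=0.1, discount=0.9):
        self.q = defaultdict(float)        # remembered value of (state, action) pairs
        self.learning_rate = learning_rate
        self.discount = discount

    def update(self, state, action, reward, next_state, next_actions):
        """Adjust the remembered value of an action based on one experience."""
        best_next = max((self.q[(next_state, a)] for a in next_actions), default=0.0)
        target = reward + self.discount * best_next
        self.q[(state, action)] += self.learning_rate * (target - self.q[(state, action)])

    def choose(self, state, actions):
        """Prefer the action that past experience suggests works best."""
        return max(actions, key=lambda a: self.q[(state, a)])

# Example: after one rewarded experience, the learner starts to prefer that action.
learner = ExperienceLearner()
learner.update(state="door_closed", action="push_handle", reward=1.0,
               next_state="door_open", next_actions=["walk_through"])
print(learner.choose("door_closed", ["push_handle", "wait"]))  # -> "push_handle"
```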
However, human intelligence is based on an open-ended consciousness not only of a particular situation but also of its context within everything else that is known or understood to be happening, and within the individual’s life experience, physical needs and drives. Other than the need for a power supply and the drives that it is programmed to have, it is hard to see how a robot would develop drives that might cause it to behave in ways that endanger people.
Other features of human consciousness, such as emotions and responses to biological hormones, further distinguish our experience and our drives from those of any artificial intelligence currently in production or likely to be produced in the foreseeable future.
While artificial intelligence programmers will attempt to replicate more and more features of human consciousness in their designs over time, the question is how useful or effective that will actually be.
Will a conscious, moody, disobedient robot be any use to mankind? If, as seems likely, it is not, then presumably companies will strive to create AI that, while it can cope with and ‘understand’ our moods and needs, is not conscious and does not have moods, needs and drives of its own, since those would reduce its usefulness to us.
At the same time, as the sophistication of artificial intelligence development continues to increase, the ethical and technological requirements for keeping robots from acting in ways injurious to other life-forms will become increasingly important. It may be sensible to draft legislation that sets out the responsibilities of developers and programmers of robots and other artificial intelligence to help address this.
4. Will the 'rules' suggested by science fiction writer Isaac Asimov, for how robots should act if and when they become self-aware, be applicable?
These rules state:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm
- A robot must obey the orders given by human beings except where such orders would conflict with the first law
- A robot must protect its own existence as long as such protection does not conflict with the first or second laws
We agree partially with Asimov’s first rule, and more particularly with the part relating to robots not being allowed or able to injure human beings.
The provision that through inaction a robot may not allow a human being to come to harm is more controversial, however, and would be more difficult to codify or justify from a legal standpoint. There are many situations in which humans in the vicinity of robots may come to harm that have nothing to do with the robots. Are robots going to be sophisticated enough to intervene to protect humans in their vicinity from all manner of threats? What if they misperceive the aggressor and the victim?
It may be necessary to restrict Asimov’s provision to instances in which the threat of harm is directly caused by the robot itself. Otherwise a robot might try to protect a criminal from a police officer attempting an arrest, since allowing the arrest to proceed, with its possibility of harm to the criminal, would appear to be prohibited by the first rule.
Asimov’s second rule should in my view be subject to careful definition and interpretation. There should not necessarily be a responsibility for robots to obey any human who issues instructions to them. In future they could be programmed to recognise who is giving them instructions and to obey only authorised personnel, and also to recognise which instructions would be counter-productive to known goals, tasks or guidelines and to raise objections. But they would need to be programmed to respond in this more sophisticated way.
While I agree that robots should not generally be allowed to obey orders that cause harm to human beings, there are many contexts in which it would be sensible for them not to respond to instructions from just anyone, and times when it would be sensible for them to refuse orders even from their human operators.
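A hedged sketch of the kind of command filtering described above might look like the following: only authorised operators are obeyed, and instructions that conflict with known safety guidelines are refused with an objection. The operator IDs, action names and Command structure are hypothetical placeholders, not a real robot API.

```python
# Minimal sketch of command filtering: obey only authorised operators and
# object to instructions that conflict with safety guidelines. All names here
# (AUTHORISED_OPERATORS, FORBIDDEN_ACTIONS, Command) are hypothetical.
from dataclasses import dataclass

AUTHORISED_OPERATORS = {"operator_01", "supervisor_02"}   # assumed authorised IDs
FORBIDDEN_ACTIONS = {"strike", "push", "block_exit"}      # assumed unsafe actions

@dataclass
class Command:
    issuer_id: str
    action: str

def handle_command(cmd: Command) -> str:
    if cmd.issuer_id not in AUTHORISED_OPERATORS:
        return f"Refused: '{cmd.issuer_id}' is not an authorised operator."
    if cmd.action in FORBIDDEN_ACTIONS:
        # Raise an objection rather than silently obeying a harmful order.
        return f"Objection: action '{cmd.action}' conflicts with safety guidelines."
    return f"Executing '{cmd.action}' for {cmd.issuer_id}."

# Example usage
print(handle_command(Command("operator_01", "fetch_parcel")))
print(handle_command(Command("visitor_17", "fetch_parcel")))
print(handle_command(Command("operator_01", "push")))
```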
Asimov’s third rule is in our view unjustified by ethical considerations. Since robots do not possess true consciousness or a living creature’s ability to feel pain and distress, there should not be any cause to confer on them the right to life and self-preservation in the manner that is legally conferred on human beings in the modern world. It seems to us that this rule reflects a rather romantic view on Asimov’s part of robots as beings akin to living creatures.
5. Finally, what do you think of calls for the creation of a European agency for robotics and artificial intelligence that can provide technical, ethical and regulatory expertise?
This may ultimately be a matter of politics rather than ethics, depending on where you stand on supranational regulation and European bodies and laws versus the demand for national independence.
International cooperation on scientific research is well established and very important to progress. Law and technology can be uncomfortable bedfellows: the law can be used as a blunt instrument by vested interests to prevent or impede technological progress, while the legal system can struggle to keep up with changes in technology.
However, an agency like this would help to set out parameters and best practice for AI developers and programmers worldwide, so I think it would be valuable.