
The Design and Deployment of Ethically Aligned Intelligent Systems – Bath 26 February 2019

Dr. Rob Wortham began his talk by asking the audience what they understood by terms such as 'robot' and 'intelligence', and then suggested that these might extend to simpler instances than we would normally think. A robot could be a physical device that senses its environment and performs a function; if it does 'the right thing' according to the context, it could be said to perform in an intelligent way. A physical item such as an electric kettle fitted with a thermostat performs the operation of heating water and, regardless of the quantity or starting temperature of the water inside it, does so until it boils and then switches off.
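The kettle example amounts to a minimal sense-act loop. A small sketch (my own illustration, not from the talk; the function names and the 5°C-per-step heating rate are invented for the example) of that idea:

```python
# Sketch of the kettle-with-thermostat idea: a device that senses its
# environment (water temperature) and acts on it (heating element on/off)
# is, in this loose sense, doing 'the right thing' for the context.

BOILING_POINT_C = 100.0

def kettle_step(temperature_c: float, switched_on: bool) -> bool:
    """Sense the water temperature; return whether the element should heat."""
    if not switched_on:
        return False
    # Heat regardless of starting quantity or temperature, until boiling.
    return temperature_c < BOILING_POINT_C

def boil(start_temp_c: float, heat_per_step_c: float = 5.0) -> int:
    """Simulate heating from start_temp_c; return the number of steps to boil."""
    temp, steps = start_temp_c, 0
    while kettle_step(temp, switched_on=True):
        temp += heat_per_step_c
        steps += 1
    return steps
```

The point of the example is that nothing here is 'intelligent' in the human sense, yet the device reliably does the right thing given its sensed context.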



He told us that he was a member of the University of Bath AmonI group (Artificial Models of Natural Intelligence), which is attempting to achieve an understanding of human and artificial intelligence through modelling and simulation. This is a two-way process: lessons learned from examining human systems can be applied to artificial (man-made) systems, and vice versa. It might be thought that a human and a robot behave in similar ways, in that both sense their environment and act according to some in-built intelligence. However, humans are social animals: they form loose networks and are able to test their own perceptions by comparing them with those of others (perhaps the true purpose of gossip?). A question was raised about the way in which social media might give rise to uncontrolled gossip (whereas the gossip of small closed groups eventually self-corrects). This possibility was demonstrated in the artificial world when Microsoft's 'chatter bot' Tay, operating on Twitter, was quickly shut down after it began generating tweets deemed to be offensive, having 'gossiped' with humans with malicious intent.



The interaction of humans with robots is intrinsically unnatural; our internal model of the world in part goes back to our primitive origins. As the biologist E.O. Wilson said, “The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology.” A device that exhibits some human characteristic can easily come to be perceived as human, particularly if some part of it can be conceived of as a human face. There is a danger that humans anthropomorphise, in the way that they do with animals, in particular pets (or even lavatory flush controls fitted with eyes!).



Robotic devices are becoming ever more commonplace and are being used in widely different ways, from the estimated 10,000 warehouse robots used by Amazon to a single robotic tractor producing precisely-aligned furrows in a farm field using RTK-GPS (Real-Time Kinematic Global Positioning System) guidance. Some robots, such as Pepper, are deliberately made humanoid, supposedly to 'make people happy' (or to empty their owners' bank accounts by persuading them to purchase new apps). Robot designers can take advantage of the human tendency to anthropomorphise, which leads to robots being accepted as benign and as possessing real intelligence and supposed authority. In an experiment, a robot labelled 'fire marshal' entered a room in which an alarm was sounding, and the human occupants followed it through a door marked 'Danger! Do not Enter!'. Humans were shown to accept the 'authority' of the robot without asking the 'who, what, why' questions that might be put to a fellow human.



Dr Wortham had carried out his own research using a small robot confined to a small area and observed by humans, who were later questioned on their understanding of what the robot was doing and how it determined what to do. Ideally the participants knew little about robots. The robot's task might be to seek out human faces looking at it. It was found that making the robot explain what it was doing ('muttering'), thereby making its workings more transparent, brought the human perception closer to reality, i.e. that it was executing an algorithm rather than exhibiting human-like intelligence.
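The 'muttering' idea can be sketched as a priority-ordered action selector with an optional transparency channel. This is my own illustrative sketch, not Dr Wortham's actual implementation; the rule names (approach_face, avoid_obstacle, wander) and sensor keys are assumptions:

```python
# Hypothetical sketch: the same action-selection loop, with and without a
# transparency channel that reports which behaviour fired and why.

from typing import Callable, Optional

# Each rule is (condition, action name). The first matching rule wins,
# mimicking a simple priority-ordered behaviour selector.
RULES = [
    (lambda sensors: sensors.get("face_detected", False), "approach_face"),
    (lambda sensors: sensors.get("obstacle", False), "avoid_obstacle"),
    (lambda sensors: True, "wander"),  # default behaviour
]

def select_action(sensors: dict,
                  mutter: Optional[Callable[[str], None]] = None) -> str:
    """Pick the first behaviour whose condition holds; optionally 'mutter'
    an explanation of the choice through the supplied channel."""
    for condition, action in RULES:
        if condition(sensors):
            if mutter is not None:
                mutter(f"doing '{action}' because my sensors say {sensors}")
            return action
    return "idle"
```

Running `select_action({"face_detected": True}, mutter=print)` makes the algorithmic nature of the choice visible, which is the transparency effect the experiment measured: observers hearing the muttering were less likely to attribute human-like intelligence.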



As human-robot interactions become more and more common, there have been some moves by legislators to make robots responsible for their actions; e.g. an autonomous car would be treated in the same way as a human driver in the event of a collision. In practical terms that would mean that the owner of the robot would insure against any liabilities. Against that, it has been argued that this is an extension of the tendency of designers/manufacturers not to take personal responsibility for their designs/products but to hide behind the legal construct of a corporation. A 'responsible' robot means that the corporation cannot be liable in law, let alone the designer/manufacturer.



Thoughts such as these led the Engineering and Physical Sciences Research Council (EPSRC) to set up a working group to consider the ethics of designing and operating robots in the real world. As a result it produced five principles for designers, users and operators of robots.



It was the belief of the speaker that companies such as Amazon or Microsoft were not interested in developing their own ethical stance, rather they expected this to be determined by legislation to which they would then conform.



 



In the UK, the House of Commons Science and Technology Committee had taken evidence on these matters and issued its Fifth Special Report in October 2016, and the BSI has produced standard BS 8611:2016, 'Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems'.



 



In the USA, the IEEE has a working group project, P7001 – Transparency of Autonomous Systems.



 



It was believed that there was a need for an ethical moral philosophy for AI systems. Without it, there were dangers that the ability of humans to make decisions would weaken, that they would no longer feel accountable, and that they would become subservient to the need to deploy new technology rather than new technologies serving human needs. Humans could lose privacy, dignity and a sense of autonomy, all of which are essential for well-being. There might be a reduction in human-human contact; we could become a 'transactional society', low on compassion.



 



Following on from questions, it was suggested that robots might carry QR codes ('square' barcodes) that could be used to give 'who, what, why' data, and at least one listener was relieved that the topic was being studied and legislation considered.
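To make the QR-code suggestion concrete, here is a speculative sketch of the data such a label might encode, expressed as a small JSON payload. The field names and the example operator details are my own assumptions, not part of any standard, and the payload would still need to be rendered as a QR image by a QR library:

```python
# Speculative sketch of 'who, what, why' robot labelling: a JSON payload
# that a QR code on the robot's casing could carry.

import json

def www_label(who: str, what: str, why: str) -> str:
    """Serialise who/what/why metadata to a compact JSON string; any
    standard QR library could then render the string as a code."""
    return json.dumps({"who": who, "what": what, "why": why}, sort_keys=True)

# Hypothetical example values:
label = www_label(
    who="Acme Logistics Ltd (owner/operator)",
    what="Warehouse picking robot, model X-1",
    why="Moves stock pallets between bays",
)
```

A passer-by scanning the code would then get the same 'who, what, why' answers they might demand of a fellow human, the gap the fire-marshal experiment exposed.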




Several of the recent talks seem to me to have had socio-political elements to them, and this one perhaps more so. (Arguably this has always been the case with all engineering topics, engineering being an applied science, performed for a reason, requiring resources and having an impact on society and the environment.) One audience member questioned the veracity of information on the internet; another wondered how culture impacted morality (do the Japanese react differently to a robot than a Briton, say?). Who makes the rules? It was suggested that morality in the UK had changed over the years. Personally I am not so sure. We have certainly seen an inversion in the meaning of words: we are told that we are a more tolerant society, by people who seem to be remarkably intolerant. To be tolerant used to mean 'to put up with', and now the implication is 'embrace whole-heartedly, or else'. Hatred is now defined by those who seek to find it anywhere; there is no attempt to define an absolute standard, or even to apply 'the man on the Clapham omnibus' test, the 'reasonable man'. Dr Jordan Peterson has said of 'hate speech': “Who is going to regulate it? Who is going to define it? I know the answer to that — the last people in the world you would want to.” I can't help but feel that the same might be true of robotic ethics. (As stated in the talk, it was the EU that wanted to give 'responsibility' to the robot, a 'bad' thing! It is the EU that has ruined the European experience of the internet with its control of 'cookies' and personal data, and its soon-to-be-enacted extension of copyright. Not all legislative bodies are benign.)



 



When it was suggested that robots be labelled with 'who, what, why' (www) data, my thought was: “What about virtualisation? The cloud?” (Flavour of the times: the Internet of Things.) The hardware of the robot would be minimal, the intelligence 'out in the cloud'. Where is the (www) now? But of course we have that already: the Amazon Fire Stick or Echo 'talks' to Amazon, but it doesn't have to, and is only Amazon listening? Indeed, the simple PC first entered the home as a typewriter with a rub-out key and morphed into a private porn machine. Same hardware, different download. IBM, Intel and Microsoft labels on the box, none of whom bear any responsibility for how it is used.



 



We live in interesting times.




Links



 



Amoni



R5 Robot



Pepper Robot



EPSRC Principles of Robotics



House of Commons 5th Special Report



BSI 8611:2016



IEEE P7001