4 minute read time.

We have the opportunity to make autonomous vehicles reflect our ethical best, not our nightmares, says Dr Asaf Degani

In the coming years, we will witness the first manifestation of a robot in the public space. This robot, otherwise known as an 'autonomous vehicle', can be a boon in terms of effectiveness and efficiency – but it also presents a deadly serious quagmire: how should it behave when driving on public roads? How should it relate to people's wants and desires? What if those wants conflict with other people's needs? What if safety is at stake?

We can't say to these robots, as we would to people, "do what you think is virtuous"; we have to say precisely what we want them to do because, frankly, they are only robots. Nor can we consult Isaac Asimov's famous robotic rules: they don't address proper behaviour in the public space, or common road situations and conflicts between cars and pedestrians.

This is what the new ISO 39003 Standard is meant to do: set out an ethical framework and a set of suggestions for the proper behaviour of autonomous vehicles – a bona fide international standard that all nations agree on. It is assumed that a similar framework and rule set can also be applied to the other robotic entities coming down the pike: delivery bots, maintenance robots, ambulance robots and, one day, humanoid robots.

Amidst this robotic revolution, of which autonomous vehicles are just the first entrant, the nagging question remains: how should they act? Should they be self-serving, cutting queues and taking advantage of other vehicles to advance their own goals and desires? What will they do in a conflict with another car? What if that conflict has safety implications?

To answer these questions for robots, we first need to ask the very same tough questions about our own behaviour in the public space – namely, what is proper conduct, what is 'the virtuous thing to do'? Currently these questions are left for each individual human to resolve, but that is no longer an option in the context of the robotic revolution. Only if we answer these questions in the minutest detail with regard to our own actions, and determine what proper conduct is, will we be ready to impart it to robots.

You can program a robot: 'when you see this, do that'; 'when you see red, don't go'. Now, the good thing is that every robot will act in such a way. And the bad thing is that every robot will indeed act in such a way. Sometimes there is a situation where you want to cross on red – say you're driving, there's a big lorry behind you, and you see it is about to ram into you. You see the other section is empty and you decide to glide in and minimise the impact. A robot programmed to always follow the rule will stop on red and cause a major accident. Robots are, at least for the time being, "dumb and dutiful". So the point is that we need to be very careful in removing the 'dumb'. The 'dutiful' is actually an advantage: once you program a robot a certain way, it will not circumvent the rule to its own advantage, as many humans do.
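The difference between a 'dumb and dutiful' rule and a judiciously designed one can be sketched in a few lines of code. This is purely an illustrative sketch – the class and function names are invented for this example and do not come from the Standard:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    light_is_red: bool
    rear_collision_imminent: bool  # e.g. a lorry closing fast from behind
    escape_section_clear: bool     # an empty section to glide into

def naive_decision(p: Perception) -> str:
    """'Dumb and dutiful': always obey the literal rule."""
    return "stop" if p.light_is_red else "proceed"

def judicious_decision(p: Perception) -> str:
    """Same rule, but with an explicit, narrowly scoped safety exception."""
    if p.light_is_red:
        if p.rear_collision_imminent and p.escape_section_clear:
            return "glide_forward"  # minimise impact rather than be rammed
        return "stop"
    return "proceed"
```

Note that the exception is not removed by making the robot 'creative'; it is spelled out in advance, in the minutest detail, exactly as the article argues we must do for our own conduct.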

In the new ISO Standard, we've also tried to look at the bigger picture, realising that current road and traffic laws will have to change to accommodate these robotic beings. To this end we employed the concept of 'world building' from the famous moral philosopher Immanuel Kant: if you had a world where you could dictate the driving rules, how would you construct them? For example, we say: 'if a vehicle doesn't have to change lane to get to point B, don't change lane – unless something makes it imperative.' If everybody on a highway is doing 60mph and there is a very slow truck doing 15mph, you say: 'the speed difference is great, so it's OK to go ahead and pass it.' However, if that truck is doing 55mph, there's really no need to pass it (and disturb the traffic flow).
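The overtaking rule described above amounts to a simple threshold on the speed difference. A minimal sketch, assuming a 20mph gap as the trigger (that figure is my illustrative assumption, not one taken from the Standard):

```python
def should_overtake(own_speed_mph: float,
                    lead_speed_mph: float,
                    min_speed_gap_mph: float = 20.0) -> bool:
    """Overtake only when the speed difference makes it imperative;
    otherwise hold the lane and preserve the traffic flow."""
    return (own_speed_mph - lead_speed_mph) >= min_speed_gap_mph

# Doing 60mph behind a 15mph truck: gap of 45mph, so overtake.
# Doing 60mph behind a 55mph truck: gap of 5mph, so stay in lane.
```

The design choice here reflects the article's 'world building' framing: the default is to do nothing (hold the lane), and the rule names the one condition that justifies deviating from it.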

The new Standard also takes into account situations that are ambiguous. Who gives right of way at a merge – the people on the left or the people on the right? What happens when you approach a very busy roundabout? The traffic laws say you don't enter unless there's an open slot for you. But if it's 8:30 in the morning, there won't be an open slot for an hour and a half, and they're honking behind you… what do you do?

In conclusion, robots should NOT act like the Terminator. The whole idea is to be very judicious about how we design their behaviour. An ethical framework for decision-making can go a long way towards reducing the likelihood of an emergent Terminator. That is what this Standard is about: how to tell robots what to do so they don't take drastic actions and become dangerous.

As told to Stephen Phillips

--------------------------------------------

Dr Asaf Degani is a Technical Fellow at General Motors' Research and Development Center, Israel

--------------------------------------------

What do you think autonomous vehicles should be doing on the road? Should they even be there?

There are two ways to engage and have your say: register for the Ethics and safety in connected and autonomous cars webinar featuring Asaf Degani, Dave Conway, Elizabeth Hofvenschioeld, Paula Palade, and Jamie Hodsdon. 

Or why not respond in the comments? We can’t wait to hear what you think!

  • Some of the public suspicion of autonomous vehicles is cultural (think of the unsavoury image of AI in pop culture) and some is psychological (we prefer our own tribes). These will soften over time. We won't know the positive effects unless we try, so (I agree) continued development is an imperative.

  • The issues are complex and our first attempts at robotic driving will be far from perfect (but human drivers are far from perfect also). It will be all too easy for people to complain after an event.  Obviously designers are putting a lot of effort into getting things as right as possible but we have to accept that there will be problems and be honest about when they happen.  Ultimately, robotic drivers will make a positive contribution to comfort and safety (I believe) so it would be ethically wrong to stifle development.  In fact, I would say we have an ethical duty to proceed down this path, albeit with due care and caution.