
Should the IET ask its members to pledge not to help build killer robots?

I read an interesting article in the online E&T (above) that reports on a pledge not to assist in the development of so-called Killer Robots. Should the IET take a stance, become the first PEI to endorse the pledge, and furthermore expect or encourage its members to sign up too?
  • Mark,

    In that case I think we are on the same page. AI and machine learning sometimes throw up strange answers. An example I can think of (which I think was reported by the IET in E&T) was bronchitis diagnosis: the risk to the patient was being assessed by AI on a number of factors based on historical outcomes, and it concluded that asthma sufferers were at low risk of complications from bronchitis. This was apparently borne out by the data, but the data were skewed: doctors automatically sent asthma sufferers with bronchitis straight to hospital regardless of severity, so they rarely suffered any complications.
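
    Here is a minimal sketch of that kind of confounding (synthetic data and made-up risk figures, not the actual study; the variable names are my own): a model trained only on the skewed historical outcomes learns exactly the wrong lesson about asthma.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    asthma = rng.integers(0, 2, n)  # 1 = asthma sufferer

    # Ground truth in this toy world: asthma *raises* the risk of complications.
    base_risk = 0.10 + 0.25 * asthma

    # Confounder: policy sends every asthma patient straight to hospital,
    # where aggressive treatment cuts their complication risk to a fifth.
    risk = np.where(asthma == 1, base_risk * 0.2, base_risk)
    complication = (rng.random(n) < risk).astype(int)

    # A model trained only on (asthma -> recorded outcome) learns the
    # opposite of the truth: asthma looks protective.
    model = LogisticRegression().fit(asthma.reshape(-1, 1), complication)
    print("learned coefficient for asthma:", model.coef_[0, 0])  # negative
    ```

    The learned coefficient comes out negative because the recorded complication rate genuinely is lower for asthma sufferers; the hospital-admission policy that produced those outcomes is invisible to the model.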

    Machine learning always needs to be assessed by someone with the expertise to pick up these anomalies and set things straight, so completely autonomous decision-making is fraught with danger. Yes, nine times out of ten it will probably be completely right, but who takes responsibility for the tenth time?

    Alasdair