By 2033, will Human-Level AI decision making be regarded as 'trustworthy'?

So why is it important to the engineering community that AI decision making be equipped with a universally accepted, ethically driven value system and not 'something else'?

How artificial intelligence will transform decision-making | World Economic Forum (weforum.org)

#ResponsibleAI

Parents
  • Given the diversity of ethical values across the world, is it necessary for each country to develop its own ethical framework for AI? What would be the best mechanism to determine such frameworks? Would it involve governmental, democratic, or other processes? This seems to be a highly complex issue.

Children
  • Thank you for your comment. I have picked one paper, but there are others, and I would be interested in your thoughts: Schmid, A. and Wiesche, M., 2023. ‘The Importance of an Ethical Framework for Trust Calibration in AI.’ IEEE Intelligent Systems (accessed 22/02/24 via IEEE Xplore).

  • I find this quote very inspiring and relevant: “If we are to harness the benefits of artificial intelligence and address the risks, we must all work together - governments, industry, academia and civil society - to develop the frameworks and systems that enable responsible innovation. We must seize the moment, in partnership, to deliver on the promise of technological advances and harness them for the common good.” UN Secretary-General António Guterres, AI for Good Global Summit, Geneva, 2019

    He was expressing his support for the AI for Good initiative, which aims to use AI to tackle some of the most pressing issues and challenges facing humanity and the planet; hopefully more people and organizations will join and support it.

  • I looked through the paper that you suggested and found it very woolly. I cannot consider safety and reliability to be synonyms: it is very easy to design a system that can be reliably dangerous, much harder to design one that is reliably safe.

    ‘It is straightforward that AI safety is closely related to reliability since AI safety requires, as a prerequisite, that the AI system be reliable. Thus, both terms can even be considered as synonyms.’

    The FMEA-based framework also raises difficulties. Detectability and Severity are both determined by ‘Process Experts’, yet the paper also states: ‘It must be noted that, in contrast to former technologies, like traditional automation, the behavior of AI-based technologies is rather unknown.’ How are these ‘Process Experts’ confirmed as trustworthy and reliable? (A sketch of the scoring arithmetic this framework rests on follows at the end of this comment.)

    As a professional engineer I am required to undertake Continuous Professional Development which is monitored by my professional body. I fail to see any similar process for AI systems. How are they updated as more knowledge becomes available?

    Looking in the other direction, if some of the information the AI system has been trained with is found to be invalid, how is that corrected? This is a common problem: many papers make headlines when published and are then withdrawn in a couple of lines on some obscure page, for example:

     www.nature.com/.../nature.2017.21929

    Maybe all AI systems should be linked to something like ‘Retraction Watch’; a sketch of what such a check might look like also follows below.

    https://retractionwatch.com/
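    On the FMEA point above, here is a minimal sketch of the scoring arithmetic, assuming the common convention that the Risk Priority Number (RPN) is Severity x Occurrence x Detection, each scored 1-10 by process experts. The failure modes and scores below are hypothetical, purely to show how much the ranking rests on expert judgement:

        # Hypothetical FMEA entries for an AI-backed customer-facing system.
        # Severity, Occurrence and Detection are 1-10 expert judgements
        # (a higher Detection score means the failure is harder to detect).
        failure_modes = [
            ("Chatbot invents a refund policy",           8, 4, 7),
            ("Screening model rejects candidates by age", 9, 3, 9),
            ("Image generator produces distorted output", 4, 6, 2),
        ]

        # RPN = Severity * Occurrence * Detection; rank highest risk first.
        ranked = sorted(
            ((desc, s * o * d) for desc, s, o, d in failure_modes),
            key=lambda pair: pair[1],
            reverse=True,
        )
        for desc, rpn in ranked:
            print(f"RPN {rpn:4d}  {desc}")

    Shift any single expert score by a point or two and the ranking can flip, which is exactly the calibration problem raised above.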
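    And on the retraction point, a hypothetical sketch of what linking a training pipeline to a retraction list might look like. The file name retractions.csv, its doi column, and the corpus structure are all assumptions for illustration; the real Retraction Watch data (now distributed via Crossref) has its own schema:

        import csv

        def load_retracted_dois(path):
            # Build the set of retracted DOIs from a CSV export (assumed schema).
            with open(path, newline="", encoding="utf-8") as f:
                return {row["doi"].strip().lower() for row in csv.DictReader(f)}

        def drop_retracted(corpus, retracted):
            # Keep only documents whose DOI is not on the retraction list.
            return [doc for doc in corpus if doc.get("doi", "").lower() not in retracted]

        retracted = load_retracted_dois("retractions.csv")      # hypothetical export
        corpus = [
            {"doi": "10.1234/hypothetical.1", "text": "..."},   # made-up records
            {"doi": "10.1234/hypothetical.2", "text": "..."},
        ]
        clean = drop_retracted(corpus, retracted)
        print(f"kept {len(clean)} of {len(corpus)} documents")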

  • Well, if these cases are as reported, then it is already going expensively wrong...

    https://www.theregister.com/2024/02/23/opinion_column/

    " The virtual assistant told him that if he purchased a normal-price ticket, he would have up to 90 days to claim back a bereavement discount. A real-live Air Canada rep confirmed he could get the bereavement discount.

    When Moffatt later submitted his refund claim with the necessary documentation, Air Canada refused to pay out. That did not work out well for the company.

    Moffatt took the business to small claims court, claiming Air Canada was negligent and had misrepresented its policy. Air Canada replied, in effect, that "The chatbot is a separate legal entity that is responsible for its own actions."

    ... "I find Air Canada did not take reasonable care to ensure its chatbot was accurate."

    And https://www.akingump.com/en/insights/blogs/ag-data-dive/eeoc-settles-over-recruiting-software-in-possible-first-ever-ai-related-case  

    "English language tutor provider used software programmed to automatically reject both female candidates over the age of 55 and male candidates over 60 for tutoring roles, in violation of the Age Discrimination in Employment Act"

    And more humorously

    https://www.bbc.co.uk/news/technology-68412620

    More on that here:

    https://www.theregister.com/2024/02/23/google_suspends_gemini/

    Mike

  • As a professional engineer I am required to undertake Continuous Professional Development which is monitored by my professional body. I fail to see any similar process for AI systems. How are they updated as more knowledge becomes available?

    Looking in the other direction, if some of the information the AI system has been trained with is found to be invalid how is that corrected?

    Regulation can help develop trust, transparency and accountability among users, developers and stakeholders of AI. However, without clear standards, regulations can be difficult to implement and enforce. It is important to ensure the ethical use of AI, safeguarding human rights and safety. This has to be a balance of professional engineering training, as you stated, as well as policy and guidance for human-level AI.

    It could be suggested that the frequency of updating ML models depends on several factors, probably not limited to the type of model and the type of data being used. The amount of data consumed depends significantly on the model's complexity and use. One common update trigger, distribution drift, is sketched below.
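    As an illustration of one such trigger, here is a minimal sketch that schedules retraining when live data drifts away from the training distribution. The two-sample Kolmogorov-Smirnov test and the threshold are illustrative choices, not a universal policy:

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(0)
        training_feature = rng.normal(0.0, 1.0, 5_000)  # data the model was trained on
        live_feature = rng.normal(0.3, 1.0, 5_000)      # recent production data (shifted)

        # Compare the two samples; a tiny p-value means the distributions differ.
        stat, p_value = ks_2samp(training_feature, live_feature)
        DRIFT_P_THRESHOLD = 0.01  # assumed policy threshold

        if p_value < DRIFT_P_THRESHOLD:
            print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}); schedule retraining")
        else:
            print("No significant drift; keep the current model")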