By 2033, will Human-Level AI decision making be regarded as 'trustworthy'?

So why is it important to the engineering community that AI decision-making be equipped with a universally accepted, ethically driven value system rather than 'something else'?

How artificial intelligence will transform decision-making | World Economic Forum (weforum.org)

#ResponsibleAI

  • Well, if these cases are as reported, then it is already going expensively wrong...

    https://www.theregister.com/2024/02/23/opinion_column/

    " The virtual assistant told him that if he purchased a normal-price ticket, he would have up to 90 days to claim back a bereavement discount. A real-live Air Canada rep confirmed he could get the bereavement discount.

    When Moffatt later submitted his refund claim with the necessary documentation, Air Canada refused to pay out. That did not work out well for the company.

    Moffatt took the business to small claims court, claiming Air Canada was negligent and had misrepresented its policy. Air Canada replied, in effect, that "The chatbot is a separate legal entity that is responsible for its own actions."

    ... "I find Air Canada did not take reasonable care to ensure its chatbot was accurate."

    And https://www.akingump.com/en/insights/blogs/ag-data-dive/eeoc-settles-over-recruiting-software-in-possible-first-ever-ai-related-case  

    "English language tutor provider used software programmed to automatically reject both female candidates over the age of 55 and male candidates over 60 for tutoring roles, in violation of the Age Discrimination in Employment Act"

    And, more humorously:

    https://www.bbc.co.uk/news/technology-68412620

    More on that here:

    https://www.theregister.com/2024/02/23/google_suspends_gemini/

    Mike

  • As a professional engineer I am required to undertake Continuing Professional Development (CPD), which is monitored by my professional body. I fail to see any similar process for AI systems. How are they updated as more knowledge becomes available?

    Looking in the other direction, if some of the information the AI system has been trained with is found to be invalid, how is that corrected?

    Regulation can help develop trust, transparency and accountability among users, developers and stakeholders of AI. However, without clear standards, regulations can be difficult to implement and enforce. It is important to ensure the ethical use of AI, safeguarding human rights and safety. This has to be a balance of professional engineering training, as you stated, as well as policy and guidance for human-level AI.

    It could be suggested that how often ML models need updating depends on several factors, probably not limited to the type of model and the type of data being used; the amount of data consumed depends significantly on the model's complexity and how it is used. One common trigger for an update is evidence that live data has drifted away from the data the model was trained on, as in the sketch below.
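
    As a rough illustration only (the drift test, the threshold and the data are invented for this post, and it assumes scipy is available; none of it is drawn from any standard or from the article above), a drift check of that kind might look like this:

    ```python
    # Hedged sketch: decide whether a model needs retraining by checking
    # whether live data has drifted from the training data.
    # The KS test and the p-value threshold are illustrative choices only.
    import numpy as np
    from scipy.stats import ks_2samp

    def needs_retraining(train_feature: np.ndarray,
                         live_feature: np.ndarray,
                         p_threshold: float = 0.01) -> bool:
        """Flag retraining if live data has drifted from the training data."""
        _statistic, p_value = ks_2samp(train_feature, live_feature)
        return p_value < p_threshold  # small p-value => distributions differ

    rng = np.random.default_rng(0)
    train = rng.normal(loc=0.0, scale=1.0, size=5_000)
    live = rng.normal(loc=0.4, scale=1.0, size=5_000)  # the mean has shifted

    print(needs_retraining(train, live))  # True -> schedule a retrain/review
    ```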

  • I have looked through this paper as well. It is certainly better than the first one, as it acknowledges a number of the problems and challenges of realistic AI. I have picked out a few key paragraphs:

    ‘3.1.2 Data Preprocessing.

    Before feeding data into an AI model, data preprocessing helps remove inconsistent pollution of the data that might harm model behavior and sensitive information that might compromise user privacy.’

    ‘3.4.3 Fail-Safe Mechanisms.

    Considering the imperfection of current AI systems, it is important to avoid harm when the system fails in exceptional cases. By learning from conventional real-time automation systems, the AI community has realized that a fail-safe mechanism or fallback plan should be an essential part of the design of an AI system if its failure can cause harm or loss.’

    ‘Incident sharing. The AI community has recently recognized incident sharing as an effective approach to highlight and prevent potential risks to AI systems [57]. The AI Incident Database [91] provides an inspiring example for stakeholders to share negative AI incidents so that the industry can avoid similar problems.’
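
    To make the preprocessing excerpt a little more concrete, here is a minimal, hedged sketch; the column names, the sensitive fields and the cleaning rules are all invented for illustration and are not taken from the paper:

    ```python
    # Hedged sketch of the two preprocessing concerns quoted above:
    # (1) drop inconsistent "polluted" records, (2) strip sensitive fields.
    # Column names and rules are illustrative only.
    import pandas as pd

    def preprocess(raw: pd.DataFrame) -> pd.DataFrame:
        df = raw.copy()

        # 1. Remove inconsistent records that might harm model behaviour,
        #    e.g. missing labels or physically impossible ages.
        df = df.dropna(subset=["label"])
        df = df[(df["age"] >= 0) & (df["age"] <= 120)]

        # 2. Remove sensitive information that might compromise privacy
        #    before the data ever reaches the training pipeline.
        df = df.drop(columns=["email", "full_name"], errors="ignore")

        return df

    raw = pd.DataFrame({
        "age": [34, -5, 61],
        "label": [1, 0, None],
        "email": ["a@example.com", "b@example.com", "c@example.com"],
    })
    print(preprocess(raw))  # one clean row remains, with no email column
    ```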
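
    Similarly, for the fail-safe excerpt, a minimal sketch of one common fallback pattern; the confidence threshold and the 'refer to a human' default are my assumptions, not something the paper prescribes:

    ```python
    # Hedged sketch of a fail-safe / fallback wrapper around an AI component:
    # if the model fails or is not confident enough, defer to a safe default.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        action: str
        source: str  # "model" or "fallback"

    def decide(features, model, confidence_threshold: float = 0.9) -> Decision:
        try:
            action, confidence = model(features)
        except Exception:
            # The model itself failed: fall back rather than propagate the fault.
            return Decision(action="refer_to_human", source="fallback")

        if confidence < confidence_threshold:
            # Not confident enough to act autonomously.
            return Decision(action="refer_to_human", source="fallback")

        return Decision(action=action, source="model")

    # Toy model: always proposes "approve_refund" with 0.72 confidence.
    print(decide({}, lambda f: ("approve_refund", 0.72)))
    # -> Decision(action='refer_to_human', source='fallback')
    ```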

    The quoted passages show a change in approach from the first systems, which were developed with a ‘games programmer’ mindset, towards a more engineering-project-based structure. It is interesting to wonder whether a requirement to ‘do things properly’ will price AI out of most markets. Even writing a specification will be a challenge: how would the inputs be defined? It has already been noted that reliability requires sanitized data. How could performance and MTBF be tested? Is offering the same failure rate as a human acceptable? Probably not; you might as well just use a human and save the expenditure on AI. How much better than a human must the system be to justify the effort and cost? A rough sense of the testing burden alone is sketched below.
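
    On the question of how performance and MTBF could be tested, a back-of-the-envelope sketch (the target failure rates are invented for illustration): by the 'rule of three', demonstrating with roughly 95% confidence that a failure rate is below p takes on the order of 3/p failure-free trials, so the testing burden grows quickly as the target improves on a human baseline:

    ```python
    # Rough illustration of the testing burden implied by a low failure-rate claim.
    # Rule of three: ~3/p failure-free trials give ~95% confidence that the true
    # failure rate is below p. The target rates below are invented for illustration.
    import math

    def trials_needed(p: float, confidence: float = 0.95) -> int:
        """Failure-free trials needed to claim failure rate < p at this confidence."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - p))

    for p in (1e-2, 1e-3, 1e-4, 1e-5):  # hypothetical target failure rates
        print(f"p < {p:g}: about {trials_needed(p):,} failure-free trials")
    # p < 0.01: about 299 failure-free trials
    # p < 0.001: about 2,995 failure-free trials
    # p < 0.0001: about 29,956 failure-free trials
    # p < 1e-05: about 299,572 failure-free trials
    ```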

    There is still a long way to go:

    www.bbc.com/.../technology-68412620