By 2033, will Human-Level AI decision making be regarded as 'trustworthy'?

So why is it important to the engineering community that AI decision making be equipped with a universally accepted, ethically driven value system and not 'something else'?

How artificial intelligence will transform decision-making | World Economic Forum (weforum.org)

#ResponsibleAI

  • How do you define 'trustworthy'? Is it a decision made by a professional who has been judged as competent by his or her peers? If so, AI can never be trustworthy, as there is no one to judge its performance.

  • Surely getting the answer right is more important than being judged by peers. Just because a bunch of people have judged you competent, it doesn't mean you are.

    If an AI medical bot can diagnose someone's illness more accurately than a doctor, should we ignore what it says because a doctor disagrees with it?

  • No, you should add that case to the program's training sequence so it does not make the same error again, much as you would a human trainee. Assuming it is wrong, of course.
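
    A minimal sketch of that retraining step, assuming a scikit-learn-style classifier; the features, labels, and corrected case below are purely illustrative:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy training set; features and labels are illustrative only.
    X_train = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.7]])
    y_train = np.array([1, 0, 1])
    model = LogisticRegression().fit(X_train, y_train)

    # A case the model got wrong, with the reviewed, correct label.
    x_case = np.array([[0.75, 0.3]])
    y_correct = np.array([0])

    # Add that case to the training sequence and retrain.
    X_train = np.vstack([X_train, x_case])
    y_train = np.concatenate([y_train, y_correct])
    model = LogisticRegression().fit(X_train, y_train)
    ```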

    Mike

  • What is right? Not everything has a clear answer.

  • If AI can diagnose ALS (amyotrophic lateral sclerosis) and cure it, then I will be open-minded.

  • I would be interested in your thoughts - Li, B., Qi, P., Liu, B., Di, S., Liu, J., Pei, J., Yi, J. and Zhou, B., 2023. Trustworthy AI: From principles to practices. ACM Computing Surveys, 55(9), pp. 1-46. Accessed 22/02/24.

  • I have looked through this paper as well. It is certainly better than the first one, as it acknowledges a number of the problems and challenges facing real-world AI. I have picked out a few key paragraphs:

    ‘3.1.2 Data Preprocessing.

    Before feeding data into an AI model, data preprocessing helps remove inconsistent pollution of the data that might harm model behavior and sensitive information that might compromise user privacy.’
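
    A minimal sketch of what that preprocessing might look like in practice, assuming a pandas DataFrame; the file and column names are hypothetical:

    ```python
    import pandas as pd

    # Hypothetical patient-records file; column names are assumptions.
    df = pd.read_csv("records.csv")

    # Remove inconsistent "pollution" that might harm model behavior:
    # duplicates and physically impossible values.
    df = df.drop_duplicates()
    df = df[(df["age"] >= 0) & (df["age"] <= 120)]

    # Remove sensitive information that might compromise user privacy:
    # drop direct identifiers before the data reaches the model.
    df = df.drop(columns=["name", "address", "nhs_number"])
    ```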

    ‘3.4.3 Fail-Safe Mechanisms.

    Considering the imperfection of current AI systems, it is important to avoid harm when the system fails in exceptional cases. By learning from conventional real-time automation systems, the AI community has realized that a fail-safe mechanism or fallback plan should be an essential part of the design of an AI system if its failure can cause harm or loss.’
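
    In code, such a fail-safe wrapper might look like the sketch below; `predict_with_confidence` and the threshold are assumed interfaces for illustration, not taken from the paper:

    ```python
    SAFE_FALLBACK = "refer_to_human"  # safe default when the AI cannot be trusted
    CONFIDENCE_THRESHOLD = 0.95       # assumption: tuned per application

    def failsafe_predict(model, x):
        """Return the model's answer only when it is healthy and confident;
        otherwise fall back to a safe default (here, human referral)."""
        try:
            label, confidence = model.predict_with_confidence(x)  # assumed API
        except Exception:
            return SAFE_FALLBACK  # the model itself failed: fail safe
        if confidence < CONFIDENCE_THRESHOLD:
            return SAFE_FALLBACK  # low-confidence, exceptional case
        return label
    ```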

    ‘Incident sharing. The AI community has recently recognized incident sharing as an effective approach to highlight and prevent potential risks to AI systems [57]. The AI Incident Database [91] provides an inspiring example for stakeholders to share negative AI incidents so that the industry can avoid similar problems.’
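
    An incident report could be as simple as a structured record along these lines; the fields are illustrative only, not the AI Incident Database's actual schema:

    ```python
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIIncident:
        # Illustrative fields only, not the real database schema.
        system: str
        date_reported: date
        description: str
        harm_caused: str
        contributing_factors: list[str] = field(default_factory=list)

    report = AIIncident(
        system="triage-bot v2",  # hypothetical system
        date_reported=date(2024, 2, 22),
        description="Urgent case misclassified as routine.",
        harm_caused="Delayed treatment.",
        contributing_factors=["unsanitized training data"],
    )
    ```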

    This shows a change in approach, from the first systems, which were developed with a 'games programmer' mindset, to a more structured engineering-project footing. It is interesting to wonder whether a requirement to 'do things properly' will price AI out of most markets. Even writing a specification will be a challenge: how would the inputs be defined? It has already been noted that reliability requires sanitized data. How could performance and MTBF be tested? Is offering the same failure rate as a human acceptable? Probably not; you might as well just use a human and save the expenditure on AI. How much better than a human must the system be to justify the effort and cost?
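
    On the MTBF question, one way to frame the test is statistically: run the system on N independent trials, count failures, and compare an upper confidence bound on the failure rate against the human baseline. A rough sketch, where the trial counts and the 2% human baseline are assumed numbers, purely for illustration:

    ```python
    import math

    def failure_rate_upper_bound(failures: int, trials: int, z: float = 1.96) -> float:
        """Wilson-score upper confidence bound on a failure probability,
        estimated from pass/fail trials (z = 1.96 is ~95% confidence)."""
        p = failures / trials
        denom = 1 + z**2 / trials
        centre = p + z**2 / (2 * trials)
        margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
        return (centre + margin) / denom

    # Assumed figures, for illustration only.
    human_failure_rate = 0.02  # baseline to beat
    bound = failure_rate_upper_bound(failures=3, trials=1000)
    print(f"AI failure rate < {bound:.4f} with ~95% confidence")
    print("Clearly better than the human baseline?", bound < human_failure_rate)
    ```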

    There is still a long way to go:

    www.bbc.com/.../technology-68412620
