
By 2033, will human-level AI decision-making be regarded as 'trustworthy'?

So why is it important to the engineering community that AI decision-making be equipped with a universally accepted, ethically driven value system and not 'something else'?

How artificial intelligence will transform decision-making | World Economic Forum (weforum.org)

#ResponsibleAI

  • Thank you for your comment. I have picked one paper, but there are others, and I would be interested in your thoughts: Schmid, A. and Wiesche, M., 2023. The importance of an ethical framework for trust calibration in AI. IEEE Intelligent Systems. Accessed 22/02/24 <The Importance of an Ethical Framework for Trust Calibration in AI | IEEE Journals & Magazine | IEEE Xplore>

  • I would be interested in your thoughts: Li, B., Qi, P., Liu, B., Di, S., Liu, J., Pei, J., Yi, J. and Zhou, B., 2023. Trustworthy AI: From principles to practices. ACM Computing Surveys, 55(9), pp.1-46. Accessed 22/02/24 <Trustworthy AI: From Principles to Practices | ACM Computing Surveys>

  • If AI can diagnose ALS (amyotrophic lateral sclerosis) and cure it, then I will be open-minded.

  • What is right? Not everything has a clear answer.

  • No, you should add that case to the program's training sequence so it does not make the same error again, much as you would with a human trainee. Assuming it is wrong, of course. (A rough sketch of what I mean is below.)

    Mike
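
    A minimal sketch of that idea in Python, assuming a toy scikit-learn classifier; the model, the data and the "corrected case" are all invented for illustration:

        # Sketch: fold a misjudged case back into the training data and retrain,
        # much as you would correct a human trainee. All values are illustrative.
        from sklearn.linear_model import LogisticRegression

        X_train = [[0.2, 0.1], [0.9, 0.8], [0.4, 0.3]]   # existing training cases
        y_train = [0, 1, 0]                              # their accepted outcomes

        model = LogisticRegression().fit(X_train, y_train)

        bad_case, correct_label = [0.5, 0.6], 1          # the case the model got wrong
        X_train.append(bad_case)                         # add it to the training sequence
        y_train.append(correct_label)

        model = LogisticRegression().fit(X_train, y_train)  # retrain so the error is not repeated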

  • Surely getting the answer right is more important than being judged by peers.  Just because a bunch of people have judged you competent, it doesn't mean you are.

    If an AI medical bot can diagnose someone's illness more accurately than a doctor, should we ignore what it says because a doctor disagrees with it?

  • How do you define 'trustworthy'? Is it a decision made by a professional who has been judged as competent by his or her peers? If that is so, AI can never be trustworthy, as there is no one to judge its performance.

  • I have doubts about the original poster's concept of a "universally accepted value system". There probably is not even a global one.

    Even within one society, ideas about what is acceptable are very subjective and vary strongly over time. In my lifetime, homosexuality in the UK has gone from being a criminal offence to being legal at age 16; the death penalty has left our statute books, probably never to return; and the Health and Safety at Work Act and the HSE have been created, to huge effect. And that is before we get onto who you should or should not be allowed to kill during a war, and how, or how we decide which group we should be aligning with in external conflicts.

    If you want a less dramatic example, look at the furore over small-boat immigrants.

    I think using a machine to decide whether a person's actions are 'good' or 'bad' would be a most dangerous path. But programs pre-loaded with far more history than one person can sensibly recall, able to bring up examples of case law that might be helpfully related to any current trial, already exist. And that is already a sort of AI.

    BUT, such things are ideal as an adjunct to a human decision maker, not a replacement for it, much as a well-equipped toolbox improves the efficiency of a tradesman but does not replace him.

    Electric jury? No thanks.

    Mike.

    I am not sure that AI does understand. It does not have a conscience. It just uses a large dataset and picks the best or most probable answer (see the small sketch at the end of this post). Is that answer correct? Well, not always. Then there is the issue of how the question was asked: verbally or by keyboard? If verbal, there is an extra layer of complexity added by the language and by accents. I heard a very funny story recently of a Scottish chap who asked his Google device about Mr Bates vs the Post Office; Google suggested several websites, and some were of an adult nature.

    As for the human factor, this depends on the application the AI is used for. In the world of banking, it could help with number-crunching and possibly find patterns that a human might not. However, on the stock market it would need a different slant. Take the futures market: this could be seen as informed gambling.

    Even if you could install human values into AI, what would those values be?
    Some cultures allow capital punishment for serious crimes; this could be via electrocution, lethal injection or, controversially, by stoning.

    Consideration also needs to be given when AI is used for medical or battlespace purposes. AI can help with models or simulations, but it cannot have the power to make the ultimate decision. In the battlespace the primary objective is to defeat the enemy combatant; in the medical arena the primary objective is to save the life of the person.


    There is a place for AI, and that place is as an assistant to a human.
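
    On the 'most probable answer' point above, a minimal Python sketch (the candidate answers and their probabilities are invented) of how such a pick works, and why the likeliest answer is not guaranteed to be the correct one:

        # Sketch: picking the "most probable answer" is just an argmax over
        # learned probabilities. The candidates and numbers here are made up;
        # the most probable answer is not necessarily the correct one.
        candidates = {"diagnosis A": 0.48, "diagnosis B": 0.37, "diagnosis C": 0.15}

        best = max(candidates, key=candidates.get)
        print(best, candidates[best])   # diagnosis A 0.48: likeliest, not guaranteed right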

    I think watching 'WarGames' is in order. Computers, even ones running AI programs, just calculate what humans ask them to: rubbish in, rubbish out. Mind you, there are some humans that are not much better.

    The machine has as much or as little morality as you care to give it: calculate a variable called 'moral feeling' for each case to be considered and add a proportion of its value to the weights when calculating the final scores. You can do the same sort of thing for risk or any other parameter that you can create a score for (see the sketch at the end of this post).

    In that sense, the AI decision maker will be as free, fair and sensible as the patterns you train it on. I sometimes think that some computer people get out so little that they forget there is a real analogue world outside.

    There are many quoted examples (though some of the funniest are probably apocryphal) to remind students of this elementary problem.

    But to the OP's question: no, of course not.

    Mike.
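
    A minimal Python sketch of the weighting idea described above; the 'moral feeling' variable, the weights and the scores are all invented for illustration:

        # Sketch: blend a hand-crafted "moral feeling" score (and a risk score)
        # into the final decision score, in whatever proportion you choose.
        # All weights and values here are illustrative, not from a real system.
        def decision_score(utility, moral_feeling, risk,
                           w_moral=0.3, w_risk=0.2):
            # The machine has exactly as much "moral" as the proportion added here.
            return utility + w_moral * moral_feeling - w_risk * risk

        # Scoring two candidate actions, case by case:
        print(decision_score(utility=0.8, moral_feeling=0.1, risk=0.5))  # ~0.73
        print(decision_score(utility=0.6, moral_feeling=0.9, risk=0.1))  # ~0.85

    Change w_moral and the machine's 'morality' changes with it; the score reflects whatever proportions you chose to set.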