By 2033, will Human-Level AI decision making be regarded as 'trustworthy'?

So why is it important to the engineering community that AI decision making be equipped with a universally accepted, ethically driven value system and not 'something else'?

How artificial intelligence will transform decision-making | World Economic Forum (weforum.org)

#ResponsibleAI

  • Given the diversity of ethical values across the world, is it necessary for each country to develop its own ethical framework for AI? What would be the best mechanism to determine such frameworks? Would it involve governmental, democratic, or other processes? This seems to be a highly complex issue.

  • From my experience of human values, I would rather an AI didn't have them. Just look at the world news at the moment.

  • I'm not sure I understand the question. There seems to be an implicit assumption that AI "understands" both what it's doing and what its results would be used for, as without that how could it apply any ethical values? As far as I can tell, AI doesn't actually understand anything at all; it merely puts things together based on probabilities and a very large example dataset. It can't really make moral judgements any more than a cog wheel can.

       - Andy.

  • I think watching 'WarGames' is in order. Computers, even ones running AI programs, just calculate what humans ask them to: rubbish in, rubbish out. Mind you, there are some humans that are not much better.

    The machine has as much or as little morality as you care to give it: calculate a variable called 'moral feeling' for each case under consideration and add a proportion of its value to the weights when computing the final scores. You can do the same sort of thing for risk or any other parameter you can create a score for (a rough sketch of the idea follows this post).

    In that sense the AI decision-maker will be as free, fair, and sensible as the patterns you train it on. I sometimes think that some computer people get out so little that they forget there is a real analogue world outside.

    There are many quoted examples (though some of the funniest are probably apocryphal) to remind students of this elementary problem.

    But to the OP question, no, of course not.

    Mike.
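
    As a rough sketch of the weighted-scoring idea above, in Python (every factor name, score, and weight here is hypothetical, invented purely for illustration):

    ```python
    # Hypothetical sketch: blending a 'moral feeling' score into a weighted
    # decision, as described above. All factor names, scores, and weights
    # are invented for illustration.

    def final_score(case, weights):
        """Weighted sum of whatever per-case scores you can compute."""
        return sum(w * case[factor] for factor, w in weights.items())

    # Each case is scored on ordinary criteria plus extra parameters such
    # as 'moral_feeling' or 'risk', each on a 0..1 scale.
    case_a = {"cost_benefit": 0.9, "moral_feeling": 0.2, "risk": 0.7}
    case_b = {"cost_benefit": 0.6, "moral_feeling": 0.8, "risk": 0.3}

    # The machine has exactly as much 'morality' as the weight you give it.
    weights = {"cost_benefit": 0.5, "moral_feeling": 0.3, "risk": -0.2}

    for name, case in (("A", case_a), ("B", case_b)):
        print(f"case {name}: {final_score(case, weights):.2f}")
    ```

    The point being that 'moral feeling' is just another number: the output is only as fair as the scores and weights somebody chose.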

  • I am not sure that AI does understand. It does not have a conscience. It just uses a large dataset and picks the best or most probable answer. Is that answer correct? Well, not always. Then there is the issue of how the question was asked: verbally or by keyboard? If verbal, there is an extra layer of complexity added by the language and by accents. I heard a very funny story recently of a Scottish chap who asked his Google device about Mr Bates vs the Post Office; Google suggested several websites, and some were of an adult nature.

    As for the human factor: this depends on the application the AI is used for. In the world of banking it could help with number-crunching and possibly find patterns that a human might not. However, on the stock market it would need a different slant: take the futures market, which could be seen as informed gambling.

    Even if you could install human values into AI, what would those values be? Some cultures allow capital punishment for serious crimes; this could be by electrocution, by lethal injection, or, controversially, by stoning.

    Consideration also needs to be given when AI is used for medical or battle-space purposes. AI can help with models or simulations, but it cannot have the power to make the ultimate decision. In the battle space the primary objective is to defeat the enemy combatant; in the medical arena the primary objective is to save the life of the patient.


    There is a place for AI, and that place is as an assistant to a human.

  • I have doubts about the original poster's concept of a "universally accepted value system". There probably is not even a global one.

    Even within one society, ideas about what is acceptable are very subjective and vary strongly over time. In my lifetime, homosexuality in the UK has gone from being a criminal offence to being legal at age 16; the death penalty has left our statute books, probably never to return; and the Health and Safety at Work Act and the HSE have been created, to huge effect. And that is before we get onto who you should or should not be allowed to kill during a war, and how, or how we decide which group we should align with in external conflicts.

    If you want a less dramatic example, look at the furore over small-boat immigration.

    I think using a machine to decide whether a person's actions are 'good' or 'bad' would be a most dangerous path. But programs pre-loaded with far more history than one person can sensibly recall, which can bring up examples of case law helpfully related to any current trial, already exist. And that is already a sort of AI (a toy sketch of that kind of lookup follows this post).

    BUT such things are ideal as an adjunct to a human decision-maker, not a replacement for one, much as a well-equipped toolbox improves the efficiency of a tradesman but does not replace him.

    Electric jury? No thanks.

    Mike.
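
    A toy sketch of that kind of case-law lookup, as an adjunct rather than a judge (the corpus, the query, and the use of scikit-learn here are my own illustrative assumptions, not any real legal tool):

    ```python
    # Toy sketch: given a current case summary, surface the most similar
    # past cases for a human to review. The corpus and query are invented;
    # a real system would be far richer.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    past_cases = [
        "breach of contract over late delivery of goods",
        "negligence claim after workplace machinery injury",
        "dispute over faulty accounting software and lost records",
    ]

    query = "prosecution relying on evidence from faulty computer records"

    vec = TfidfVectorizer()
    matrix = vec.fit_transform(past_cases + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1])[0]

    # Present ranked suggestions; the decision stays with the human.
    for score, case in sorted(zip(scores, past_cases), reverse=True):
        print(f"{score:.2f}  {case}")
    ```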

  • How do you define 'trustworthy'? Is it a decision made by a professional who has been judged competent by his or her peers? If so, AI can never be trustworthy, as there is no one to judge its performance.

  • Surely getting the answer right is more important than being judged by peers.  Just because a bunch of people have judged you competent, it doesn't mean you are.

    If an AI medical bot can diagnose someone's illness more accurately than a doctor, should we ignore what it says because a doctor disagrees with it?

  • No, you should add that case to the program's training data so it does not make the same error again, much as you would correct a human trainee. Assuming it is wrong, of course (a rough sketch of the idea follows).

    Mike
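
    A minimal sketch of that correct-and-retrain loop (the features, labels, and choice of scikit-learn classifier are hypothetical, standing in for a real diagnostic model):

    ```python
    # Illustrative only: append a human-verified correction to the training
    # data and retrain, much as you would correct a human trainee.
    from sklearn.linear_model import LogisticRegression

    X_train = [[0.2, 0.7], [0.9, 0.1], [0.4, 0.5]]   # existing example cases
    y_train = [0, 1, 0]                               # their known outcomes

    model = LogisticRegression().fit(X_train, y_train)

    # A case the model got wrong, now verified by a human expert.
    misjudged_case, correct_label = [0.5, 0.6], 1

    # Add it to the training set and retrain so the error is not repeated.
    X_train.append(misjudged_case)
    y_train.append(correct_label)
    model = LogisticRegression().fit(X_train, y_train)
    ```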

  • What is right? Not everything has a clear answer.