By 2033, will Human-Level AI decision making be regarded as 'trustworthy'?

So why is it important to the Engineering community that AI decision making be equipped with a universally accepted, ethically driven value system and not 'something else'?

How artificial intelligence will transform decision-making | World Economic Forum (weforum.org)

#ResponsibleAI

  • I am not sure that AI does understand.  It does not have a conscience.  It just uses a large dataset and picks the best or most probable answer.  Is that answer correct?  Well, not always.  Then there is the issue of how the question was asked.  Verbally or by keyboard?  If verbal, there is an extra layer of complexity added by the language and by accents.  I heard a very funny story recently of a Scottish chap who asked his Google device about
    Mr Bates vs the Post Office
    Google suggested several websites, and some were of an adult nature.

    As for the human factor, this depends on the application the AI is used for.  In the world of banking it could help with number crunching and possibly find patterns that a human might not.  However, on the stock market it would need a different slant.  Take the futures market: this could be seen as informed gambling.

    Even if you could install human values into AI, what would those values be?
    Some cultures allow capital punishment for serious crimes; this could be via electrocution, lethal injection or, controversially, stoning.

    Consideration also needs to be given when AI is used for medical or battle-space purposes.  AI can help with models or simulations, but it cannot have the power to make the ultimate decision.  In the battle space the primary objective is to defeat the enemy combatant; in the medical arena the primary objective is to save the life of the person.


    There is a place for AI, and that place is as an assistant to a human.

  • I have doubts about the original poster's concept "universally accepted value system". There probably is not even a global one.

    Even within one society, ideas about what is acceptable are very subjective and vary strongly over time.  In my lifetime, homosexuality in the UK has gone from being a criminal offence to being allowed at age 16; the death penalty has left our statute books, probably never to return; and the Health and Safety at Work Act and the HSE have been created and had a huge effect.  And that is before we get onto who you should or should not be allowed to kill, and how, during a war, and how we decide which group we should be aligning with in external conflicts.

    If you want a less dramatic example, look at the furore over immigrants arriving in small boats.

    I think using a machine to decide whether a person's actions are 'good' or 'bad' would be a most dangerous path.  But programs pre-loaded with far more history than one person can sensibly recall, which can bring up examples of case law that might be helpfully related to any current trial, already exist.  And that is already a sort of AI.

    BUT, such things are ideal as an adjunct to a human decision maker, not a replacement for it, much as a well-equipped toolbox improves the efficiency of a tradesman but does not replace him.

    Electric jury?  No thanks.

    Mike.

