By 2033, will Human-Level AI decision making be regarded as 'trustworthy'?

So why is it important to the engineering community that AI decision-making be equipped with a universally accepted, ethically driven value system and not 'something else'?

How artificial intelligence will transform decision-making | World Economic Forum (weforum.org)

#ResponsibleAI

  • If AI can diagnose ALS (amyotrophic lateral sclerosis) and cure it, then I will be open-minded.

  • I would be interested in your thoughts - Li, B., Qi, P., Liu, B., Di, S., Liu, J., Pei, J., Yi, J. and Zhou, B., 2023. Trustworthy AI: From principles to practices. ACM Computing Surveys, 55(9), pp. 1-46. Accessed 22/02/24 <Trustworthy AI: From Principles to Practices | ACM Computing Surveys>

  • Thank you for your comment. I have picked one paper, but there are others, and I would be interested in your thoughts. Schmid, A. and Wiesche, M., 2023. The importance of an ethical framework for trust calibration in AI. IEEE Intelligent Systems. Accessed 22/02/24 <The Importance of an Ethical Framework for Trust Calibration in AI | IEEE Journals & Magazine | IEEE Xplore>

  • It is an interesting question, and even more interesting reading some of the responses. It is correct that AI does not have a conscience, but that does not mean that ethics cannot be trained into it. Is it the right thing to do to instil ethics? I would have to say yes.

    Now, though I think yes, this is obviously not a simple space to solve, especially picking up on the point made about ethics varying around the world. Several national frameworks have been put in place, some as laws and others just as guidelines, to ensure AI is ethical. It is important that these are set at a generic, national level. The reason is similar to the difference between personal values and the law. Trying, as an individual, to instil your own ethical values is something that should not be done: how do you know your personal beliefs are correct at a generic level? Personal beliefs differ greatly, and what you would not want is many systems each biased towards a single person's values. However, if these ethics are implemented at a higher national level, it is the same as with laws: everyone should follow them regardless of individual beliefs (obviously without getting into a political debate about the ethics of some laws around the world). This should be done to ensure fundamental rights are not breached, especially within AI. It is fascinating to see how similar the guidelines, acts and laws forming in many countries are, and this does need to happen.

    There is then a separate argument about how these systems are used, and this is similar to every tool already out there. Tools of any sort may be legal and ethical on their own, but people can misuse them and intentionally cause harm: think of malware, cyber-attacks, even physical tools have the same issue! This needs to be treated as a separate, but still very important, issue!

  • I find this quote very inspiring and relevant: “If we are to harness the benefits of artificial intelligence and address the risks, we must all work together - governments, industry, academia and civil society - to develop the frameworks and systems that enable responsible innovation. We must seize the moment, in partnership, to deliver on the promise of technological advances and harness them for the common good.” UN Secretary-General António Guterres, AI for Good Global Summit, Geneva, 2019

    He was expressing his support for the AI for Good initiative, which aims to use AI to tackle some of the most pressing issues and challenges that humanity and the planet face. Hopefully more people and organizations will join and support it.

  • Hi Kirsten. Very insightful comment on AI ethics. I am curious to know if you are involved in UNESCO or any of its activities or programmes related to AI ethics?

  • Personal beliefs differ greatly, and what you would not want is many systems each biased towards a single person's values. However, if these ethics are implemented at a higher national level, it is the same as with laws: everyone should follow them regardless of individual beliefs

    However, there's a challenging grey area in the middle. Recruitment is a very good example - or, in the IET world, judging whether an engineer meets the criteria to be, e.g., Chartered. These decisions are much more human than we sometimes like to pretend they are, and even sharing them across a panel of humans doesn't ensure impartiality or remove bias. Any AI system will either be working towards conscious or unconscious rules set by the incumbents or, as famously uncovered when Amazon tried to use AI, it builds its model and learning around what is already in place, which again is only there because of potentially biased rules set by the incumbents. For those who don't know the story, Amazon tried to use AI to sort CVs, giving it as training material the data on the existing high-flyers in the organisation. So it selected all the applicants who were white, male, and of a certain educational background - not because there was any evidence that those were the best at the job, but only because those were the people currently in place, having been promoted by people who liked people like themselves.

    I suspect that the underlying issue is that, to be successful, AI needs lots of feedback on the effectiveness of its decisions. This is fine (and really useful) in, for example, controlling a central heating system, where it can gently adapt to a particular house environment. But for safety-critical situations where trial and error is unacceptable, or for systems such as recruitment where it can take many years to see the results (and even then they are hard to measure), it becomes much more problematic. I was recently involved in research into the potential use of AI in the railway signalling environment, and two of the key outputs were - no surprises - that there would be a massive challenge in defining the acceptable limits of what the system could do, and a further massive challenge in proving, even if you could define such a boundary, that the system was actually working within it. And that's an application where the ethical challenges are pretty straightforward - we want the trains to run, and we don't want them to crash.
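    To make that feedback point concrete, here is a deliberately toy, hypothetical sketch (nothing to do with the research mentioned above) of why the central heating case is the easy one: the controller gets a cheap, frequent, measurable signal about every decision it makes, so it can safely adapt by trial and error. Recruitment or signalling offers no such signal.

    # Hypothetical sketch: a heating controller adapting its pre-heat time from
    # feedback on each decision. All names and numbers are made up.
    import random

    preheat_minutes = 30.0      # initial guess for this particular house
    learning_rate = 0.2

    def minutes_early(preheat):
        """Pretend sensor reading: how many minutes early (+) or late (-) the
        house hit the target temperature. The true requirement is unknown."""
        true_requirement = 45.0
        return preheat - true_requirement + random.gauss(0, 2)

    for day in range(60):
        error = minutes_early(preheat_minutes)    # cheap, daily, measurable feedback
        preheat_minutes -= learning_rate * error  # gently adapt towards the house's need

    print(f"Learned pre-heat time: about {preheat_minutes:.0f} minutes")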

    Getting all philosophical for a moment, even where we do manage ethical challenges by laws, it's often (maybe usually) not as robust as we may like to believe. Various events in the UK over the last few years have shown that laws can simply be changed, or at the very least worked around, for expediency. What tends to be (slightly) clearer is being able to show a clear link between actions and consequences: if we engineer a system to do X then we can expect Y people to die in a 100-year period (you can tell I'm a functional safety assessor!). And for functional systems we're reasonably good at that, and it may be possible to engineer those requirements / limitations into an AI system. But for many human systems, and again recruitment (or professional registration) is an excellent example, it's very difficult even to agree on what the "right" outcome is. In principle, if, say, we were designing a system to identify the best potential engineers, then an AI system could analyse the characteristics of the "best" engineers from 1000s of companies and distil that down to key parameters. But who decides who the "best" engineers are? Where does that data come from - probably only from those companies who decide to take part, which are therefore self-selected? And fundamentally, is it only looking backwards, or is it looking forwards to what might be if we took our human biases out of the equation?

    Actually, I guess my instinct in response to the question is to think that AI will produce ethical outcomes that are no more "flawed" than existing human processes produce, whether or not we try programming our ethics into it. Any read of almost any headline story in the papers at the moment - or, even worse, the comments on any social media site! - suggests that humans aren't great at ethics anyway (or, more accurately, are really good at coming up with "ethical" explanations to justify the awful things they are doing). But I think my main concern is that people have a nasty tendency to believe what the computer tells them (this was another point that came through in the research I mentioned above). We've already seen over many years with non-AI IT systems the "computer says No" scenario, the present Horizon scandal being a particularly large case, but I suspect we've all come across minor day-to-day irritations of the person who believes their computer screen rather than the real world. Back to my recruitment scenario (but this could apply to any system), the risk is pretty scary that a badly trained system is sold to multiple users such that it becomes impossible to get a job in large organisations because your profile doesn't fit what the system is looking for - and that people blindly believe it. So actually, back to Kirsten's point, the very presence of randomness between different human decision makers is what currently protects us from this.

    But the good thing is that lots of people are asking these questions! I was actually more worried a few years ago when, in particular, autonomous vehicles seemed to be being pushed through at an alarming rate, for no readily apparent reason. It does feel that the risks of AI systems are now being taken a lot more seriously. Because personally I find that the answer to any question such as that posed at the start is actually a fourth one of "It's complicated - but that doesn't mean we shouldn't be thinking about it."

    Cheers,

    Andy

  • I am unfortunately not involved in those activities. I just actively engage with and keep up to date on these types of topics within the AI realm, to ensure that the wider implications and implementation of AI, such as ethics and safety, are as much of a priority as the development within the work that I undertake, especially as I work within the defence sector.

  • Great response, and I had to read it a few times over! I couldn't agree more, and with any of these topics there are always going to be grey areas; ethics itself, outside of AI, has many grey areas. I think the main thing is focusing on what we can implement to make AI as ethical and safe as we can, aligning with fundamental laws even though, yes, these do change and aren't foolproof.

    In the example you gave surrounding Amazon, this is a clear and great example of AI bias. It is often called selection bias, where the algorithm is only shown a few select examples that you want the system to choose, rather than a generic training set, which biases the whole system towards those specific scenarios (a toy sketch below illustrates the effect). I wonder whether, if the system were trained on the job specification, and the CVs were sorted based on that rather than on who people deemed to be the best, there would not be that bias. Or whether, to your point, this would still be looking backwards. But I also believe this could remove some of the human bias we see every day. There are always examples of people from minority groups not getting interviews due to human bias against them, so would AI actually help to overcome that? (As long as the system is developed around the job specification and not around who is already at the company, as yes, that would give the biased system you gave as an example!)
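    A minimal, entirely made-up sketch of that training-data bias (illustrative only; it is not how Amazon's actual system worked): the "labels" record who was hired historically rather than who performed best, so the learned rule simply reproduces the historical pattern.

    # Hypothetical illustration of selection / training-data bias.
    from collections import Counter

    # (education, group, hired_historically) - entirely invented records
    history = [
        ("uni_A", "group_X", 1), ("uni_A", "group_X", 1), ("uni_A", "group_X", 1),
        ("uni_B", "group_Y", 0), ("uni_B", "group_Y", 0), ("uni_A", "group_Y", 0),
    ]

    hires = Counter()
    seen = Counter()
    for education, group, hired in history:
        for feature in (education, group):
            seen[feature] += 1
            hires[feature] += hired

    def score(education, group):
        """Average historical hire rate of the candidate's features."""
        features = (education, group)
        return sum(hires[f] / seen[f] for f in features) / len(features)

    # Two equally capable candidates: the model prefers whoever resembles past hires,
    # because "who we hired before" is the only notion of "good" it was ever given.
    print(score("uni_A", "group_X"))   # high (about 0.9)
    print(score("uni_B", "group_Y"))   # zero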

    I do agree about the feedback, as yes, the AI could be ethical up to a point but may need additional rules for a specific application, which has come up within some of the research I undertake. But this should be identified when understanding the problem space and defining the use case, to ensure it is included in the development plan. I think a key point you have made is defining acceptable limits against which to measure performance; some applications are a lot easier than others! And this is reliant on end-user engagement as well as understanding the ethics and safety behind it. But in your example (and I don't mean to sound condescending as it is your work, I am just musing!) I wonder if this can be done when it is decided to be a necessity. What I mean is that autonomous vehicles have a lot of ethical and safety issues attached to them, but some cities around the world made this technology a must, so people had to solve it. There was engagement between many different areas to overcome these issues, and now they have autonomous taxis because it had to happen.

    I think your last comment is exactly what we all should be doing: we need to think about ethics regardless of how complicated it is. It is not up to the AI developers and teams to get into ethical debates, but they do need to take the ethical considerations associated with their application and transparently show how these have been identified and put into the system.

    This has been a great chain of comments, thanks all!

  • I looked through this paper that you suggested and found it very woolly. I cannot consider safety and reliability to be synonyms. It is very easy to design a system that can be reliably dangerous, much harder to design a system that is reliably safe.

    ‘It is straightforward that AI safety is closely related to reliability since AI safety requires, as a prerequisite, that the AI system be reliable. Thus, both terms can even be considered as synonyms.’

    The FMEA-based framework also raises difficulties. Detectability and Severity are both determined by 'Process Experts', yet the paper also states 'It must be noted that, in contrast to former technologies, like traditional automation, the behavior of AI-based technologies is rather unknown.' How are these 'Process Experts' confirmed as trustworthy and reliable?
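    For reference, the generic FMEA arithmetic (not necessarily the paper's exact scheme) is trivially simple; the trust question sits entirely in where the expert-assigned scores come from when the behaviour of the AI component is 'rather unknown':

    # Classic FMEA Risk Priority Number: RPN = Severity x Occurrence x Detectability,
    # each scored 1-10 by 'Process Experts'. Values below are illustrative only.
    def rpn(severity: int, occurrence: int, detectability: int) -> int:
        for name, value in (("severity", severity),
                            ("occurrence", occurrence),
                            ("detectability", detectability)):
            if not 1 <= value <= 10:
                raise ValueError(f"{name} must be scored 1-10, got {value}")
        return severity * occurrence * detectability

    # The same AI failure mode, scored by two experts who judge detectability
    # differently, ends up with very different risk rankings.
    print(rpn(severity=8, occurrence=3, detectability=7))   # 168
    print(rpn(severity=8, occurrence=3, detectability=3))   # 72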

    As a professional engineer I am required to undertake Continuous Professional Development which is monitored by my professional body. I fail to see any similar process for AI systems. How are they updated as more knowledge becomes available?

    Looking in the other direction, if some of the information the AI system has been trained with is found to be invalid, how is that corrected? This is a common problem: many papers are announced in the headlines and are then withdrawn in a couple of lines on some obscure page, for example:

     www.nature.com/.../nature.2017.21929

    Maybe all AI systems should be linked to something like ‘Retraction Watch’.

    https://retractionwatch.com/