The concern is not with the raw engineering challenges of developing the software and hardware parts of the solution, or with the nuances of creating a robust AI component, but with understanding these solutions and integrating them successfully into our everyday lives.

Many of the components within systems that utilise AI are not built in the same way as software and hardware components, and traditional systems development practices and associated methodologies therefore do not account for the additional challenges faced during the development of AI. Moreover, the methods and techniques used to implement an AI algorithm are very different to those recommended under conventional safety standards. Hence, there is a ‘gap’ in demonstrating that AI complies with conventional good practice, and this gap needs to be addressed through alternative safety arguments. AI components require additional activities at the design, implementation, testing and V&V stages. Consideration also needs to be given to the risk of AI adoption, the relevant governance and ethics, as well as AI development activities such as data collection, pre-processing, machine learning and training. This publication takes a safety-related view and considers the risks, challenges and potential solutions for using AI in safety-related systems. It introduces 10 key ‘pillars of assurance’ that can be used to underpin an assurance case for AI.

To expand on the above, there is a very real danger of expecting ‘human-like’ and ‘common-sense’ responses from systems that utilise AI. This is often compounded by their apparent proficiency in performing routine activities, which can lead to over-reliance and misplaced trust in their outputs. However, AI systems can falter when presented with anomalous data, or with queries that fall outside the distribution of their training data. Let us not forget that, by comparison, humans have years of experience – a giant dataset of situations, likely outcomes and risk levels. We have become adept at ‘spotting the warning signs’ that something is not right, and at recognising when to seek a second opinion or investigate further. Computer systems implementing AI do not have this luxury – their dataset is typically limited to the experiences of their training, including any biases, limitations and gaps. Therefore, we cannot expect that solutions utilising AI will always provide results in line with those of ‘sensible’, experienced operators. In many cases, these AI-powered solutions are perfectly adequate, but there remains the risk of edge cases throwing them a curve ball that they are ill-equipped to deal with. It is the responsibility of us humans to bridge the gap between what can be built and what functionality we expect it to provide. Both the designers of solutions utilising AI and the users of those solutions need to be aware of the limitations and caveats to their use. In this respect, readers are urged to take on board the issues raised in the ‘Human factors in AI safety’, ‘Maintenance and operation’, and ‘Verification, validation and assurance’ pillars, where concerns such as the need for ‘AI safeguarding’, the use of standard operating procedures, and the development of a functional safety argument are raised.
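
To make the idea of ‘AI safeguarding’ more concrete, the sketch below shows one common pattern: wrapping a trained model in a runtime envelope that rejects inputs outside the range seen during training and defers low-confidence predictions to a human operator. It is a minimal illustration only – the `SafeguardedModel` wrapper, the thresholds and the toy model are all hypothetical assumptions, not taken from the publication.

```python
# Minimal sketch of an AI safeguarding envelope (illustrative only):
# reject out-of-range inputs and defer low-confidence predictions to a
# human operator, rather than trusting the model's output blindly.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple


@dataclass
class Decision:
    action: str                  # "accept", "defer_to_operator" or "reject_input"
    prediction: Optional[float]
    reason: str


class SafeguardedModel:
    """Hypothetical runtime envelope around a model (not an API from the
    publication). The model is any callable returning (prediction, confidence)."""

    def __init__(self, model: Callable[[float], Tuple[float, float]],
                 train_min: float, train_max: float,
                 confidence_floor: float = 0.9):
        self.model = model
        self.train_min = train_min            # input range seen during training
        self.train_max = train_max
        self.confidence_floor = confidence_floor

    def predict(self, x: float) -> Decision:
        # Guard 1: input outside the training envelope – the model's output
        # cannot be trusted here, so reject rather than guess.
        if not (self.train_min <= x <= self.train_max):
            return Decision("reject_input", None,
                            f"input {x} outside training range "
                            f"[{self.train_min}, {self.train_max}]")
        prediction, confidence = self.model(x)
        # Guard 2: low confidence – escalate to a human operator, in line
        # with a standard operating procedure.
        if confidence < self.confidence_floor:
            return Decision("defer_to_operator", prediction,
                            f"confidence {confidence:.2f} below floor")
        return Decision("accept", prediction, "within envelope and confident")


# Example with a stand-in model whose confidence drops near the edge
# of its training range.
def toy_model(x: float) -> Tuple[float, float]:
    return x * 2.0, 0.95 if x < 50 else 0.6


guarded = SafeguardedModel(toy_model, train_min=0.0, train_max=100.0)
print(guarded.predict(10.0))    # accept
print(guarded.predict(80.0))    # defer_to_operator
print(guarded.predict(150.0))   # reject_input
```

The point here is the shape of the defence rather than the specific checks: the system is designed to recognise when it is outside its competence and hand control back to a human, instead of returning a confident-looking but unreliable answer.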

Another potential issue is that of system integrity, as the introduction of AI will increase the overall complexity of a system. This can have an adverse impact on its integrity, especially where modifications and changes become necessary during development or whilst in service. The adoption of a systems engineering approach is recognised as an effective method for resolving challenges associated with system complexity, and this is relevant to the application of AI in functional safety. The reader is pointed towards the ‘Hazard analysis and risk assessment’ and ‘Verification, validation and assurance’ pillars for further details. In addition, for software integrity, and by extension software security, the reader is referred to the ‘Security’ pillar.

Download your free copy of ‘The Application of Artificial Intelligence in Functional Safety’.

 

Comment
  • I have always been interested in the duality of innovation. For every new invention there seems to be a good and an evil strand affecting life experiences. I first really noticed this for myself when my credit card, which offered such advantages and flexibility over cash, was cloned. One of the books I am reading at the moment is Max Tegmark's Life 3.0: Being Human in the Age of Artificial Intelligence, in which he postulates two possible futures: one with superintelligence working for humanity, and another where the goals of humans and machines are so misaligned that it leads to dysfunction, even annihilation. I know this may be going off the main subject somewhat, but I can't help thinking that human safety and wellbeing should be at the heart of all new development, and I welcome investment of time and effort in making sure we align the objectives of AI with improving functional safety.

Children
  • Hello Steve:

    I just happened to see your comments, and you are correct: anything can be used for good or evil, and there is nothing one can do to make sure it is only used for good.

    I personally try to follow the dots as one significant news event results in another related event, maybe weeks, months or years later. 

    May I suggest you look at the old BBC TV series Connections by James Burke, from 1978. It looks at how various discoveries, scientific achievements and world events are tied together.

    Peter Brooks

    Palm Bay