
The concern is not with the raw engineering challenges of developing the software and hardware parts of the solution, or with the nuances of creating a robust AI component, but with understanding these solutions and successfully integrating them into our everyday lives.

Many of the components within systems that utilise AI are not built in the same way as conventional software and hardware components, so traditional systems development practices and their associated methodologies do not account for the additional challenges faced when developing AI. Moreover, the methods and techniques used to implement an AI algorithm are very different from those recommended under conventional safety standards. There is therefore a ‘gap’ in being able to demonstrate that AI complies with conventional good practice, and this gap needs to be addressed through alternative safety arguments. AI components require additional activities at the design, implementation, testing and verification and validation (V&V) stages. Consideration also needs to be given to the risk of AI adoption, the relevant governance and ethics, as well as AI development activities such as data collection, pre-processing, machine learning and training. This publication takes a safety-related view and considers the risks, challenges and potential solutions for using AI in safety-related systems. It introduces 10 key ‘pillars of assurance’ that can be used to underpin an assurance case for AI.

To expand on the above, there is a very real danger of expecting ‘human-like’ and ‘common-sense’ responses from systems that utilise AI. This is often compounded by their apparent proficiency in performing routine activities, which can lead to over-reliance and misplaced trust in their outputs. However, AI systems can falter when presented with anomalous data, or with queries that fall outside the population of their training data. By comparison, humans have years of experience: a giant dataset of situations, likely outcomes and risk levels. We have become adept at spotting the warning signs that something is not right and knowing when to seek a second opinion or investigate further. Computer systems implementing AI do not have this luxury; their dataset is typically limited to the experiences of their training, including any biases, limitations and gaps. We therefore cannot expect solutions utilising AI to always provide results in line with those of ‘sensible’, experienced operators. In many cases these AI-powered solutions are perfectly adequate, but there is often the risk of an edge case throwing them a curve ball they are ill-equipped to deal with. It is the responsibility of us humans to bridge the gap between what can be built and what functionality we expect it to provide. Both the designers of solutions utilising AI and the users of those solutions need to be aware of the limitations and caveats to their use. In this respect, readers are urged to take on board the issues raised in the ‘Human factors in AI safety’, ‘Maintenance and operation’, and ‘Verification, validation and assurance’ pillars, where concerns such as the need for ‘AI safeguarding’, the use of standard operating procedures, and the development of a functional safety argument are addressed.
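As a rough illustration of what ‘AI safeguarding’ can mean in practice (this sketch is not taken from the report, and the `InputGuard` and `guarded_predict` names are hypothetical), one simple pattern is to wrap the AI component in a guard that checks whether an incoming query resembles the data the model was trained on, and defers to a human operator or a safe fallback when it does not:

```python
import numpy as np


class InputGuard:
    """Flags inputs that fall outside the envelope seen during training."""

    def __init__(self, training_data: np.ndarray, tolerance: float = 3.0):
        # Record per-feature mean and standard deviation of the training set.
        self.mean = training_data.mean(axis=0)
        self.std = training_data.std(axis=0) + 1e-12  # avoid division by zero
        self.tolerance = tolerance

    def is_in_envelope(self, x: np.ndarray) -> bool:
        # An input is 'in envelope' if every feature lies within
        # `tolerance` standard deviations of the training mean.
        z_scores = np.abs((x - self.mean) / self.std)
        return bool(np.all(z_scores <= self.tolerance))


def guarded_predict(model_predict, guard: InputGuard, x: np.ndarray):
    """Pass the input to the AI component only if it looks familiar;
    otherwise return None to signal 'defer to operator / safe state'."""
    if guard.is_in_envelope(x):
        return model_predict(x)
    return None


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_data = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))
    guard = InputGuard(training_data)

    dummy_model = lambda x: float(x.sum())  # stand-in for the real AI component
    familiar_input = np.zeros(4)            # well inside the training envelope
    anomalous_input = np.full(4, 10.0)      # far outside anything seen in training

    print(guarded_predict(dummy_model, guard, familiar_input))   # model output
    print(guarded_predict(dummy_model, guard, anomalous_input))  # None -> defer to operator
```

A guard this simple is only a first line of defence; the broader point is that the decision to trust or reject an output sits outside the learned model, where it can be specified, reviewed and assured in the conventional way.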

Another potential issue is that of system integrity, as the introduction of AI will increase the overall complexity of a system. This can have an adverse impact on its integrity, especially where modifications and changes become necessary during development or while in service. A systems engineering approach is recognised as an effective way of resolving challenges associated with system complexity, and this is relevant to the application of AI in functional safety. The reader is pointed towards the ‘Hazard analysis and risk assessment’ and ‘Verification, validation and assurance’ pillars for further details. In addition, for software integrity, and by extension software security, the reader is referred to the ‘Security’ pillar.

Download your free copy of The Application of Artificial Intelligence in Functional Safety

 

Comment
  • I look forward to reading this report. An approach to safety-critical calculations in medical physics relies on people designated ‘medical physics expert’ (MPE), who acknowledge the roles and risks, and the training and competence required, as part of the mitigation of calculational errors. Although errors are always possible, they occur within processes (often with other experts checking, and self-checking), and the overall model is that the benefit outweighs the risk.
    Devices that do not self-acknowledge error capability are implemented as part of the regulatory apparatus that lets MPEs delegate checks and offer suggestions based on evidence prepared by teams of MPEs or equivalent. (I might revise what I currently understand by evidence as the discussion goes on.)