The concern is not with the raw engineering challenges of developing the software and hardware parts of the solution, or with the nuances of creating a robust AI component, but with the understanding and successful integration of these solutions into our everyday lives.

Many of the components within systems that utilise AI are not built in the same way as software and hardware components, and therefore traditional systems development practices and associated methodologies do not account for the additional challenges faced during the development of AI. Moreover, the methods and techniques used to implement an AI algorithm are very different to those recommended under conventional safety standards. Hence, there is a ‘gap’ in being able to demonstrate that AI complies with conventional good practice, and this gap needs to be addressed through alternative safety arguments. AI components require additional activities at the design, implementation, testing and V&V stages. Consideration also needs to be given to the risk of AI adoption, the relevant governance and ethics, as well as AI development activities such as data collection, pre-processing, machine learning and training. This publication takes a safety-related view and considers the risks, challenges, and potential solutions for using AI in safety-related systems. It introduces 10 key ‘pillars of assurance’ that can be used to underpin an assurance case for AI.

To expand on the above, there is a very real danger of expecting ‘human-like’ and ‘common-sense’ responses from systems that utilise AI. This is often compounded by their apparent proficiency in performing routine activities, which can lead to over-reliance and misplaced trust in their outputs. However, AI systems can falter when presented with anomalous data, or with queries that fall outside the population of their training data. Let us not forget that, by comparison, humans have years of experience: a giant dataset of situations, likely outcomes, risk levels and so on. We have become adept at ‘spotting the warning signs’ that something is not right and that we should perhaps seek a second opinion or investigate further. Computer systems implementing AI do not have this luxury: their dataset is typically limited to the experiences of their training, including any biases, limitations and gaps. We therefore cannot expect that solutions utilising AI will always provide results in line with those of ‘sensible’, experienced operators. In many cases these AI-powered solutions are perfectly adequate, but there is often the risk of edge cases throwing them a curve ball that they are ill-equipped to deal with. It is the responsibility of us humans to bridge the gap between what can be built and the functionality we expect it to provide, and this is where both the designers of solutions utilising AI and the users of those solutions need to be aware of the limitations and caveats to their use. In this respect, readers are urged to take on board the issues raised in the ‘Human factors in AI safety’, ‘Maintenance and operation’, and ‘Verification, validation and assurance’ pillars, where concerns such as the need for ‘AI safeguarding’, the use of standard operating procedures, and the development of a functional safety argument are raised.
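To give a flavour of what such ‘AI safeguarding’ might look like in practice, the following is a purely illustrative sketch and is not taken from the publication: a hypothetical wrapper, here called safeguarded_predict, that only accepts a model’s output when the input lies within the envelope of the training data and the model reports sufficient confidence, otherwise deferring the case for human review. The function names, thresholds and the stand-in dummy_model are all assumptions made for illustration.

```python
# Purely illustrative sketch of a run-time 'AI safeguard'; all names and
# thresholds are hypothetical and not taken from the publication.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence, Tuple


@dataclass
class GuardedDecision:
    accepted: bool           # True if the model output may be acted upon
    value: Optional[float]   # the model output, or None if rejected
    reason: str              # why the output was accepted or rejected


def safeguarded_predict(
    features: Sequence[float],
    predict: Callable[[Sequence[float]], Tuple[float, float]],
    training_min: Sequence[float],
    training_max: Sequence[float],
    min_confidence: float = 0.9,
) -> GuardedDecision:
    """Accept the model output only if the input sits inside the envelope of
    the training data and the model reports sufficient confidence; otherwise
    flag the case for human review."""
    # Crude out-of-distribution check: any feature outside the range observed
    # during training is treated as an edge case the model was never trained on.
    for x, lo, hi in zip(features, training_min, training_max):
        if not (lo <= x <= hi):
            return GuardedDecision(False, None, "input outside training envelope")

    prediction, confidence = predict(features)
    if confidence < min_confidence:
        return GuardedDecision(False, None, "model confidence below threshold")

    return GuardedDecision(True, prediction, "accepted")


# Example use with a stand-in model that returns (prediction, confidence):
def dummy_model(features: Sequence[float]) -> Tuple[float, float]:
    return sum(features), 0.95


print(safeguarded_predict([0.2, 0.4], dummy_model, [0.0, 0.0], [1.0, 1.0]))
print(safeguarded_predict([5.0, 0.4], dummy_model, [0.0, 0.0], [1.0, 1.0]))
```

In a real system the rejection branch would, of course, route the case into the standard operating procedures and human oversight discussed above, rather than simply returning a flag.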

Another potential issue is that of system integrity, as the introduction of AI will increase the overall level of complexity of a system. This can have an adverse impact on its integrity, especially in circumstances where modifications and changes become necessary during development or whilst in service. The adoption of a systems engineering approach is recognised as an effective method for resolving challenges associated with system complexity, and this can be relevant to the application of AI in functional safety. The reader is pointed towards the ‘Hazard analysis and risk assessment’ and ‘Verification, validation and assurance’ pillars for further details. In addition, for software integrity and, by extension, software security, the reader is referred to the ‘Security’ pillar.

Download your free copy of The Application of Artificial Intelligence in Functional Safety

 

  • Hello Bob:

    This may be "off the topic" but I have been concerned with the safety of the "high speed" privately owned passenger service, called Brightline, which has just started here in Florida. The service involves trains going from Miami to Orlando, passing through my county, Brevard, with currently no stops. The initial purpose of the train was to transport only tourists. The high speed trains share tracks with goods trains that only operate at about 35 mph, and there are certain parts that only have one track, over bridges that have to open and close for boat traffic.

    The problem in my county is that there are 37 rail/road crossings, with some areas having 5-6 road crossings within maybe a mile. The train is allowed to travel at up to 79 mph over crossings that only have straight up/down barriers. Above that speed they must use the box (swinging) barriers commonly used in the UK.

    They operate 12 hours a day and have to blow their horns well before arriving at a crossing, which is effectively all the time when going through the center of a mid-sized city.

    We have already had a number of deaths due to drivers going around the barriers. The last crash (no one died) was when a train hit a golf cart (only in Florida would that happen).

    The point I want to make is that train safety must include the effects on the surrounding communities, including the effect on fire and ambulance services.

    Peter Brooks

    Palm Bay, Florida

      

  • Hello Steve:

    I just happened to see your comments and you are correct: anything can be used for good or evil, and there is nothing one can do to make sure it is only used for good.

    I personally try to follow the dots as one significant news event results in another related event, maybe weeks, months or years later. 

    May I suggest you try and look at the old BBC TV series called CONNECTIONS by James Burke, from 1978. It looks at how various discoveries, scientific achievements and world events are tied together.

    Peter Brooks

    Palm Bay   

  • I have always been interested in the duality of innovation. For every new invention there seems to be a good and an evil strand affecting life experiences. I first really noticed this for myself when my credit card, which offered such advantages and flexibility over cash, was cloned. One of the books I am reading at the moment is Max Tegmark's Life 3.0: Being Human in the Age of Artificial Intelligence, in which he postulates two possible futures, one with superintelligence working for humanity and another where the goals of the humans and the machines are so misaligned that it leads to dysfunction, even annihilation. I know this may be going off the main subject somewhat, but I can't help thinking that human safety and wellbeing should be at the heart of all new development, and I welcome investment of time and effort in making sure we align the objectives of AI with improving functional safety.

  • My particular interest is in software for safety related & safety critical railway applications; my colleagues and I have been conducting Independent Software Assessments on railway systems for the past 20+ years, based on the applicable European Standards (EN50128 since 1999, IEC61508 and, most recently, EN50716, issued last year). However, these standards do not address recent technological advances such as AI, Cloud Computing, Machine Learning, 6G Mobile Communications, etc. I suspect that other forms of transportation and other industries are likely to be facing similar challenges, so I would be most interested to hear opinions from other IET members on this topic.

  • I look forward to reading this report. One approach to safety critical calculations in medical physics relies on people designated ‘medical physics expert’ (MPE), who acknowledge the roles and risks, and the training and competence required, as part of the mitigation of calculational errors. Although errors are always possible, they occur within processes (often with other experts checking, and self-checking), and the overall model is that the benefit outweighs the risk.
    Devices that do not self-acknowledge error capability are implemented as part of the regulatory apparatus that lets MPEs delegate checks and offer suggestions based on evidence prepared by teams of MPEs or equivalent. (I might revise what I currently understand by evidence as the discussion goes on.)