AI Functional Safety - Is it all just about the algorithms?

Engineering Discussions - IET EngX

AI systems are now commonplace. They are increasingly applied in safety-critical autonomous systems to perform complex tasks where failures can have catastrophic consequences.

Examples of such safety-critical autonomous systems include self-driving cars, surgical robots and unmanned aerial vehicles.

The IET AITN Committee recently took part in reviewing a functional safety paper.

Areas they considered were missing:

The responsible person will need to ensure the activity is safe. The activity and environment (the Operational Design Domain, or ODD) and the system's limitations need to be adequately covered.

The basics of safety analysis at the activity level (e.g. energy trace barrier analysis) and good practice such as ISO 21448:2022 (Road vehicles - Safety of the intended functionality) should be addressed.

System safety architectures are key to providing reliable, deterministic safety functions when required; safety architectures, along with the ODD, should be defined within the guidance.
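As a rough illustration of such an architecture, a deterministic safety channel can gate a learned controller's command against the ODD and substitute a safe fallback whenever the system strays outside it. The ODD bounds, state variables and fallback value below are illustrative assumptions, not taken from any particular standard:

```python
from dataclasses import dataclass

@dataclass
class OddLimits:
    """Hypothetical ODD bounds, for illustration only."""
    max_speed_mps: float = 20.0
    min_visibility_m: float = 50.0

def safe_command(ai_command: float, speed_mps: float, visibility_m: float,
                 odd: OddLimits) -> float:
    """Deterministic safety channel: pass the AI command through only while
    the measured state is inside the ODD; otherwise demand a safe fallback."""
    inside_odd = (speed_mps <= odd.max_speed_mps
                  and visibility_m >= odd.min_visibility_m)
    if inside_odd:
        return ai_command
    return 0.0  # assumed fallback: zero demand / minimal-risk manoeuvre

# Usage: inside the ODD the AI command passes; outside it, the fallback applies.
odd = OddLimits()
clear_weather = safe_command(1.5, speed_mps=10.0, visibility_m=80.0, odd=odd)
heavy_fog = safe_command(1.5, speed_mps=10.0, visibility_m=20.0, odd=odd)
```

The key property is that the gating logic is simple and deterministic, so it can be analysed and assured independently of the learned component it supervises.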

Autonomous systems have a number of operational weaknesses, particularly around the sensors; system designs need to assure sensor performance and ensure the sensors are maintained.

Learning systems need matching dynamic monitoring cases covering security, maintainability and safety, e.g. a digital shadow.
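As a minimal sketch of the digital-shadow idea, a monitor might compare the live system's outputs against the shadow model's predictions and flag any divergence for investigation. The tolerance value and the flat numeric outputs are assumptions made for illustration:

```python
def shadow_monitor(live_outputs, shadow_predictions, tolerance=0.1):
    """Compare live system outputs with the digital shadow's predictions;
    return the indices where the two diverge beyond the tolerance."""
    return [i for i, (live, pred) in enumerate(zip(live_outputs, shadow_predictions))
            if abs(live - pred) > tolerance]

# Usage: the third sample diverges from the shadow's prediction.
divergent = shadow_monitor([1.0, 1.2, 2.0], [1.0, 1.15, 1.5])
```

In practice the shadow would be continuously updated from telemetry, and a flagged divergence would feed into the safety, security and maintenance cases rather than trigger an immediate intervention.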

What are your main areas of concern in AI safety-critical systems and services, and why?

  • The biggest challenge is a change in the environment, I would say, especially a small incremental one over time. The algorithms will be making unfounded assumptions and suboptimal decisions unless they are kept up to date with the world. A trivial example would be all that fuss over Y2K, where no one thought we might get to the year 2000 so never allowed for it.

  • I don’t know much about this topic, but I find it fascinating. For safety, we will need to make sure that AI systems can adapt to the environment and learn new things and new ways of doing them, as we humans do. We will also have to keep an eye on them, continuously check them, and give them feedback on how they are doing. They will need to be designed and built to deal with unusual conditions very quickly, and to become more human-like in that respect, so they have different options and plans to handle different situations. As a relevant example, self-driving vehicles need to be able to handle roads and weather as they change, and avoid collisions and hazards as they arise.
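The slow-drift concern raised in the first reply can be sketched as a rolling-statistics monitor that compares a sensed quantity against the baseline the system was validated on. The baseline, threshold and window size below are illustrative assumptions:

```python
from collections import deque

class DriftMonitor:
    """Flag slow environmental drift by comparing a rolling mean of a sensed
    quantity against a validated baseline (threshold and window are assumed)."""

    def __init__(self, baseline_mean: float, threshold: float, window: int = 100):
        self.baseline = baseline_mean
        self.threshold = threshold
        self.values = deque(maxlen=window)

    def update(self, value: float) -> bool:
        """Record a new observation; return True if drift is detected."""
        self.values.append(value)
        rolling_mean = sum(self.values) / len(self.values)
        return abs(rolling_mean - self.baseline) > self.threshold

# Usage: small fluctuations stay quiet; a sustained shift raises the flag.
monitor = DriftMonitor(baseline_mean=0.0, threshold=0.5, window=5)
quiet = monitor.update(0.1)        # within threshold
for _ in range(5):
    drifted = monitor.update(1.0)  # sustained shift fills the window
```

A real deployment would use a proper statistical test rather than a bare mean, but the principle is the same: the monitor, not the learned model, decides when the world has moved outside what the model was validated for.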