AI systems are now commonplace. They are increasingly applied in safety-critical autonomous systems to perform complex tasks where failures can have catastrophic consequences.
Examples of such safety-critical autonomous systems include self-driving cars, surgical robots, and unmanned aerial vehicles.
The IET AITN Committee recently took part in reviewing a functional safety paper. The areas they considered to be missing were:
The responsible person will need to ensure the activity is safe. The activity and environment (the Operational Design Domain, ODD) and the system's limitations need to be adequately covered.
The basics of safety analysis at the activity level, e.g. energy trace barrier analysis, and good practice such as ISO 21448:2022 (Road vehicles – Safety of the intended functionality) should be addressed.
System safety architectures are key to providing reliable, deterministic safety functions when required; safety architectures, along with the ODD, should be defined within the guidance.
Autonomous systems have a number of operational weaknesses, particularly around sensors; system designs need to assure and maintain sensor performance.
Learning systems need matching dynamic monitoring, covering security, maintainability, and safety, e.g. a digital shadow.
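The deterministic safety-architecture and ODD points above can be illustrated with a minimal sketch: a simple runtime supervisor that checks sensor readings against ODD bounds and commands a deterministic fallback when the system leaves its envelope. All names, fields, and threshold values here are illustrative assumptions, not taken from any standard or from the reviewed paper.

```python
from dataclasses import dataclass


@dataclass
class OddLimits:
    """Hypothetical Operational Design Domain bounds for illustration only."""
    max_speed_mps: float = 15.0      # assumed envelope limit
    min_visibility_m: float = 50.0   # assumed envelope limit


@dataclass
class SensorReading:
    speed_mps: float
    visibility_m: float
    sensor_healthy: bool


class SafetyMonitor:
    """Deterministic supervisor: checks each reading against the ODD and
    returns a safe fallback command when the envelope is violated or the
    sensing is degraded. The learning component is never in this path."""

    def __init__(self, odd: OddLimits):
        self.odd = odd

    def check(self, reading: SensorReading) -> str:
        if not reading.sensor_healthy:
            return "SAFE_STOP"  # degraded sensing: deterministic fallback
        if reading.speed_mps > self.odd.max_speed_mps:
            return "SAFE_STOP"  # outside the speed bound of the ODD
        if reading.visibility_m < self.odd.min_visibility_m:
            return "SAFE_STOP"  # outside the visibility bound of the ODD
        return "NOMINAL"


monitor = SafetyMonitor(OddLimits())
print(monitor.check(SensorReading(10.0, 100.0, True)))   # within the ODD
print(monitor.check(SensorReading(10.0, 100.0, False)))  # sensor fault
```

The design point this sketches is that the supervisor is simple and deterministic: its verdict depends only on explicit, reviewable bounds, so it can be analysed and assured independently of any learning component it supervises.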
What are your main areas of concern in AI safety-critical systems and services, and why?