
Six Honest Serving Men of AI Functional Safety

Foreword: This blog focuses on UK-based safety regulation and serves as a precursor to the “Safety of AI: what are its special needs?” webinar in November.

“I keep six honest serving-men

   (They taught me all I knew);

Their names are What and Why and When 

   And How and Where and Who.”

- Rudyard Kipling, Just So Stories (1902)

One problem facing safety engineers in Artificial Intelligence (AI) is information curation. Due to the lack of centralised regulation, the speed at which information is produced, and the sheer quantity of redundant information left in its wake, I spent much of my “reading time” validating the information I came across. When legislation is elusive, opinion seems to fill the gap. And in an emerging technology, best practice is only just finding its feet.

Because of this, any definitive information that I could have masterfully curated and communicated in this blog would likely have ended up adding to the detritus of an already unnavigable system within months. And it still might.

However, in an attempt to add lasting value for you, the readers, I will take a different approach. I will do what I can to emulate Kipling and champion curiosity, providing a springboard from which you can start building your own curation of AI safety.

What are Functional Safety and Artificial Intelligence?

Let’s start with definitions and get on the same page.

Functional Safety -

This one is relatively easy to define.

“Functional Safety is the part of Equipment Under Control safety that relates to the correct functioning of electrical/electronic/programmable electronic safety-related systems”

- BSI, BS EN 61508-4:2010

Artificial Intelligence -  

This is much more complicated to define, as there are many competing definitions and they are endlessly changing. So I shall construct my own definition that fits the purposes of this work.

The latest high-level governance for autonomous systems released by the UK loosely defines Artificial Intelligence by the unique regulatory requirements it generates:

“That is why we have defined AI by reference to the 2 characteristics that generate the need for a bespoke regulatory response:

  • The ‘adaptivity’ of AI can make it difficult to explain the intent or logic of the system’s outcomes…
  • The ‘autonomy’ of AI can make it difficult to assign responsibility for outcomes”

- AI Pro-Innovation White Paper (August 2023)

One could, and for this blog I will, conclude from this a short definition for Artificial Intelligence:

“A system that, through its capacity for adaptivity and autonomy, has the potential to generate difficulty in explaining the intent or logic behind the system's outcomes, and in assigning responsibility for those outcomes.”

Functional Safety and Artificial Intelligence

When we talk about AI Functional Safety, we are talking about:

“— use of AI inside a safety related function to realise the functionality;
— use of non-AI safety related functions to ensure safety for an AI controlled equipment;
— use of AI systems to design and develop safety related functions.”
- PD ISO/IEC TR 5469:2024 - Artificial intelligence — Functional safety and AI systems

From this we can see that there are 3 distinct areas for consideration. 

  • AI-enabled safety functions, such as machine-vision programs that identify and flag hazards, and other live monitoring and awareness systems.
  • Safety functions to ensure the safety of AI systems, such as human-in-the-loop oversight, curated training data, and data & AI mapping.
  • AI design and development of safety functions, such as LLMs used in evidence generation and software coding copilots.

I believe these 3 areas are a strong core, but not a comprehensive picture of AI in functional safety. My colleague, Russell Bee, also highlighted the importance of safety-adjacent spaces, where systems not developed to functional safety standards could have an impact on the overall safety of a system.

“For example, an electronic permit-to-work system or work management system. If there is an agentic AI-based workflow to approve a permit or a work order for a safety-critical element, the consequences of an incorrect AI recommendation could be significant. It’s important to think about the use case where the AI is being applied, what its secondary/tertiary impacts might be and how the output might be protected.” - Russell Bee
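To make Russell’s point concrete, here is a minimal sketch of one way the output might be protected: a deterministic, non-AI safety gate sits downstream of the AI recommendation, so the AI alone can never issue a safety-critical permit. The Permit fields, the checks, and the ai_recommend stub are all hypothetical, invented for illustration rather than drawn from any real system.

```python
# Minimal sketch of a non-AI safety gate around an AI permit recommendation.
# The Permit fields, checks and ai_recommend stub are all hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Permit:
    task: str
    isolation_confirmed: bool
    gas_test_passed: bool
    authoriser: Optional[str]

def ai_recommend(permit: Permit) -> bool:
    """Stand-in for an opaque, agentic AI workflow that recommends approval."""
    return True  # assume the AI says yes

def safety_gate(permit: Permit) -> bool:
    """Deterministic checks that an AI recommendation can never override."""
    return (permit.isolation_confirmed
            and permit.gas_test_passed
            and permit.authoriser is not None)

permit = Permit("pump maintenance", isolation_confirmed=True,
                gas_test_passed=False, authoriser="J. Smith")

# The permit is only issued if BOTH the AI and the deterministic gate agree.
if ai_recommend(permit) and safety_gate(permit):
    print("Permit issued")
else:
    print("Blocked: deterministic safety gate failed")
```

The value of this pattern is that the gate is simple enough to verify exhaustively, so the overall function can fail safe even when the opaque AI component is wrong.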

In summary, we can look at “Functional Safety and AI” as:

“The correct functioning of systems incorporating adaptive and autonomous systems that enable, control, develop or contribute to safety systems, functions or activities.”

Why is this such an important topic?

If you’re reading this, you already recognise the value of safety. There’s no need to convince you that mitigating risk is crucial. However, it is essential to emphasise that the safety risks in AI may far exceed those found in most other 21st-century projects.

This potential is greater due to a number of compounding factors, which I will group here as an overview. Please use these keywords as starting points for further reading.

  • Opacity & Misinformation

Foundational AI models are often opaque, making it challenging to explain their decisions and raising doubts about output accuracy. As non-deterministic systems, they can produce false positives or incorrect results even at high confidence levels; they may surpass human accuracy on average, yet still pose risks in critical situations. Moreover, the infrastructure needed for these models increases system complexity and introduces hidden failure points, reinforcing the need for strong safety protocols.

  • Ethical Dilemmas

This is a huge topic, and includes: Accountability, Dependency, Bias, Privacy, Monopolisation, Discrimination, Unemployment, and even the Objectification of Human Judgement.

  • Unintended Consequences

AI systems, especially agentic ones, can behave unpredictably, leading to unintended or unforeseen actions. These outcomes are not always possible to anticipate, making it essential for safety engineering to rigorously test for and mitigate such risks.

  • Security/Misuse

I would like to lay cybersecurity at the feet of the cybersecurity experts. However, the risk of misuse of, or meddling with, an AI-enabled system, malicious or otherwise, must be considered by safety engineers.

  • Transformative Potential

The capabilities of AI are advancing rapidly, offering unprecedented potential. However, this transformative power also brings increased risks, as theoretical models may not fully account for real-world applications, and regulatory infrastructures often struggle to keep pace. The greater the potential of a technology, the greater the impact of any associated risks.

I have separated out the next factor as it is further flung:

  • AGI

Artificial General Intelligence (AGI) might seem like science fiction. However, many believe we could encounter broad AI that appears indistinguishable from AGI by the end of the century. No one is ready for such a capability at this stage, and we need to start putting the regulatory infrastructure in place now.

When has safety been considered?

A look into the growth of safety in AI since its inception is a blog unto itself. However, I believe we can draw value from the last decade of UK regulatory developments.

2014: A Special Interest Group (SIG) was established to foster collaboration in Robotics and Autonomous Systems (RAS), leading to the development of the national strategy, RAS 2020.

2017: AI was incorporated into the Alan Turing Institute, enhancing research capacity. The Institute, in collaboration with government bodies, later published “Understanding artificial intelligence ethics and safety”.

2018: The Office for Artificial Intelligence was launched, overseeing the National AI Strategy. The Centre for Data Ethics and Innovation (CDEI) was established.

2019: The UK AI Council was formed, providing high-level leadership and advice on the AI ecosystem.

2021: The National AI Strategy was published, focusing on investment in AI, public trust, and effective governance.

2023: The Department for Science, Innovation & Technology published its Pro-Innovation AI Regulation White Paper. The CDEI was closed, the UK AI Council was disbanded and the AI Safety Institute (AISI) was established. The Artificial Intelligence (Regulation) Bill was introduced to Parliament.

2024: Automated Vehicles Act (AV Act) received Royal Assent. A pro-innovation approach to AI regulation: government response was published.

I think these milestones show that, over the last decade, the UK government has incrementally switched from specialised groups to more formal bodies. As well as this formalisation of AI-focused organisations, it shows an increasing emphasis on ethics and safety alongside a possibly counter-intuitive pro-innovation approach to AI development, and the journey towards novel, ubiquitous AI-enabled technology.

Looking forward, I hope to see the UK retain its ability to reform in response to feedback and keep communication lines open with those on the ground. In that vein, I would encourage readers to engage with their sector’s regulatory bodies and be part of the conversation.

How can we implement AI Functional Safety?

The million-dollar question, and one that should be covered comprehensively by the subsequent webinar: Safety of AI: What are its special needs?

If you prefer a read over a listen, then the IET released a wonderful publication earlier this year -  The Application of Artificial Intelligence in Functional Safety | IET

However, UK-based engineers currently seem to be waiting for two things to develop:

  1. A centralised, trusted and/or standardised AI-specific safety framework
    1. New hazard identification methods
    2. Accurate risk classification
    3. Precedented accountability maps
  2. New mitigations to AI risks (a minimal sketch of one such technique follows this list)
    1. Validation & Verification - Explainable Artificial Intelligence (XAI)
    2. Unexpected Consequences - Representational alignment
    3. Discrimination & Bias - Agent evaluation
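To ground the XAI entry above, here is a minimal Python sketch of one widely used explainability technique, permutation feature importance. The “model” is a toy stand-in with known weights so the result can be checked by eye; a real verification exercise would probe an actual trained model on representative data.

```python
# Minimal XAI sketch: permutation feature importance on a toy stand-in model.
# Everything here (model, data, features) is hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(seed=42)

# Toy "opaque" model: a fixed scorer standing in for a trained AI component.
WEIGHTS = np.array([2.0, 0.0, -1.0])  # feature 1 is deliberately irrelevant

def model(X: np.ndarray) -> np.ndarray:
    return X @ WEIGHTS

# Synthetic evaluation data with known ground truth.
X = rng.normal(size=(500, 3))
y = model(X) + rng.normal(scale=0.1, size=500)

def mse(pred: np.ndarray, true: np.ndarray) -> float:
    return float(np.mean((pred - true) ** 2))

baseline = mse(model(X), y)

# Shuffle one feature at a time; the rise in error estimates how much
# the model's output actually depends on that feature.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance = mse(model(X_perm), y) - baseline
    print(f"feature {j}: importance = {importance:.3f}")
```

Here, shuffling feature 1 should barely move the error, matching its zero weight. If a supposedly safety-relevant input behaved like that in a real system, it would be a prompt to challenge the model’s claimed logic.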

Where should I be looking to keep informed and Who are the big players?

Yes, I have messed up my brilliant format and combined two of the honest men. However, I think it is important for all who work in Artificial Intelligence to keep themselves informed. I think the best way to do this in the decentralised world of AI regulation is to create a personal, permeable information network (or PPIN, which I have proudly coined as I write this).

Essentially what I mean is:

  • Information Network: A curated selection of information sharing bodies.

  • Personal: Make sure the information you are collecting is relevant to your place in the AI world. You do not need to, and should not, sign up for every newsletter and conference.

  • Permeable: Make sure to allow information from other sources to inform you whenever you stumble across it. Don’t create an echo chamber. Also, when you question something, go and look for that information and let it in. Maybe change your ecosystem.

My PPIN looks like:

  • IET Technical Networks (AI & Functional Safety)

My colleagues on the Technical Networks are well-informed volunteers. The information shared is mostly concise and anecdotal; it comes from people working in the AI sphere, like you all.
Artificial Intelligence | IET TN

  • Alan Turing Institute

This is where I get most of my AI governance and ethics principles from, and the research is continuous and adaptive to the area.

Understanding artificial intelligence ethics and safety | The Alan Turing Institute

  • AI Safety Institute (AISI)

The newcomer to my PPIN. They have AI-safety-focused content that is fairly comprehensive.

Safety cases at AISI | AISI Work

  • ISO, BSI and the AI Standards Hub

Standards are the bread and butter of safety engineers and as such, though dry, are always worth setting aside the time to read through and make notes on.

PD ISO/IEC TR 5469:2024 - Artificial intelligence — Functional safety and AI systems

The Seventh Honest Serving Man - So?

Adding a seventh "man" to Kipling’s six may seem bold, but I think he’d approve. While the first six, Who, What, When, Where, Why, and How, serve to spark curiosity, “So” drives us to act.

So

The pace of AI development is matched only by the ever-evolving information generated around it, both advancing faster than our ability to develop comprehensive safety practices. Staying informed and engaged is essential, followed closely by the act of supporting safety research to close a widening gap.

In the UK especially, regulatory bodies carry a unique responsibility and opportunity due to the pro-innovation approach taken by the UK Government. By engaging with and supporting these bodies, we can help establish a foundation for cross-sector AI best practices and effective governance that benefits everyone.

I’ll end this with Kipling’s ending of his verse:

“But after they have worked for me,
I give them all a rest.”

So, go take a break. Have a coffee, put your feet up. When you return, the work will still be there, even if it is under a different definition than it was at lunch.