3 minute read time.

The European Robotics Forum (ERF) 2026, held in Stavanger, centred on the theme of robotics for the blue economy and growth in space. This theme reflects an increasing convergence between advanced robotics and application domains where reliability, autonomy, and sustainable long-term operation under uncertainty are essential rather than merely desirable.

Across the forum, a broader shift in emphasis was apparent. The discussion is moving beyond demonstrating technical capability towards addressing how robotic systems can be deployed and operated in real-world environments. This shift brings questions of system reliability, safety, and trust to the forefront.
In this context, standardisation is becoming more central.

One of the sessions highlighted ongoing work on ISO/IEC CD TS 22440, which addresses AI in safety-critical systems. The approach presented extends established safety engineering practices to account for the characteristics of AI-enabled components. This includes a structured lifecycle comprising fault analysis, mitigation, testing, statistical performance assessment, and monitoring.

What is notable is the way in which AI introduces new categories of faults. These may arise from data-related issues such as insufficient coverage or distribution drift, from limitations in model design, or from the interaction between the system and its operational environment. As a result, system performance can no longer be treated as fixed: it must be evaluated statistically, with explicit confidence levels and representative datasets.
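To make the statistical framing concrete, here is a minimal sketch of what "performance with explicit confidence levels" can look like in practice: a Wilson score confidence interval for a success rate measured on a representative test set. This is a standard statistical technique chosen for illustration, not a method prescribed by the standard, and the example figures are invented.

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Two-sided Wilson score interval for a binomial success rate.

    z = 1.96 corresponds to a 95% confidence level.
    """
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - half, centre + half

# Hypothetical example: 980 correct detections in 1000 representative trials.
lo, hi = wilson_interval(980, 1000)
print(f"95% confidence interval for success rate: [{lo:.4f}, {hi:.4f}]")
```

The point of reporting the interval rather than the raw 98% figure is that it makes the weight of evidence explicit: with only 1000 trials, the lower bound of the interval, not the observed rate, is what a safety argument can responsibly claim.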

Monitoring and supervision also take on an expanded role. The notion of an AI monitor, capable of detecting performance degradation in real time, alongside supervisory mechanisms that enable human oversight, reflects an understanding that validation does not end at deployment. Instead, it becomes part of a continuous operational process.
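The idea of an AI monitor can be sketched very simply: track recent outcomes in a sliding window and flag the system for human supervision when observed performance falls below an acceptable rate. This is an illustrative toy under assumed names and thresholds, not an implementation from any standard; real monitors would also consider input distribution shift, timing, and sensor health.

```python
from collections import deque

class PerformanceMonitor:
    """Sliding-window monitor that flags degradation in real time.

    Hypothetical sketch: window size and minimum rate are illustrative.
    """

    def __init__(self, window: int = 100, min_rate: float = 0.95):
        self.outcomes = deque(maxlen=window)
        self.min_rate = min_rate

    def record(self, success: bool) -> bool:
        """Record one outcome; return True if supervision is needed."""
        self.outcomes.append(success)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence to judge yet
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate < self.min_rate

# Usage: each perception or control cycle reports pass/fail;
# a True result would hand control to a supervisory mechanism.
monitor = PerformanceMonitor(window=10, min_rate=0.8)
for outcome in [True] * 10 + [False, False, False]:
    needs_supervision = monitor.record(outcome)
print(needs_supervision)
```

Even in this toy form, the structure mirrors the point made in the session: validation does not end at deployment, because the monitor keeps evaluating the same performance claim continuously during operation.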

An important question is how these emerging approaches align with the established safety framework for industrial robotics, in particular ISO 10218-1:2025. This standard defines the safety requirements for industrial robot design and is grounded in principles of predictable behaviour, inherently safe design, and systematic risk reduction. From this perspective, AI-related standards should not be viewed as separate, but as an extension addressing additional sources of uncertainty introduced by data-driven components.

This connection was also reflected in the workshop on Safety in Robotics - Limits and Perspectives, where a recurring point was that safety and certification need to be considered from the inception of a robotic system. Once architectural decisions have been made, later changes can become constrained in ways that are difficult to address without significant redesign. This reinforces the idea that safety is not a validation step applied at the end of development, but a design principle that shapes the system from the outset.

While Europe continues to generate strong research outcomes in robotics and AI, the ability to deploy these systems depends on establishing trust. Standards provide one mechanism for formalising and sharing this trust, making it possible to move from individual demonstrations to repeatable and scalable deployments.

At the same time, the existence of standards alone is not sufficient. Their impact will depend on how they are interpreted and adopted in practice, and on the extent to which they are integrated into industrial development processes. This connects to a wider theme at ERF: the need for closer alignment between research, industry, and investment, so that technical capability, operational needs, and economic incentives evolve together.

The focus on the blue economy and space serves as a useful reminder that many of the most relevant applications for robotics are also among the most demanding. In such contexts, the question is not only whether systems function, but whether they can be relied upon over time, under changing conditions, and at scale. In that sense, the increasing attention to standards is not a peripheral development. It reflects a necessary step in the transition from research-driven innovation towards sustained and reliable deployment.

Blog by Dr. Jelizaveta Konstantinova, IET Robotics and Mechatronics committee member

Join the discussion! 

From your experience, what has been the biggest challenge in applying robotics or AI safety standards in real‑world projects – and how early in the design process do you think these considerations need to come in? Share your thoughts in the comments below.