
Make it safe, make it certain: software tools and safety critical systems
It is the software tools used to design and build safety critical systems that truly keep us all safe…

Andrew Banks & Nick Tudor

Why not join us at our full-day event, taking place on 21st November at Austin Court, Birmingham, to find out more on this topic: Software Tools in the Development of Safety Systems - IET Events (theiet.org)

One of the more unsung aspects of our technologically driven society is the need for software-controlled safety critical systems. Aircraft, manned spaceflight, medical devices and nuclear reactors all rely on safety critical systems, with the software designed and built by software engineers. These systems keep machines and devices doing what they are supposed to do; should they go wrong, there are likely to be fatalities.

More complex than it looks
Developing such software usually requires an awful lot of design and verification, using techniques such as review, testing and analysis. While the software must still be developed and verified by humans, there are tools to help with this. They could be something as simple as an Excel spreadsheet recording test results, with an independent verification tool comparing those results against the expected ones.
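To make that idea concrete, here is a minimal sketch of that kind of independent comparison, written in Python with purely illustrative file names and column headings: the expected results and the recorded results are loaded and every mismatch is reported, rather than relying on someone eyeballing the spreadsheet.

```python
# Hypothetical minimal sketch: compare recorded test results against expected
# results exported from a spreadsheet as CSV. The file names and the
# "test_id"/"result" column layout are illustrative assumptions.
import csv

def load_results(path: str) -> dict[str, str]:
    """Read a CSV of test_id,result rows into a dictionary."""
    with open(path, newline="") as f:
        return {row["test_id"]: row["result"] for row in csv.DictReader(f)}

def compare(expected: dict[str, str], actual: dict[str, str]) -> list[str]:
    """Report every test that is missing, unexpected, or has the wrong result."""
    findings = []
    for test_id, want in expected.items():
        got = actual.get(test_id)
        if got is None:
            findings.append(f"{test_id}: no result recorded (expected {want})")
        elif got != want:
            findings.append(f"{test_id}: expected {want}, got {got}")
    for test_id in actual.keys() - expected.keys():
        findings.append(f"{test_id}: result recorded for a test that was never specified")
    return findings

if __name__ == "__main__":
    issues = compare(load_results("expected_results.csv"),
                     load_results("actual_results.csv"))
    print("\n".join(issues) if issues else "All results match the expected values.")
```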
A good example is a tool called an 'auto-coder'. It takes a diagram, produced in another software tool, and generates code from it. To know that the code satisfies what was in the diagram, we can have a further tool independently (and that is the crucial word) check the auto-coder's output. It verifies that the code is complete, correct and accurate, and that it contains nothing extra that wasn't asked for. That's what we mean by software tools.
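Real auto-coders and their verifiers work on far richer models than anything that fits in a blog post, but the 'complete, correct, nothing extra' idea can be illustrated with a deliberately tiny, hypothetical example: the diagram is modelled as a table of state transitions, and an independent check exercises every state and event to confirm the generated code does exactly what the table says and nothing more.

```python
# Toy illustration only, not a real auto-coder verifier. The "diagram" is a table
# of state transitions; the check independently confirms the generated code is
# complete, correct and adds no behaviour of its own.

DIAGRAM = {                       # (current state, event) -> next state
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
    ("running", "fault"): "safe_shutdown",
}
STATES = {"idle", "running", "safe_shutdown"}
EVENTS = {"start", "stop", "fault"}

def generated_step(state: str, event: str) -> str:
    """Stand-in for the code an auto-coder might emit from the diagram."""
    if state == "idle" and event == "start":
        return "running"
    if state == "running" and event == "stop":
        return "idle"
    if state == "running" and event == "fault":
        return "safe_shutdown"
    return state  # unhandled events leave the state unchanged

def check_against_diagram() -> list[str]:
    """Exercise every (state, event) pair and flag any behaviour that is
    missing from, or additional to, the diagram."""
    findings = []
    for state in STATES:
        for event in EVENTS:
            expected = DIAGRAM.get((state, event), state)
            actual = generated_step(state, event)
            if actual != expected:
                findings.append(f"({state}, {event}) -> {actual}, diagram says {expected}")
    return findings

print(check_against_diagram() or "Generated code matches the diagram exactly.")
```

The point is not the code itself, but that the check is carried out independently of the tool that produced the code.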
Process, process, process

The processes we typically follow are well understood and well defined, and there are commonly adopted industry standards. They do differ slightly, depending on the domain within which we're working: aerospace is slightly different to automotive, but the core ideas are broadly aligned. In both worlds, the most important thing is that when the software being built gets to market it is safe; we need to get it right first time.
There are many examples of software systems (not software, but software systems) going wrong for various reasons, and people have died. That is not to say the software itself has gone wrong. Software only does what we tell it to do; it won't do anything else. If we haven't been clear about what we want it to do, what gets implemented in software can be ambiguous. If we only think of the one case where we want a system to do something, without giving thought to all the other cases where the system must trigger, may need to trigger, or, better still, must not trigger, we haven't done our job as systems engineers or safety engineers very well. What concerns us is how the functionality to be invested in software is defined. That's where we come in, and that is what we mean by safety critical software. It applies in the nuclear domain, in medical devices and in autonomous systems, for example.

Reducing human error

The real bonus of adding software tools into a software life cycle is that it removes the human element. When humans write code or requirements, or undertake design, we are not perfect, and we are not very good at doing the same thing over and over again. Software tools are very good at repeatedly doing exactly the same thing, and in a code review that is just what we want. If we analyse the code, we want to find every fault, and if we run the same analysis twice, we want the same results; that gives us confidence that we have found the same faults. When humans do a code review, we may focus on different areas depending on which day of the week it is, because we get bored very quickly. In code analysis, software tools are brilliant, because they will check hundreds and hundreds of different rules repeatedly.
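As a rough illustration of that repeatability (and not of how any particular commercial tool works), here is a toy analyser in Python: each rule is simply a pattern applied to every line, in the same way, every time it is run.

```python
# Toy analyser, not any real product: two illustrative rules standing in for the
# hundreds a commercial static analysis tool checks. Each rule is applied to
# every line in exactly the same way on every run.
import re

RULES = {
    "no goto statements": re.compile(r"\bgoto\b"),
    "no dynamic memory allocation": re.compile(r"\b(malloc|free)\s*\("),
}

def analyse(source_lines: list[str]) -> list[str]:
    """Apply every rule to every line and report each violation found."""
    findings = []
    for number, line in enumerate(source_lines, start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append(f"line {number}: violates '{rule}': {line.strip()}")
    return findings

example = ["ptr = malloc(64);", "if (err) goto cleanup;", "x = x + 1;"]
# Running the analysis twice gives identical findings - the repeatability a
# human reviewer cannot guarantee.
assert analyse(example) == analyse(example)
print("\n".join(analyse(example)))
```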


…and repeat

If we do something that is automated, or better still automatic, then we've got that absolute repeatability, time after time after time. When we run a test today and run the same test on Monday, we want the results to be the same, so that we can have confidence the code is still functioning. Humans cannot achieve that level of repeatability or scale. We can set up a test run overnight, go to bed, and when we return in the morning be confident that it has done X thousand tests and see the results. However, we do need to qualify the tools, to show that we know exactly what we expect them to do and that they do exactly that. From a simplistic viewpoint, it's the repeatability, the accuracy and the consistency which are probably the three most important features that automated tools bring to the mix.
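A simple sketch of that overnight-run idea, with hypothetical test names and file formats, might look like this: run the suite, keep the first run as a baseline, and on every later run flag any test whose outcome has changed.

```python
# Hypothetical sketch of an overnight regression run: execute the suite, keep the
# first run as a baseline, and flag any test whose outcome later changes.
# Test names, the JSON format and the file name are all illustrative assumptions.
import json
from pathlib import Path

def run_suite() -> dict[str, str]:
    """Stand-in for running thousands of automated tests; returns pass/fail per test."""
    return {f"test_{i:04d}": "pass" for i in range(1, 5001)}

def compare_with_baseline(results: dict[str, str], baseline_path: Path) -> dict:
    """Return every test whose outcome differs from the recorded baseline."""
    if not baseline_path.exists():
        baseline_path.write_text(json.dumps(results))  # first run becomes the baseline
        return {}
    baseline = json.loads(baseline_path.read_text())
    return {t: (baseline.get(t), r) for t, r in results.items() if baseline.get(t) != r}

if __name__ == "__main__":
    changed = compare_with_baseline(run_suite(), Path("baseline_results.json"))
    print(f"{len(changed)} tests changed outcome since the baseline run")
```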
-----------------------------------------------------

What more can be done to make safety critical systems even safer? To what degree is it possible or desirable to remove human input altogether? Tell us what you think in the comments below!

-----------------------------------------------------

Andrew Banks is a Technical Specialist at LDRA Limited, which develops software tools to automate code analysis and software testing for safety-, mission-, security-, and business-critical systems.

Nick Tudor is CEO of D-RisQ, a UK high-tech company specialising in software and systems verification technologies.

Event details:  Software Tools in the Development of Safety Systems - IET Events (theiet.org)
