Guidance on safe design of non-safety critical systems

Here's a very good question I've just been asked. Those of us who work on safety-critical electronic / programmable designs have nice clear guidance in the form of IEC 61508 and its derived standards on managing safe design. But what guidance is there for designers of electronic / programmable systems whose core function is NOT safety related (so applying 61508 etc. would be over the top), but which could still create some sort of hazard if not properly designed? Those designers should probably be applying a safety V lifecycle at least as a thought process, to make sure they've thought about the standards that should be applied and any additional safety requirements needed for residual risks. But how do they know that?

So taking e.g. computer monitor design (just because that's what I'm looking at right now!): if you're designing a new type of monitor, is there a guidance document that takes you through the process of thinking "what hazards might there be? Electric shock, EMC, glare" (I'm making this up, by the way, so don't shoot me if the example's slightly wrong!), "OK, we've got standards for the first two but not for the third, so what requirements should we set for that?", and finally "OK, the design's done; have we met those safety requirements?"

Does anyone know of any good guidance documents etc? Particularly for systems involving software.
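For what it's worth, the "safety V lifecycle as a thought process" idea could be summarised roughly like this. This is a minimal sketch in my own words, not drawn from any single standard, and the stage names are paraphrased:

```python
# A rough sketch (my own summary, not taken from any one standard) of the
# safety V lifecycle: each left-hand development stage is paired with a
# right-hand verification/validation stage that confirms it was satisfied.
V_MODEL = [
    ("hazard identification & safety requirements", "safety validation"),
    ("architecture / system design",                "system integration testing"),
    ("detailed design & implementation",            "unit / module testing"),
]

for develop, verify in V_MODEL:
    print(f"{develop:<45} <-> {verify}")
```

The point of the pairing is that each design-side decision has a matching check on the way back up the V, so nothing identified on the left can be quietly forgotten on the right.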

    tagging you as I thought you might have some good ideas.

I can appreciate the point of view of the person who originally asked this question (they asked a friend, who asked me). I spent the first 10 years of my career designing non-safety-critical systems and the next 30 working on safety-critical systems, and there's a lot I could now tell my 25-year-old self!

Thanks,

Andy

  • An interesting question. The first thought is: what is not safety critical? This often comes up in FMEAs.

    The cable connecting an ABS sensor is apparently safety critical, but if it has plausibility monitoring as part of the system then the cable performance is not so relevant.

    The cable connecting the electric seat adjuster switch to the motor is safety critical: if it shorts and moves the seat away from the pedals and steering wheel, you will lose control of the vehicle.

    Is the cable supplying power to the light in the vanity mirror safety critical? This is again a system question. The vanity mirror not being lit is unlikely to increase danger, but a short circuit that catches fire and fills the car with smoke does increase danger. If the system includes the correct overcurrent protection this risk is removed. Modern cars seem to have a plethora of fuses compared to the one or two in older vehicles.

    Taking your example of a computer monitor, what is safety critical? There are the obvious electrical safety hazards. I doubt it is a CRT, so ionising radiation can be ignored. The actual function may be safety critical if it is used for monitoring or controlling a hazardous system. If the system designers have done their job properly there should be some redundancy, but if they have assigned 99% uptime to your monitor, will it achieve this?

    I think that the key point is good risk assessment (easy to write about, hard to achieve in practice). This has to include information about the end use if known. In the other direction if you buy a reel of our cable from a distributor your risk assessment has to determine if the cable is safety critical in your system.

    I know this is rather cable related, but that’s part of my day job. The other part is radiation safety systems which are definitely safety critical.

  • I'm most familiar with RTCA DO-178C, the safety standard for software in aircraft. That defines a number of safety levels, from level A (if it goes wrong, people will probably die) down to level D (it will inconvenience the flight crew, increasing the probability of a mistake). Level E is not safety critical at all. The amount of "ceremony" increases as you go up the levels.

    Level D should be meetable by good quality software that's been formally tested against a set of requirements.  Level A requires extensive testing and many reviews of requirements, designs, code and all the other work products.
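    As a sketch of the idea (the level descriptions here are my paraphrase from memory, not the standard's wording, so check DO-178C itself before relying on them):

    ```python
    # Paraphrased summary of the DO-178C design assurance levels described
    # above; the wording is mine, not quoted from the standard.
    DO178C_LEVELS = {
        "A": "Catastrophic - failure could cause loss of the aircraft",
        "B": "Hazardous - failure seriously reduces safety margins",
        "C": "Major - failure significantly reduces safety margins",
        "D": "Minor - failure slightly increases crew workload",
        "E": "No effect - no impact on safety or crew workload",
    }

    def ceremony_rank(level):
        """Rough rank of verification 'ceremony': 0 for level E up to 4 for A."""
        return ["E", "D", "C", "B", "A"].index(level)

    assert ceremony_rank("A") > ceremony_rank("D")  # level A needs far more rigour
    ```

    The useful point for this thread is that the framework explicitly includes a "not safety critical at all" level, so the classification decision itself is part of the process.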

  • Good point - and as an ISA it's one I get involved with quite a lot, as clients (quite reasonably) want to find reasons not to a) produce loads of documentation and b) then pay us loads of money to assess it!

    There's a somewhat wordy (of course) description in 61508 part 4, 3.4.1, but I'd say most of us would take the guideline that it's a system where a failure of its primary function in a certain way can result in a hazardous condition. So a railway signalling system (my background) is clearly safety critical / safety related, as its purpose is to stop trains crashing into each other, whereas for my somewhat trivial example of a computer monitor, its primary purpose is to display information. (That loss of information could itself be safety critical, e.g. in an alarm system, but then it's actually the alarm system you're developing, and you'd know you've got issues to manage.)

    But yes Roger, I think you've very clearly described the type of thought processes developers need to work through. The question is how the new designer - who may even be working by themselves in a one-person company - knows that. We need something between (in the UK) HASAWA blandly saying "do a risk assessment" and the 7 volumes of 61508, which most engineers are not going to read every day before breakfast!

  • Hi Simon - similarly, in the rail sector we have EN 50126/8/9, which do the same. However, what we regularly find (although this wasn't the example that started the question, which I believe came from another industry *) is that there are ancillary systems where these don't, or may not need to, be applied, even at the lowest level (Basic Integrity in the case of those standards). A borderline one is ventilation systems: if they are purely there to maintain a comfortable temperature, the supplier may not (rightly or wrongly) be asked to develop them to the standards; if they are for smoke removal to allow safe evacuation during a fire (e.g. on the Elizabeth Line), and therefore have a clear "safety" function, they should be.

    But probably a better example is the type of equipment which we come across in our daily life and in our homes, where we'd like to feel the developers knew how to think about hazards in the same way that those who do have to work to aerospace, rail, etc standards do.

    * My friend wouldn't tell me what the actual application was that prompted the question; it may be an "if I told you I'd have to shoot you" situation 🙂

  • The much maligned CE mark was meant to address the safety issues for consumer products. I am only familiar with the Machinery Directive but I assume (always dangerous) that a similar approach was used for the other directives. This attempted to list all the likely risks and required them to be discounted or addressed.

    In general this raises the question who can design (and market) something. To achieve CEng you are required to have a measure of experience. Is this sufficient? Do we move towards ‘licenced’ engineers who can sign off designs?

    Software functionality is another complicated area. It can never be better than the original specification, and is usually worse. In safety-critical software systems, attempts are made to validate and test the critical functions. For supposedly non-critical systems, the tester is usually the end user.

    I am lucky that all my high-risk systems are safe when the power is removed; I don't have to maintain control as in an aircraft or chemical plant.

  • CE marking addresses specific safety risks (for E/E/PE equipment, electric shock being the main one), but nothing else - so e.g. software switching things on and off when it shouldn't wouldn't be covered by general CE marking (it might be for specific risks for specific items).

    We'd normally say the process should be: identify the hazards, decide which of these are controlled by standards (which could be CE marking and the standards that support it), and then if there are any risks left, write some specific requirements to manage those. So the requirement for CE marking is almost an output of the process rather than an input.
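    That process can be sketched very simply. The hazards and standards below are just the made-up monitor examples from earlier in the thread, used to show the triage step:

    ```python
    # A minimal sketch of the process described above: list the hazards, note
    # which are covered by an existing standard, and flag the rest as needing
    # project-specific safety requirements. Hazard names are illustrative only.
    def triage_hazards(hazards):
        """hazards: dict mapping hazard name -> covering standard, or None."""
        covered = {h: std for h, std in hazards.items() if std is not None}
        residual = [h for h, std in hazards.items() if std is None]
        return covered, residual

    monitor_hazards = {
        "electric shock": "LVD / CE marking",
        "EMC": "EMC Directive",
        "glare": None,  # no obvious standard: write a specific requirement
    }
    covered, residual = triage_hazards(monitor_hazards)
    print("Covered by standards:", sorted(covered))
    print("Need specific requirements:", residual)  # ['glare']
    ```

    The design choice worth noting is that "covered by a standard" is recorded per hazard, not per product, which is exactly why CE marking comes out as an output of the assessment rather than an input to it.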

    I recently had to explain at some length to a manufacturer who really should have known better that showing their product was CE compliant didn't necessarily mean it didn't present any risks...and therefore didn't necessarily prove they'd done enough. But it all depends what the risks are: if the only risk is electric shock (or crush etc. injuries in the case of the Machinery Directive) then using CE marking is a reasonable approach. But it is actually still the same thought process: "What are the risks?" "Electric shock." "Is that all?" "Yes." "How shall we manage it?" "CE marking."

    I wonder if the safety V lifecycle is taught in standard university engineering degrees?

    To achieve CEng you are required to have a measure of experience. Is this sufficient? Do we move towards ‘licenced’ engineers who can sign off designs?

    There are CEngs around who don't know about the safety V lifecycle - actually I was one when I got mine. (But then I got mine before 61508 etc. were published.) I think the point is that it's an approach that even the most experienced engineer who hasn't worked on safety-critical systems may (perfectly reasonably) not have come across, but would find useful if they knew about it.

    And then there are those who theoretically know about it but still dive straight into the design without getting their hazards and safety requirements straight first - but that's another problem. And again, I've certainly been guilty of that myself in the past!

  • "I assume (always dangerous) that a similar approach was used for the other directives. This attempted to list all the likely risks and required them to be discounted or addressed."

    Your mileage may vary. The person making the CE declaration must assure themselves that the essential requirements of all relevant directives are met, and be prepared to justify their signature. This may be the designer, for one-man-in-a-shed type businesses building audio amps into metal cases, basing the declaration on a mixture of test results and inspection; or it may be a director at the top of a large corporation, acting on the advice of the relevant project manager, in turn backed up by a design authority - which may be someone who works there or a subcontractor. The actual technical rigour in deciding which standards actually apply, and how to verify their requirements are being met, is very much a moveable feast.

    Mike

  • @AndyMillar Have you looked at what the ISA group have done (https://www.theiet.org/impact-society/factfiles/isa-factfiles/what-is-isa/) or perhaps Codes & Guidance (https://www.theiet.org/publishing/iet-codes-and-guidance/)? I'm thinking aloud so apologies if you've already noted this, or indeed if I'm going off at a tangent ...

  • Hi Andrew,

    Yes, however because we're talking about everyday products where an ISA wouldn't be appointed, the ISA group don't really consider it. (It's ironic that safety-related products, where you need an ISA, are easy, because they are so dangerous you have to follow a strict process!) I don't think there's anything in the guidance; I've also looked through https://www.theiet.org/impact-society/factfiles/engineering-safety-factfiles/ - we seem to cover safety-critical systems reasonably well, and health and safety in the workplace well, but don't seem to touch on safe design of non-safety-critical systems. Maybe my friend and their friend have spotted a gap in the market.

    Thanks,

    Andy

  • Hi Andy

    The safe design of non-safety critical systems won't be one for the Policy & Insight group.  But what about the System Safety Technical Network?

    Andrew