
Not testing RCDs at x1 is omitting an essential test

Hi all


Following the last two weekends' posts about RCD testing and trip times, in which I learnt a few things I would never otherwise have known, as they are not documented in most tester manuals, a few more thoughts have come up.


On the Hager site, where they have "updated guidance on testing" their 30mA RCDs at 250mA, they state two things that are wrong.

The same mistake has been made in two videos as well.


They state that if you don't have a tester with a VAR setting that can be set to 50mA at x5 to give 250mA, then you can use the 300mA setting at x1.

This is wrong. As I've found out over the last two weekends, the tester does an unseen pretest before the main test. At 300mA x1 it will pretest at about half of 300mA, trip the RCD with the display showing "trp", and abort the test.
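To put numbers on that, here's a minimal sketch (the ~0.4 pretest fraction is an assumption consistent with the 55mA/60mA observations at the end of this post; the tester's real firmware behaviour is unknown):

```python
# Rough model of the tester's unseen pretest. The exact fraction is not
# documented; ~0.4 is an assumption consistent with the 55mA/60mA
# observations later in this post.
PRETEST_FRACTION = 0.4

# A healthy 30mA RCD may actually trip from ~22mA upwards (ramp-test
# result quoted later in this post).
RCD_TRIP_MA = 22.0

def pretest_ma(setting_ma: float) -> float:
    """Approximate current the pretest injects for a given dial setting."""
    return setting_ma * PRETEST_FRACTION

for setting, mult in [(50, 5), (300, 1)]:
    i = pretest_ma(setting)
    outcome = "trips the RCD, test aborts" if i > RCD_TRIP_MA else "passes, main test runs"
    print(f"{setting}mA x{mult} (main test {setting * mult}mA): pretest ~{i:.0f}mA -> {outcome}")
```

On that model the 50mA x5 route delivers the same 250mA main test, but its pretest stays below the RCD's actual trip point, so the test can run.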



They also state "The x1 test is no longer a requirement but could of course be carried out".

I can't find anywhere that states it is no longer a requirement.


Regulation 643.8 requires that the instrument used complies with BS EN 61557-6.

There is a ‘Note’ to this regulation, but Notes to Regulations only provide guidance and are not themselves regulations.

The Note says: “Effectiveness is deemed to have been verified where an RCD meeting the requirements of Regulation 415.1.1 disconnects within 40 ms when tested at a current equal to or higher than five times its rated residual operating current”.

Is this Note stating that the x1 test doesn't need to be done, or is it being misinterpreted?


On the new test forms there is no longer a column for x1.

The other sparks I work with now only do x5 tests, unless doing an MWC (Minor Works Certificate), where the form still has an x1 entry. However, I still do all the tests.

A 30mA RCD is supposed to trip when 30mA is detected. How are you going to know whether it does that if you don't do an x1 test?

I tested one this week that passed x5 at 16.9ms but failed x1 with >300ms. When I ramp-tested it, it tripped at 75mA.

This proves that it needs to be tested at x1 as well, especially when used for additional protection, as it must trip at 30mA flowing through the human body, not at the 75mA it ramp-tested at.


Also, as someone pointed out on another post, if someone mistakenly installed a 100mA (non-delayed) unit instead of a 30mA one - chances are it would pass if only subjected to a 40ms/150mA test - yet it would hardly provide adequate additional protection.
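A minimal sketch of both failure modes, crudely assuming a device simply trips whenever the injected current reaches its actual trip threshold (trip times ignored):

```python
# Sketch: why an x5-only regime can pass devices that fail at rated current.
# Crude model: a device trips iff the injected current reaches its actual
# trip threshold.

def trips(test_ma: float, threshold_ma: float) -> bool:
    return test_ma >= threshold_ma

RATED_MA = 30.0  # what the installation needs for additional protection

cases = {
    "degraded 30mA RCD (ramp-trips at 75mA)": 75.0,
    "100mA unit fitted by mistake": 100.0,
}

for name, threshold in cases.items():
    x1 = trips(RATED_MA, threshold)       # 30mA
    x5 = trips(RATED_MA * 5, threshold)   # 150mA
    print(f"{name}: x1 {'pass' if x1 else 'FAIL'}, x5 {'pass' if x5 else 'FAIL'}")
```

Both devices sail through the 150mA test, yet neither provides 30mA additional protection - exactly what the x1 test would catch.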



As a side note, and for the information of those who replied to my post about this pretest setting of half the selected current:

I don't think it's a half-current pretest.

I have tried the VAR setting of 50mA x5 and it works. However, if it did pretest at half current, then the 30mA RCD would have tripped at 25mA, as that is over the ramp-test results of 22mA at 0° and 24mA at 180°.

It even worked at 55mA without tripping, and that would have been 27.5mA if the pretest were half.

It did trip when set at 60mA, displaying "trp", so it must have pretested at over the 22/24mA trip point.

Therefore I think this pretest current is somewhat less than half.

Too knackered after today's work to try to work out what the likely percentage of pretest current is, but I bet some here will be able to.
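For anyone who does want to work it out, a minimal sketch of the bounds those observations imply (assuming a fixed-fraction pretest and taking the 22mA ramp result as the trip point):

```python
# Bounds on the pretest fraction implied by the observations above,
# assuming the pretest injects a fixed fraction of the dialled setting
# and the RCD trips at its ramp-test result of ~22mA.
RAMP_TRIP_MA = 22.0

NO_TRIP_SETTING_MA = 55.0  # highest setting observed NOT to trip on pretest
TRIP_SETTING_MA = 60.0     # lowest setting observed to trip ("trp")

lower = RAMP_TRIP_MA / TRIP_SETTING_MA     # fraction must reach 22mA at 60mA
upper = RAMP_TRIP_MA / NO_TRIP_SETTING_MA  # but stay below 22mA at 55mA

print(f"pretest fraction between ~{lower:.0%} and ~{upper:.0%}")
# -> pretest fraction between ~37% and ~40%, i.e. somewhat less than half
```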


Any thoughts on this?



  • As I've discussed, the x5 test is required because we don't have an x2 test, and I'm not (personally) satisfied that passing the x1 test alone is OK, as I've had x5 tests with longer trip times than the x1 test.

    If some RCDs get slower with increasing current, how can testing give us confidence that the RCD will trip within 0.2s or whatever under actual ADS conditions, given that an L-PE fault of negligible impedance will likely cause a residual current of several amps? Or indeed for additional protection, when the actual shock current is very unlikely to be exactly 30mA or 150mA (or 250mA)?


    Doesn't there come a point where we must admit that we can't just "test-in" quality, but need to rely on correct RCD operation across the range having been "designed-in" and "built-in", with testing only needing to provide some reassurance that the individual unit hasn't been significantly damaged?


    Nor should we let perfection become the enemy of the good - I think we need to admit that we can't practically test for all possible situations on site, so we shouldn't even be aiming to "prove" correct operation under all conditions, when all we can really achieve is to show that it's "reasonably likely" that the RCD will behave as required.


    We don't normally take micrometers to wires to double-check that conductor sizes are as they should be, nor test MCBs or fuses on site, or disassemble isolators to check for 3mm contact clearances - are we really taking a proportionate position with RCDs? How many RCDs that appear to trip OK on the T button are really unsafe? (And I'm not thinking that this needs to be quite zero, either.) Also keep in mind that no testing can prove how the RCD will behave - it can only examine how it behaves at the moment of the test - it may behave quite differently next year, next month, or even the next day. We can't eliminate all risk - we should only be trying to limit it.


       - Andy.