
What do you consider a "sample" to mean during an EICR?

I’m interested to hear people’s opinions on how they approach an EICR with regard to sampling. I’m asking because I was recently reviewing a couple of domestic EICRs for a client and raised a couple of questions, one being that test results were only recorded for two of the six circuits. The response was that they were employed only to carry out a 20% sample. Personally I’ve always considered a 20% sample to mean that all circuits should be tested, but only 20% of the accessories connected to them will be fully inspected and tested. I’ve also always thought that, when carrying out an EICR for the purposes of private lettings, this practice is only an option when the previous records are available, and that if you do choose to carry out a small sample you’d be likely to widen it if you found any C1s or C2s. What are everyone’s thoughts here? How does the community approach EICRs?

I was just surprised to see an unsatisfactory report where the sample hadn’t been widened and where four circuits had no test results recorded, not even insulation resistance; it’s so quick to get IR results on a single-phase board.

  • Although that presumably applies to circumstances where each test is independent. In real-life domestic situations, the condition of the wiring and accessories is likely to be highly correlated - all old or all new, all done by someone (in)competent, etc. So if a few all pass, it's more likely that all are good than would be expected if every socket had been independently installed.

  • Although that presumably applies to circumstances where each test is independent.

    Well, yes ... and no ... what it means is that you need to be careful about what, and how, you sample.

    So, in a dwelling, a 20 % sample inspection of accessories on each circuit with more than 10 accessories (100 % otherwise), plus a 100 % test of insulation resistance and protective conductor continuity, might be considered OK; but not "20 % inspection and test", or indeed "20 % test", even if a different 20 % is tested from the 20 % inspection sample.

    But it's still important to note that, doing this, the socket-outlet behind the washing machine might never get sample-inspected ... at least unless you record what has been sampled each time, and the next inspector has the records ...

  • Although that presumably applies to circumstances where each test is independent.

    Well, yes ... and no ... what it means, is you need to be careful what and how you sample.

    I have got by in my working life knowing just enough about statistics.

    A minimum population size of 100 for sampling seems very arbitrary.

    I fully agree with ww - I&T of single items will only be reasonably independent if samples are taken at random and I think that this just isn't going to happen. The important thing is to avoid bias, which may be more or less obvious.

    So perhaps you just "sample" sockets below 6 ft. It might be that a shorter electrician did them and somebody else did the higher ones, and if the shorter electrician's workmanship was the best, now we have bias.

    Only a 100% "sample" can be 100% confident, but if say 9 out of 10 sockets pass, the maths will tell you how confident you can be that the whole of (the rest of) the installation is sound.

    In the real world, I suspect that it is reasonable to conclude that if the first dozen accessories show excellent workmanship, the whole installation will be sound. By contrast, if the first half dozen are carp, you may as well conclude the EICR and recommend a total rewire.

    For a ring, IMHO you should test all the sockets - it is quick and easy; but for a radial, if the end of the line is sound, you have probably demonstrated that the Zs and polarity are sound throughout.
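    To put a rough number on "the maths will tell you how confident you can be": a minimal sketch in Python, assuming (optimistically, given the correlation points raised above) that each accessory fails independently with the same probability. The sample size of 10 and the 10 % defect rate are hypothetical figures for illustration only.

```python
from math import comb

def prob_at_most_k_fail(n, k, p):
    """Probability of seeing at most k failures in a sample of n items,
    if each item independently fails with probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# If 10 % of accessories were actually defective, how often would a
# 10-item sample still show 9 or 10 passes (i.e. at most 1 failure)?
print(round(prob_at_most_k_fail(10, 1, 0.10), 3))  # 0.736
```

    On those made-up numbers, a near-clean 10-item sample is quite compatible with 1 accessory in 10 being faulty - which is exactly the "small sample, limited confidence" point.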

  • Only a 100% "sample" can be 100% confident, but if say 9 out of 10 sockets pass, the maths will tell you how confident you can be that the whole of (the rest of) the installation is sound.

    That's the point: with such a small sample size, and 9 out of 10 "passing", the maths doesn't provide confidence that 90 % of the installation is OK (or, conversely, that only 10 % of the installation is potentially dangerous). But by the time the population size is 100 or more, and you sample 90 % of those, the confidence that the installation is sound increases.

    However, inspecting 5 out of 10 sockets internally (checking terminals are tight etc.) and testing them all, or testing a larger sample, say 7, that includes the ones you didn't inspect, helps improve the confidence. So, effectively, your inspection and test regime achieves 100 % coverage based on the samples you inspect and test, if the "population size" is less than 100.

    I think Lyle's earlier post sums up how he achieves this sort of approach (with an emphasis on inspection, which I agree is key).
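    The small-population point can be sketched the same way. Below is a hypergeometric calculation (sampling without replacement, which is what actually happens when you pick accessories in one dwelling); the figures of 50 accessories and 5 faults are made up purely for illustration.

```python
from math import comb

def prob_clean_sample(population, defects, sample):
    """Chance that a random sample contains none of the defective items
    (sampling without replacement - the hypergeometric distribution)."""
    return comb(population - defects, sample) / comb(population, sample)

# 50 accessories, 5 of them faulty (10 %): a 10-item sample still
# comes back completely clean about 31 % of the time.
print(round(prob_clean_sample(50, 5, 10), 3))  # 0.311
```

    In other words, with a small population even a clean 20 % sample leaves a substantial chance of having missed every fault, which is why 100 % coverage of the critical tests is attractive when the population is under 100.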

  • Usually the CU has numerous connections that I consider to be loose when checking all the tightness, and this says one thing: the installer was not careful to tighten properly, so every screw is suspect.

    Or it says that terminals with single screws can and do become loose over time for various very valid mechanical reasons - but isn't that a further argument for 100 % inspection?

    Other than that pedantic point, I think you have given a very good precis of an approach that is likely to pick up most issues.

  • Two or three centuries ago, whilst at college, I remember doing all sorts of equations for sampling and significance tests etc. I do remember it's often not as you might first think, and there are many considerations - and that's with production runs. So if you add into the mix random things like, for example, who did this wiring, who added to it, who added again, etc., then it becomes quite kerfuffled.

    More recently I was reminded of it when I saw an article about Bayes' theorem in relation to diseases and vaccines, and again it reminded me of the stuff I once knew a bit about.

    So when I or anyone else think about sampling, we might be quite wrong, or sometimes "just a bit right".

  • There is an assumption about the independence or not of the installation events - so if you know that each socket in a room was fitted by a different team member then finding one bad one tells you very little about the others. The events are about as independent as they could be.

    If however you know they were done by the same chap on the same day, then if he has used wood screws and pasta instead of the proper bolts, and earth sleeving on one, then it is likely he has on the others too - there is a correlation. Equally, if one or two are neat and tidy, they probably all are.

    In reality, sampling theory is all very well, but the rules that apply to yields and tolerances on parts made on the same machine in the same factory are nothing like as reliable when applied to wiring installed and then added to by persons unknown, perhaps in many phases, with different workers and different levels of pressure to rush it on the cheap or to do a good job.

    So how do we estimate the correlation? Well, if there has been an obvious new extension, that was probably all done at once, especially if all the switches are the same pattern and the sockets are all the same style; but rooms with a mix of styles of fittings and heights indicate poor correlation.

    New does not mean better or worse but perhaps it does mean you only need to sample a few if they look well correlated.

    The problem is the need for having seen a few of each kind to know which category it should go in. 

    I agree that dismantling can damage as much as it prevents in some cases - so an earth test to the exposed screws or the earth pin may be not only faster but safer than a full tear-down and R1 + R2. Also, a global L+N to E insulation test at 250 V may save the embarrassment of killing some prize item that wasn't isolated, and still finds 99% of the real cable damage etc. that a 500 V wire-by-wire test would have found. Casting a wider net of looser tests, and a very good look and sniff, will probably find more than a smaller percentage of total dismantling.

    mike

  • If however you know they were done by the same chap on the same day, then if he has used wood screws and pasta instead of the proper bolts, and earth sleeving on one, then it is likely he has on the others too - there is a correlation. Equally, if one or two are neat and tidy, they probably all are.

    This is a load of rubbish if you consider that a dwelling (which we know usually falls outside the "sampling theory" statistics) may have been tampered with by anyone following installation - including DIY.


    There is an assumption about the independence or not of the installation events - so if you know that each socket in a room was fitted by a different team member then finding one bad one tells you very little about the others. The events are about as independent as they could be.

    Yes, that is exactly why the "sampling theorem" is important.

    In reality sampling theory is all very well  but the rules that apply to yields and tolerances on parts made on the same machine in the same factory are not anything like as reliable applied to the case of wiring installed and then added to by persons unknown, perhaps in many phases with different workers and different levels of pressure to rush it on the cheap or to do a good job.

    But I think in many installations, we are in the position of the latter case?

    So how to estimate the correlation ?- well if there has been an obvious new extension, that was probably all done at once, especially if all the switches are the same pattern and the sockets are all the same style, but rooms with a mix of styles of fittings and heights indicate poor correlation.

    That is based on the assumption that it was all OK on "day 1", and I agree it's not that simple .. but the statistics won't help here one way or the other.

  • We already know that the drive-by EICR is common everyday practice for a significant number of contractors, and that a substantial number of people going to site are not qualified or competent to carry out inspection and testing.

    I have written a standard specification that sets out a MINIMUM level of inspection and testing to verify the objective of Regulation 651.1. It also sets out the competency requirement for the inspector and the report format together with an insurance requirement.

    It is useful for clients to specify what they want done and more useful for the decent honest contractor to specify what they are going to do rather than the cowboy who will have no interest in reforming their way.

    It is free issue to anyone who wants it by emailing me on info(the symbol for at)astutetechnicalservices.co.uk. And no I am not looking for work or any financial gain out of supplying this document.

  • If however you know they were done by the same chap on the same day, then if he has used wood screws and pasta instead of the proper bolts, and earth sleeving on one, then it is likely he has on the others too - there is a correlation. Equally, if one or two are neat and tidy, they probably all are.

    This is a load of rubbish if you consider that a dwelling (which we know usually falls outside the "sampling theory" statistics) may have been tampered with by anyone following installation - including DIY.

    Well, if you toss a coin 6 times and it comes up heads each time, you might want to turn it over to see whether it is a proper coin (about a 1.6 % chance of six heads in a row) or a two-headed one.

    If it is a genuine one, then of course the chance of heads next time is 50%; but even then we cannot be sure because the way the coin is tossed may be biased.

    In practical terms, I tend to agree with Mike. If all of the workmanship that you have inspected is good, it is likely that the rest will be, and based upon your population and sample sizes one can calculate the degree of confidence. Then of course you have to decide whether balance of probabilities is good enough, or whether you want to be beyond reasonable doubt.
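    For what it's worth, the coin-toss figure and a common shortcut for that "degree of confidence" can be checked in a couple of lines. The "rule of three" below is a standard statistical approximation, not anything from the report forms, and the sample of 20 accessories is hypothetical.

```python
# Chance of a fair coin coming up heads six times in a row
p_six_heads = 0.5 ** 6
print(p_six_heads)  # 0.015625, i.e. about 1.6 %

# "Rule of three": if all n sampled items pass, an approximate 95 %
# upper confidence bound on the true defect rate is 3/n.
n = 20  # hypothetical: 20 accessories inspected and tested, all fine
upper_bound = 3 / n
print(upper_bound)  # 0.15 -> the true defect rate could still be ~15 %
```

    Even a fully clean sample of 20 only gets you to "probably sound", which is the balance-of-probabilities versus beyond-reasonable-doubt trade-off in practice.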