
Ipf Measurement on a 3-phase system

Hello All,
While checking an EIC and comparing design values against measured values, a few fundamental questions have been puzzling me, and I would be grateful for any advice or comments.
The installation is an embedded LV generator connected to the site 11kV ring via a 1.5MVA transformer. It’s a TN-S system with the N-E link at the transformer. The cables from the generator to the breaker panel are L=3x300, N=2x300, PE=1x240. The breaker panel is considered the point of connection and where Ipf and Ze were measured; Ipf=8.58kA and Ze=0.02Ohms.
  1. The calculated 3-phase symmetrical fault at the breaker panel (not including the generator contribution) was ~34kA (assuming a 250MVA fault level at 11kV), i.e. significantly higher than the measured Ipf. This led me to think that the actual fault level at 11kV must be much lower than 250MVA. On reflection, I’m thinking that the Ipf measurement is, however, a worst-case measurement, as the meter only measures the impedance on the LV side of the transformer and the downstream cables i.e. assumes an ‘infinite source’ on the 11kV side, so the measured Ipf should be much higher than 8.58kA? (My working for the ~34kA is sketched at the end of this post.)

  • Maybe the Ipf needs to be multiplied by 2, as the measurement was with a 1-phase meter? The On-Site Guide states that ‘For three-phase supplies, the maximum possible fault level will be approximately twice the single-phase to neutral value.’ Thinking about this multiply by 2 (a round-up of 1.732): while this may be an acceptable approximation for domestic installations, I don’t think it is for an installation like this. Simply doubling the measured L-N value assumes that the L and N impedances are the same (they are not) and doesn’t allow for the additional transformer winding impedance for a phase-phase fault. My understanding is also that a 3-phase (symmetrical) fault is effectively a single-phase calculation, so I doubt the accuracy of this x2 factor in this case.

  • Could the discrepancies be due to meter inaccuracies at these low impedance readings? e.g. the difference between a Ze reading of 0.01 ohms and 0.02 ohms has a significant impact on Ipf. Should the contractor be using a more specialist meter?
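
For reference, my working for the ~34kA figure, and the sensitivity of Ipf to the last digit of the Ze reading, is sketched roughly below (the 5% transformer impedance and the ~0.06 mΩ/m for a 300mm² copper core are assumptions on my part, not nameplate or cable data):

```python
# Rough working for the ~34kA figure (assumed values, not nameplate data)
U_LL = 400.0           # nominal LV line-line voltage, V
U_0 = 230.0            # nominal LV phase voltage, V
S_FAULT_HV = 250e6     # assumed 11kV fault level, VA
S_TX = 1.5e6           # transformer rating, VA
Z_TX_PCT = 5.0         # assumed transformer percentage impedance

# Per-phase impedances referred to the LV side (cable reactance ignored)
z_source = U_LL**2 / S_FAULT_HV                 # ~0.64 milliohm
z_tx = (Z_TX_PCT / 100.0) * U_LL**2 / S_TX      # ~5.33 milliohm
z_cable = 0.06e-3 * 40 / 3                      # 40m of 3x300 in parallel, ~0.8 milliohm

i_3ph = U_0 / (z_source + z_tx + z_cable)
print(f"3-phase symmetrical fault ~ {i_3ph/1e3:.0f} kA")       # ~34 kA

# Sensitivity of Ipf to the last digit of the Ze reading
for ze in (0.01, 0.02):
    print(f"Ze = {ze:.2f} ohm -> Ipf ~ {U_0/ze/1e3:.1f} kA")   # 23.0 kA vs 11.5 kA
```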

Thanks.

  • Well the answer could be 'all of the above' though that is not much help.

    I will try and simplify the question, to make sure I have it right and am not making some silly assumptions.

    First assumption.


    You are interested in the fault levels on the LV side of the transformer, measured when the genset is not running in parallel with the supply - so the feed is from the 11kV side, and its 250MVA fault level is attenuated via the regulation (effective series inductance and resistance) of the transformer - perhaps 2% or 5% or whatever it is at full load.

    Is the 250MVA a DNO figure or a guessed PIDOOMA figure? Even if the 11kV line is very 'stiff' (i.e. totally non-droopy), the effect of the TX will be to limit the fault level on the load side to 20-40 times the transformer kVA rating - some 30-60MVA might be more credible on the LV side (rough numbers at the end of this post).


    You have measured a single-phase L-N fault loop at the panel end of the cables - but the genset and its cables are out of the equation then. How much cable is there between the panel and the transformer secondary?

    IF the cables dominate, then maybe, as you have a reduced neutral, the doubling rule will slightly underestimate the 3-phase bolted fault current - but your cable sizes are not that different.


    Yes - these sorts of impedances are at the end of the meter's range where the fruit-machine nature of the least significant digits starts to matter, and so does how you bolt the meter to the point of measurement - conventional probes and leads are no longer to be trusted. (On test house kit a 4-wire connection is often used so the voltage-reading part is not upset by the voltage drop in the current-drawing part - the probe 'leads' may actually be coaxial cables, with the current down the braid and the voltage reading up the middle. I have yet to see this on a field-portable instrument, but it could be done.)
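
    To put rough numbers on that rule of thumb (a sketch only - the real %Z needs to come off the transformer nameplate):

```python
# Fault level on the LV side limited by the transformer alone
# (infinite 11kV bus assumed; the %Z values are illustrative)
S_TX_KVA = 1500.0
for z_pct in (5.0, 2.5):
    multiple = 100.0 / z_pct                  # 20x at 5%, 40x at 2.5%
    fault_mva = S_TX_KVA * multiple / 1000.0
    print(f"%Z = {z_pct}: ~{multiple:.0f} x rating = ~{fault_mva:.0f} MVA on the LV side")
```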


  • Thanks for your reply. 

    Correct - I am not considering here the generator cabling or generator fault contribution, i.e. I am measuring from the breaker panel, with the gen breaker open, back to the transformer.

    The 250MVA is a worst-case figure, but as the transformer is the predominant factor in limiting the fault current, the grid value used doesn't significantly affect the fault level seen at the breaker panel, i.e. 250MVA fault power on the 11kV side vs transformer fault power of ~30MVA (based on 5% impedance) - see the sketch below.

    The cable run between the breaker panel and transformer is ~40m. 
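
    To illustrate why the grid figure barely matters: fault 'powers' in series combine like parallel resistors, since the impedances add (sketch below, again assuming 5% transformer impedance):

```python
# Combining an assumed 11kV fault level with the ~30MVA transformer limit
S_TX_FAULT = 1.5e6 * 100 / 5              # ~30 MVA from the transformer alone

for s_grid in (150e6, 250e6, 500e6):      # illustrative 11kV fault levels, VA
    s_panel = 1.0 / (1.0 / s_grid + 1.0 / S_TX_FAULT)
    print(f"grid {s_grid/1e6:.0f} MVA -> panel ~ {s_panel/1e6:.1f} MVA")
```

    So anything between a 150MVA and a 500MVA source only moves the panel figure between about 25 and 28MVA.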

  • Please excuse my ignorance - why are there 3 lives and only 2 neutrals? (L=3x300, N=2x300, PE=1x240) Is this because of balanced loads and less of a neutral being required as a result?
  • Yep - a reduced neutral is used when we expect most loads either to be delta or well-balanced star configurations.

    But 40m of 300mm² core is going to be about 2.4 milliohms, so on 230V getting on for 100kA - and that is just one core. Basically the resistance of the cable is not really doing much to reduce the fault level - it is all in the TX and perhaps, if the 11kV lines are very long, a bit on that side.


    We can estimate the fault level another way - if it is a 5% regulation Tx then we can look at one phase only:

    A 1.5MVA TX would give 1/3 of its MVA on each phase. Let us pretend we had a 500kVA single-phase Tx - full load would be 2000 amps between friends (250V * 2000 amps = 500kVA, so 8% off maybe).

    If the regulation was for 5% droop at full load, then for 100% droop we'd be looking at 20 times this current, or about 40kA (single phase). This is a long way from the single-phase measured value of 8kA - indeed, if the 8kA is true, then at full load (~2kA) you will drop a quarter of the volts. If this was true something would be cooking, so I hope the measured PSCC is quite a bit in error. (Quick numbers at the end of this post.)


    To be honest, measuring PSCC of this magnitude is not easy - do you know how it was done? As I alluded to above, anything looking like a normal meter lead (about 10-20 milliohms per metre of test lead) will dominate.
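
    Putting quick numbers on the above (round figures; the 5% regulation and ~0.06 milliohm/m for a 300mm² core are assumptions):

```python
# Rough single-phase estimate, and a sanity check on the measured 8.58 kA
FULL_LOAD_A = 2000.0          # ~500kVA per phase at ~250V, between friends
REGULATION_PCT = 5.0          # assumed

i_fault = FULL_LOAD_A * 100.0 / REGULATION_PCT
print(f"expected single-phase fault current ~ {i_fault/1e3:.0f} kA")        # ~40 kA

# If the measured 8.58 kA were real, the implied source impedance and volt drop:
z_source = 230.0 / 8580.0                       # ~27 milliohm
v_drop = FULL_LOAD_A * z_source                 # ~54 V at ~2 kA load
print(f"implied volt drop at full load ~ {v_drop:.0f} V ({100*v_drop/230:.0f}% of 230V)")

# And the cable, for comparison (one 300mm2 core over 40m, ~0.06 milliohm/m assumed)
r_core = 0.06e-3 * 40
print(f"one core ~ {r_core*1e3:.1f} milliohm -> ~ {230/r_core/1e3:.0f} kA on its own")
```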

  • That PSCC cannot be correct, Mike - there would be far too much volt drop on load. I suspect it was "measured" with an MFT and is probably wildly too low. I suggest you forget that number and assume 30-40 kA, as Mike suggests above. What is your primary protective device? I expect an ACB - is that within its ratings? If so, just forget the measured number; it really isn't important at this level, and is very difficult to measure accurately anyway. 5% transformer impedance is a reasonable assumption, and the major contribution to limiting the fault level.
  • Roger


    Following on from Mike.


    The max load current for your 1.5MVA Tx will be around 2165A. Assuming a 5% percentage impedance (check the data plate on the Tx), PFC = 2165 x 100/5 = 43.3kA. That assumes an infinite bus on the HV side, which it won't be, so a bit less than 43.3kA in practice (the arithmetic is sketched at the end of this post).


    Then the fault current will be further reduced by the current-limiting effect of the circuit protection devices downstream of the Tx. The calculation assumes a fault of negligible impedance, which it will not be. So lots of assumptions.


    The cables away from the Tx will further limit the fault current. If you had stated the length of the cables I could have put it into my Amtec, but I would also have needed the Tx percentage impedance.


    The maximum fault current for a symmetrical fault, assuming an infinite bus, no contribution from stored energy in the installation, and the cables cold, will be the open-circuit phase voltage (usually 250V if tapped for 433V) divided by the sum of the impedance of one phase winding and one line conductor, with the neutral playing no part in the fault current.


    As for measurement of very low impedances and high fault currents, that is beyond the capability of hand-held installation testers. They may have a resolution of 0.01 ohms, but resolution is not accuracy. As Mike says, lead resistance of ordinary testers comes into play, and the actual test current with installation testers on "high current" is only around 5A. For the real deal you will need something like the Megger MIMS 1000, which delivers up to 1000A of fault current through 4-wire Kelvin leads. I have used one once at the terminals of a transformer, all kitted up with flame-retardant overalls and full face protection, and pressing the test button was a bum clenching moment.
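
    The arithmetic above as a quick sketch (400V nominal and the assumed 5% impedance):

```python
# Full-load current and PFC for a 1.5MVA transformer on an infinite HV bus
import math

S_TX = 1.5e6          # transformer rating, VA
U_LL = 400.0          # nominal LV line-line voltage, V
Z_PCT = 5.0           # assumed percentage impedance - check the nameplate

i_full_load = S_TX / (math.sqrt(3) * U_LL)     # ~2165 A
pfc = i_full_load * 100.0 / Z_PCT              # ~43.3 kA
print(f"full-load current ~ {i_full_load:.0f} A, PFC ~ {pfc/1e3:.1f} kA")
```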
  • Thinking about this multiply by 2 (a round-up of 1.732)

    My understanding is that 2x is meant to be an exact doubling - not a round up of √3. As you say, the assumption is that L & N conductors have the same impedance - with a bolted 3-phase fault, the fault itself effectively forms an artificial N point - so as with any balanced load, the N currents cancel and so the impedance of the N conductor makes no contribution to the loop impedance. So looking at any one individual phase, compared with the single phase fault case (which the loop meter measured when connected L-N) we have the same driving voltage but half the conductor length, so all else being equal we'd have double the current. (For a fault between just two of the three lines, where the N currents wouldn't entirely cancel, we might indeed have a √3 factor to consider).


    If you know the relative impedances of your L and N conductors you should, in principle at least, be able to adjust the 2x factor to better suit your situation. Although in practice, as others have already mentioned, loop meters are unlikely to give you an accurate starting figure so close to such a large supply, so might not be worth the effort.


     
    as the meter only measures the impedance on the LV side of the transformer and the downstream cables i.e. assumes an ‘infinite source’ on the 11kV side

      As I understand it, most loop meters work on the principle of drawing a known current and noting the corresponding drop in voltage to calculate the impedance - as such they do take into account the entire supply, including the HV side (within the meter's limitations at least). A milli-ohm on the HV side makes considerably less difference than a milli-ohm on the LV side, as within your reading it is scaled down by the square of the transformer's voltage ratio. (BS 7671's description of earth fault loop impedance isn't exactly helpful in that regard.)
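
      For illustration (the milliohm figure is made up):

```python
# Impedance referred across a transformer scales with the square of the voltage ratio
z_hv = 1e-3                        # 1 milliohm somewhere on the 11kV network (illustrative)
ratio = 400.0 / 11000.0
z_hv_referred = z_hv * ratio**2    # as 'seen' from the 400V side
print(f"1 milliohm at 11kV appears as ~{z_hv_referred*1e6:.2f} micro-ohm at 400V")
```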


       - Andy.
  • Andy


    The doubling of the line-to-neutral PFC meter reading is a rule of thumb and leads to overstating the actual value. Better to divide the phase-to-phase reading by 0.87, but you still need to be aware of the accuracy of the meter in the first place.
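
    For example (the per-phase impedance here is purely illustrative, not this installation's):

```python
# Why 0.87: a line-line loop contains two line conductors, a 3-phase fault only one,
# so I_3ph = U_LL/(sqrt(3)*Z) = I_LL * 2/sqrt(3), i.e. roughly I_LL / 0.87
import math

U_LL = 400.0
z_per_phase = 0.006          # ohms: one winding plus one line conductor (illustrative)

i_ll = U_LL / (2 * z_per_phase)              # phase-to-phase meter reading
i_3ph = U_LL / (math.sqrt(3) * z_per_phase)  # 3-phase symmetrical value
print(f"L-L ~ {i_ll/1e3:.1f} kA, 3-phase ~ {i_3ph/1e3:.1f} kA, ratio {i_3ph/i_ll:.3f}")
print(f"L-L reading / 0.87 ~ {i_ll/0.87/1e3:.1f} kA")
```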


  • pressing the test button was a bum clenching moment.

    Know what you mean, but in this modern age it had better be clarified as your own bum, or written RAMS and permit ....

    Joking aside, a lot of our really scary test gear has connection points to engage a remote control panel, which is better when you are in a test lab setting, as you can then be on the far side of some concrete blocks watching either on camera or, if the energy is low enough, looking through the meshed window - bit like the old playschool.

    In situ, as others have said, it may be more sensible to calculate, and then verify with volt drop or even a thermal survey once it is on load, to find gross errors like parallel cores not sharing evenly.
  • Hi All - thanks for your replies. 


    I have used the calculated values to verify equipment suitability and will ignore the measured value. Some good info re meters. 


    Andy / John - thanks for the comments re the x2 - this is clear to me now, I think. I would say that the x2 is ballpark at best: what I think we want is 230V/(ZTransformer+ZLine), but what the x2 calc actually gives is (230V/(ZTransformer+ZLine+ZNeutral))x2. John's dividing the p-p reading by 0.87 gives a more accurate 230V/(ZTransformer+ZLine). (Illustrative numbers below.)
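
    Putting illustrative numbers on those three expressions (the impedances are assumed, roughly in line with a 1.5MVA/5% transformer plus 40m of the cables described above):

```python
# Comparing the three routes to the 3-phase fault current
U_0, U_LL = 230.0, 400.0
z_tx = 0.0053        # per-phase transformer impedance, ohm (assumed 5%)
z_line = 0.0008      # 3x300 line conductors over 40m, ohm (assumed)
z_neutral = 0.0012   # 2x300 neutral over 40m, ohm (assumed)

i_true = U_0 / (z_tx + z_line)                    # what we actually want
i_x2 = 2 * U_0 / (z_tx + z_line + z_neutral)      # doubled L-N reading
i_pp_87 = (U_LL / (2 * (z_tx + z_line))) / 0.87   # L-L reading divided by 0.87
print(f"230/(Zt+Zl)        ~ {i_true/1e3:.1f} kA")
print(f"2 x (L-N reading)  ~ {i_x2/1e3:.1f} kA")
print(f"(L-L reading)/0.87 ~ {i_pp_87/1e3:.1f} kA")
```

    With the transformer dominating, the doubled L-N figure overstates (as John said); if the cables dominated and the neutral were reduced, it would understate instead (as Mike noted earlier).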