
Electrical outages: cyber attacks?

What are the chances of the power outages and airport problems being cyber attacks? Is that possible? I would think so.


Gary


  • Until earlier this year the mandated setting of LoM RoCoF relays was at a rate of change of frequency of 0.125Hz/s, and that still applies to existing generators.



    I think we learned from the development of the internet that having all the units on a network behave in exactly the same way can be a recipe for disaster. The classic ethernet example was two nodes on the same segment attempting to transmit at the same time, causing a collision - both would spot the problem, cease transmission, wait for a length of time and try again. If both waited for exactly the same length of time the collision would repeat in exactly the same way, and the whole sorry cycle would repeat ad infinitum. It seems to me we potentially have a similar problem with the grid - if all the embedded generation in the country switches on or off at the same time it becomes difficult or impossible to keep the whole grid in balance, since the switch-on won't happen until the grid is in balance, and the act of switching on is likely to throw the whole thing back out of balance immediately.


    Ethernet's solution (if I understand correctly) was to introduce deliberate "randomness" into the decision making - contrary as that feels to building a stable, predictable and reliable system. The length of the delay after a collision is literally based on a local random number generator - that way it's likely that the two units won't re-transmit at the same time, but even if they do, they're (almost) certain to miss each other after a few more retries - and as a result the overall system works far more reliably.
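To make the idea concrete, here's a minimal Python sketch of that kind of randomised backoff. The slot time and retry cap below are the values used by classic 10 Mb/s Ethernet (truncated binary exponential backoff); the function name is just illustrative, not any real API.

```python
import random

def backoff_delay(collisions, slot_time=51.2e-6, max_exponent=10):
    """Classic Ethernet-style truncated binary exponential backoff.

    After the n-th collision in a row, wait a random whole number of
    slot times chosen uniformly from 0 .. 2^min(n, max_exponent) - 1.
    Two colliding nodes each draw independently, so the chance they
    pick the same delay halves with every retry.
    """
    slots = random.randrange(2 ** min(collisions, max_exponent))
    return slots * slot_time
```

Because each node draws its own random number, repeated head-on collisions become vanishingly unlikely even though no node knows what any other node is doing - exactly the property that would be useful for grid-connected generators.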


    It strikes me that G83/G59 (or whatever the new one is) connections could include a degree of probability of disconnecting over a range of values, rather than all trying to switch at the same single threshold. For example, rather than all units tripping out at say 200.1V and below, you could define a range - e.g. 205V to 195V - and say that for every percentage of the way through that range there should be that same percentage probability of disconnecting - so at 205V 0% of units should disconnect, at 200V 50%, at 195V 100%, and similarly at all points in between. Each individual unit (not knowing what any other unit had decided to do) would simply disconnect if its internal random number generator (as a percentage) produced a number smaller than the percentage of the way into the range the current grid voltage was. Averaged over many such units the grid would see a very gradual, steady and predictable rate of disconnection or reconnection - giving the controllers far more time to adjust central generation to stay in balance.
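That per-unit decision rule can be sketched in a few lines of Python. The 205V/195V band is the example range from the post; the function name and parameters are purely illustrative, not anything from G83/G59 itself.

```python
import random

def should_disconnect(voltage, v_high=205.0, v_low=195.0):
    """Probabilistic under-voltage disconnection over a band.

    At v_high the disconnect probability is 0%, at v_low it is 100%,
    varying linearly in between. Each unit compares its own random
    draw against that probability, so across many independent units
    the *fraction* that disconnects tracks the probability smoothly
    as the voltage falls, instead of everything tripping at once.
    """
    p = (v_high - voltage) / (v_high - v_low)  # fraction of the way into the band
    p = min(max(p, 0.0), 1.0)                  # clamp: 0% above the band, 100% below it
    return random.random() < p
```

Run over a fleet of, say, 10,000 simulated units at 200V, roughly half would disconnect - and nudging the voltage up or down moves that fraction gradually rather than in one step, which is the whole point of the scheme.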


       - Andy.