Retrofitting Legacy Control Systems to Tackle Evolving OT Cyber Threats

Hi everyone,

I’m new to the EngX community and looking forward to learning from you all. I’d like to start a conversation about something I think many of us face: updating legacy control systems in power plants and other critical infrastructure, especially in the face of growing OT cyber threats.

Many of these systems were designed decades ago with reliability in mind but little thought given to cybersecurity. Today, they’re exposed to risks that simply weren’t imagined back then. The challenge is finding a way to retrofit these systems efficiently, without tearing everything apart or causing long periods of downtime.

In the UK, where energy and other critical infrastructure are so heavily relied upon, even a small disruption can create big problems. So how do we make these updates both secure and practical?

I’m particularly interested in hearing how others have approached efficient retrofitting: what worked, what didn’t, and how you balanced the iron triangle of cost, time, and scope without sacrificing quality. Are there certain strategies or tools that helped you modernize your systems without overhauling them completely?

Would love to hear your thoughts and experiences.

Thanks,

Taimur | MIET 

  • Hi everyone, and thank you for starting this important discussion.

    As someone working in the ICS/OT domain, I’ve seen first-hand how challenging it is to modernise legacy control systems in critical infrastructure—especially in sectors like power generation, where uptime, safety, and compliance are non-negotiable.

    You're absolutely right—many of these systems were built for reliability and longevity, not cybersecurity. But today, with increasing OT cyber threats and growing interconnectivity, we can't afford to ignore the risks. That said, a full system overhaul isn’t always feasible. I’ve found that successful retrofitting lies in balancing risk reduction with practical constraints like time, cost, and operational disruption.

    Here are a few approaches I’ve seen work in practice:

    • Risk-based retrofits using tools like Cyber-PHA or CyberHAZOP to prioritise high-impact upgrades.
    • Network segmentation and DMZs to isolate legacy equipment from enterprise IT and internet-connected systems.
    • Compensating controls such as protocol-aware intrusion detection, application whitelisting on HMIs, and read-only historian interfaces (see the sketch after this list).
    • Secure remote access using jump servers with multi-factor authentication, session recording, and time-bound permissions.
    • Standards-based frameworks like IEC 62443 and NCSC’s Cyber Assessment Framework (CAF) to structure retrofit plans and align with regulatory expectations.
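
    To make the "read-only historian interface" idea concrete, here is a minimal Python sketch of a protocol-aware filter that sits between historian clients and a legacy Modbus/TCP PLC, forwarding only read function codes and dropping any session that attempts a write. The addresses and ports are illustrative assumptions, and a production version would of course need logging, alerting, and proper hardening.

```python
# Minimal sketch: a read-only, protocol-aware front end for a legacy
# Modbus/TCP device. Addresses below are assumptions for illustration.
import socket
import threading

LEGACY_DEVICE = ("192.0.2.10", 502)  # assumed PLC address (documentation range)
LISTEN_ADDR = ("0.0.0.0", 1502)      # where historian/read-only clients connect

# Modbus function codes that only read data; anything else is refused.
READ_ONLY_CODES = {0x01, 0x02, 0x03, 0x04}

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, or b'' if the peer closes first."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            return b""
        buf += chunk
    return buf

def handle_client(client: socket.socket) -> None:
    with client, socket.create_connection(LEGACY_DEVICE) as device:
        while True:
            # Modbus/TCP frame: 7-byte MBAP header, then the PDU.
            header = recv_exact(client, 7)
            if not header:
                break
            pdu_len = int.from_bytes(header[4:6], "big") - 1  # minus unit id
            pdu = recv_exact(client, pdu_len)
            if not pdu or pdu[0] not in READ_ONLY_CODES:
                break  # write attempt: drop the whole session
            device.sendall(header + pdu)
            # Relay the device's reply frame back unchanged.
            rhead = recv_exact(device, 7)
            if not rhead:
                break
            rpdu = recv_exact(device, int.from_bytes(rhead[4:6], "big") - 1)
            client.sendall(rhead + rpdu)

def main() -> None:
    with socket.create_server(LISTEN_ADDR) as server:
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```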

    One strategy that’s worked particularly well is the “wrapper” approach—layering modern protections and interfaces around legacy assets, allowing phased upgrades and limiting downtime. Conversely, what hasn't worked well is trying to lift-and-shift IT tools into OT environments without accounting for latency, determinism, or vendor lock-in.
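
    To illustrate the wrapper pattern in code, here is a minimal Python sketch of a TLS-terminating gateway: remote clients authenticate with mutual TLS while the legacy device keeps speaking plaintext on its isolated segment, so the asset itself is never modified. The certificate file names and addresses are placeholders I have assumed, not taken from any specific product.

```python
# Minimal sketch of the "wrapper" idea: a TLS gateway in front of a
# legacy plaintext service. File names and addresses are assumptions.
import socket
import ssl
import threading

LEGACY_PLAINTEXT = ("192.0.2.10", 502)  # legacy device, left unchanged
LISTEN_ADDR = ("0.0.0.0", 8502)         # TLS endpoint exposed to the DMZ

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def main() -> None:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("gateway.crt", "gateway.key")  # assumed cert files
    ctx.verify_mode = ssl.CERT_REQUIRED                # require client certs
    ctx.load_verify_locations("clients-ca.crt")        # assumed CA bundle

    with socket.create_server(LISTEN_ADDR) as server:
        while True:
            raw, _ = server.accept()
            try:
                tls = ctx.wrap_socket(raw, server_side=True)
            except ssl.SSLError:
                raw.close()  # failed handshake: unauthenticated client
                continue
            legacy = socket.create_connection(LEGACY_PLAINTEXT)
            # Shuttle bytes both ways between the TLS and plaintext sides.
            threading.Thread(target=pump, args=(tls, legacy), daemon=True).start()
            threading.Thread(target=pump, args=(legacy, tls), daemon=True).start()

if __name__ == "__main__":
    main()
```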

    I'd be really interested to hear from others here:

    • Have you used similar strategies, or different ones that worked better?

    • What lessons have you learned in terms of balancing security, cost, and uptime during upgrades?

    - Simha
