Why Traditional Engineering Forecasts Fail in Complex Projects — and How AI Helps

Engineering forecasts play a central role in how complex projects are governed. Whether in energy, transport, aerospace, or infrastructure, forecasts influence investment decisions, resource allocation, and risk mitigation strategies. Yet many engineers will recognize a familiar pattern: forecasts that appear robust during planning gradually lose credibility once execution begins.

This is not primarily a failure of competence or effort. It is a consequence of how traditional forecasting methods interact with complexity.

The challenge of forecasting in complex engineering systems

Most engineering forecasts are built on deterministic foundations. Schedules, cost models, and performance baselines are typically constructed using structured logic, defined dependencies, and assumed execution stability. These approaches work reasonably well in controlled or repeatable environments.
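As a minimal sketch of that deterministic foundation (all activity names, durations, and dependencies below are invented for illustration), a critical-path forward pass turns fixed inputs into a single completion date:

```python
# Minimal critical-path forward pass: deterministic durations and fixed
# dependencies yield a single forecast finish, with no notion of volatility.
from collections import namedtuple

Activity = namedtuple("Activity", ["name", "duration", "predecessors"])

# Hypothetical activities (durations in working days), listed in dependency order.
activities = [
    Activity("design", 20, []),
    Activity("procure", 35, ["design"]),
    Activity("fabricate", 25, ["design"]),
    Activity("install", 30, ["procure", "fabricate"]),
    Activity("commission", 15, ["install"]),
]

def forward_pass(activities):
    """Earliest start/finish per activity, assuming execution matches the plan."""
    start, finish = {}, {}
    for act in activities:
        start[act.name] = max((finish[p] for p in act.predecessors), default=0)
        finish[act.name] = start[act.name] + act.duration
    return start, finish

start, finish = forward_pass(activities)
print("Forecast completion (working days):", max(finish.values()))  # -> 100
```

Nothing in this calculation expresses how likely the assumed durations and logic are to survive execution; the forecast looks precise because the method cannot represent anything else.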

However, large engineering programs rarely behave linearly or stably. As projects scale up, several characteristics emerge:

  • High interdependence between engineering disciplines
  • Frequent resequencing due to late information or design evolution
  • Resource constraints interacting with physical and contractual interfaces
  • Local changes propagating across the system in non-obvious ways

Traditional forecasting methods tend to assume that once a baseline is established, deviations are exceptions rather than the norm. In complex programs, that assumption breaks down: change, not stability, becomes the dominant condition.

Why forecasts degrade during execution

A common issue in complex projects is that forecasts remain technically correct while becoming practically misleading.

Performance indicators such as schedule variance, cost variance, or trend-based extrapolations are often calculated accurately. The problem is that they are anchored to a structural assumption of stability that no longer exists.
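To make that concrete, the indicators in question are simple arithmetic on baseline and actual figures. The sketch below uses invented numbers; the point is that every value can be computed correctly while remaining silent on whether the baseline is still a valid reference:

```python
# Standard earned-value indicators. The arithmetic is exact, but every figure
# implicitly assumes the baseline plan remains a meaningful yardstick.
def evm_indicators(pv, ev, ac, bac):
    """pv: planned value, ev: earned value, ac: actual cost, bac: budget at completion."""
    return {
        "SV": ev - pv,         # schedule variance (in cost units)
        "CV": ev - ac,         # cost variance
        "SPI": ev / pv,        # schedule performance index
        "CPI": ev / ac,        # cost performance index
        "EAC": bac * ac / ev,  # trend-based estimate at completion (BAC / CPI)
    }

# Hypothetical reporting-period figures, in millions.
print(evm_indicators(pv=40.0, ev=36.0, ac=42.0, bac=120.0))
# {'SV': -4.0, 'CV': -6.0, 'SPI': 0.9, 'CPI': 0.857..., 'EAC': 140.0}
```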

Engineers may observe symptoms such as repeated movement of the critical path, erosion of float across near-critical activities, recovery actions that solve one problem while creating another, and forecasts that are revised frequently but still fail to converge.

These are not data problems. They are system behavior problems.

Traditional methods are not designed to sense volatility, interaction effects, or emerging patterns across thousands of activities and updates. As a result, forecasts tend to lag reality rather than anticipate it.

What AI changes from an engineering perspective

Artificial intelligence is often discussed in terms of automation or prediction accuracy. In complex engineering environments, its real value lies elsewhere.

AI is particularly effective at identifying patterns of instability across large, evolving systems. Rather than replacing engineering judgement, it augments it by revealing behaviors that are difficult to quantify consistently using manual or rule-based approaches.

Examples of what AI can help detect include:

  • Repeating combinations of activity slippage that historically led to delay escalation
  • Early signals that recovery actions are increasing system stress rather than reducing it
  • Relationships between design churn, resequencing frequency, and forecast failure
  • The growth of near-critical activity sets that precede major program disruption

Importantly, these insights emerge from learning across historical execution data, not from redefining engineering logic. The underlying schedule and cost structures remain engineering-led.
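As a hedged sketch of what that learning can look like (the feature names, figures, and the choice of a scikit-learn logistic regression are illustrative assumptions, not a description of any particular tool), a model can be fitted to historical update periods and used to score how closely the current period resembles past escalations:

```python
# Sketch: learn which execution patterns historically preceded delay escalation.
# Features and values are invented; a real model would be derived from an
# organisation's own schedule-update history.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per historical update period:
# [critical-path churn, float-erosion rate, near-critical growth, resequencing count]
X = np.array([
    [0.05, 0.02, 0.01, 2],
    [0.10, 0.05, 0.03, 4],
    [0.40, 0.20, 0.15, 12],
    [0.35, 0.25, 0.10, 9],
    [0.08, 0.03, 0.02, 3],
    [0.50, 0.30, 0.20, 15],
])
# 1 = the period was followed by major delay escalation, 0 = it was not.
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Score the latest period; a high score flags it for engineering review,
# it does not generate a new completion date.
latest = np.array([[0.30, 0.18, 0.12, 10]])
print(f"Similarity to past escalation patterns: {model.predict_proba(latest)[0, 1]:.2f}")
```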

Lessons from applied use in complex programs

In applied project environments, AI performs best when used as an early warning and sensing layer, not as a black box forecasting engine.

Successful implementations tend to share common principles:

  • AI outputs are reviewed alongside traditional metrics, not instead of them
  • Models are trained to recognize execution patterns, not to generate deterministic dates
  • Engineers remain responsible for decisions; AI highlights where attention is needed
  • Transparency and explainability are prioritized over theoretical accuracy

When used in this way, AI helps engineers ask better questions earlier. Instead of asking “are we late?”, teams can ask “is the system becoming unstable?” or “which interactions are most likely to undermine recovery plans?”

This shift is subtle but powerful.
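A minimal sketch of that shift in questioning, with purely illustrative measures and thresholds, is to compare successive schedule updates and flag a period for review when system-level signals deteriorate, regardless of whether the reported dates have yet moved:

```python
# Sensing-layer sketch: ask "is the system becoming unstable?" by comparing two
# successive schedule updates. All measures and thresholds are illustrative.

def stability_signals(prev, curr):
    """Each update is a dict of simple schedule-health measures."""
    return {
        "critical_path_changed": prev["critical_path"] != curr["critical_path"],
        "near_critical_growth": curr["near_critical_count"] - prev["near_critical_count"],
        "float_erosion_days": prev["median_float"] - curr["median_float"],
    }

def needs_attention(signals):
    """Flag for engineering review; the decision itself stays with the engineer."""
    return (
        signals["critical_path_changed"]
        or signals["near_critical_growth"] >= 5
        or signals["float_erosion_days"] >= 10
    )

prev = {"critical_path": ("design", "procure", "install"),
        "near_critical_count": 12, "median_float": 25}
curr = {"critical_path": ("design", "fabricate", "install"),
        "near_critical_count": 19, "median_float": 14}

signals = stability_signals(prev, curr)
print(signals, "-> review needed:", needs_attention(signals))
```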

Implications for engineering governance

The introduction of AI into forecasting has implications beyond tools and analytics. It challenges how engineering teams think about control, assurance, and decision-making.

Rather than being treated as static outputs, forecasts become dynamic indicators of system health. Governance discussions shift from defending numbers to understanding behavior. This aligns well with the way experienced engineers already think, though they have often lacked the data to demonstrate it.

Crucially, AI does not remove the need for engineering judgement. It increases its importance. Engineers must interpret signals, contextualize insights, and decide how to intervene in complex systems responsibly.

Looking ahead

As engineering projects continue to increase in scale and complexity, the limitations of traditional forecasting approaches will become more visible. AI offers a practical way to extend existing engineering methods, not by replacing them, but by adding structural awareness to how forecasts are interpreted and used.

The organizations that gain the most value will be those that treat AI as an engineering support capability rather than a reporting shortcut.

I would be interested to hear how other engineers are experiencing forecasting challenges in complex projects, and whether AI or advanced analytics are being explored as part of their engineering control approach.

This article builds on broader practitioner discussions around forecasting and AI that I have shared with different professional communities.

Discussion
  • It's an interesting idea; however, I suspect that there will be a lot of human factors hidden within the available (or unavailable) data about how and why issues occur and how they propagate through the system.

    In many senses these are the same issues seen in health and safety and accident investigations, where blame is apportioned based on the investigator's stop rules [Rasmussen, Cognitive Systems Engineering, ch 6, p138ff: lawyers look for 'blame', medics look for injuries, engineers look for faulty components, etc.].

    Managers are not paid to flag up problems to senior execs until it's too late. Hence all those 'it was all greens, until it wasn't' problems [see Seddon J, "I Want You to Cheat"], where the conventional aim is to be the second (not the first) to report delays, so that your delay can ride on the coat-tails of others (or be hidden by their delays).

    AI summarises data, whether good or bad, and most projects are one-off specials with their own unique problems (otherwise they'd already have been completed and actioned at lower levels), so it becomes hard to find common factors that can be applied forward.

    A lot of 'good planning' is the avoidance of 'speedy capitalism' [again, see Rasmussen, accident trajectory diagram (fig 6.3, p149): failed plans are accidents!]

    AI is definitely worth investigating, starting with previous investigations into why planning 'always' fails when it contacts the enemy of 'reality'. [Who are the system owners, and do they actually want to stop such failures? Link back to Rasmussen's 'blame'.]

  • Thank you for the thoughtful response. I agree with much of this, particularly the Rasmussen framing and the parallels with accident investigation and H&S. Forecast failure is rarely technical; it is usually the result of human, organisational, and incentive-driven behaviours, including the “all greens until it wasn't” dynamic you describe.

    I don't see AI as removing bias or replacing judgement. It reflects the system as it is, which is precisely why it must be interpreted through a cognitive-systems lens. Its value is less in explaining why people hide issues, and more in surfacing early patterns of instability (e.g. critical path churn, float erosion, resequencing) that often precede those behaviours and become normalised over time.

    In that sense, AI is best seen as a disturbance detector, not a decision-maker: a way to make latent degradation visible earlier, before organizations fully migrate into Rasmussen's “acceptable failure” zone. Whether that leads to better outcomes, as you rightly note, ultimately depends on whether system owners genuinely want to act on what is revealed.

    I appreciate the depth of your critique; it strengthens the discussion.
