As the digital economy continues to scale, the pressure on data centres to deliver performance without excessive power use is rising fast. Operators are being asked to do more with less - less space, less carbon, and less certainty about future demand. This challenge has triggered major investment in more efficient power and cooling systems. But a less obvious part of the infrastructure stack is starting to demand attention: switching.

Whilst switching has long been seen as a stable, almost background function of data centre operations, its role in energy consumption is growing. As data volumes surge and workloads become increasingly complex, especially in AI-rich environments, the traditional model of optoelectronic switching is struggling to keep pace. The result is unnecessary power consumption, increased cooling loads, and constraints on scalability.

Switching and the performance bottleneck

Every time data is moved inside a data centre - between central processing units (CPUs), graphics processing units (GPUs), storage or servers - it is routed through a switch. In most modern data centres, that process involves converting signals from light to electricity (and back again) to make routing decisions. These conversions take place millions of times per second and have a cost - in energy, in heat and in latency.

The more data that moves, and the more frequently it moves, the bigger that cost becomes. In AI deployments in particular, the pattern of data flow can be highly unpredictable and bandwidth-hungry, with east-west traffic dominating. This dynamic traffic pattern is exactly where traditional switching starts to create friction - not just in terms of throughput, but in its contribution to energy and thermal loads.
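To put rough numbers on that cost, the sketch below estimates the continuous power drawn by a rack's switching fabric from an energy-per-bit figure. All of the figures are assumptions chosen for illustration, not measurements or vendor data: optoelectronic switch silicon is often discussed in the low tens of picojoules per bit, while an all-optical path avoids most of the electrical conversion energy.

```python
# Back-of-envelope sketch of rack-level switching power.
# Every figure below is an illustrative assumption, not vendor data.

J_PER_BIT_OEO = 25e-12       # assumed: energy per bit through an optoelectronic switch
J_PER_BIT_OPTICAL = 2e-12    # assumed: residual energy per bit on an all-optical path
RACK_THROUGHPUT_BPS = 12.8e12  # assumed: 12.8 Tb/s of east-west traffic per rack

def switching_power_watts(joules_per_bit: float, bits_per_second: float) -> float:
    """Continuous power drawn by the switching fabric at a given load."""
    return joules_per_bit * bits_per_second

oeo_w = switching_power_watts(J_PER_BIT_OEO, RACK_THROUGHPUT_BPS)
optical_w = switching_power_watts(J_PER_BIT_OPTICAL, RACK_THROUGHPUT_BPS)

print(f"Optoelectronic: {oeo_w:.0f} W per rack")
print(f"All-optical:    {optical_w:.0f} W per rack")
print(f"Reduction:      {(1 - optical_w / oeo_w) * 100:.0f}%")
```

Under these assumed figures the reduction lands above 90 percent, and every watt removed here is also a watt of heat the cooling plant no longer has to reject.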

Rethinking the switch

To address this, some operators are exploring the shift to optical switching. Unlike conventional designs, optical switches route data entirely in the light domain, eliminating the need for constant conversion. This substantially reduces energy use and eliminates much of the heat produced by switching hardware.

One company at the centre of this innovation is Finchetto, a UK-based developer of passive optical switching technology. Finchetto’s fully optical switches operate at the packet level and can be deployed directly in the rack, where traffic loads are highest and heat is hardest to manage.

“Finchetto’s technology has the potential to reduce switching-related energy use by over 90 percent in high-density environments,” said Darren Watkins, Chief Revenue Officer at VIRTUS Data Centres. “Just as importantly, by removing a key source of localised heat, it eases the strain on cooling systems and unlocks broader efficiency gains across the entire data centre.”

Knock-on effects on energy and cooling

This has significant implications for wider facility design. If less energy is consumed by the switching fabric, and less heat is generated at the rack level, operators can make more strategic decisions about power provisioning, layout and cooling strategy.
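The compounding effect can be sketched in the same back-of-envelope style: each watt saved at the switch also removes the cooling energy needed to reject that heat. The cooling overhead, per-rack saving, and hall size below are all assumptions for illustration.

```python
# Illustrative knock-on calculation: switching power saved plus the
# cooling energy no longer needed to reject that heat.
# All inputs are assumptions, not measured facility data.

it_power_saved_w = 300.0   # assumed: IT power saved per rack by optical switching
cooling_overhead = 0.3     # assumed: watts of cooling per watt of IT load (PUE ~ 1.3)
racks = 200                # assumed: racks in the hall
HOURS_PER_YEAR = 8760

facility_saving_w = it_power_saved_w * (1 + cooling_overhead) * racks
annual_mwh = facility_saving_w * HOURS_PER_YEAR / 1e6

print(f"Facility-level saving: {facility_saving_w / 1000:.0f} kW")
print(f"Annual energy saved:   {annual_mwh:.0f} MWh")
```

Even with modest assumed inputs, the facility-level figure is a multiple of the rack-level one, which is why switching efficiency feeds directly into power provisioning and cooling strategy.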

One practical example is the potential to simplify airflow design or reduce dependence on perimeter cooling systems. In new-build environments, lower thermal loads from switching can create opportunities to optimise heat reuse schemes. Waste heat, once seen as an issue, becomes an asset - usable in nearby commercial or residential settings, or even in closed-loop industrial applications.

In addition, with less localised heat to manage, rack densities can be increased, and cooling zones can be better balanced across the hall. This is particularly important as operators aim to scale capacity without expanding footprint.

Integration, not overhaul

Watkins says, “A key concern for operators is whether new technologies like optical switching can be integrated without disrupting existing infrastructure. The best solutions are compatible with standard spine-and-leaf topologies and can operate within common Ethernet-based environments.

“This makes it possible to start with targeted deployments in the highest-value areas - such as AI clusters or compute pods - and scale gradually. The barrier to adoption is reduced, allowing teams to test real-world impact before committing to wider rollouts.”

The immediate benefits are compelling. Lower power use, reduced cooling demand, and improved latency all contribute to performance and sustainability goals. And in a market increasingly focused on carbon metrics, energy reporting, and total cost of ownership, even small gains at the infrastructure layer can deliver meaningful returns.

Future-ready infrastructure requires systems thinking

The story of smarter switching is part of a larger trend. Data centre infrastructure is becoming more integrated, and performance is increasingly viewed as a system-wide outcome. Decisions about switching no longer sit purely with the network team - they have implications for electrical design, cooling capacity, space planning and sustainability reporting.

“We can’t keep looking at power, cooling and network architecture in isolation,” warned Watkins. “A more efficient switch doesn’t just reduce power, it changes how you think about heat, density, even rack layout. It’s a foundational design consideration now, not an afterthought.”

This shift in mindset is driving a new wave of infrastructure choices. Operators are seeking components that not only perform well individually but improve the performance of everything around them. In that context, switching is no longer a commodity. It’s a lever for energy efficiency, resilience and scale.

A small change with wide impact

As demand grows and constraints tighten, data centres must find new ways to improve efficiency without compromising performance. Smarter switching, particularly at the rack level, is emerging as one of the most effective and underused tools in that effort.

By eliminating unnecessary conversions and cutting down on heat at the source, optical switching technologies allow facilities to run cooler, cleaner, and more efficiently. And by fitting into existing network environments, they offer a realistic path to improvement without disruption.