Text: Steve Mills
In traditional manufacturing, raw materials are refined, assembled and packaged into products as efficiently as possible. Similarly, AI factories process vast amounts of unstructured data, such as text, images, audio and video, through various machine learning models to produce intelligent outputs in the form of tokens. These tokens manifest as predictions, recommendations, insights and decisions that power everything from personalised content feeds to medical diagnostics and scientific breakthroughs. For companies like Meta, the tokens generated by an AI factory fuel richer, more relevant content that enables immersive and engaging experiences for users. Whether it’s suggesting the next video to watch, identifying harmful content, or translating languages in real time, the AI factory is the engine driving new opportunities.
Impact on data centre infrastructure
As AI models grow in complexity and capability, the power density of the systems that run them is increasing dramatically. Just a few years ago, AI racks operated at 10–20 kW. Today they are pushing 100 kW, with projections reaching 1 megawatt per rack within a few years. This leap is driven by innovations in AI hardware and system design that pack components more densely to reduce the latency and energy consumption required to produce each token.
While system density improves efficiency, it demands rapid change. This evolution is reshaping data centre design in profound ways, giving rise to a new generation of AI-optimised infrastructure. Several key trends are emerging:
Transition to direct current (DC) power
Traditional data centres rely on alternating current (AC) power distribution, with numerous small power supplies within each rack converting AC to 12 V direct current (DC) for the electronic components. This model is becoming inefficient for high-density AI workloads. AI factories are transitioning to ±400 V DC distribution using busbars and centralised power shelves, similar to the Open Compute Project’s Open Rack architecture. This shift enables more efficient power delivery, reduces conversion losses and minimises the size of the power infrastructure within the rack. As power demands continue to rise, data centres will adopt 400 V, 800 V or even 1200 V DC distribution systems to further improve efficiency and scalability.
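The efficiency argument can be made concrete with a back-of-the-envelope comparison. The sketch below multiplies per-stage conversion efficiencies for a traditional AC chain against a centralised DC chain; the stage figures and the 100 kW rack load are illustrative assumptions, not measurements from any particular facility.

```python
# Illustrative comparison of power-conversion losses for a 100 kW rack.
# All stage efficiencies below are assumptions for illustration, not
# measured figures from any specific data centre.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a series of conversion stages in cascade."""
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

RACK_LOAD_KW = 100.0

# Assumed AC path: UPS, PDU/transformer, many per-server AC-to-12 V supplies.
ac_path = [0.94, 0.98, 0.90]

# Assumed DC path: one centralised rectifier, then a DC/DC stage at the rack.
dc_path = [0.97, 0.975]

ac_eff = chain_efficiency(ac_path)
dc_eff = chain_efficiency(dc_path)

# Input power needed to deliver 100 kW to the silicon, and the waste heat.
ac_input = RACK_LOAD_KW / ac_eff
dc_input = RACK_LOAD_KW / dc_eff
print(f"AC chain: {ac_eff:.1%} efficient, {ac_input - RACK_LOAD_KW:.1f} kW lost")
print(f"DC chain: {dc_eff:.1%} efficient, {dc_input - RACK_LOAD_KW:.1f} kW lost")
```

The point is structural rather than numerical: because stage efficiencies multiply, removing even one conversion stage from the chain raises overall efficiency, which is why centralised DC distribution with fewer hops tends to win at high power densities.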
Liquid cooling becomes dominant
Once the AI system receives all of that power, liquid cooling is needed: traditional air cooling is no longer sufficient for high-density AI workloads. Liquid cooling has existed since the 1960s in supercomputing and is used today in niche applications such as cryptomining, but it must now scale rapidly for AI infrastructure.
AI factories are adopting single-phase liquid cooling for the current generation of products, with many new cooling innovations arriving in the near future. These methods offer superior thermal performance, enabling higher computing densities and more reliable operation. However, implementing liquid cooling at scale requires significant changes to data centre design, including new component capabilities, operational processes and maintenance protocols.
Expanding data centre power reuse
One side benefit of high-density AI workloads is the heated coolant leaving the AI systems. The energy stored in that coolant becomes a commodity that can be harnessed for beneficial applications. Instead of dissipating this heat into the environment, AI factories can channel it into district heating systems to warm homes and businesses, preheat industrial processes, or support agriculture in northern regions, all while reducing energy costs and environmental impact.
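The scale of that opportunity can be estimated with a simple annual energy budget. In the sketch below, every figure, the rack power, utilisation, heat-capture fraction and per-household heat demand, is an assumption for illustration only.

```python
# Rough estimate of how much district heat an AI rack's coolant could supply.
# All figures below are illustrative assumptions, not measured data.

HOURS_PER_YEAR = 8760

def annual_reusable_heat_mwh(rack_power_kw, utilisation, capture_fraction):
    """Heat available for reuse per year, in MWh.

    utilisation: fraction of the year the rack runs at rack_power_kw.
    capture_fraction: share of waste heat actually recovered by the loop.
    """
    return rack_power_kw * HOURS_PER_YEAR * utilisation * capture_fraction / 1000.0

# A 100 kW rack running 80% of the time, with 70% of its heat captured.
heat_mwh = annual_reusable_heat_mwh(100, 0.8, 0.7)

# Assumed annual heat demand of one household in a northern climate.
HOME_DEMAND_MWH = 10.0
print(f"{heat_mwh:.0f} MWh/year, enough for roughly "
      f"{heat_mwh / HOME_DEMAND_MWH:.0f} homes")
```

Even under these conservative assumptions a single rack yields hundreds of megawatt-hours of reusable heat per year, and an AI factory contains thousands of racks, which is what makes district-heating integration economically interesting.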
Much like the industrial revolution transformed manufacturing, the rise of AI factories is poised to transform the digital economy. As we stand on the brink of this new era, the convergence of data, computing capability and infrastructure will enable the next chapter of human progress, one token at a time.