AI Infrastructure Investment is Accelerating: Are Enterprises?

Cockroach Labs’ Isaac Wong offers commentary on how AI infrastructure investment is accelerating and asks whether enterprises are doing the same. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.
The recent formation of the AI Infrastructure Fund—anchored by xAI, Nvidia, BlackRock, Microsoft, and MGX with a target of $100 billion—marks yet another signal that infrastructure is the foundation of AI’s future. As training requirements for large models push compute demands to new extremes, investment in scalable and resilient infrastructure isn’t just prudent—it’s essential.
But while hyperscalers, sovereign wealth funds, and industry leaders are making bold moves, there’s a critical question that remains unanswered: are enterprises keeping up with the pace of change?
Right now, 44% of companies report AI as the single largest line item in their technology budgets. But AI is only as good as the infrastructure that supports it. If the backbone isn’t evolving, much of that investment risks being underutilized, or worse, wasted.
Infrastructure: The Quiet Partner in AI Progress
The pace of announcements in the AI space is telling. Nvidia and xAI joining the infrastructure fund is the latest in a wave of infrastructure-focused deals: ADQ and Energy Capital Partners recently announced a partnership to invest over $25 billion in projects to power data centers, SoftBank is seeking over $16 billion in loans to fund AI growth, and Nvidia’s CEO has said the world needs to increase its computing power.
McKinsey recently found that global demand for data center capacity could rise at an annual rate of roughly 20% over the next five years – and to avoid a supply deficit, at least double the capacity that has been built since 2000 will need to be built in a quarter of the time. It’s not just a question of more power; it’s a question of whether the infrastructure layer can keep up with the exponential growth in model complexity and data volumes.
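A quick back-of-the-envelope check makes that projection concrete. The sketch below assumes simple compound growth; the ~20 percent annual rate is the only figure taken from the article, and the five-year horizon matches its framing.

```python
# Back-of-the-envelope check of the cited projection:
# ~20% annual growth in data center capacity demand over five years.
growth_rate = 0.20
years = 5

# Compound growth: demand scales by (1 + rate) each year.
multiplier = (1 + growth_rate) ** years
print(f"Projected demand multiplier after {years} years: {multiplier:.2f}x")
# roughly 2.5x today's level
```

In other words, sustained 20 percent annual growth compounds to roughly two and a half times today's demand within five years, which is why the report frames the buildout as a doubling of historical capacity on a compressed timeline.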
Multi-cloud strategies are likely to accelerate alongside AI adoption. AI tooling is beginning to abstract away the differences between cloud environments, reducing vendor lock-in and enabling real agility in data movement and compute orchestration. Within a few years, building cross-cloud platforms won’t be the exception – it will be the default.
So the question becomes: if AI and data capacity are going to increase at breakneck speed, what are enterprises doing now to ensure their infrastructure isn’t left behind?
Plan for Tomorrow’s Workloads, Not Today’s
To ensure they’re ready for this influx of data, enterprises should be running far more extensive tests of their AI platforms’ capabilities. Testing AI systems only against current workloads is a mistake.
Think of it like outfitting a teenager with a wardrobe that fits today, without accounting for growth. It doesn’t make sense to buy five years’ worth of clothes in the same size—and yet, that’s exactly how many enterprises are sizing their infrastructure.
Instead, organizations should be asking: can our system handle more data, and if not, what do we need to do to accommodate it?
The truth is, legacy systems weren’t built with AI-scale throughput in mind. If compute power increases by orders of magnitude, the volume and velocity of data will grow in step. Databases that can’t scale linearly or elastically will become serious liabilities; in some cases, they already are, putting severe strain on the business and underscoring the need to modernize the underlying systems.
Enterprise infrastructure will have to be scalable and agile to meet the data demands of the AI era. The cost of inaction is steep: outages, slowdowns, expensive migrations, and missed opportunities to capitalize on AI-driven insights.
Engineering for Scale and Resilience
It’s not just about adding more servers. Enterprises need to rethink how their infrastructure is architected—resilience, scalability, and observability need to be built-in, not bolted on. AI will touch every part of the business, and infrastructure must be ready to support those interactions—across clouds, across regions, in real time.
Data is the lifeblood of AI, and its importance over the next few years cannot be overstated. Computing power is expected to grow with the construction of new data centers, and with it the amount of data enterprises can use to drive better business outcomes. For engineering and IT leaders, the implications are clear: prioritize testing infrastructure beyond current workloads. Run simulations that reflect where your data strategy is going, not where it’s been.
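One minimal way to start testing beyond current workloads is to benchmark ingest at today's volume and again at a projected future volume. The sketch below is illustrative only: it uses an in-memory SQLite table as a stand-in for a real datastore, and the 100,000-row daily volume is a hypothetical figure, scaled by the article's ~20 percent annual growth rate over five years.

```python
import sqlite3
import time

def ingest_benchmark(rows: int) -> float:
    """Time a bulk insert of `rows` synthetic records into an
    in-memory SQLite table (a stand-in for a real datastore)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
    start = time.perf_counter()
    conn.executemany(
        "INSERT INTO events VALUES (?, ?)",
        ((i, f"event-{i}") for i in range(rows)),
    )
    conn.commit()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

current_daily_rows = 100_000  # hypothetical present-day volume
projected_rows = int(current_daily_rows * 1.2 ** 5)  # ~20%/yr for 5 years

for label, n in [("today", current_daily_rows), ("year 5", projected_rows)]:
    print(f"{label}: {n:,} rows ingested in {ingest_benchmark(n):.3f}s")
```

The point is not the absolute numbers but the comparison: if ingest time grows faster than linearly between the two runs, the system will not absorb projected volumes gracefully, and that gap is what capacity planning should address now rather than at year five.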
We’re entering a phase where AI isn’t just a layer on top of existing platforms—it’s driving the architecture decisions themselves. And the businesses that get this right will be the ones that use AI not just to improve processes, but to differentiate strategically through data, talent, and agility.
The message is simple: Infrastructure is the foundation. If we want to support the AI systems of tomorrow, we need to invest in and stress-test that foundation today.