Is CoreWeave AI's Global Crossing?
Neoclouds like CoreWeave echo the telecom new entrants of the late 1990s and early 2000s that scaled ahead of demand, raising concerns about the quality of AI compute infrastructure demand.
The risks inherent in CoreWeave's business model have been widely discussed. The value of its core asset—GPU compute capacity—is subject to rapid depreciation driven by Nvidia's aggressive annual release cycle of increasingly advanced and powerful chips. Furthermore, CoreWeave faces extreme revenue concentration, with its top two customers accounting for 77% of its 2024 revenue—Microsoft alone making up 62%.
While valid, concerns about CoreWeave's business model are secondary to a broader and more pressing industry risk: the quality of AI compute infrastructure demand. CoreWeave and other neoclouds (emerging AI-focused cloud providers) are aggressively building out data center capacity in anticipation of future demand, creating a speculative dynamic highly exposed to the cyclical risk of oversupply in AI compute capacity. Another red flag is the circular nature of some demand sources. Nvidia, for instance, not only supplies GPUs to CoreWeave but is also both an investor in and a customer of the company, effectively generating demand for its own product. That circularity raises questions about the true sustainability of the current AI infrastructure spending growth trajectory.
This quality-of-demand concern adds a further cyclical risk for Nvidia, alongside the significant risk to its China revenue. That China risk, outlined in my post Nvidia Navigating Elevated Expectations, stems from the upcoming U.S. export restrictions under the Framework for the Diffusion of Advanced Artificial Intelligence Technology, with compliance required by May 15, 2025.
CoreWeave and Global Crossing - Different Cycle, Similar Story
While comparisons between today's buildout of AI compute infrastructure and the late-1990s/early-2000s telecom fiber expansion are common, the company-level parallels are even more striking. The comparison between CoreWeave and Global Crossing—a telecom upstart that rode the dot-com wave and ultimately filed for bankruptcy—is particularly noteworthy.
Global Crossing's and CoreWeave's risks are quite similar. Both companies built ahead of demand, required immense upfront capital investment, and bet on being indispensable to fast-growing ecosystems—AI compute in CoreWeave's case, internet data in Global Crossing's. And both faced (or face) existential risks tied to timing: should demand growth slow, competition intensify, or pricing compress, the economics can unravel quickly. Global Crossing suffered as the market overbuilt capacity in a commoditizing broadband environment. CoreWeave could face similar pressure if AI compute outsourcing needs shrink or AI demand growth fails to keep pace with current expectations.
It's notable that both were founded and led by industry outsiders. Global Crossing's founder, Gary Winnick, came from a Wall Street background, while CoreWeave's CEO had previously worked as an energy sector investor and portfolio manager before entering the cryptocurrency mining business and later pivoting to AI infrastructure services. These were financial operators bringing a capital markets mindset to highly technical markets.
Both companies' strategies focused less on innovation and more on capitalizing on a secular wave of demand: Global Crossing on internet bandwidth, CoreWeave on AI compute capacity. Although each promoted itself as a critical enabler of the next era of technology, aggressively scaling capital-intensive infrastructure underpinned by optimistic market forecasts, both underlying services share a commodity-like trait: limited pricing power in a market environment of surplus capacity.
One additional key parallel lies in their debt-fueled expansion strategies. Global Crossing relied heavily on debt to build out its global network, capitalizing on bullish investor sentiment and easy credit. Similarly, the rapid rise of CoreWeave and other neoclouds has been bankrolled by debt and structured financing—most notably, multibillion-dollar GPU-backed loans and equity commitments from firms like Blackstone. As of year-end 2024, CoreWeave held $7.9 billion in total debt and carried $15 billion in lease obligations. In both cases, largely speculative capacity buildouts were driven by projected demand—an inherently risky approach for businesses with highly leveraged capital structures.
Unpacking CoreWeave’s Revenue Visibility
Although CoreWeave's S-1 filing emphasizes value-add opportunities such as optimizing Model FLOPS (floating point operations per second) utilization, a metric for how efficiently a GPU is actually being used, the company is, at its core, selling outsourced AI compute capacity. While technically complex, this is a business that exhibits commodity-like traits. Differentiation exists, but price remains a primary competitive factor.
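For readers unfamiliar with the metric, Model FLOPS Utilization (MFU) is generally computed as the model's achieved floating-point work divided by the hardware's theoretical peak throughput. The sketch below is a minimal illustration, assuming the common approximation of roughly 6 FLOPs per parameter per training token; the model size, token throughput, and per-GPU peak rating are hypothetical figures chosen for the example, not numbers from CoreWeave's S-1.

```python
# Minimal sketch of Model FLOPS Utilization (MFU) for a transformer training run.
# Assumes ~6 FLOPs per parameter per token (forward + backward pass); all inputs
# below are illustrative, not drawn from CoreWeave's filings.

def model_flops_utilization(params: float,
                            tokens_per_second: float,
                            num_gpus: int,
                            peak_flops_per_gpu: float) -> float:
    """Achieved model FLOPs per second as a fraction of the cluster's theoretical peak."""
    achieved_flops = 6 * params * tokens_per_second   # useful work per second
    peak_flops = num_gpus * peak_flops_per_gpu        # aggregate hardware ceiling
    return achieved_flops / peak_flops

# Hypothetical example: a 70B-parameter model on 1,024 GPUs rated at 1e15 FLOPS each.
mfu = model_flops_utilization(params=70e9,
                              tokens_per_second=1.5e6,
                              num_gpus=1024,
                              peak_flops_per_gpu=1e15)
print(f"MFU: {mfu:.1%}")   # ~61.5% with these illustrative inputs
```

The relevance here is that higher MFU lets a provider extract more billable work from the same GPU fleet, which is the kind of operational value-add the S-1 highlights; it does not change the commodity-like nature of the underlying service.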
Furthermore, the company's revenue visibility may be overstated. CoreWeave's S-1 highlights that “We generate substantially all of our revenue from committed long-term contracts,” which accounted for 88% and 96% of revenue in 2023 and 2024, respectively, with the balance from “on-demand” or spot revenue. The company also states that “committed contracts generally have a fixed price for their duration.”
While CoreWeave cites long-term commitments from Microsoft and OpenAI as foundational to its growth outlook and revenue visibility, pointing to its $15.1 billion in remaining performance obligations as of December 2024, the precise terms of these contracts remain unclear.
These revenue streams, which comprise the majority of CoreWeave’s remaining performance obligations, may not be as secure as they appear. It’s unlikely that Microsoft would lock itself into a rigid take-or-pay structure with a relatively new AI data center operator lacking negotiating leverage. This concern has been reinforced by a report that Microsoft has backed out of parts of its agreements with CoreWeave due to delivery issues and missed deadlines.
Adding to the uncertainty, recent reports suggest Microsoft is scaling back its capex plans, while OpenAI is aggressively building out its own AI infrastructure through its Stargate JV with SoftBank, further highlighting the risk that future demand from CoreWeave's two largest customers could fall short of expectations.
Quality of Demand Risk
While the largest cloud providers dominate AI infrastructure spending, neocloud players now account for a meaningful share of demand for Nvidia GPUs and AI compute systems. This cohort of companies includes reinvented crypto miners pivoting into AI, smaller cloud players shifting to AI services, and early-stage startups chasing generative AI growth.
Fueled by a wave of equity and debt financing—often collateralized by chips and AI systems—many neoclouds are building out compute capacity ahead of realized demand. This speculative build-out leaves them especially vulnerable to oversupply risks.
There's also a concern about the quality of demand: not just demand from GPU buyers, but demand from AI data center customers. Much of it is driven by VC-backed, high-cash-burn startups rather than mature, revenue-generating enterprises, raising questions about its durability. Nvidia CEO Jensen Huang acknowledged this dynamic in the company's August 2024 earnings call, noting: “The number of generative AI startups is generating tens of billions of dollars of cloud renting opportunities for our cloud partners.”
Ripple Effects from Surplus AI Compute Capacity
Nvidia remains the valuation umbrella and sentiment anchor for AI-themed stocks. Heading into the second half of 2025, any decline in Nvidia's revenue expectations will send ripple effects across the broader AI ecosystem. In a surplus AI compute capacity environment, neocloud players—which have built capacity ahead of demand—will be the first to feel the impact. Among hardware suppliers, Dell and Supermicro (SMCI) are particularly exposed, as CoreWeave and other neoclouds have been key growth drivers for their AI server sales. A pullback from these speculative buyers would not only pressure revenue estimates but also likely trigger multiple compression.
For cloud players like Amazon, Meta, and Google, a slowdown in AI infrastructure spending could be a net positive. If reduced capex aligns with accelerating AI application adoption, improved free cash flow and diminished investor concerns over overspending could catalyze a rebound in these stocks, reversing recent underperformance.
However, a slowdown in AI compute demand would hit CoreWeave directly—pressuring pricing and utilization. Given its debt-heavy capital structure and speculative expansion, CoreWeave could increasingly resemble Global Crossing: a cautionary tale of overestimated demand and financial overreach.