Meta raises AI capacity with up to $27B Nebius infrastructure deal
Meta agreed to buy up to $27 billion of AI infrastructure from Nebius over five years, including $12 billion of dedicated capacity and optional overflow tied to Vera Rubin deployments. Plan for tighter next-generation GPU supply as hyperscalers lock in capacity years ahead of spot demand.

TL;DR
- Meta signed a five-year AI infrastructure deal with Nebius worth up to $27 billion, with $12 billion of dedicated capacity and up to $15 billion more available as demand grows, according to the deal summary.
- Nebius said the agreement is built on “one of the first large-scale deployments” of NVIDIA Vera Rubin systems across multiple locations — the clearest signal yet that next-generation GPU capacity is being reserved years ahead of general availability.
- For engineers planning large training or inference footprints, the practical change is not a new API but a tighter supply picture: Meta is locking in future compute while NVIDIA is already framing Vera Rubin as a major step-change platform in its GTC slides and Jensen Huang's remarks.
What exactly did Meta buy?
Meta's purchase is a capacity reservation, not a model launch. In Nebius's announcement, the companies describe a five-year agreement under which Nebius will provide $12 billion of dedicated capacity “across multiple locations,” with the infrastructure tied to early large-scale deployment of NVIDIA Vera Rubin.
A CNBC summary adds the commercial structure: Meta will spend up to $27 billion in total, split between the $12 billion committed block and as much as $15 billion of additional compute it can draw on later. That matters operationally because it gives Meta a guaranteed baseline supply plus overflow headroom, a pattern that looks closer to utility capacity planning than opportunistic GPU buying.
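The committed-plus-overflow structure reduces to simple arithmetic on the publicly reported figures. A minimal sketch, using only the $12B / $15B / five-year numbers from the reports — the drawdown function and even annual pacing are hypothetical illustrations, since no actual schedule has been disclosed:

```python
# Publicly reported deal figures (USD billions).
COMMITTED_B = 12   # dedicated capacity block
OVERFLOW_B = 15    # optional additional compute
YEARS = 5          # term of the agreement

# Hypothetical even pacing of the committed block; the real
# delivery schedule is not public.
baseline_per_year_b = COMMITTED_B / YEARS  # 2.4 per year

def total_spend_b(overflow_drawn_b: float) -> float:
    """Total spend if Meta draws a given amount of overflow.

    The overflow draw is clamped to [0, OVERFLOW_B]; the committed
    block is paid regardless, which is what makes it a floor.
    """
    drawn = min(max(overflow_drawn_b, 0.0), float(OVERFLOW_B))
    return COMMITTED_B + drawn

print(total_spend_b(0))    # 12.0  (floor: committed block only)
print(total_spend_b(15))   # 27.0  (ceiling: full overflow drawn)
print(total_spend_b(20))   # 27.0  (requests above 15B are capped)
```

The floor/ceiling shape — a fixed committed cost plus a capped variable draw — is the same pattern used in utility take-or-pay contracts, which is why the article compares it to utility capacity planning.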
Why this matters for AI infrastructure planning
The infrastructure angle is the real story. Nebius is tying the deal to Vera Rubin, and the GTC slide shows NVIDIA positioning that platform as a next-generation system architecture rather than a routine refresh. Even without full public deployment details here, “one of the first large-scale deployments” in Nebius's wording implies hyperscalers are reserving upcoming capacity before broad market availability.
That lines up with NVIDIA's own demand framing. In remarks from GTC, Jensen Huang pointed to “$1T+” of AI infrastructure growth through 2027, with the accompanying slide calling out inference as a major driver. The immediate takeaway for engineers is that future serving and training economics will be shaped not just by chip specs, but by who locked in supply earliest. Meta's Nebius agreement is a concrete example of that shift.