Vercel Sandbox benchmarks sub-500 ms node -v cold starts
Vercel says Sandbox is now the fastest microVM-based runtime, with fresh node -v cold starts largely under 500 ms after a month of tuning. The update also moves persistent sandboxes into beta and expands plans for a programmable firewall, so teams should re-check runtime and security settings.

TL;DR
- Guillermo Rauch's team update said Vercel Sandbox now ranks as the fastest microVM-based sandbox, while the live ComputeSDK leaderboard currently shows Vercel at 0.38s median TTI, behind only Daytona's 0.10s.
- Tom Lienard's benchmark note, shared via Cramforce, said the team cut more than 2.2 seconds from a fresh node -v run in about a month, bringing cold starts to largely sub-500 ms.
- Alongside the speed push, Cramforce flagged two roadmap items: persistent sandboxes are already in beta, and the mutable firewall is headed toward fully programmable policies.
- The official persistent sandboxes beta post says stopped sandboxes now auto-snapshot and resume from saved state, while a customer note from Zeeg points to Sandbox plus Egress as a production isolation setup for Slackbot sessions.
You can check the live provider leaderboard, drill into the Vercel benchmark page, read Vercel's persistent sandbox beta writeup, and browse the firewall docs. The weird bit is how much of the story is now about state and network control, not just startup time.
Benchmarks
The screenshot in Cramforce's post lines up with the live ComputeSDK leaderboard, which measures Time to Interactive as the span from compute.sandbox.create() to the first successful runCommand(). On the current board, Vercel posts 0.38s median TTI, 0.46s P95, 0.50s P99, and 100% success.
That is a sharp move from the older Vercel profile snapshot Exa surfaced during research, which still showed 1.73s median sequential TTI. The live benchmark page now has Vercel second overall, ahead of E2B, Blaxel, and the rest, with only Daytona still faster.
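As a quick sanity check on how percentile figures like those work, here is a minimal TypeScript sketch. The sample values and the nearest-rank percentile method are illustrative assumptions, not the leaderboard's actual dataset or aggregation.

```typescript
// Nearest-rank percentile over a list of TTI samples (seconds).
// Sample data and method are assumptions for illustration, not the
// ComputeSDK leaderboard's real measurements.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// 20 hypothetical TTI samples, sorted for readability.
const ttis = [
  0.30, 0.31, 0.32, 0.33, 0.34, 0.35, 0.36, 0.37, 0.37, 0.38,
  0.39, 0.40, 0.40, 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.50,
];

console.log(percentile(ttis, 50)); // 0.38 (median)
console.log(percentile(ttis, 95)); // 0.46
console.log(percentile(ttis, 99)); // 0.5
```

With this toy dataset the three figures happen to line up with the numbers on the board, which is just to show how a median can sit well below the tail.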
Cold starts
According to Lienard's update, the team shaved more than 2.2 seconds off a fresh node -v run in roughly one month. That matches the benchmark delta, where Vercel's live median TTI is now well under half a second.
Rauch's post frames the claim more narrowly than the screenshot does: fastest microVM-based sandbox, not fastest sandbox overall. That qualifier matters because Daytona still leads the broader ComputeSDK table.
Persistent sandboxes
Vercel had already shipped the feature into beta on March 26, per its official changelog. The model changed from disposable sessions plus manual snapshots to a durable sandbox identity defined by a name, filesystem state, and configuration.
The beta mechanics are simple:
- stopping a sandbox automatically snapshots the filesystem
- resuming boots a new session from that saved state
- the beta is available through @vercel/sandbox@beta or sandbox@beta
- the working with Sandbox docs position it as the default path for long-running work
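The stop-snapshot-resume flow above can be modeled as a tiny state machine. This is a conceptual sketch only, not the @vercel/sandbox API: the class name, states, and snapshot shape are all invented for illustration.

```typescript
// Conceptual model of the persistent-sandbox lifecycle from the beta
// notes: stopping snapshots the filesystem, resuming boots from it.
// All names here are illustrative; this is not the @vercel/sandbox API.
type State = "running" | "stopped";

class PersistentSandbox {
  state: State = "running";
  // Live filesystem, modeled as path -> contents.
  files = new Map<string, string>();
  private snapshot: Map<string, string> | null = null;

  constructor(public name: string) {}

  stop(): void {
    // Stopping automatically snapshots the filesystem.
    this.snapshot = new Map(this.files);
    this.state = "stopped";
  }

  resume(): void {
    // Resuming boots a new session from the saved state.
    if (this.snapshot) this.files = new Map(this.snapshot);
    this.state = "running";
  }
}

const sb = new PersistentSandbox("junior-session");
sb.files.set("/workspace/notes.txt", "hello");
sb.stop();
sb.files.clear(); // a fresh boot starts empty...
sb.resume();      // ...but resume restores the snapshot
console.log(sb.files.get("/workspace/notes.txt")); // "hello"
```

The point of the model is the identity shift the changelog describes: the durable thing is the named sandbox and its saved filesystem, not any individual session.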
Firewall controls
The other roadmap item in Cramforce's post is a fully programmable firewall. Vercel's current firewall docs already expose runtime-updatable egress policies, and the February advanced filtering changelog added SNI filtering plus CIDR blocks.
The documented policy model already has three modes:
- allow-all, the default unrestricted internet policy
- deny-all, which blocks all outbound access
- allowlists, which can be changed at runtime without restarting the sandbox
Vercel's own examples are aimed squarely at agent workloads: fetch data with open internet, then lock the sandbox down before running untrusted code; let code reach specific package sources or buckets; keep credentials usable without exposing them to arbitrary outbound calls.
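To make the three-mode model concrete, here is a hedged TypeScript sketch of the decision logic a runtime-updatable egress policy implies. The policy shape and matching rules are assumptions for illustration; the real firewall matches on SNI and CIDR ranges, which this toy version reduces to hostname suffixes.

```typescript
// Toy egress-policy evaluator mirroring the documented three modes:
// allow-all, deny-all, and a runtime-mutable allowlist. The policy
// object and suffix matching are illustrative assumptions, not the
// firewall's actual data model.
type EgressPolicy =
  | { mode: "allow-all" }
  | { mode: "deny-all" }
  | { mode: "allowlist"; hosts: string[] };

function isAllowed(policy: EgressPolicy, host: string): boolean {
  switch (policy.mode) {
    case "allow-all":
      return true;
    case "deny-all":
      return false;
    case "allowlist":
      // Match exact hosts or subdomains of an allowlisted host.
      return policy.hosts.some(
        (h) => host === h || host.endsWith("." + h)
      );
  }
}

// Agent-style flow from the docs: fetch with open internet,
// then lock the sandbox down before running untrusted code.
let policy: EgressPolicy = { mode: "allow-all" };
console.log(isAllowed(policy, "example.com")); // true

policy = { mode: "allowlist", hosts: ["registry.npmjs.org"] };
console.log(isAllowed(policy, "registry.npmjs.org")); // true
console.log(isAllowed(policy, "evil.example"));       // false
```

The design point is that the policy is data, swapped at runtime without restarting the sandbox, which is what makes the fetch-then-lock-down pattern cheap.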
Egress in production
The cleanest production datapoint in the evidence pool is Zeeg's note that Sandbox plus Egress is already being used to isolate every session for a Slackbot called Junior. It is a small detail, but it is more concrete than benchmark chest-thumping, because it shows the network controls landing in an actual agent workflow.