Cursor releases SDK for CI/CD, local or cloud agents, and starter apps
Cursor shipped a TypeScript SDK that exposes its runtime, harness, and models for CI/CD jobs, background automations, and embedded agents. The launch lets teams treat Cursor as programmable agent infrastructure, though it still depends on Cursor API access.

TL;DR
- Cursor shipped a public beta TypeScript SDK that exposes the same runtime, harness, and models used by its desktop app, CLI, and web app, according to cursor_ai's launch post.
- The SDK can run agents against a local working directory or Cursor-managed cloud VMs, with cursor_ai's runtime note and the official changelog entry both framing local and cloud as first-class modes.
- Cursor bundled real starter code on day one: cursor_ai's cookbook post points to a coding-agent CLI, a prototyping tool, and an agent-kanban app in the cookbook repo.
- The interesting catch is auth. As cramforce's reply noted and ericzakariasson's clarification confirmed, even local runs still connect to the Cursor API rather than operating as a fully disconnected harness.
- Cursor is already pitching this as infra for teams, not just an IDE feature. cursor_ai's customer thread names Rippling, Notion, C3 AI, and Faire, while nicoalbanese10's provider post shows the community immediately wrapping it for the AI SDK ecosystem.
You can read the official launch post, browse the cookbook examples, and even inspect an unofficial AI SDK community provider that appeared a few hours later. The launch also quietly spells out a more ambitious stack than a plain model wrapper: the blog says SDK agents inherit MCP connectivity, skills loading, hooks, and subagents, while ericzakariasson's demo post shows the same cloud-computer setup already being used to record demos.
What shipped
Cursor's official line is simple: programmatic access to the same agent that already powers Cursor surfaces, exposed through @cursor/sdk in public beta, per the launch post and the changelog.
The key positioning from cursor_ai's launch post is that this is meant for CI/CD jobs, end-to-end automations, and embedding agents inside products, not just scripting the editor.
The public beta package centers on a small TypeScript API. The changelog example shows Agent.create, a model id such as composer-2, and a streamed run loop over run.stream().
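Based only on the identifiers the changelog quotes (Agent.create, the composer-2 model id, and run.stream()), the run loop presumably looks something like the sketch below. The event shape and option names are assumptions, and a local stub stands in for the real @cursor/sdk so the loop itself is self-contained:

```typescript
// Sketch of the changelog's run-loop shape. Only Agent.create,
// "composer-2", and run.stream() are quoted from the changelog;
// the event type below is an assumed stand-in, not the SDK's real schema.

type AgentEvent =
  | { type: "text_delta"; text: string }
  | { type: "done" };

// Stub mirroring the assumed shape of agent.run(...).stream().
async function* stream(): AsyncGenerator<AgentEvent> {
  yield { type: "text_delta", text: "Refactoring " };
  yield { type: "text_delta", text: "complete." };
  yield { type: "done" };
}

async function main(): Promise<string> {
  // Real usage, per the changelog, would look roughly like:
  //   const agent = await Agent.create({ model: "composer-2" /* ... */ });
  //   for await (const event of agent.run("...").stream()) { ... }
  let output = "";
  for await (const event of stream()) {
    if (event.type === "text_delta") output += event.text;
  }
  return output;
}

main().then((out) => console.log(out)); // prints "Refactoring complete."
```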
Local and cloud
The runtime split is one of the most concrete parts of the launch. cursor_ai's runtime note says SDK agents can run locally or in Cursor's cloud, and the official blog post adds that cloud runs get a dedicated VM and configured development environment.
According to the launch post, cloud agents can keep running through interruptions and can do repo-native work like opening PRs, pushing branches, and attaching demos. The same post describes local mode as the fast iteration path, while cloud, self-hosted workers, and the Agents Window share the same underlying runtime.
That local versus cloud handoff is already visible in product usage. In ericzakariasson's demo post, agents record demos of their work, and in ericzakariasson's cloud-computer reply he says those demos are recorded on the cloud computer, at least for now.
Harness features
The bigger story is that Cursor is productizing its harness, not only its model access. The launch post says SDK agents inherit codebase indexing, semantic search, context management, MCP server support, automatic skill loading from .cursor/skills/, hooks from .cursor/hooks.json, and subagents.
That is a more opinionated stack than the usual "bring a model, wire your own loop" SDK launch. ryolu_'s summary called it a multi-model harness on local and cloud, which lands closer to the actual announcement than treating this as a thin API wrapper.
Cursor also has outside evidence that its harness matters on its own. In altryne's WolfBench post, WolfBenchAI claimed Cursor was the strongest harness they had tested so far, even before the SDK release.
Starter apps
Cursor did not ship the SDK as documentation alone. cursor_ai's cookbook post open-sourced three starter projects on day one, and the cookbook repo adds a quickstart plus runnable examples.
The starter set breaks down cleanly:
- coding-agent-cli: a terminal entry point for spawning agents from scripts or shells, also visible in the existing CLI UX screenshot from nummanali's CLI post
- app-builder: a prototyping app for scaffolding and iterating on projects in a sandboxed cloud environment, per the cookbook repo
- agent-kanban: a board view for grouping cloud agents by status, previewing artifacts, and launching new runs, as shown by ericzakariasson's kanban demo
- quickstart: a minimal Node example for creating a local agent, sending a prompt, and streaming events, according to the cookbook repo
The examples also make the launch feel less abstract. The kanban app is a dead giveaway that Cursor expects teams to manage many background runs at once, not just fire one agent per request.
API key boundary
The sharpest caveat surfaced in replies, not the hero copy. cramforce's reply pointed out that the SDK requires a Cursor API key, which makes it different from agent harnesses that only need model-provider credentials.
Eric Zakariasson from Cursor replied in ericzakariasson's clarification that the SDK works like the CLI or desktop app, in either local or cloud environments, while still connecting to the Cursor API. The cookbook repo matches that detail by telling users to fetch a key from the Cursor integrations dashboard and export CURSOR_API_KEY before running examples.
That means "local" here describes where the agent works, not a fully self-contained control plane. Kim Monismus spelled out the business implication in kimmonismus's platform-read, arguing that SDK usage turns Cursor's agent runtime into billable infrastructure rather than a seat-bound IDE feature.
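A minimal preflight check matching the cookbook's setup step might look like this; only the CURSOR_API_KEY variable name comes from the cookbook repo, and the helper itself is illustrative rather than part of @cursor/sdk:

```typescript
// Fail fast if CURSOR_API_KEY isn't set, per the cookbook's setup step.
// requireApiKey is an illustrative helper, not part of @cursor/sdk.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env["CURSOR_API_KEY"];
  if (!key) {
    throw new Error(
      "CURSOR_API_KEY is not set; create a key in the Cursor integrations dashboard"
    );
  }
  return key;
}

// Usage: const apiKey = requireApiKey(process.env);
```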
Early ecosystem
The first wave of reuse showed up almost immediately. In nicoalbanese10's provider post, Nico Albanese said it took only a few prompts to build a community provider for the AI SDK, and linked the cursor-ai-sdk-provider repo.
That repo matters because it adapts Cursor agents to the AI SDK's generateText and streamText interface, while still relying on @cursor/sdk underneath. Its README also notes a boundary: Cursor-native tools like MCP and subagents are not automatically forwarded through the AI SDK abstraction.
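The adapter's job can be pictured as mapping an agent's event stream onto the plain text stream that AI SDK streamText consumers expect. The types below are stand-ins for illustration, not the cursor-ai-sdk-provider's actual internals, which I have not verified:

```typescript
// Hypothetical adapter shape: flatten Cursor-style agent events into a
// text chunk stream. All names here are stand-ins, not the provider's API.

type CursorEvent = { type: "text_delta"; text: string } | { type: "done" };

async function* toTextStream(
  events: AsyncIterable<CursorEvent>
): AsyncGenerator<string> {
  for await (const event of events) {
    if (event.type === "text_delta") yield event.text;
  }
}

// Demo with a stubbed event stream.
async function* demoEvents(): AsyncGenerator<CursorEvent> {
  yield { type: "text_delta", text: "hello " };
  yield { type: "text_delta", text: "world" };
  yield { type: "done" };
}

async function collect(): Promise<string> {
  let text = "";
  for await (const chunk of toTextStream(demoEvents())) text += chunk;
  return text;
}

collect().then((t) => console.log(t)); // prints "hello world"
```

This kind of flattening is also why the README's caveat makes sense: Cursor-native signals like MCP tool calls have no slot in a plain text stream.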
Cursor's own customer list suggests why that wrapping work started so quickly. cursor_ai's customer thread says Rippling, Notion, C3 AI, and Faire are already using the SDK for background agents, ticket-to-PR flows, and self-healing codebases, which is a much larger target than editor automation.