AI Primer

CopilotKit releases A2UI v0.9 with AG-UI support and npx create flow

CopilotKit released A2UI v0.9 for declarative generative UI, where agents emit JSON and frontends render from a component catalog. The update adds AG-UI support, live incremental rendering, and a shared web core across React, Angular, Flutter, and Lit.


TL;DR

  • CopilotKit shipped A2UI v0.9 as a declarative generative UI layer where, per CopilotKit's launch post, agents emit JSON and the frontend renders from its own component catalog.
  • Day-one AG-UI support is the headline integration, and CopilotKit's feature list says v0.9 also adds live incremental rendering plus simpler transports across AG-UI, MCP, WebSockets, REST, and A2A.
  • In CopilotKit's blog post, the team says the big protocol shift is that A2UI v0.9 moves the UI schema into the system prompt, so agents no longer need separate structured-output plumbing to generate UI reliably.
  • Google's A2UI announcement frames the spec as a framework-agnostic way for local or remote agents to declare UI intent while clients keep control of rendering and design systems.

You can read CopilotKit's write-up, compare it with Google's launch post, and dig into the draft A2UI v0.9 spec plus the separate transports doc. The useful bit is the split of responsibilities: agents describe intent, clients render native components, and CopilotKit is trying to make that work with the AG-UI stack teams are already wiring up.

JSON instead of structured output

CopilotKit's own blog says A2UI v0.9 changes the setup model by putting the UI schema in the system prompt, not in a separate structured-output contract. That lines up with Google's description of A2UI as a standard for declaring UI intent rather than shipping pre-rendered UI from the model.
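A minimal sketch of what that setup shift looks like in practice: the UI schema travels inside the system prompt string itself, rather than being registered through a separate structured-output contract. The schema shape and prompt wording here are illustrative assumptions, not the real A2UI schema or prompt.

```typescript
// Illustrative only: a toy UI schema embedded directly in the system prompt.
// A2UI's actual schema and prompt text will differ.
const uiSchema = JSON.stringify({
  type: "object",
  properties: {
    type: { enum: ["card", "text", "button"] },
    props: { type: "object" },
    children: { type: "array" },
  },
});

// The schema rides along in the prompt, so no separate
// structured-output plumbing is needed on the agent side.
const systemPrompt = [
  "You are an agent that renders UI by emitting JSON only.",
  "Every UI message must validate against this schema:",
  uiSchema,
].join("\n");

console.log(systemPrompt.includes('"card"'));
```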

The practical split is simple: the agent sends JSON, the app maps that JSON onto its own component catalog. CopilotKit's follow-up post calls that the project's declarative generative UI path, distinct from more controlled or fully open approaches.
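That split of responsibilities can be sketched in a few lines. The message shape, node types, and catalog below are hypothetical stand-ins, not the actual A2UI v0.9 message format; the point is only that the agent emits pure data and the client decides how each node type renders.

```typescript
// Illustrative node shape: the agent describes intent, never markup.
type UINode = { type: string; props?: Record<string, unknown>; children?: UINode[] };

// The client's own component catalog: each entry maps a node type to output.
// A real app would map to React/Angular/Lit components instead of strings.
const catalog: Record<string, (node: UINode, children: string) => string> = {
  card: (_n, c) => `<section class="card">${c}</section>`,
  text: (n, _c) => `<p>${String(n.props?.value ?? "")}</p>`,
  button: (n, _c) => `<button>${String(n.props?.label ?? "")}</button>`,
};

function render(node: UINode): string {
  const renderFn = catalog[node.type];
  if (!renderFn) return ""; // unknown types are dropped: the client keeps control
  const children = (node.children ?? []).map(render).join("");
  return renderFn(node, children);
}

// A hypothetical agent-emitted message: JSON all the way down.
const message: UINode = {
  type: "card",
  children: [
    { type: "text", props: { value: "Deploy finished" } },
    { type: "button", props: { label: "View logs" } },
  ],
};

console.log(render(message));
```

Because the catalog lives in the app, the design system stays under the client's control, which matches Google's framing of agents declaring intent rather than shipping pre-rendered UI.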

Shared web core and transports

CopilotKit's v0.9 inventory is mostly plumbing:

  • bring your own components
  • live streaming for incremental rendering
  • a shared web-core library
  • renderer targets for React, Angular, Flutter, and Lit
  • transport options including AG-UI, MCP, WebSockets, REST, and A2A

The official A2UI transports page makes the same architectural bet explicit: A2UI is transport-agnostic, and the transport's job is just to move JSON messages from agent to renderer.
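That bet is easy to express as an interface: if a channel can deliver parsed JSON to a listener, it can carry A2UI messages. The interface and class names below are assumptions for illustration, not CopilotKit's API; the in-memory transport stands in for AG-UI, a WebSocket, or REST polling.

```typescript
// Illustrative transport contract: the transport's only job is moving
// JSON messages from the agent side to the renderer side.
type Listener = (message: unknown) => void;

interface Transport {
  send(message: unknown): void;
  subscribe(listener: Listener): void;
}

// A hypothetical in-memory transport, enough to show that the renderer
// never touches the wire format directly.
class LocalTransport implements Transport {
  private listeners: Listener[] = [];
  send(message: unknown): void {
    // Round-trip through JSON to mimic serialization over a real channel.
    const parsed = JSON.parse(JSON.stringify(message));
    this.listeners.forEach((l) => l(parsed));
  }
  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }
}

const transport: Transport = new LocalTransport();
const received: unknown[] = [];
transport.subscribe((m) => received.push(m)); // the renderer side
transport.send({ type: "text", props: { value: "hello" } }); // the agent side

console.log(JSON.stringify(received[0]));
```

Swapping `LocalTransport` for a WebSocket- or REST-backed implementation would leave the renderer code untouched, which is the architectural point the transports doc is making.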

The create flow ships now

The day-one adoption story is not just protocol support. CopilotKit's launch post includes a one-command bootstrap, npx copilotkit@latest create my-app --framework a2ui, and the embedded demo video shows that flow generating a starter app from the CLI.

CopilotKit also says it is working with Google on the rollout, while the CopilotKit repository pitches the project as the team behind AG-UI and a frontend stack for agent-native apps. For teams already using AG-UI, that makes A2UI v0.9 look less like a fresh protocol bet and more like a renderer layer they can slot into the stack they already have.

🧾 More sources

TL;DR (1 tweet)
Core launch facts and the official framing from CopilotKit's launch thread.

JSON instead of structured output (1 tweet)
Explains the declarative model and the shift to schema-in-prompt behavior.