CopilotKit launches Open Generative UI with openGenerativeUI: true
CopilotKit open-sourced Open Generative UI, a flag that lets agents stream interactive UI components directly into chat. The release packages a concrete alternative to raw-code UI generation into a reusable dev toolkit.

TL;DR
- CopilotKit says its new Open Generative UI feature lets agents stream interactive UI into chat, and the setup is a single openGenerativeUI: true flag, according to CopilotKit's launch post.
- The launch shipped with a public live demo and an open-source GitHub repo, both linked in CopilotKit's demo post and CopilotKit's repo post.
- CopilotKit framed the release as an open implementation of the same interaction pattern Anthropic had just popularized; as CopilotKit's Claude reply put it, "We open-sourced it for you."
- The company is also turning the launch into a developer push, with a DeepLearning.AI course waitlist in CopilotKit's course link and a community demo fest that CopilotKit's results post said drew 200-plus registrations.
You can try the live build, browse the repo, and watch CopilotKit's short launch clips in the main announcement and an earlier teaser. There is also a DeepLearning.AI course page tied to the same theme, and the demo fest wrap-up gives a quick read on how hard CopilotKit is pushing generative UI as a category.
The flag
CopilotKit's core claim is simple: instead of having an agent emit raw text or code snippets, Open Generative UI lets it return live UI components inside the chat surface. The launch post says developers enable it with openGenerativeUI: true, then stream the generated interface back into the conversation.
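To make the claim concrete, here is a hedged TypeScript sketch of the text-versus-component branching the flag implies. Only the flag name openGenerativeUI: true comes from CopilotKit's launch post; the types and the replyFor helper below are hypothetical, not CopilotKit's actual API.

```typescript
// Hypothetical sketch: only the `openGenerativeUI: true` flag name is
// confirmed by CopilotKit's launch post. The shapes below illustrate the
// idea of an agent returning a UI component descriptor instead of text.

type AgentReply =
  | { kind: "text"; content: string }
  | { kind: "component"; name: string; props: Record<string, unknown> };

interface ChatConfig {
  openGenerativeUI?: boolean;
}

// Hypothetical helper: with the flag on, the agent streams a component
// descriptor the chat surface can render; with it off, plain text.
function replyFor(config: ChatConfig): AgentReply {
  if (config.openGenerativeUI) {
    return { kind: "component", name: "BookingForm", props: { seats: 2 } };
  }
  return { kind: "text", content: "Here is your booking summary." };
}

console.log(replyFor({ openGenerativeUI: true }).kind); // "component"
console.log(replyFor({}).kind); // "text"
```

The point of the pattern is that the renderer switches on the reply's kind, so the same chat surface can display either a message bubble or a live form.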
That makes this feel more like packaging than research theater. CopilotKit's reply to Claude explicitly positioned it as an open-source answer to the new Claude-style UI pattern, and CopilotKit's earlier teaser said developers could use the same approach in their own apps.
Live demos
The public launch path has three pieces:
- A hosted live demo, linked by CopilotKit's demo post
- An open-source OpenGenerativeUI repository, linked by CopilotKit's repo post
- A broader open-source stack reference, where CopilotKit's earlier thread said the experience was built with CopilotKit and AG-UI
The product pitch is consistent across the posts: agents should be able to return forms, charts, and app-like controls instead of plain text. DeepLearning.AI's course announcement used almost the same language, describing agents that "return forms, charts, and interactive UI."
Demo fest
CopilotKit did not just ship code. It wrapped the launch in a small community campaign around a Generative UI Demo Fest.
According to CopilotKit's registration post, the format was three-minute demos judged by peers. Later, CopilotKit's results post said the event drew more than 200 registrations, 12 presenters, and named Alphform AI as the winner, with KAEN and the CopilotKit Playground tied for second.
That is a useful extra datapoint for where this launch sits. CopilotKit is trying to turn generative UI from a one-off demo pattern into a recognizable builder ecosystem, with code, examples, and now a training and showcase loop around it.