CopilotKit shipped hooks that let agents inspect app state and call frontend actions, then paired them with Shadify for ShadCN-based UI composition. Together, they give embedded agents a cleaner path from chat to in-app behavior.

CopilotKit's launch thread frames the release around a common limitation in agent SDKs: "Most Agents can only chat" and "can't read your UI or do anything in your app." The two new hooks split that problem in half: useAgentContext, documented in the hook reference, is presented as the visibility layer, letting an agent "see" UI state; useFrontendTool, documented in its own reference, is the action layer, letting it "act" inside the app.
That split matters for implementation because it gives developers a cleaner boundary between observation and invocation. Rather than forcing an agent to infer app state from text or bounce everything through backend tools, CopilotKit is explicitly exposing frontend context and frontend actions as separate primitives, with the docs post claiming both are "simple and ready in minutes." An attached demo video of a UI-aware agent shows the intended workflow, moving from code to a browser app where the UI is assembled interactively.
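To make the observe/act boundary concrete, here is a minimal TypeScript model of that split. This is not the CopilotKit API: the `AgentBridge` class, method names, and signatures are all illustrative assumptions; the real hook signatures live in the useAgentContext and useFrontendTool references linked below.

```typescript
// Illustrative model of the "see" vs "act" split -- NOT the CopilotKit API.
// "publishContext" stands in for the visibility layer (useAgentContext),
// "registerTool"/"invoke" stand in for the action layer (useFrontendTool).

type AgentContext = Record<string, unknown>;
type FrontendTool = (args: Record<string, unknown>) => string;

class AgentBridge {
  private context: AgentContext = {};
  private tools = new Map<string, FrontendTool>();

  // Observation layer: the app exposes a piece of UI state under a key.
  publishContext(key: string, value: unknown): void {
    this.context[key] = value;
  }

  // Action layer: the app registers a named action the agent may invoke.
  registerTool(name: string, tool: FrontendTool): void {
    this.tools.set(name, tool);
  }

  // What the agent "sees" when it inspects the app.
  snapshot(): AgentContext {
    return { ...this.context };
  }

  // How the agent "acts" inside the app.
  invoke(name: string, args: Record<string, unknown>): string {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    return tool(args);
  }
}

// Usage: the UI publishes its theme; the agent reads it, then flips it.
const bridge = new AgentBridge();
bridge.publishContext("theme", "light");
bridge.registerTool("setTheme", (args) => {
  bridge.publishContext("theme", args.theme);
  return `theme set to ${String(args.theme)}`;
});

console.log(bridge.snapshot().theme);                      // light
console.log(bridge.invoke("setTheme", { theme: "dark" })); // theme set to dark
console.log(bridge.snapshot().theme);                      // dark
```

The point of the sketch is the boundary itself: observation is a read-only snapshot the agent can consult, while invocation goes through explicitly registered, named actions, rather than the agent inferring state from chat text or routing every action through a backend tool.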
CopilotKit is not shipping the hooks in isolation. The Shadify announcement describes "Generative UI built on ShadCN" where developers "describe a UI" and let a LangChain agent compose from ShadCN components. That turns the new hooks into part of a broader loop: the agent can inspect the current interface, call frontend-side capabilities, and then generate or update visible UI from an existing component system.
The supporting repost of Ata's demo shows the same core behavior, which suggests Shadify is the showcase implementation for these primitives rather than a separate product line. The extra context from Mike Ryan's post is useful because it describes the outcome more concretely: an agent can "stream back a user interface from your components." For engineers building in-app copilots, that is the technical shift here: CopilotKit is moving from chat orchestration toward UI-aware, component-level agent interactions inside the frontend.
Most Agents can only chat 🥀 They can't read your UI or do anything in your app. useAgentContext + useFrontendTool fixes that. One lets your agent see. The other lets it act. Simple and ready in minutes 👇
👀 useAgentContext: docs.copilotkit.ai/reference/v2/h… 🔨 useFrontendTool: docs.copilotkit.ai/reference/v2/h…
✨Introducing Shadify: Generative UI built on ShadCN Describe a UI and allow your @LangChain agent to compose from @ShadCN on the fly, using AG-UI. Then export it as React code. It's open-source: github.com/tylerslaton/sh…
Introducing Shadify: Generative UI built on ShadCN Simply describe a UI and watch your @LangChain agent compose from @ShadCN on the fly, using AG-UI and @CopilotKit. Then export it as React code. Repo below. Try it out here: shadify.copilotkit.ai