Glif adds single-agent storyboard-to-Seedance animation from chat prompts
Glif users showed a chat agent generating GPT-Image-2 storyboards and passing them straight into Seedance 2 for anime shorts. The flow collapses storyboard prep and animation into one conversation, but it still leans on seeded references and explicit prompt setup.

TL;DR
- venturetwins' demo post shows Glif handling a storyboard-to-video run inside one chat, generating panels with GPT-Image-2 and then animating them with Seedance 2.
- In the main demo, the output is an anime short, and the attached screen recording shows the workflow moving from prompt to storyboard to finished clip.
- venturetwins' follow-up says the agent was primed with a screenshot of another creator's workflow, which suggests the chat can inherit a format before it starts generating.
- According to the follow-up post, users can also upload an image or video and have the agent study and replicate it, pushing the tool closer to reference-driven video generation.
Glif turning a chat prompt into storyboard panels and a finished anime clip
You can watch venturetwins' demo collapse storyboarding and animation into one conversation, then open the follow-up to see the setup trick: the agent was prepped with a screenshot of an earlier workflow example. The same post also claims Glif can study uploaded images or video references and replicate them, which is the more interesting creative control hook here.
Single-agent chat
The core pitch is simple: one chat agent handles both phases. In venturetwins' post, the user asks for an anime short, Glif generates a storyboard with GPT-Image-2, then passes that storyboard into Seedance 2 for animation.
That removes the usual handoff between image prompting and video prompting. The screen recording makes the appeal obvious, because the whole run stays inside a single conversational interface.
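The two-phase flow described above can be sketched as a simple chained pipeline. This is a hypothetical illustration only: `generate_storyboard`, `animate`, and `run_agent` are stand-in names, not real Glif, GPT-Image-2, or Seedance APIs.

```python
# Hypothetical sketch of the single-agent handoff: one entry point
# chains the image-generation phase into the video-generation phase,
# so the user never manually moves assets between tools.

def generate_storyboard(prompt: str, panels: int = 4) -> list[str]:
    """Phase 1 (stand-in for GPT-Image-2): one reference per panel."""
    return [f"panel-{i}:{prompt}" for i in range(panels)]

def animate(storyboard: list[str], style: str) -> str:
    """Phase 2 (stand-in for Seedance 2): animate the full storyboard."""
    return f"{style}-clip[{len(storyboard)} panels]"

def run_agent(prompt: str, style: str = "anime") -> str:
    # The agent chains both phases inside a single conversation turn.
    return animate(generate_storyboard(prompt), style)
```

The point of the sketch is the shape of the workflow, not the models: the storyboard output feeds directly into the animation step, which is the handoff the chat interface hides.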
Storyboard handoff
The follow-up adds the missing workflow detail. According to venturetwins' follow-up, the agent was primed by dropping in a screenshot of another post showing the same storyboard-to-animation pattern.
The screenshot shows the agent loading a "Boot Session And Check The Latest" skill and a "Seedance 2 Prompting" skill before asking what subject and style the user wants. That suggests the one-chat flow still depends on some explicit prompt scaffolding under the hood.
Reference uploads
The most concrete new capability in the thread is reference conditioning. venturetwins' follow-up says users can upload an image or video, have the agent analyze it, and ask for a replicated result.
That changes the story from simple prompt chaining to style and motion transfer by example. fabianstelzer's repost helped amplify the main demo, but the real reveal is in the follow-up: Glif is framing this as a controllable workflow, not just a one-off anime short.
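The reference-conditioning step described in the follow-up can be sketched the same way. Again, this is purely illustrative: `extract_style` and `replicate` are hypothetical stand-ins, not documented Glif functions.

```python
# Hypothetical sketch of reference-driven generation: an uploaded
# image or video is analyzed first, and the result conditions the
# new generation instead of a from-scratch text prompt.

def extract_style(reference: str) -> dict:
    """Stand-in analysis pass over an uploaded image/video reference."""
    return {"source": reference, "traits": ["palette", "motion", "framing"]}

def replicate(reference: str, subject: str) -> str:
    style = extract_style(reference)
    # The new request carries the analyzed reference as a constraint.
    return f"{subject} in the style of {style['source']}"
```

The difference from the first sketch is the input: generation is steered by an analyzed example rather than by prompt text alone, which is what moves this from prompt chaining toward style and motion transfer.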