Seedance 2.0 adds ComfyUI video extension for broadcast-shot workflows
Creators shared repeatable Seedance 2.0 workflows for ComfyUI clip extension, GPT Image 2 shot planning, and fake-broadcast or iPhone footage. The examples push Seedance beyond isolated shorts into longer, more controllable production pipelines.

TL;DR
- hellorob's ComfyUI workflow turned Seedance 2.0 from a one-off clip generator into a continuation tool, with a handoff process that ComfyUI's workflow page says trims to the closest generated frame and then rebuilds the final clip with stitched audio.
- The strongest control pattern in this batch was storyboard first, render second: MayorKingAI's filmmaking thread used GPT Image 2 to build a production-plan sheet, while MayorKingAI's OpenArt Smart Shot thread showed the same idea productized as a shot-plan workflow.
- Fake-camera realism is getting weirdly specific. fabianstelzer's iPhone-footage example pushed an "authentic iPhone footage" look, while chrisfirst's ESPN-style breakdown and chrisfirst's body-cam prompt specified scorebugs, broadcast grain, chest-cam shake, and even how pants should fall.
- Seedance is also sliding into agent stacks instead of living as a standalone generator: techhalla's Agent One thread used it inside InVideo for extensions, while rainisto's MCP screenshot routed BeatBandit storyboards through Higgsfield and Seedance from Cursor.
- The broader pattern is less about prettier 15-second clips and more about repeatable production scaffolding. According to AIGCLIST's Seedance 2.0 overview, the model supports multimodal references, native audio, and multi-shot storytelling, which is exactly what these creator workflows are leaning on.
You can browse the OpenArt Smart Shot page, grab the ComfyUI extend-video workflow, and watch people use the same model for everything from fake ESPN cutaways to phone-shot military footage. The oddest reveal is how often the workflow starts outside Seedance itself: GPT Image 2 for planning, agent chat for orchestration, then Seedance for the motion pass.
ComfyUI extension
The clearest new workflow in this evidence set came from clip extension, not generation from scratch.
According to the workflow page, the process is:
- Load the source video and extract frames, FPS, and audio.
- Prompt Seedance 2.0 with what happens immediately after the last frame.
- Compare each generated continuation frame to the last real frame.
- Pick the darkest difference frame as the seam.
- Trim from that point, concatenate the old and new frames, then rebuild the video with audio.
That is a much more useful pattern than "generate another shot and hope." The workflow link post explicitly describes audio concat and seam selection as part of the graph, which is why this looks more like an editing primitive than a demo trick.
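As a rough sketch of the seam step, the snippet below picks the generated frame whose pixel difference from the last real frame is smallest (the "darkest" difference) and splices the continuation from that point. It assumes frames arrive as numpy arrays; this is illustrative, not the actual ComfyUI node graph.

```python
# Illustrative sketch of seam selection and splicing, not the real ComfyUI graph.
# Assumes all frames are numpy arrays of identical shape.
import numpy as np

def find_seam_index(last_real_frame, generated_frames):
    """Index of the generated frame whose difference from the last real frame is smallest."""
    diffs = [
        np.abs(f.astype(np.float32) - last_real_frame.astype(np.float32)).mean()
        for f in generated_frames
    ]
    return int(np.argmin(diffs))

def splice_extension(real_frames, generated_frames):
    """Keep every real frame, then continue from just after the seam in the generated clip."""
    seam = find_seam_index(real_frames[-1], generated_frames)
    return real_frames + generated_frames[seam + 1:]
```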
Shot plans
A lot of creators independently landed on the same structure: GPT Image 2 for preproduction, Seedance for motion.
MayorKingAI's manual version turned a prompt into a production sheet with:
- character references
- environment design
- floor plan
- storyboard panels
- palette
- lighting and mood notes
Then the Seedance follow-up prompt converted that sheet into a second-by-second timeline with named camera moves. OpenArt's Smart Shot productized the same move. On the Smart Shot page, the interface exposes scene description, character and environment references, sheet quality, and a two-step flow: preview the sheet first, then create the video.
The practical shift is simple: creators are replacing one giant cinematic prompt with a planning artifact and a render pass. The Flaming Mountains breakdown is basically a shot list disguised as a prompt.
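As a rough illustration of that planning-artifact-plus-render-pass shape, the sketch below flattens a structured shot plan into a single timeline prompt. The field names, example content, and prompt format are assumptions for illustration, not an official Seedance or Smart Shot schema.

```python
# Hypothetical shot-plan structure flattened into a second-by-second timeline prompt.
from dataclasses import dataclass

@dataclass
class Shot:
    start_s: int   # timeline start in seconds
    end_s: int     # timeline end in seconds
    camera: str    # named camera move, e.g. "slow dolly-in"
    action: str    # what happens on screen

def render_prompt(shots, style_notes):
    """Turn a shot list into one timeline-style prompt string."""
    lines = [f"{s.start_s}-{s.end_s}s: {s.camera}; {s.action}" for s in shots]
    return style_notes + "\n" + "\n".join(lines)

plan = [
    Shot(0, 4, "static wide establishing shot", "caravan crosses the burning ridge"),
    Shot(4, 9, "slow dolly-in on the lead character", "reaction beat, embers drift past the lens"),
]
print(render_prompt(plan, "Cinematic, warm palette, consistent character references."))
```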
Fake broadcasts
The most shareable Seedance work right now is not fantasy spectacle. It is camera grammar.
Fabian Stelzer's Glif example aimed for "authentic iPhone style video" and landed because the footage feels casually captured, not impeccably composed. Chris First's sports-broadcast workflow got even more literal. The image setup post first generated a still that looked like an ESPN crowd cutaway, then the Seedance prompt post constrained the video into one continuous take with:
- telephoto broadcast framing
- fixed scorebug and lower-third graphics
- subtle crowd-reaction timing
- announcer-style audio
- strict no-cut behavior
The body-cam version used the same playbook: chrisfirst's body-cam prompt specified FOV, rolling-shutter wobble, auto-exposure shifts, radio compression, and the exact action beats of a COPS-style arrest clip. The result is a reminder that realism now lives in format cues as much as in faces.
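A sketch of how those format cues can be bundled into a reusable prompt template; the constraint wording below is a paraphrase of the pattern, not either creator's actual prompt text.

```python
# Illustrative constraint bundle for a fake-broadcast look; wording is paraphrased,
# not the creator's actual prompt.
BROADCAST_FORMAT_CUES = [
    "telephoto broadcast framing, single continuous take, no cuts",
    "fixed scorebug and lower-third graphics that never move or flicker",
    "subtle crowd-reaction timing, announcer-style audio bed",
]

def build_prompt(scene, cues=BROADCAST_FORMAT_CUES):
    """Lead with the scene beat, then pin the format cues so realism comes from the camera grammar."""
    return scene.strip() + ". " + " ".join(c + "." for c in cues)

print(build_prompt("Crowd cutaway after a late go-ahead score"))
```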
Agent pipelines
A second pattern is Seedance showing up as the motion engine inside larger agent systems.
InVideo's Agent One thread broke the pipeline into chat orchestration, reference generation, logo and host asset creation, then Seedance-based extension to keep the show visually consistent. The thread shows the agent generating reference boards, reaction clips, and final extensions inside one notebook-like flow.
Rainisto's MCP setup pushes the idea further upstream. The Cursor screenshot shows Cursor calling BeatBandit for story structure and Higgsfield for Seedance rendering, with the agent rewriting prompts, attaching storyboard images, and submitting the job. That is less "AI video app" and more a composable production graph.
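As a rough sketch of that composable-graph idea, the stubs below chain story structure, prompt rewriting, and render submission into one pass. Every function here is a hypothetical stand-in for whatever tool calls the agent actually routes through MCP; none of them are real BeatBandit, Higgsfield, or Cursor APIs.

```python
# Hypothetical orchestration sketch: story structure -> prompt rewrite -> render job.
def get_story_beats(logline):
    """Placeholder for a story-structure tool returning ordered shot descriptions."""
    return [f"Beat {i + 1} of: {logline}" for i in range(3)]

def rewrite_as_video_prompt(beat, storyboard_path):
    """Placeholder for the agent rewriting a beat into a render-ready prompt."""
    return f"{beat}. Match composition of {storyboard_path}. One continuous shot."

def submit_render(prompt, reference_image):
    """Placeholder for submitting a Seedance job through a rendering service."""
    return f"job::{hash((prompt, reference_image)) & 0xFFFF:04x}"

def run_pipeline(logline, storyboards):
    """Chain the steps for every beat and return the submitted job ids."""
    jobs = []
    for beat, board in zip(get_story_beats(logline), storyboards):
        prompt = rewrite_as_video_prompt(beat, board)
        jobs.append(submit_render(prompt, reference_image=board))
    return jobs

print(run_pipeline("A two-minute short about a lost drone", ["board_1.png", "board_2.png", "board_3.png"]))
```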
Reference-first continuity
Once creators start chaining shots, the main problem stops being beauty and becomes continuity.
The continuity workaround repeats across multiple threads:
- use still references for character and environment locks
- generate the first clip
- feed that clip back as a reference for the next shot or extension
- reinforce the handoff with a second-by-second timeline
Techhalla used that pattern in OpenArt Smart Shot, then reused the generated video as a reference for the next Seedance pass. The continuity post says that keeps music, voices, and visuals aligned across shots. The hippo prompt shows the opposite side of the same technique: a heavily constrained timeline where chaos is scripted beat by beat so the model does not improvise itself off the rails.
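A minimal sketch of that feedback loop, assuming a hypothetical generate_clip call that accepts still references and an optional reference clip; the API is invented for illustration and is not a real Seedance, OpenArt, or InVideo interface.

```python
# Illustrative continuity loop: each new shot is generated against the previous
# clip plus the original still references. generate_clip is a hypothetical call.
def generate_clip(prompt, still_refs, video_ref=None):
    """Placeholder for a render that accepts reference images and an optional reference clip."""
    return {"prompt": prompt, "stills": still_refs, "video_ref": video_ref}

def chain_shots(timeline_prompts, still_refs):
    """Generate shots in order, feeding each finished clip back in as the next shot's reference."""
    clips, previous = [], None
    for prompt in timeline_prompts:
        clip = generate_clip(prompt, still_refs, video_ref=previous)
        clips.append(clip)
        previous = clip  # the handoff that keeps characters, audio, and palette aligned
    return clips
```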
The same reference-first logic also shows up in multi-shot storyboard experiments. DavidmComfort's storyboard test built a sequence from a series of images inside InVideo Agent One, and rainisto's short-film clip used BeatBandit MCP for story and shot descriptions before rendering with Seedance 2.0 Fast.
Where Seedance is showing up
One final reveal is distribution. Seedance is not being presented as one destination app.
Across this evidence set, creators used it through:
- Leonardo, per MayorKingAI's Leonardo link post
- OpenArt Smart Shot, per MayorKingAI's OpenArt thread
- InVideo Agent One, per techhalla's Agent One thread
- Runway, per ai_artworkgen's Runway tests and CharaspowerAI's Runway Unlimited post
- Hailuo, where Hailuo_AI's launch post paired Seedance 2.0 with GPT Image 2 and a follow-up added reference-image support
- niche wrappers like SocialSight, Mitte, Dreamina, and Higgsfield, per AIwithSynthia's subway clip, Artedeingenio's storybook workflow, and rainisto's MCP screenshot
That matters because the same model is now being used as infrastructure. The workflow lives in the wrapper, the planning doc, or the agent notebook. Seedance is increasingly just the render layer everybody routes through.