AI Primer

Seedance 2.0 adds ComfyUI video extension for broadcast-shot workflows

Creators shared repeatable Seedance 2.0 workflows for ComfyUI clip extension, GPT Image 2 shot planning, and fake-broadcast or iPhone footage. The examples push Seedance beyond isolated shorts into longer, more controllable production pipelines.


TL;DR

You can browse the OpenArt Smart Shot page, grab the ComfyUI extend-video workflow, and watch people use the same model for everything from fake ESPN cutaways to phone-shot military footage. The oddest reveal is how often the workflow starts outside Seedance itself: GPT Image 2 for planning, agent chat for orchestration, then Seedance for the motion pass.

ComfyUI extension

The clearest new workflow in this evidence set came from clip extension, not generation from scratch.

According to the workflow page, the process is:

  1. Load the source video and extract frames, FPS, and audio.
  2. Prompt Seedance 2.0 with what happens immediately after the last frame.
  3. Compare each generated continuation frame to the last real frame.
  4. Pick the darkest difference frame (the continuation frame with the smallest pixel difference from the last real frame) as the seam.
  5. Trim from that point, concatenate the old and new frames, then rebuild the video with audio.
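The seam-selection step at the heart of those five steps can be sketched in a few lines. This is a minimal illustration assuming frames arrive as NumPy arrays; the actual ComfyUI graph does this with its own nodes, and the function names here are mine, not the workflow's:

```python
import numpy as np

def pick_seam(last_real_frame: np.ndarray, generated_frames: list) -> int:
    """Return the index of the generated frame closest to the last real frame.

    The 'difference frame' is the per-pixel absolute difference; the darkest
    one (lowest mean difference) marks the best splice point.
    """
    scores = [
        np.abs(frame.astype(np.int16) - last_real_frame.astype(np.int16)).mean()
        for frame in generated_frames
    ]
    return int(np.argmin(scores))

def splice(real_frames: list, generated_frames: list) -> list:
    """Trim the continuation at the seam, then concatenate old and new frames."""
    seam = pick_seam(real_frames[-1], generated_frames)
    # Drop everything up to and including the seam frame so the near-duplicate
    # of the last real frame is not shown twice.
    return real_frames + generated_frames[seam + 1:]
```

Audio handling and the final re-encode are left out here; the point is only that seam choice reduces to an argmin over frame differences.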

That is a much more useful pattern than "generate another shot and hope." The workflow link post explicitly describes audio concat and seam selection as part of the graph, which is why this looks more like an editing primitive than a demo trick.

Shot plans

A lot of creators independently landed on the same structure: GPT Image 2 for preproduction, Seedance for motion.

MayorKingAI's manual version turned a prompt into a production sheet with:

  • character references
  • environment design
  • floor plan
  • storyboard panels
  • palette
  • lighting and mood notes

Then the Seedance follow-up prompt converted that sheet into a second-by-second timeline with named camera moves. OpenArt's Smart Shot productized the same move. On the Smart Shot page, the interface exposes scene description, character and environment references, sheet quality, and a two-step flow: preview the sheet first, then create the video.
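The planning artifact being handed between the two steps can be as simple as a structured sheet plus a timeline. A hypothetical sketch of that handoff (none of these field names come from Seedance or OpenArt; they only illustrate the sheet-to-prompt flattening):

```python
# Production sheet: the GPT Image 2 preproduction output, reduced to data.
sheet = {
    "characters": ["ref_01.png"],
    "environment": "mountain pass at dusk",
    "palette": ["#2b1d0e", "#f2a541"],
}

# Second-by-second timeline with named camera moves.
timeline = [
    {"start": 0, "end": 2, "camera": "slow dolly-in", "beat": "hero enters frame"},
    {"start": 2, "end": 5, "camera": "whip pan left", "beat": "reveal the valley"},
]

def to_prompt(sheet: dict, timeline: list) -> str:
    """Flatten the planning artifact into a single render prompt."""
    beats = "; ".join(
        f'{s["start"]}-{s["end"]}s {s["camera"]}: {s["beat"]}' for s in timeline
    )
    return f'{sheet["environment"]}. Palette {", ".join(sheet["palette"])}. {beats}'
```

The design point is that the prompt becomes a derived artifact: creators edit the sheet and timeline, not the prompt string itself.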

The practical shift is simple: creators are replacing one giant cinematic prompt with a planning artifact and a render pass. The Flaming Mountains breakdown is basically a shot list disguised as a prompt.

Fake broadcasts

The most shareable Seedance work right now is not fantasy spectacle. It is camera grammar.

Fabian Stelzer's Glif example aimed for "authentic iPhone style video" and landed because the footage feels casually captured, not impeccably composed. Chris First's sports-broadcast workflow got even more literal. The image setup post first generated a still that looked like an ESPN crowd cutaway, then the Seedance prompt post constrained the video into one continuous take with:

  • telephoto broadcast framing
  • fixed scorebug and lower-third graphics
  • subtle crowd-reaction timing
  • announcer-style audio
  • strict no-cut behavior

The body-cam version used the same playbook: chrisfirst's prompt specified FOV, rolling-shutter wobble, auto-exposure shifts, radio compression, and the exact action beats of a COPS-style arrest clip. The result is a reminder that realism now lives in format cues as much as in faces.

Agent pipelines

A second pattern is Seedance showing up as the motion engine inside larger agent systems.

InVideo's Agent One thread broke the pipeline into chat orchestration, reference generation, logo and host asset creation, then Seedance-based extension to keep the show visually consistent. The thread shows the agent generating reference boards, reaction clips, and final extensions inside one notebook-like flow.

Rainisto's MCP setup pushes the idea further upstream. The Cursor screenshot shows Cursor calling BeatBandit for story structure and Higgsfield for Seedance rendering, with the agent rewriting prompts, attaching storyboard images, and submitting the job. That is less "AI video app" and more composable production graph.

Reference-first continuity

Once creators start chaining shots, the main problem stops being beauty and becomes continuity.

The continuity workaround repeats across multiple threads:

  • use still references for character and environment locks
  • generate the first clip
  • feed that clip back as a reference for the next shot or extension
  • reinforce the handoff with a second-by-second timeline
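That loop can be sketched in a few lines. `generate_clip` here is a hypothetical stand-in for whatever Seedance wrapper is being called; only the reference-chaining pattern is the point:

```python
def generate_clip(prompt: str, references: list) -> dict:
    # Stub: a real implementation would call a video API here and return a
    # clip handle. We return the inputs so the chaining is visible.
    return {"prompt": prompt, "references": list(references)}

def render_sequence(shot_prompts: list, character_refs: list) -> list:
    """Chain shots so each one sees the previous clip as a reference."""
    clips = []
    refs = list(character_refs)  # start with still references for the locks
    for prompt in shot_prompts:
        clip = generate_clip(prompt, refs)
        clips.append(clip)
        # Feed the latest clip back alongside the stills for the next shot.
        refs = list(character_refs) + [clip]
    return clips
```

Keeping the still references in every call, rather than only chaining clip to clip, is what stops drift from compounding across the sequence.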

Techhalla used that pattern in OpenArt Smart Shot, then reused the generated video as a reference for the next Seedance pass. The continuity post says that keeps music, voices, and visuals aligned across shots. The hippo prompt shows the opposite side of the same technique: a heavily constrained timeline where chaos is scripted beat by beat so the model does not improvise itself off the rails.

The same reference-first logic also shows up in multi-shot storyboard experiments. DavidmComfort's storyboard test built a sequence from a series of images inside InVideo Agent One, and rainisto's short-film clip used BeatBandit MCP for story and shot descriptions before rendering with Seedance 2.0 Fast.

Where Seedance is showing up

One final reveal is distribution. Seedance is not being presented as one destination app.

Across this evidence set, creators used it through:

  • ComfyUI extension graphs
  • OpenArt Smart Shot
  • Glif
  • InVideo Agent One
  • Cursor calling Higgsfield over MCP

That matters because the same model is now being used as infrastructure. The workflow lives in the wrapper, the planning doc, or the agent notebook. Seedance is increasingly just the render layer everybody routes through.
