Runway claims one creator finished a short ad in one afternoon
Runway said one creator finished a short ad in one afternoon, while others published 2-5 minute AI films and shared their stacks. The posts put hard numbers on longer productions, from a 398,055-credit Seedance bill across 113 scenes to the multi-tool film pipelines behind them.


TL;DR
- Runway is pushing the fastest version of the pitch with its one-afternoon ad post, while Runway's Gen-4 launch page says the model is built for consistent characters, locations, and objects across scenes.
- The clearest counterexample came from Glenn Williams' Punchkin trailer, where his workflow breakdown stacked Adobe Firefly, Nano Banana, Seedance 2.0, Kling 03, Topaz, ElevenLabs, Suno, and Premiere into a five-minute folktale.
- Dustin Hollywood's production notes put hard numbers on long-form AI filmmaking: 398,055 Seedance credits, 113 scenes, and roughly 180 to 220 shots per episode, while Runway's model pricing page lists Seedance 2.0 at 36 credits per second.
- Kaigani's anime experiment used 20 still frames as input for Meta's Muse Spark screenplay pass, and Meta's Muse Spark post describes the model as natively multimodal with tool use and visual reasoning.
- The interesting shift is not that AI video got cheaper. According to Runway's ad example, Dustin Hollywood's cost breakdown, and Glenn Williams' note on camera moves, creators are splitting into two camps: quick ad-style proofs and expensive multi-tool film pipelines.
You can browse Runway's AI for Advertising course, check Seedance 2.0's spec sheet inside Runway, and compare that against Runway's pricing page. Then there is the weird hybrid route, where Meta's Muse Spark writes from images, Punchkin's full 4K cut stretches an Indian folktale into five minutes, and Runway's own Seedance product page now pitches multi-shot video with image, video, and audio references.
One afternoon, one ad
Runway's claim was blunt: one creator made a short ad in one afternoon. The company framed speed and storytelling as the headline, not the model stack behind it.
That post lands differently once you read Runway's own product material. Runway's Gen-4 page says the model is designed to keep characters, locations, and objects consistent across scenes, while the pricing page says the free tier includes 125 one-time credits, which it equates to 25 seconds of Gen-4 Turbo video. That is a real ad-production argument: short runtime, tight iteration loop, enough continuity to make a branded spot feel intentional instead of stitched together.
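The free-tier numbers imply a per-second rate you can back out yourself. A quick sketch, using only the figures quoted above (the derived rate and the spot lengths are my arithmetic, not Runway's published pricing):

```python
# Back-of-envelope from Runway's stated free tier:
# 125 one-time credits = 25 seconds of Gen-4 Turbo video.
FREE_TIER_CREDITS = 125
FREE_TIER_SECONDS = 25

# Implied cost per second of Gen-4 Turbo footage.
credits_per_second = FREE_TIER_CREDITS / FREE_TIER_SECONDS
print(f"Implied Gen-4 Turbo rate: {credits_per_second:.0f} credits/second")

# What a couple of common ad lengths would cost at that rate.
for spot_len in (15, 30):
    print(f"{spot_len}s spot: {spot_len * credits_per_second:.0f} credits of raw footage")
```

At that implied rate, a 15-second spot's worth of raw footage fits inside the free tier with room for retakes, which is exactly the iteration loop the one-afternoon claim depends on.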
Punchkin's tool stack
Glenn Williams did the opposite of the one-tool sales pitch. He published the stack.
His breakdown was unusually clean:
- Adobe Firefly and Nano Banana for images
- Seedance 2.0 and Kling 03 via Runway for video
- Topaz for 4K upscale
- ElevenLabs for narration
- Suno for music
- Premiere Pro for edit
The result is a five-minute folktale, not a 15-second flex clip. Williams' follow-up post linked the full 4K upload and positioned the project inside a broader series, Stor-AI Time, aimed at adapting folktales from different cultures.
The long-form bill
Dustin Hollywood's ECLIPTIC thread supplied the number everybody usually hides. Episodes 1 through 6 used 398,055 Seedance credits alone.
He also gave the production shape: 113 total scenes, roughly 180 to 220 shots per episode, plus a stack that reached across CapCut, Dreamina, Stages, Imagine, Kling v3, Hailuo v2.3, Midjourney v8, Reve, Seedream v5, Suno, After Effects, and Premiere. That matters because Runway's pricing catalog lists Seedance 2.0 at 36 credits per second, and Runway's Seedance help doc says the model supports 5 to 15 second generations at 480p or 720p for Standard plans and up.
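Those figures can be cross-checked against each other. Assuming the entire 398,055-credit bill was spent at the flat 36 credits per second rate (a simplification; real spend likely varied by resolution and settings), the total footage volume and average shot length fall out directly:

```python
# Cross-check Dustin Hollywood's ECLIPTIC numbers against Runway's
# listed Seedance 2.0 rate. Assumption: every credit was spent at the
# flat 36 credits/second rate, which is a simplification.
TOTAL_CREDITS = 398_055          # episodes 1-6, Seedance credits only
CREDITS_PER_SECOND = 36          # Runway's listed Seedance 2.0 rate
EPISODES = 6
SHOTS_PER_EPISODE = (180, 220)   # his stated rough range

total_seconds = TOTAL_CREDITS / CREDITS_PER_SECOND
print(f"Total Seedance footage: {total_seconds:,.0f} s (~{total_seconds / 60:.0f} min)")

per_episode_seconds = total_seconds / EPISODES
for shots in SHOTS_PER_EPISODE:
    print(f"At {shots} shots/episode: {per_episode_seconds / shots:.1f} s per shot")
```

Under that assumption, the implied average shot length lands inside the 5 to 15 second generation window Runway's Seedance help doc describes, which is a decent sanity check on the figures.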
His broader point was sharper than the number: clip economics and series economics are different worlds. The best long-form work is starting to look real, but it is not pretending to be cheap.
From 20 frames to a screenplay
Kaigani's workflow added a different kind of compression. He took 20 frames from his BURST FRAME process and handed them to Muse Spark to generate a screenplay, then turned that into a five-minute anime-style episode in a few hours, according to his follow-up note.
That is less about perfect output and more about a new handoff point. Meta's official Muse Spark post describes the model as natively multimodal, with tool use, visual chain of thought, and multi-agent orchestration. In practice, creators are already treating that kind of system as a script engine that can infer scenes from images instead of waiting for a text prompt to do all the work.
Kling still owns zoom transitions
One of the most useful details arrived in a reply, not the main thread. Williams said Seedance 2.0 struggled with zoom transitions on Punchkin, while Kling still felt better for camera control.
That is the sort of practical split that keeps showing up across these posts. Runway is marketing short, polished wins. Creators chasing longer narrative pieces are still routing around weak spots shot by shot, model by model.