Seedance 2 adds 15-second, 6-shot prompts and 7-image reference packs
Creators are now prompting Seedance 2 with shot-by-shot scripts, single-reference multishot setups, and up to seven image refs for longer scenes. The workflow improves camera planning and character continuity, but clean references and prompt structure still matter.

TL;DR
- Creators are treating Seedance 2 less like a single text-to-video prompt and more like a shot list: ProperPrompter's greenhouse test uses a 15-second, 6-shot format to lock camera changes, timing, and repeated character behavior from one turnaround reference.
- A second pattern is emerging around single-image multishot storytelling. In koldo's demo, one Midjourney reference image drives three separate clips, with each clip broken into named shot types rather than one long descriptive paragraph.
- Longer scene work is pushing toward reference packs. 0xInk says in the workflow post that this test used seven image references in Seedance 2 after building characters in Midjourney and adding texture passes with Nano Banana 2.
- The workflow is already mixing tools upstream: Anima Labs' camp short combines Midjourney, Nano Banana, Kling, Seedance 2, and Suno, suggesting Seedance is landing as the animation stage inside broader creator pipelines.
How are creators structuring prompts?
The clearest change is prompt format. ProperPrompter breaks a 15-second clip into six timestamped shots, each with its own framing instruction (wide, medium, close-up, side profile, then a pull-back), while keeping one character and one location stable across the sequence. The post frames this as a test of character consistency, prompt adherence, and camera control, and the attached clip follows the beat-by-beat plan closely, including the butterfly gag corrected in the thread context (greenhouse clip).
Koldo's version applies the same logic at a slightly larger scale: one Midjourney still becomes three Seedance clips, each with its own internal shot list, such as wide shot, extreme close-up, and slow pull-back. The prompt slides shown in the thread read almost like storyboard pages, which matters because the creator says a single reference image can carry enough story context to move fast when the shots are pre-planned (prompt slide).
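The shot-by-shot format these creators describe is easy to template before pasting into the tool. The sketch below is a hypothetical illustration, not Seedance's documented prompt syntax: the `Shot` structure, field names, and timestamp layout are assumptions about how a 15-second, six-shot prompt might be organized.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    start: float   # seconds into the clip
    end: float
    framing: str   # e.g. "wide shot", "close-up", "slow pull-back"
    action: str    # what happens in this beat

def build_prompt(scene: str, character: str, shots: list[Shot]) -> str:
    """Flatten a shot list into one timestamped prompt string,
    repeating the character/location anchor so every beat stays consistent."""
    lines = [f"Scene: {scene}. Character: {character} (keep identical in every shot)."]
    for i, s in enumerate(shots, 1):
        lines.append(f"Shot {i} [{s.start:.0f}-{s.end:.0f}s], {s.framing}: {s.action}")
    return "\n".join(lines)

# Six beats over 15 seconds, mirroring the wide -> medium -> close-up progression
shots = [
    Shot(0, 3, "wide shot", "the gardener enters the greenhouse"),
    Shot(3, 5, "medium shot", "she inspects a row of seedlings"),
    Shot(5, 8, "close-up", "a butterfly lands on her hand"),
    Shot(8, 10, "side profile", "she smiles and turns toward the light"),
    Shot(10, 13, "close-up", "the butterfly takes off"),
    Shot(13, 15, "slow pull-back", "the full greenhouse comes into view"),
]
print(build_prompt("sunlit greenhouse", "a gardener in a straw hat", shots))
```

Writing the prompt this way keeps the timing, framing, and character anchor explicit in every beat, which is the same discipline the posts above credit for camera control and continuity.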
What does the reference workflow look like for longer scenes?
For longer experiments, creators are widening the image input rather than relying on one hero still. 0xInk says this test used seven image references in Seedance 2, with characters first made in Midjourney and then retextured in Nano Banana 2; the same process was used for environments. The goal, according to the thread, is less about strict plot coherence than about stronger emotion and personality across a longer video run.
Anima Labs shows how that fits into a broader pipeline: Midjourney V7 for 2D design, Nano Banana for 3D, Kling 2.6 on Freepik, Seedance 2 for animation, and Suno for music. That clip is short and playful, but it reinforces the same practical takeaway as the other posts: clean reference prep and explicit shot planning are becoming the difference between a nice motion test and a scene you can actually direct (camp clip).