AI Primer

Midjourney, Seedance and LTX support 2-day short-form production stacks

Creators posted finished shorts and ad-style clips built with Midjourney, Seedance, LTX, Suno and Glif. The stacks compress previs, motion and music into days, but the posts still describe manual compositing, editing and local renders.


TL;DR

You can watch a fashion ad built from Midjourney and Seedance on _VVSVS's post, jump to a Glif agent that turns Suno tracks into videos on fabianstelzer's demo, and see a Reddit creator describe handheld footage plus local LTX compositing in the VFX post. There is also a commercial template page for 31 Worlds, a live Runway iOS app listing, and a MovieFlow homepage pitching 10-minute Seedance runs.

The 2-day stack

The cleanest pattern in this batch is a three-step stack: generate key imagery, animate it, then add music or edit.

The interesting part is not that any one model looks good. It is how often creators now describe a whole short-form pipeline instead of a single pretty clip.

r/SunoAI

[Happy hardcore] "Drifting past the sun" by Jo Spamiti the second


In r/SunoAI, one creator said the process from first character sheet to YouTube upload took about two days on a 4070 Ti Super running ComfyUI and LTX 2.3. _VVSVS made the same compression point from the ad side, comparing a 15-second Midjourney-plus-Seedance output to a fashion campaign shoot that once cost $100K.

Prompt grammar

The prompts getting shared are not short adjectives anymore. They read more like shotlists.

That structure seems to be what makes the outputs reusable. chrisfirst's original post called the time-freeze effect "super easy" once the prompt format was set, and ProperPrompter's variant shows the same scaffold surviving a character swap from human to puppet.
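The shared prompts themselves are not reproduced here, but a shotlist-style prompt typically breaks the shot into labeled fields rather than piling up adjectives. The sketch below is purely illustrative; every detail is hypothetical and not taken from chrisfirst's or ProperPrompter's actual prompts:

```text
SHOT: medium close-up, 35mm lens, shallow depth of field
SUBJECT: model in red trench coat, frozen mid-stride
CAMERA: slow 180-degree orbit, left to right
EFFECT: time-freeze, rain droplets suspended, hair static
LIGHTING: overcast daylight, soft rim light from frame right
DURATION: 5s, 24fps
```

The point of the scaffold is reusability: swapping only the SUBJECT line while keeping the other fields fixed is what lets a variant like ProperPrompter's human-to-puppet swap survive with the same camera move and effect intact.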

Manual work did not disappear

r/vfx

My 5-year-old told me he could fly. Here's what it took to prove him right. 🚀


The posts are bullish, but they do not describe pure one-prompt filmmaking.

The manual layers show up over and over:

  • The VFX Reddit post starts with a real handheld phone shot, then composites in DaVinci Resolve before dropping in an LTX flight sequence.
  • The SunoAI Reddit post says the music took hundreds of iterations, while the video pass ran locally in ComfyUI from Nano Banana keyimages.
  • _VVSVS says Seedance action sequences still take "a lot of effort to execute."
  • _VVSVS's follow-up frames the raw output as already good enough for 80 percent of social media needs, which quietly implies the last 20 percent still belongs to editing.

That gap between fast generation and finished delivery is probably the real story here. The stack is compressing previs, coverage, and concept films first, while compositing, curation, and continuity cleanup still sit with the creator.

Character locks and reference frames

A lot of the stronger examples are not pure text-to-video. They start from a reference image, then ask Seedance to preserve identity or style.

That is a useful tell. The creative control is shifting toward image-first pipelines where the still frame acts like a mini style bible, and video models handle motion continuity rather than inventing the whole world from scratch.

Where Seedance is showing up

The distribution map is widening fast, which matters because most of these workflows are really wrappers plus presets around the same underlying generator.

  • Mobile: Runway put Seedance 2.0 in its iOS app, with a first-time subscriber discount in the launch post.
  • Browser studios: AIwithSynthia's post links to Renoise, while zaesarius's post points to image-to-video, text-to-video, start-end frames, and character generation inside AIFilms.
  • High-volume wrappers: Artedeingenio calls Topview the only option with unlimited generations, and Anima_Labs says Arcads added Seedance 2.0 at no extra cost.
  • Long-run packaging: heyrimsha's thread claims MovieFlow can generate up to 10 minutes in one run and links to MovieFlow.

The last twist is that local workflows are advancing in parallel. The VFX Reddit post and the SunoAI Reddit post both rely on LTX 2.3 running locally, so the same weekend produced two opposite directions at once: more hosted Seedance surfaces, and more creators keeping the motion pass on their own GPUs.
