Midjourney, Seedance, and LTX power 2-day short-form production stacks
Creators posted finished shorts and ad-style clips built with Midjourney, Seedance, LTX, Suno, and Glif. The stacks compress previs, motion, and music into days, but the posts still describe manual compositing, editing, and local renders.

TL;DR
- Creators spent the weekend turning still images, reference frames, and songs into finished shorts, with _VVSVS's fashion spot, fabianstelzer's Glif music-video demo, and a Reddit VFX post using LTX Video 2.3 all describing production stacks measured in days, not weeks.
- The common recipe was modular: Midjourney or Niji for lookdev, Seedance for motion, and Suno or existing audio for soundtrack, according to starks_arq's tool list, Artedeingenio's cyberpunk series test, and bennash's Runway plus Suno short.
- Prompt structure is getting weirdly film-school-specific, with techhalla's hammer-throw prompt, chrisfirst's time-freeze script, and Artedeingenio's griffin sequence all broken into shot plans, lenses, audio cues, and second-by-second beats.
- The speed story comes with caveats: _VVSVS's follow-up called one clip's raw output good enough for 80 percent of social posts, while a SunoAI Reddit breakdown still described two days of iteration, local ComfyUI renders, and hundreds of music tries.
- Seedance is also spreading across interfaces instead of staying in one app, with Runway's iOS rollout, AIwithSynthia's Renoise post, and zaesarius's Seedance 2.0 VIP links pointing to mobile, browser, and studio-style wrappers around the same model family.
You can watch a fashion ad built from Midjourney and Seedance in _VVSVS's post, jump to a Glif agent that turns Suno tracks into videos in fabianstelzer's demo, and see a Reddit creator describe handheld footage plus local LTX compositing in the VFX post. There is also a commercial template page for 31 Worlds, a live Runway iOS app listing, and a MovieFlow homepage pitching 10-minute Seedance runs.
The 2-day stack
The cleanest pattern in this batch is a three-step stack: generate key imagery, animate it, then add music or edit.
- Lookdev: Midjourney, Niji, or Nano Banana, per starks_arq's Reptile Bunnies build, Artedeingenio's Niji plus Seedance post, and Artedeingenio's Midjourney anime reference workflow
- Motion: Seedance 2.0 or LTX Video 2.3, per _VVSVS's fashion clip, the Reddit VFX post, and the SunoAI Reddit post
- Music: Suno in bennash's short, bennash's BMW prototype film, and the Reddit music-video workflow
- Orchestration: Glif in fabianstelzer's one-shot workflow and his vibe-trailer tutorial
The interesting part is not that any one model looks good. It is how often creators now describe a whole short-form pipeline instead of a single pretty clip.
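As a mental model, that pipeline is easy to sketch. The Python skeleton below is illustrative only: Midjourney, Seedance, and Suno do not ship official Python clients, so the three helper functions are hypothetical stand-ins for whatever interface each tool actually exposes.

```python
from pathlib import Path

# Hypothetical stand-ins for the hosted tools; none of these are real APIs.
def lookdev(prompt: str) -> Path:
    """Stage 1: generate key stills (Midjourney, Niji, or Nano Banana)."""
    raise NotImplementedError("export stills from your image tool of choice")

def animate(keyframe: Path, motion_prompt: str) -> Path:
    """Stage 2: animate the still (Seedance 2.0, or LTX Video 2.3 locally)."""
    raise NotImplementedError("run via Runway, Topview, or ComfyUI")

def score(video: Path, music_prompt: str) -> Path:
    """Stage 3: add a Suno soundtrack, then hand off to a manual edit."""
    raise NotImplementedError("mux audio and finish in an NLE")

def short_form_stack(look: str, motion: str, music: str) -> Path:
    # Three prompts in, one draft clip out; compositing stays manual.
    return score(animate(lookdev(look), motion), music)
```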
Embedded Reddit post: [Happy hardcore] "Drifting past the sun" by Jo Spamiti the second
One Reddit creator in r/SunoAI said the process from first character sheet to YouTube upload took about two days on a 4070 Ti Super running ComfyUI and LTX 2.3. _VVSVS made the same compression point from the ad side, comparing a 15-second Midjourney plus Seedance output to a fashion campaign shoot that once cost $100K.
Prompt grammar
The prompts getting shared are not short adjectives anymore. They read more like shotlists.
- Camera package: Arri Alexa Mini, 35mm lens, anamorphic lens, GoPro POV, broadcast ENG camera, per chrisfirst's time-freeze prompt, AllaAisling's driving sequence, and techhalla's sports-broadcast setup
- Timeline blocks: 0 to 3 seconds, 3 to 6 seconds, and so on, per the time-freeze beat sheet and the griffin sequence
- Audio direction: crowd roar, bass drop, footsteps, engine strain, spatial sound, per chrisfirst's sound notes, AllaAisling's audio arc, and techhalla's broadcast audio
- Physics constraints: stable anatomy, motion blur, coherent multi-subject physics, per techhalla's horseback basketball prompt and AllaAisling's grounded-physics prompt
That structure seems to be what makes the outputs reusable. chrisfirst's original post called the time-freeze effect "super easy" once the prompt format was set, and ProperPrompter's variant shows the same scaffold surviving a character swap from human to puppet.
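To see why the scaffold survives swaps, it helps to treat the prompt as data. The sketch below is a minimal Python illustration of that shotlist grammar; the field names and rendering format are invented for this example, not any model's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Beat:
    start_s: int  # timeline block start, e.g. 0
    end_s: int    # timeline block end, e.g. 3
    action: str   # what happens on screen in this block
    audio: str    # sound design cue for this block

def render_prompt(camera: str, physics: str, beats: list[Beat]) -> str:
    """Flatten a shotlist into a single video-model prompt string."""
    lines = [f"Camera: {camera}", f"Physics: {physics}"]
    for b in beats:
        lines.append(f"{b.start_s}-{b.end_s}s: {b.action}. Audio: {b.audio}.")
    return " ".join(lines)

# Example in the spirit of chrisfirst's time-freeze beat sheet.
print(render_prompt(
    camera="Arri Alexa Mini, 35mm anamorphic, slow dolly-in",
    physics="stable anatomy, motion blur, coherent multi-subject physics",
    beats=[
        Beat(0, 3, "crowd mid-cheer freezes in place", "crowd roar cuts to silence"),
        Beat(3, 6, "camera glides between frozen figures", "low sub-bass drone"),
    ],
))
```

Swapping the subject, as ProperPrompter's puppet variant did, then only touches the beat descriptions while the camera, physics, and timing scaffold stays fixed.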
Manual work did not disappear
Embedded Reddit post: "My 5-year-old told me he could fly. Here's what it took to prove him right."
The posts are bullish, but they do not describe pure one-prompt filmmaking.
The manual layers show up over and over:
- The VFX Reddit post starts with a real handheld phone shot, then composites in DaVinci Resolve before dropping in an LTX flight sequence.
- The SunoAI Reddit post says the music took hundreds of iterations, while the video pass ran locally in ComfyUI from Nano Banana keyimages.
- _VVSVS says Seedance action sequences still take "a lot of effort to execute."
- _VVSVS's follow-up frames the raw output as already good enough for 80 percent of social media needs, which quietly implies the last 20 percent still belongs to editing.
That gap between fast generation and finished delivery is probably the real story here. The stack is compressing previs, coverage, and concept films first, while compositing, curation, and continuity cleanup still sit with the creator.
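On the local side of that split, both Reddit posts run their motion pass through ComfyUI, which exposes a small HTTP API on its default port. The sketch below queues a workflow exported with ComfyUI's "Save (API Format)" option; the filename, and the assumption that the graph contains an LTX Video 2.3 pipeline, are mine rather than from the posts.

```python
import json
import urllib.request

# Assumes a local ComfyUI instance on the default port (8188) and a
# workflow saved via "Save (API Format)". The filename is hypothetical,
# as is the idea that this particular graph renders LTX Video 2.3.
with open("ltx_video_workflow_api.json") as f:
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # ComfyUI answers with a prompt_id you can poll via /history/<id>.
    print(json.load(resp)["prompt_id"])
```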
Character locks and reference frames
A lot of the stronger examples are not pure text-to-video. They start from a reference image, then ask Seedance to preserve identity or style.
- Artedeingenio's anime workflow uses a Midjourney image as the reference and says it "always delivers spectacular results."
- Artedeingenio's sword clip uses a Niji-generated image as reference, then animates in Topview.
- Artedeingenio's astronaut post notes that the style holds, while helmet hair still breaks.
- AIwithSynthia's OpenArt demo pitches the 1080p upgrade around real faces and shot-to-shot consistency.
- starks_arq's Reptile Bunnies splits world creation, expansion, brainstorming, and motion across four separate tools.
That is a useful tell. The creative control is shifting toward image-first pipelines where the still frame acts like a mini style bible, and video models handle motion continuity rather than inventing the whole world from scratch.
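Seen as data, a character lock is just a reference image that travels with every shot request. The Python sketch below is purely illustrative, with invented field names, since the various Seedance wrappers do not share a published schema, but it shows why image-first pipelines keep identity stable: the same lock rides along while only the motion prompt changes.

```python
# Illustrative only: field names are invented, not any wrapper's real schema.
character_lock = {
    "reference_image": "midjourney_character_sheet.png",  # the mini style bible
    "preserve": ["face", "outfit", "palette"],            # identity to hold
}

shots = [
    {"motion_prompt": "hero walks through neon rain, slow push-in"},
    {"motion_prompt": "hero turns to camera, wind in hair"},
]

# Every shot reuses the same lock, so the video model only has to
# solve motion continuity, not reinvent the character each time.
for shot in shots:
    payload = {**character_lock, **shot}
    print(payload["motion_prompt"], "<-", payload["reference_image"])
```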
Where Seedance is showing up
The distribution map is widening fast, which matters because most of these workflows are really wrappers plus presets around the same underlying generator.
- Mobile: Runway put Seedance 2.0 in its iOS app, with a first-time subscriber discount in the launch post.
- Browser studios: AIwithSynthia's post links to Renoise, while zaesarius's post points to image-to-video, text-to-video, start-end frames, and character generation inside AIFilms.
- High-volume wrappers: Artedeingenio calls Topview the only option with unlimited generations, and Anima_Labs says Arcads added Seedance 2.0 at no extra cost.
- Long-run packaging: heyrimsha's thread claims MovieFlow can generate up to 10 minutes in one run and links to MovieFlow.
The last twist is that local workflows are advancing in parallel. The VFX Reddit post and the SunoAI Reddit post both rely on LTX 2.3 running locally, so the same weekend produced two opposite directions at once: more hosted Seedance surfaces, and more creators keeping the motion pass on their own GPUs.