An AI music creation product for generating songs and audio from prompts.

Recent stories
Creators posted finished shorts and ad-style clips built with Midjourney, Seedance, LTX, Suno, and Glif. These stacks compress previs, motion, and music work into days, but the posts still describe manual compositing, editing, and local renders.
Creators published repeatable Seedance 2.0 recipes for time-freeze scenes, tracking shots, sports-broadcast surrealism, fantasy fly-throughs, and music visuals. Several threads included full prompts, reference-image setup, and timeline instructions, so use them as workflow templates rather than finished clip examples.
Reddit posts say v5.5 improves voice tone but still ignores gender-labeled sections, switches singers mid-part, and struggles with detailed instrument instructions. Creators are iterating on renders until the emotion fits, then generating lip-sync video to work around the gaps.
Turkish creator Ozan Sihay released a seven-minute one-person AI short film built with Seedance 2.0, Kling 3.0, Nano Banana 2, Runway, HeyGen, Suno, and CapCut. The film matters because it turns Seedance's weak face realism into a masked-character design rule and shows the planning graph behind the finished cut.