Music Generation
Stories, products, and related signals connected to this tag in Explore.
Stories
Creators posted finished shorts and ad-style clips built with Midjourney, Seedance, LTX, Suno and Glif. The stacks compress previs, motion and music into days, but the posts still describe manual compositing, editing and local renders.
Reddit posts said v5.5 improved vocal tone but still ignores gender-labeled sections, switches singers mid-part, and struggles to follow detailed instrument instructions. Creators are iterating on renders until the emotion fits, then generating lipsync video to work around the gaps.
Google is rolling out Lyria 3 Pro for full songs and Lyria 3 Clip for 30-second generations in the Gemini API and AI Studio. Musicians can now map intros, verses, choruses and bridges instead of stitching short music clips together.
ElevenLabs launched Flows, a node-based canvas inside ElevenCreative that chains image, video, voice, music, SFX, lip sync, and voice changing in one workspace. Use it to keep context across the pipeline instead of re-exporting between apps.