Lovart rolled out Seedance 2.0 with creator demos showing 60-second generations, preset entry points, reference uploads, and post-edit controls. Use it to build longer clips with presets, sound tweaks, and pacing edits in one workflow.

Lovart's official feature page and its more detailed tool breakdown are both live, and the company has already published a full Seedance prompt guide. The creator demos are more useful than the marketing copy here: egeberkina posted a one-take superhero transformation with the full prompt, the same thread showed preset-based product motion, and MayorKingAI pushed it into glossy automotive ad territory.
Lovart's public messaging is blunt: its launch promo calls out 60-second generations, full access, and no queues. A follow-up creator post repeats the same rollout language and ties it to Lovart's global release, including the US.
The official product pages fill in what that access is for. The feature page positions Seedance 2.0 as a storytelling model with synchronized native audio and cross-shot consistency, while the main tool page pitches director-level control inside Lovart's ChatCanvas.
The most concrete workflow detail is the input stack. According to hasantoxr's step-by-step, you can upload up to 9 images and 3 video clips as references before writing the prompt. Lovart's tool page goes one step further and says a single prompt can also include up to 3 audio files.
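Those reported limits are easy to encode. A minimal sketch, assuming only the caps cited above (the validation helper itself is hypothetical, not part of any Lovart API):

```python
# Reference caps reported for a single Seedance 2.0 prompt in Lovart:
# up to 9 images, 3 video clips, and 3 audio files.
LIMITS = {"images": 9, "videos": 3, "audio": 3}

def validate_references(refs: dict) -> list[str]:
    """Return a list of limit violations for a planned reference upload."""
    errors = []
    for kind, cap in LIMITS.items():
        count = len(refs.get(kind, []))
        if count > cap:
            errors.append(f"{kind}: {count} exceeds limit of {cap}")
    return errors

planned = {"images": ["hero.png"] * 10, "videos": ["ref.mp4"], "audio": []}
print(validate_references(planned))  # flags the 10th image as over the cap
```

Anything within the caps returns an empty list, so the check doubles as a quick pre-flight before writing the prompt.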
Lovart's own prompt guide describes the model as having a built-in "director's brain" and recommends timeline-style prompting for longer clips. That matches egeberkina's transformation prompt, which is written as a 13-second beat sheet rather than a single sentence.
The lowest-friction entry point is not the blank prompt box. egeberkina's preset demo says you can select Seedance 2.0 from Lovart's main screen, pick one of the ready-made presets at the bottom, upload your own image, and run it.
That matters because the example prompt is not small. It asks for macro close-ups, bullet-time motion, a 360-degree orbital move, beat-synced cuts, stomp-and-woosh transitions, and a centered hero frame, all in one 15-second product ad. MayorKingAI's Lamborghini-style clip shows the same commercial lane from another angle, and the companion prompt post breaks it into a timeline with camera package, lens choices, atmosphere, character references, and shot-by-shot beats.
The early demos are not all chasing the same look. egeberkina's first clip goes for handheld documentary texture before snapping into a nanotech transformation, the AirPods Max example treats a product image as a locked visual reference for a luxury commercial, and hasantoxr's spy transformation pushes the model toward short-form cinematic character work.
A fourth example in the same egeberkina thread starts from a generated WWII still and turns it into a tank-to-robot VFX sequence. That gives Seedance 2.0 an unusually broad first-day demo reel: product ads, fashion-style transformations, creature action, and effects-heavy battle footage.
The most interesting workflow detail arrives after generation. According to egeberkina's AirPods Max post, once the video is done, Lovart's agent can add sound effects, adjust pacing, change the music, and refine the edit.
That lines up with Lovart's official positioning. The feature page emphasizes native audio generation, and the main Lovart product page sells the broader app as an agentic canvas for iterating on assets instead of exporting them into a separate tool immediately.
Lovart's pricing page currently advertises Seedance 2.0 and Seedance 2.0 Fast with a limited-time offer of up to 150 bonus generations. The tool page also carries a "start for free" call to action, which is the clearest official access signal behind the launch chatter.
A final useful detail sits in Lovart's own Seedance prompt guide: if you like a shorter result, the company explicitly recommends using a Video Extend feature to continue the scene. That is a different workflow from one-shot prompt dumping, and it helps explain why the launch messaging keeps stressing longer-form generation instead of only single 5-second clips.