OpenArt adds Seedance 2.0 1080p with consistent human faces
OpenArt users reported that Seedance 2.0 now renders 1080p video with consistent real-human faces, and posts about Runway's iOS app and ComfyUI showed the higher-resolution model spreading to more surfaces. That widens access beyond yesterday's single-platform 1080p rollout.

TL;DR
- OpenArt users started posting Seedance 2.0 outputs that pair 1080p delivery with more stable real-human identity, according to AIwithSynthia's OpenArt post, while AIwithSynthia's Maxfusion note framed face consistency as the upgrade that finally stuck.
- Runway turned Seedance 2.0 into a wider distribution story fast: runwayml's 1080p post put the model on desktop at higher resolution, runwayml's iOS post pushed it onto phones, and runwayml's API post exposed it to developers.
- Other surfaces moved at the same time, with freepik's 1080p announcement promoting Seedance 2.0 1080p on Freepik and PurzBeats' repost of ComfyUI showing the same resolution bump inside ComfyUI.
- The creator examples already have a house style: AIwithSynthia's Gravity Pulse post uses a locked lead character plus lens and lighting notes, while MayorKingAI's time-freeze prompt breaks a 15 second clip into shot-by-shot beats.
Runway's API docs and Freepik's Seedance 2.0 page are already browsable, and the official Volcengine API announcement goes further than the tweets: it spells out four input modes, portrait authorization, and a library of more than 10,000 preset virtual humans. Freepik has also split out a dedicated Seedance Pro 1080p API endpoint, while the Runway iOS App Store listing pitches the mobile app around consistent characters, objects, and locations.
1080p spreads beyond one app
Runway's own posts turned Seedance 2.0 into a three-surface rollout in about a day: desktop 1080p, iOS access, and API availability.
The wider pattern is distribution, not just pixels.
- Runway: 1080p on the main product, per runwayml's 1080p post
- Runway iOS: mobile generation, per runwayml's iOS post
- Runway API: developer access, per runwayml's API post and Runway's API docs
- Freepik: Seedance 2.0 1080p, per freepik's 1080p announcement and Freepik's model page
- ComfyUI: 1080p availability surfaced in PurzBeats' repost of ComfyUI
Runway also tied the phone launch to promotion. In runwayml's iOS post, the company said first-time subscribers could get up to 50 percent off three months of any plan through the iOS app.
Human faces are the real unlock
The 1080p label got the headlines. The more useful change for filmmakers is that multiple posts started treating human identity consistency as newly reliable.
Across the evidence set, the face story breaks into four layers:
- OpenArt demos claimed "real human faces" that stay consistent shot after shot, per AIwithSynthia's OpenArt post
- figmaweave's post said Seedance 2.0 now accepts face-based reference images for consistent characters across scenes
- Freepik's official page describes multi-reference character consistency across different shots and angles
- The official Volcengine API announcement says portrait use is gated by face verification and authorization, and offers 10,000-plus preset virtual humans as a compliant fallback
That mix explains the moment. Creators are posting face-locked demos, while the official vendor language still wraps real-person use in authorization flows.
Prompts are getting storyboard-level specific
Seedance 2.0 clips are starting to read less like single prompts and more like miniature shot lists.
Four recurring prompt patterns show up in the best examples:
- Identity lock: define the lead character first, then insist on identical facial features, proportions, and wardrobe, as in AIwithSynthia's prompt thread.
- Camera package: specify lens, daylight, shadows, depth of field, and camera movement, as in AIwithSynthia's prompt thread and MayorKingAI's setup post.
- Beat timing: divide the clip into timecoded segments, as MayorKingAI's setup post does from 0 to 15 seconds.
- Physics gimmick: center one impossible event, like gravity failure or a time freeze, instead of stacking five ideas into one shot, per AIwithSynthia's Gravity Pulse post and MayorKingAI's time-freeze post.
That is why the current Seedance examples look more directed than merely generated. The prompt is doing previsualization work.
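The identity-lock, camera-package, and beat-timing patterns above amount to a repeatable prompt template. A minimal sketch of that structure as a small prompt builder follows; the field names, wording, and sample beats are illustrative assumptions, not any vendor's prompt specification:

```python
# Illustrative builder for the shot-list prompt style described above.
# The template wording and sample beats are hypothetical, not a vendor spec.

def build_prompt(character: str, camera: str, beats: list[tuple[str, str]]) -> str:
    """Assemble an identity-locked, timecoded video prompt."""
    lines = [
        # Identity lock: define the lead once, then pin their appearance.
        f"Lead character: {character}. Keep facial features, proportions, "
        "and wardrobe identical in every shot.",
        # Camera package: lens, light, depth of field, movement.
        f"Camera: {camera}",
    ]
    # Beat timing: one timecoded line per shot, one idea per beat.
    for timecode, action in beats:
        lines.append(f"[{timecode}] {action}")
    return "\n".join(lines)

prompt = build_prompt(
    character="a woman in her 30s, short black hair, denim jacket",
    camera="35mm lens, soft daylight, shallow depth of field, slow dolly-in",
    beats=[
        ("0-5s", "she walks through a market; the crowd moves normally"),
        ("5-10s", "time freezes around her; only she keeps moving"),
        ("10-15s", "she turns to camera; frozen vendors fill the background"),
    ],
)
print(prompt)
```

The point of the helper is the discipline it enforces: one character block, one camera block, and one physics gimmick spread across timecoded beats, rather than five ideas stacked into a single sentence.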
Portrait rules and rollout remain fragmented
The official distribution story is still uneven, depending on which wrapper you use.
A few details that did not fit the cleaner launch posts:
- MayorKingAI's Dreamina rollout post claimed 1080p downloads were rolling out in phases for paid users across Africa, South America, the Middle East, and Southeast Asia
- zaesarius' VIP post advertised a separate Seedance 2.0 VIP tier with faster queues, no face restrictions, and text, image, and audio inputs
- The official Volcengine API announcement says the base model supports text, image, audio, and video inputs, but also says real portrait use requires face verification and authorization
- Freepik's 1080p API doc exposes a specific Seedance Pro 1080p image-to-video endpoint, which suggests platforms are packaging the model in different tiers and endpoints rather than one universal surface
For creators, that means the same model name is arriving with different rules, queues, and access paths depending on the product sitting on top of it.