AI Primer

OpenArt adds Seedance 2.0 1080p with consistent human faces

OpenArt users reported Seedance 2.0 now renders 1080p video with consistent real-human faces, and posts on Runway iOS and ComfyUI showed the higher-resolution model spreading to more surfaces. That widens access beyond yesterday's single-platform 1080p rollout.


TL;DR

Runway's API docs and Freepik's Seedance 2.0 page both confirm the wider rollout, and the official Volcengine API announcement goes further than the tweets, spelling out four input modes, portrait authorization, and a library of more than 10,000 preset virtual humans. Freepik also split out a dedicated Seedance Pro 1080p API endpoint, while the Runway iOS App Store listing pitches the mobile app around consistent characters, objects, and locations.

1080p spreads beyond one app

Runway's own posts turned Seedance 2.0 into a three-surface rollout in about a day: desktop 1080p, iOS access, and API availability.

The wider pattern is distribution, not just pixels.

Runway also tied the phone launch to promotion. In runwayml's iOS post, the company said first-time subscribers could get up to 50 percent off three months of any plan through the iOS app.

Human faces are the real unlock

The 1080p label got the headlines. The more useful change for filmmakers is that multiple posts started treating human identity consistency as newly reliable.

Across the evidence set, the face story breaks into four layers:

  • OpenArt demos claimed "real human faces" that stay consistent shot after shot, per AIwithSynthia's OpenArt post
  • figmaweave's post said Seedance 2.0 now accepts face-based reference images for consistent characters across scenes
  • Freepik's official page describes multi-reference character consistency across different shots and angles
  • The official Volcengine API announcement says portrait use is gated by face verification and authorization, and offers 10,000-plus preset virtual humans as a compliant fallback

That mix explains the moment. Creators are posting face-locked demos, while the official vendor language still wraps real-person use in authorization flows.

Prompts are getting storyboard-level specific

Seedance 2.0 clips are starting to read less like single prompts and more like miniature shot lists.

Four recurring prompt patterns show up in the best examples:

  1. Identity lock: define the lead character first, then insist on identical facial features, proportions, and wardrobe, as in AIwithSynthia's prompt thread.
  2. Camera package: specify lens, daylight, shadows, depth of field, and camera movement, as in AIwithSynthia's prompt thread and MayorKingAI's setup post.
  3. Beat timing: divide the clip into timecoded segments, as MayorKingAI's setup post does from 0 to 15 seconds.
  4. Physics gimmick: center one impossible event, like gravity failure or a time freeze, instead of stacking five ideas into one shot, per AIwithSynthia's Gravity Pulse post and MayorKingAI's time-freeze post.

That is why the current Seedance examples look more directed than merely generated. The prompt is doing previsualization work.

Portrait rules and rollout remain fragmented

The official distribution story is still uneven, depending on which wrapper you use.

A few details that did not fit the cleaner launch posts:

  • MayorKingAI's Dreamina rollout post claimed 1080p downloads were rolling out in phases for paid users across Africa, South America, the Middle East, and Southeast Asia
  • zaesarius' VIP post advertised a separate Seedance 2.0 VIP tier with faster queues, no face restrictions, and text, image, and audio inputs
  • The official Volcengine API announcement says the base model supports text, image, audio, and video inputs, but also says real portrait use requires face verification and authorization
  • Freepik's 1080p API doc exposes a specific Seedance Pro 1080p image-to-video endpoint, which suggests platforms are packaging the model in different tiers and endpoints rather than one universal surface

For creators, that means the same model name is arriving with different rules, queues, and access paths depending on the product sitting on top of it.

🧾 More sources

TL;DR (4 tweets)
Core story beats: 1080p, face consistency, wider distribution, and emerging prompt patterns.
1080p spreads beyond one app (3 tweets)
Evidence that Seedance 2.0 moved across Runway desktop, iOS, API, Freepik, and ComfyUI.
Human faces are the real unlock (1 tweet)
Posts and docs pointing to face reference support and improved identity consistency.
Prompts are getting storyboard-level specific (2 tweets)
Creator examples that show shot lists, lens choices, timing, and identity-lock instructions.