AI Primer

Runway launches Characters API: real-time avatars with custom voices and knowledge banks

Runway opened Characters on its developer platform with API access, custom voices, embedded knowledge, and a free starter allowance. Use it to build interactive hosts, guides, and assistants that can talk through tasks instead of relying on passive video.


TL;DR

  • Runway opened Characters on its developer platform, pitching real-time avatars that can be deployed through the API with custom styles, voices, instructions, and embedded knowledge banks, according to Runway's launch post.
  • The company says access is live now, per the developer platform post, with a free starter allowance of 30 minutes of conversation.
  • Early demos already frame Characters less like passive video mascots and more like task assistants: one creator used them to identify Japanese products in real time in a shared demo, while another built a game guide with a map-specific knowledge base in a Marathon prototype.

What shipped

Runway's launch post describes Characters as real-time intelligent avatars that can be embedded into apps, websites, products, and services through the API. The core creative hook is not just the avatar layer: developers can attach bespoke knowledge banks, custom voices, and instructions, then style the character across different visual looks.

The companion developer platform post says the product is available now and points builders to Runway's developer portal, with the first 30 minutes of conversation free. Runway CEO Cristóbal Valenzuela's team also framed the launch around voice-first task navigation rather than button-heavy interfaces, with a staff post calling out accessibility as a key use case.
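Runway's posts describe the capabilities (custom voices, instructions, embedded knowledge banks) without publishing schema details here, so the sketch below is purely hypothetical: the field names (`voice`, `instructions`, `knowledge_bank`), the placeholder endpoint URL, and the `RUNWAY_API_KEY` variable are all assumptions for illustration, not Runway's documented API.

```python
import json
import os

def build_character_config(name, voice, instructions, knowledge):
    """Assemble a JSON body for a hypothetical character-creation call.

    Field names are assumptions, not Runway's documented schema.
    """
    return {
        "name": name,
        "voice": voice,                # custom voice ID (assumed field)
        "instructions": instructions,  # behavioral prompt (assumed field)
        "knowledge_bank": knowledge,   # embedded documents (assumed field)
    }

config = build_character_config(
    name="map-guide",
    voice="narrator-01",
    instructions="Guide the player to objectives and advise on loot.",
    knowledge=["marathon_map_notes.md"],
)

# Preview of what a request might look like. The URL and auth header
# are placeholders, not documented values.
request_preview = {
    "url": "https://api.example.com/v1/characters",  # placeholder endpoint
    "headers": {
        "Authorization": f"Bearer {os.environ.get('RUNWAY_API_KEY', '<key>')}",
    },
    "body": json.dumps(config),
}
print(request_preview["body"])
```

The point of the sketch is the shape of the integration the launch post implies: one configuration object carrying the avatar's style, voice, behavior, and knowledge together.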

What creators are building with it

The clearest early pattern is domain-specific assistants. In one prototype, a creator loaded Characters with map knowledge from Bungie's Marathon, then had the avatar read the screen, guide players to objectives, and advise on what loot to extract.

A lighter demo used Characters to identify Japanese products from a live camera view, turning object recognition into a spoken back-and-forth instead of a static label pass, as shown in the reposted clip. Together, those examples suggest the format works best when the avatar has narrow context and a live task to talk through, not just a generic chat persona.
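That "narrow context" observation can be sketched as a preprocessing step: rather than handing an avatar a generic corpus, a builder might filter domain notes down to the task at hand before attaching them as a knowledge bank. Everything below (the note structure, tags, and the `filter_notes` helper) is an illustrative assumption, not part of Runway's product.

```python
# Illustrative sketch: scope a knowledge bank to one task before
# attaching it to an avatar. Note format and filtering are assumptions.
def filter_notes(notes, topic):
    """Keep only the text of notes tagged with the given topic."""
    return [n["text"] for n in notes if topic in n["tags"]]

notes = [
    {"text": "Extraction zone opens at the north gate.", "tags": ["map", "extraction"]},
    {"text": "Rare loot spawns near the reactor.", "tags": ["loot"]},
    {"text": "Patch 1.2 rebalanced weapon damage.", "tags": ["patch-notes"]},
]

# A map guide needs map and loot context, not patch history.
knowledge_bank = filter_notes(notes, "map") + filter_notes(notes, "loot")
print(knowledge_bank)
# -> ['Extraction zone opens at the north gate.', 'Rare loot spawns near the reactor.']
```

The Marathon prototype above follows the same pattern at larger scale: map-specific knowledge in, everything else out, so the avatar's answers stay grounded in the task.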
