
Glif launches V2 and raises $17.5M seed

Glif launched V2 as a chat-based creative agent that chains image, video, voice, and music models, and announced a $17.5 million seed led by a16z and USV. Early demos show multi-model ads and short films being produced inside single conversations instead of manual tool hopping.


TL;DR

  • Glif launched V2 as a chat-based creative agent that, according to the Glif launch thread, can produce ads, films, short-form clips, voiceovers, music, and more by calling multiple models and tools inside one conversation.
  • The company also announced a $17.5 million seed, which Glif's launch thread says was led by a16z and USV, while a16z's post framed the product as a single agent that takes creators from idea to output.
  • The launch examples are unusually concrete: Glif's demo thread breaks out five different campaign videos, each assembled from different model stacks including Seedance 2.0, GPT Image 2, Gemini, Kling, Veo, ElevenLabs, and Google's TTS.
  • Early usage posts already show the pitch in action, with one public prompt test generating a shaky faux-phone video of a grandma doing yoga on a Lamborghini by combining GPT Image 2 with Seedance 2.0.

You can jump straight into Glif, read a16z's investment note, and browse the launch thread for five different multi-model video recipes. There is also a very online footnote in Fabian Stelzer's same-hour joke post, where Glif turned its unlucky launch timing, landing in the same hour as OpenAI's GPT 5.5 drop, into yet another generated clip.

One chat, many models

The core product claim is simple and useful: one chat orchestrates a pile of specialized generation systems for you. In a16z's announcement, the framing is idea to output, with the agent calling models and tools when needed instead of making creators bounce across separate apps.

The launch thread lists the creative surfaces directly, in Glif's own wording: ads, marketing content, films, short-form content, voiceovers, and music. The same thread links to Glif and names a16z and USV as the seed round's lead investors.
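
Glif has not published how V2's orchestration works under the hood, but the shape is familiar from tool-calling agents: a planner turns one loose brief into an ordered list of model calls, and a dispatch loop runs them, threading earlier outputs into later steps. Here is a minimal sketch under that assumption; the `Step` dataclass, the `plan()` stub, and the tool registry are all hypothetical stand-ins, not Glif's real API.

```python
# Hypothetical sketch of a one-chat creative orchestrator. None of
# these names are Glif's real APIs; the tool registry stands in for
# whatever image/video/voice/music backends the agent routes to.

from dataclasses import dataclass

@dataclass
class Step:
    tool: str          # which specialized model to call
    prompt: str        # sub-prompt derived from the user's brief
    inputs: list[str]  # keys of earlier artifacts this step consumes

# Stand-ins for specialized generation backends (the Seedance, GPT
# Image, ElevenLabs roles in Glif's demos). Each returns an artifact.
TOOLS = {
    "image": lambda prompt, deps: f"image<{prompt}>",
    "video": lambda prompt, deps: f"video<{prompt} | from {deps}>",
    "voice": lambda prompt, deps: f"voice<{prompt}>",
    "music": lambda prompt, deps: f"music<{prompt}>",
}

def plan(brief: str) -> list[Step]:
    """Placeholder for the planning model: turn one loose chat brief
    into an ordered toolchain. A real agent would derive this with a
    tool-calling LLM rather than hard-coded steps."""
    return [
        Step("image", f"keyframe for: {brief}", []),
        Step("video", f"animate: {brief}", ["step0"]),
        Step("voice", f"voiceover for: {brief}", []),
    ]

def run(brief: str) -> dict[str, str]:
    """Dispatch loop: execute the plan, threading earlier artifacts
    into later steps so image output can seed the video model."""
    artifacts: dict[str, str] = {}
    for i, step in enumerate(plan(brief)):
        deps = [artifacts[k] for k in step.inputs]
        artifacts[f"step{i}"] = TOOLS[step.tool](step.prompt, deps)
    return artifacts

print(run("anime video ad for a ramen shop featuring a cyborg monster"))
```

The interesting engineering lives in `plan()`, which a production agent would implement with a tool-calling model rather than fixed steps; the dispatch loop itself is almost trivial.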

Five launch videos, five toolchains

Glif's launch thread is strongest when it stops selling the abstraction and shows the stack choices: five campaign videos, each assembled from a different mix of Seedance 2.0, GPT Image 2, Gemini, Kling, Veo, ElevenLabs, and Google's TTS.

That recipe-level detail is the real reveal. The interface pitch is a super agent, but the demos show a routing layer for creators who already know that image, video, voice, and subtitle jobs usually live in different products.

The prompt style is loose, not cinematic-spec precious

The public examples lean more "tell it the bit" than tightly structured prompt engineering.

One early test from awesome_visuals asked for "super amateurish bad video footage" of a 78-year-old grandma doing wobbly downward dog on a bright yellow Lamborghini Countach, with whispered commentary and a car alarm. An earlier Glif example from Fabian Stelzer's ramen ad post used a much shorter brief: "generate an anime video ad for a ramen shop featuring a cyborg monster. Make no mistakes."

Those examples matter because they show what Glif is trying to productize. The user describes a scene or format, then the system decides which generation models to chain together.
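
If you squint, that turns model choice into a routing problem: scan the brief for style cues and swap parts of a default stack. The sketch below is an invented illustration of that idea; the model names come from Glif's public demo recipes, but the cue-to-model mapping and the `pick_stack()` helper are assumptions, not Glif's actual logic.

```python
# Illustrative routing sketch: scan a loose brief for style cues and
# swap parts of a default model stack. Model names come from Glif's
# public demos; the cue-to-model mapping and pick_stack() helper are
# invented for illustration, not Glif's real logic.

DEFAULT_STACK = {
    "image": "GPT Image 2",
    "video": "Seedance 2.0",
    "voice": "ElevenLabs",
}

STYLE_OVERRIDES = {
    "anime": {"video": "Kling"},           # assumed: stylized motion
    "cinematic": {"video": "Veo"},         # assumed: filmic look
    "narration": {"voice": "Google TTS"},  # assumed: long voiceover
}

def pick_stack(brief: str) -> dict[str, str]:
    """Start from the default stack, then apply an override for each
    style cue found in the user's loose, non-cinematic-spec brief."""
    stack = dict(DEFAULT_STACK)
    for cue, override in STYLE_OVERRIDES.items():
        if cue in brief.lower():
            stack.update(override)
    return stack

brief = ("super amateurish bad video footage of a 78-year-old grandma "
         "doing wobbly downward dog on a yellow Lamborghini Countach")
print(pick_stack(brief))  # no cue matches, so the default stack wins
```

The grandma brief matches no override and falls through to the default GPT Image 2 plus Seedance 2.0 pairing, which happens to be the combination awesome_visuals' test actually used.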

Seed round and launch timing

The company paired the product launch with a $17.5 million seed announcement in its launch thread, and a follow-up a16z post says the firm is backing Glif because AI creative work is still fragmented across too many tools.

The funniest launch artifact came a little later. In Fabian Stelzer's post about launching the same hour as OpenAI, Glif turned the coincidence into another generated video, which is a neat proof that the product can eat its own marketing queue in real time.
