AI Primer

Glif adds auto-zoom video edits to Creative Super Agent

Glif’s Creative Super Agent can now scan an uploaded clip and apply zoom effects automatically while still handling subtitles in the same workflow. Fabian Stelzer also showed the agent loading Seedance iPhone-style skills for POV horror footage, so users can try the new edit path on short clips.


TL;DR

  • Fabian Stelzer's auto-zoom demo shows Glif's Creative Super Agent taking an uploaded clip, scanning it, and dropping zoom effects onto the moments it thinks fit the prompt.
  • In the same thread, the launch post notes the workflow already handles subtitles, which turns the update into a small edit stack rather than a single effect.
  • Stelzer's Seedance example shows a second path through the agent: ask for "iPhone style Seedance" footage and Glif loads the matching skills for shaky POV horror-style video.
  • Earlier, Glif's recreate-video demo showed the agent analyzing a reference clip frame by frame and using that vibe as a template for a new directed clip.

You can browse the product page, skim Glif's docs overview, and check VideoKit Tools, where Glif already documents short-form video workflows with captioning. The new bit in the auto-zoom post is that the agent is now moving further into edit decisions inside chat, while the Seedance clip suggests style routing is becoming a prompt shortcut instead of a manual model hunt.

Auto-zoom edits

Stelzer wrote that Glif can now "automatically apply zoom effects" after you upload a clip and ask for them. In the thread, he added that users can still specify where zooms should land, but the agent has been "doing a good job of figuring it out" from the prompt alone.

That pushes Glif one step past generation and into lightweight editing. On the official product page, Glif says its Creative Super Agent can generate media, search the web, edit, and crawl content in one chat.
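Glif hasn't published how its agent decides where zooms go, but the general shape of such an edit pass is easy to picture: take the timestamps the agent flags as emphasis moments and turn each one into a punch-in/punch-out pair of zoom keyframes on the edit timeline. The function name and parameters below are made up for illustration; this is a sketch of the concept, not Glif's implementation.

```python
# Hypothetical sketch of an auto-zoom pass: turn detected "emphasis"
# timestamps into (time, scale) zoom keyframes for an edit timeline.
# All names and default values here are illustrative assumptions.

def zoom_keyframes(moments, punch=1.3, hold=0.4, ease=0.25):
    """For each timestamp (seconds), ease in to `punch` scale,
    hold the zoom, then ease back out to 1.0 (no zoom)."""
    keys = []
    for t in sorted(moments):
        keys += [
            (round(t - ease, 2), 1.0),          # start easing in
            (round(t, 2), punch),               # fully punched in
            (round(t + hold, 2), punch),        # hold the zoom
            (round(t + hold + ease, 2), 1.0),   # ease back out
        ]
    return keys

timeline = zoom_keyframes([2.0, 7.5])  # four keyframes per moment
```

Letting a user override `moments` directly would map to Stelzer's note that you can still tell the agent where zooms should land.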

Seedance POV skills

In the "Matcha in the woods" post, Stelzer said Seedance 2 is especially good at iPhone POV footage, and that telling Glif you want "iPhone style Seedance" makes the agent load the right skills automatically.

That behavior lines up with Glif's docs overview, which says the agent picks tools itself, chains them together, and works across models including Seedance. The earlier recreate-video demo showed the same routing logic on reference-based generation: upload a short clip, let Glif analyze it frame by frame, then direct a new clip built from that visual vibe.
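The routing behavior the docs describe can be pictured as a prompt-to-skill-chain lookup: a trigger phrase in the request selects a preassembled chain of tools. The toy router below is purely illustrative; the skill names, trigger phrases, and matching rule are assumptions, not Glif's actual routing logic.

```python
# Toy illustration of prompt-based skill routing. Glif's real routing is
# not public; these skill names and trigger phrases are invented examples.

SKILLS = {
    "iphone style seedance": ["seedance-2", "handheld-pov", "auto-subtitles"],
    "recreate video": ["frame-analysis", "style-template", "video-gen"],
}

def route_skills(prompt):
    """Return the skill chain whose trigger phrase appears in the prompt,
    falling back to a generic pipeline when nothing matches."""
    text = prompt.lower()
    for trigger, chain in SKILLS.items():
        if trigger in text:
            return chain
    return ["default-video-gen"]
```

The point of the sketch is the prompt-shortcut idea from the Seedance clip: saying "iPhone style Seedance" is enough to select the whole chain, with no manual model hunt.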

Captioned short-form workflows

The auto-zoom post matters partly because subtitles were already in the same run. In Stelzer's reply, he said the demo video also got its subtitles from Glif, and VideoKit Tools documents prebuilt short-form workflows that can create captioned TikTok-style videos end to end.

The broader product pitch is already pointed that way. Glif 2.0's feature page says the Creative Super Agent wraps 100+ tools and 50+ models in one chat, and an earlier music-video example shows the kind of multi-step brief Glif is chasing: take Midjourney alien models, generate a slick K-pop stuttercore video, make the music, and assemble the whole piece inside one prompt.
