Pika Agents supports GPT Images 2.0 and Seedance 2.0 ad workflows
Creator tests showed Pika Agents using GPT Images 2.0 for storyboards, extending two 15-second Seedance 2.0 clips into one ad, and running from Telegram on mobile. The workflows matter because Pika is being used as an orchestration layer for multi-model ad production, not just one-shot video output.

TL;DR
- pika_labs' launch thread framed Pika Agents as a creative partner with a voice, face, and personality, while the separate Pika.me product page says the agent product is free to start on web and iOS.
- In ProperPrompter's walkthrough, one Pika Agent chat used GPT Images 2.0 for storyboard frames, then switched to Seedance 2.0 for final motion without leaving the thread.
- ProperPrompter's follow-up showed a practical workaround for Seedance's 15-second cap: plan two 15-second segments, generate clip one, then feed it back in as the starting point for clip two.
- Mobile control is part of the pitch now. AmirMushich's earlier demo and his Telegram thread both showed agents taking voice prompts in Telegram, returning briefings, and kicking off Seedance-based outputs from a phone.
- The creator examples are already turning into reusable production recipes: egeberkina's storyboard prompt mapped a Tekken-style fight into nine panels, while MayorKingAI's Seedance prompt turned a biblical storyboard into a timed shot list with audio cues.
You can browse the Pika.me onboarding page, compare it with Pika's still video-first main site, and see the same orchestration pattern surface inside other creative stacks like Magnific. The strange bit is how quickly the workflow has settled into a familiar split: image model for the locked visual, video model for motion, agent chat for glue.
Pika moved the product to pika.me
Pika's main web homepage is still dominated by Pikaformance, and the public pricing page still reads like a classic video tool catalog. The agent product is described elsewhere: Pika.me says you can create a Pika Agent in minutes, start free, and use it on web or iOS.
That split matters because the official launch copy sells an interface change, not a new render model. The agent is positioned as the layer that can hold taste, switch models, and keep the conversation going across tools.
One conversation can plan, render, and stitch an ad
In ProperPrompter's demo, the workflow starts with a reference avatar and a vibe brief, then GPT Images 2.0 produces storyboard frames inside the chat. The same thread then moves to Seedance 2.0 for motion, with the agent stitching two clips into one ad after planning around the 15-second limit.
The mechanics are simple enough to scan:
- Reference image in, with style and brand direction (ProperPrompter's setup)
- GPT Images 2.0 for storyboard stills (ProperPrompter's storyboard step)
- Seedance 2.0 for each motion segment (ProperPrompter's video step)
- Clip one fed back as input for clip two (ProperPrompter's stitching trick)
- Optional installable skills, including a Short Ads skill (ProperPrompter's skills note)
A screenshot in ProperPrompter's thread also shows model switching inside the agent, with Anthropic entries visible in the selector rather than a fixed house model.
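The segment-planning half of that trick is simple enough to sketch. Below is a minimal, hypothetical Python version: `plan_segments` splits a target ad length into chunks under the per-clip cap, and the commented loop shows where each clip's output would be fed back as the next clip's starting image. The `generate_clip` call is an assumption for illustration, not Pika's actual API.

```python
import math

CLIP_CAP_S = 15  # Seedance 2.0's per-clip length cap, per the creator threads

def plan_segments(total_s: float, cap_s: float = CLIP_CAP_S) -> list[tuple[float, float]]:
    """Split a target ad duration into (start, end) segments under the cap."""
    n = math.ceil(total_s / cap_s)
    bounds = [min(i * cap_s, total_s) for i in range(n + 1)]
    return list(zip(bounds, bounds[1:]))

# A 30-second ad becomes two 15-second Seedance segments:
print(plan_segments(30))  # [(0, 15), (15, 30)]

# Chaining sketch (generate_clip is hypothetical):
# seed = storyboard_frame                     # locked visual from GPT Images 2.0
# for start, end in plan_segments(30):
#     clip = generate_clip(prompt, seed_image=seed, duration=end - start)
#     seed = clip.last_frame                  # clip one becomes clip two's input
```

The point of the loop is continuity: each segment inherits the previous clip's final frame, so the stitched ad reads as one shot rather than two renders.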
Telegram becomes the mobile control layer
The more interesting creator test came from AmirMushich's thread, which pushed the agent outside Pika's own chat UI. He described Telegram as the command center for trend research, post ideation, campaign visuals, meeting scheduling, and designer guidance, and his screenshots showed the bot delivering a structured X trends briefing back into Telegram.
That makes Pika look less like a one-shot ad generator and more like an orchestration layer sitting on top of APIs, chat surfaces, and media models. His earlier clip already showed voice-message prompting from Telegram into a Seedance output, so the phone is part of the workflow, not just a notification surface.
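The orchestration pattern behind that Telegram setup amounts to keyword-to-task routing. Here is a deliberately toy dispatcher: the task names mirror the jobs listed in AmirMushich's thread, while the routing logic and every identifier are assumptions, not anything Pika or Telegram exposes.

```python
# Hypothetical dispatcher mapping incoming chat messages to agent tasks.
# Task names follow AmirMushich's thread; the matching logic is invented.
TASKS = {
    "trends": "trend research briefing",
    "post": "post ideation",
    "visuals": "campaign visuals via Seedance",
    "schedule": "meeting scheduling",
    "design": "designer guidance",
}

def route(message: str) -> str:
    """Pick a task from the first keyword found in an incoming message."""
    text = message.lower()
    for keyword, task in TASKS.items():
        if keyword in text:
            return task
    return "general chat"

print(route("pull X trends for today"))  # trend research briefing
```

In a real deployment the transcribed voice message would land here via a Telegram bot webhook, and the returned task would decide which model or API the agent invokes next.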
Prompt blocks are getting closer to production docs
Outside Pika itself, the surrounding creative culture is getting more structured. In MayorKingAI's Seedance prompt, the video prompt reads like a shot schedule, with timestamps, camera moves, sound effects, and score notes tied to nine storyboard panels. He said the piece was made inside Magnific, whose parent company said in an official announcement this week that the rebrand from Freepik to Magnific was meant to unify a broader AI creative suite in one place.
The same pattern shows up in egeberkina's workflow: Midjourney for character sheets, GPT Images 2.0 for a nine-panel gameplay storyboard, then Seedance 2.0 for a 15-second fight with explicit UI states, combo counters, and KO timing. The prompt box is still here, but it is starting to look a lot more like a production brief than a magic sentence.
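A timed shot list like the ones above is easy to treat as data before it becomes a prompt. This sketch is hypothetical: the `Shot` structure and serializer are assumptions, and the example rows loosely echo the fight-scene elements in egeberkina's brief, not anyone's actual prompt.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    start_s: int
    end_s: int
    action: str
    camera: str
    audio: str

def to_prompt(shots: list[Shot]) -> str:
    """Serialize a timed shot list into a single prompt-box string."""
    return "\n".join(
        f"[{s.start_s}-{s.end_s}s] {s.action} | camera: {s.camera} | audio: {s.audio}"
        for s in shots
    )

shots = [
    Shot(0, 5, "wide shot of the arena, fighters square up", "slow push-in", "low drone, crowd murmur"),
    Shot(5, 10, "combo lands, counter ticks up on the UI", "handheld orbit", "impact SFX, rising score"),
    Shot(10, 15, "slow-motion KO, banner drops in", "crane up", "score peak, bell ring"),
]
print(to_prompt(shots))
```

Keeping the brief structured like this makes it trivial to retime shots or swap audio cues without rewriting the whole prompt, which is roughly what these creator prompt blocks are converging on by hand.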