Pika released a beta skill that lets Pika AI Selfs and third-party agents join Google Meet with real-time face and voice, and published the integration on GitHub. Pika says memory and personality persist across calls, while beta notes and user posts report glitches as the feature expands beyond Pika’s own agents.

The skill is called pikastream-video-meeting, an installable module for agents such as Claude Code and OpenClaw, with Google Meet join and leave commands plus avatar-generation and voice-cloning flows (Pika launch thread). You can browse the open-source skill repo, grab a developer key, and see how Pika frames the identity layer on Pika.me. The repo has concrete details the tweets only hint at: a listed price of $0.50 per minute, automatic payment-link generation if your balance is low, and post-meeting notes after the bot leaves.
Pika's core claim is simple: drop a Meet link into an agent workspace, and the agent can join as a real-time avatar with face and voice. In the launch thread, Pika says the skill works for "ANY agent" and keeps memory and personality intact during the call.
That cross-agent angle is what made the launch travel. Commentary posts immediately framed it as video chat escaping Pika's own product boundary and plugging into agents like Claude and OpenClaw (community reaction).
The repo is more interesting than the promo line. Pika published the feature as a standard skill module with a SKILL.md, scripts, and dependencies, so an agent can detect and use it without extra manual wiring.
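For readers unfamiliar with the skill-module convention, a layout like the following is a plausible sketch of what "SKILL.md, scripts, and dependencies" implies; the exact file names beyond SKILL.md are assumptions, not taken from the repo:

```
pikastream-video-meeting/
├── SKILL.md          # describes the skill so an agent can auto-detect it
├── scripts/          # the command entry points the agent invokes
└── requirements.txt  # Python dependencies (assumed name)
```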
According to the README, pikastream-video-meeting currently exposes four command paths:
- join a Google Meet with a bot name, avatar image, optional voice ID, and optional system prompt file
- leave a meeting by session ID
- generate-avatar with an output path and optional prompt
- clone-voice from an audio file, with optional noise reduction

The same README says the bot can pull workspace context, including identity, recent activity, and known people, into its system prompt. It also says the bot retrieves meeting notes after the session ends.
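To make the four command paths concrete, here is a minimal sketch of how an agent-side wrapper might assemble an invocation for each one. The flag names and the `scripts/meeting.py` entry point are assumptions for illustration; the real interface lives in the repo's SKILL.md.

```python
# Hypothetical argument builder for the four pikastream-video-meeting
# command paths. Flag names and the script entry point are assumptions
# for illustration; consult the repo's SKILL.md for the real interface.

def build_command(action: str, **opts) -> list[str]:
    """Assemble a CLI-style invocation for one of the skill's command paths."""
    base = ["python", "scripts/meeting.py", action]  # assumed entry point
    if action == "join":
        base += ["--meet-url", opts["meet_url"],
                 "--bot-name", opts["bot_name"],
                 "--avatar", opts["avatar"]]
        if "voice_id" in opts:                    # optional voice ID
            base += ["--voice-id", opts["voice_id"]]
        if "system_prompt" in opts:               # optional system prompt file
            base += ["--system-prompt", opts["system_prompt"]]
    elif action == "leave":
        base += ["--session-id", opts["session_id"]]
    elif action == "generate-avatar":
        base += ["--output", opts["output"]]
        if "prompt" in opts:                      # optional prompt
            base += ["--prompt", opts["prompt"]]
    elif action == "clone-voice":
        base += ["--audio", opts["audio"]]
        if opts.get("noise_reduction"):           # optional noise reduction
            base += ["--noise-reduction"]
    else:
        raise ValueError(f"unknown action: {action}")
    return base

cmd = build_command("join",
                    meet_url="https://meet.google.com/abc-defg-hij",
                    bot_name="My AI Self",
                    avatar="avatar.png")
print(cmd[2])  # join
```

The point of the sketch is the shape of the surface area: one verb per path, with the optional parameters the README lists hanging off each verb.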
Pika splits the launch into two layers. One is the general-purpose meeting skill for outside agents. The other is Pika's own "AI Self," which the company describes as a persistent digital version of a person.
On Pika.me, the company says an AI Self is created by uploading a selfie, recording a voice, and answering a few questions. The site says that self can talk, post, remember, and grow over time. In the launch thread, Pika adds one more claim: when the meeting skill is used with a Pika AI Self, it can execute agentic tasks during the call, not just talk on camera Pika launch thread.
The repository gives the most practical details. Pika lists the skill at $0.50 per minute, requires a PIKA_DEV_KEY, and says Python 3.10+ is needed, with ffmpeg optional for audio conversion during voice cloning.
The install flow is also unusually lightweight for something that looks this theatrical: add the developer key, point the agent at the skill folder, and the repo says dropping a Meet link should trigger the skill automatically. Before joining, the system checks balance and can create a payment link on the fly. Pika's own thread adds the final caveat: it is still beta, and glitches are expected (Pika launch thread).
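The pre-join balance check is easy to picture in code. Here is a minimal sketch under stated assumptions: the $0.50-per-minute price comes from the repo, but the function name, fields, and the idea of estimating cost from an expected call length are illustrative, not the skill's actual API.

```python
# Sketch of the pre-join balance check the repo describes: estimate the
# call's cost at the listed $0.50/minute and decide whether to join or
# request a top-up. Names and fields are assumptions for illustration.

PRICE_PER_MINUTE = 0.50  # listed price from the repo

def pre_join_check(balance: float, expected_minutes: int) -> dict:
    """Return a decision: join directly, or flag a shortfall first."""
    estimated_cost = PRICE_PER_MINUTE * expected_minutes
    if balance >= estimated_cost:
        return {"ok": True, "estimated_cost": estimated_cost}
    # The repo says a payment link is generated automatically when the
    # balance is too low; here we just report the shortfall amount.
    shortfall = round(estimated_cost - balance, 2)
    return {"ok": False, "estimated_cost": estimated_cost,
            "shortfall": shortfall}

# A 30-minute call costs $15.00, so a $10.00 balance falls $5.00 short.
print(pre_join_check(10.0, 30))
```

At that rate, an hour-long standup with a bot attendee runs $30, which is the kind of arithmetic worth doing before wiring this into a calendar.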
Ask your Pika AI Self to join a Google Meet and let the magic happen. For all other agents, you can download the Skill on Github here: github.com/Pika-Labs/Pika…
P.P.S. if you don't have a Pika AI Self yet, give birth to yours at Pika.me
Having your AI Self join a google meet is wild. And the skill is now available for all agents to try out.
Conversations tend to go better with a face and a voice. That’s why we’re thrilled to release the beta version of the first video chat skill for ANY agent, powered by our new real-time model, PikaStream1.0. The skill preserves memory and personality, and enables real-time