OpenClaw 2026.4.26 adds Google Live Talk, openclaw migrate, and Matrix E2EE
OpenClaw 2026.4.26 shipped Google Live Talk, local-model fixes, openclaw migrate imports for Claude and Hermes, and one-command Matrix E2EE. It also hardens plugins, Docker, and transcript compaction for self-hosted agent runs.

TL;DR
- openclaw's release thread says OpenClaw 2026.4.26 centers on four operator-facing changes: Google Live Talk, a large batch of Ollama and local-model fixes, openclaw migrate imports for Claude and Hermes, and one-command Matrix E2EE.
- According to openclaw's Google provider note, Talk mode now has a browser path for Google Live that uses constrained ephemeral credentials when possible, while backend-only providers fall back to a Gateway relay, matching the Google provider docs.
- openclaw's migrate post frames openclaw migrate as a real migration tool rather than a one-shot importer: it can list, plan, dry-run, emit JSON reports, back up, and apply imports, as documented in the CLI migrate docs.
- openclaw's local-model post, openclaw's compaction note, and openclaw's Docker hardening note together make this a good week for self-hosters: context handling, thinking controls, transcript compaction, plugin installs, and container update paths all got less fragile.
You can read the full v2026.4.26 release notes, skim the Google provider docs, and browse the Ollama provider page. The migration surface is unusually complete for a point release, with Claude Desktop and Claude Code imports plus Hermes, and the ops side got equally practical touches in the Matrix channel docs and compaction docs.
Google Live Talk
OpenClaw's cleanest new feature is the Google Live browser path. According to openclaw's Google provider note, Talk mode can now use constrained ephemeral credentials in the browser when that path exists, and switch to a Gateway relay when the provider has to remain backend-only.
That split matters because it spells out the trust boundary instead of hiding it in setup prose. The public docs at docs.openclaw.ai/providers/google are the canonical reference for that provider path.
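The decision the provider note describes can be sketched as a small routing function. This is an illustrative sketch, not OpenClaw's actual code: the `ProviderCaps` shape and function names are assumptions made up for the example.

```python
from dataclasses import dataclass


@dataclass
class ProviderCaps:
    """Capabilities a Talk provider advertises (hypothetical shape)."""
    has_browser_path: bool
    supports_ephemeral_credentials: bool


def choose_talk_transport(caps: ProviderCaps) -> str:
    """Prefer the browser path with short-lived, constrained
    credentials; fall back to relaying through the Gateway when
    the provider has to remain backend-only."""
    if caps.has_browser_path and caps.supports_ephemeral_credentials:
        return "browser-ephemeral"
    return "gateway-relay"
```

The point of spelling the split out this way is that the trust boundary is a single, auditable branch rather than setup prose.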
Ollama and local models
openclaw's local-model post lists six fixes that mostly fall into the category of boring-good infrastructure work:
- Context handling
- Thinking controls
- Timeouts
- Local auth
- Discovery
- OpenAI-compatible proxy behavior
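To make the last two items concrete: Ollama exposes an OpenAI-compatible endpoint at /v1/chat/completions, and a client-side timeout is what keeps a stalled local model from hanging the agent. The sketch below only assembles request parameters; the defaults and structure are illustrative, not OpenClaw's.

```python
def build_chat_request(base_url: str, model: str, prompt: str,
                       timeout_s: float = 30.0) -> dict:
    """Assemble an OpenAI-compatible chat request for a local
    Ollama server. The timeout guards against a wedged model;
    the payload shape follows the OpenAI chat-completions API."""
    return {
        "url": f"{base_url.rstrip('/')}/v1/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        "timeout": timeout_s,
    }
```

Passing the resulting dict to an HTTP client (e.g. `requests.post(**req)`) would exercise exactly the proxy behavior the fixes target.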
openclaw's Cerebras post adds a second theme: bundled provider setup is getting more declarative. Cerebras now ships with onboarding, static catalog metadata, and manifest-owned endpoint config, with the corresponding reference in the Cerebras provider docs.
openclaw migrate
Migration is often where self-hosted agent tools go to die, so this one is Christmas come early for config archaeologists. openclaw's migrate post says the new command can:
- List importable sources
- Plan a migration
- Dry-run the import
- Emit a JSON report
- Back up existing state
- Apply the import
The first bundled importers are Claude Desktop, Claude Code, and Hermes, per openclaw's migrate post. The docs page at docs.openclaw.ai/cli/migrate is where those workflows are spelled out in detail.
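The staged workflow above (plan, dry-run, report, back up, apply) can be modeled in a few lines. This is a minimal sketch of the general pattern, assuming a file-based config store; none of these function names or the report shape come from OpenClaw itself.

```python
import shutil
from pathlib import Path


def plan_import(source_dir: Path, dest_dir: Path) -> dict:
    """Enumerate what would move; nothing is written yet."""
    return {
        "source": str(source_dir),
        "dest": str(dest_dir),
        "files": sorted(p.name for p in source_dir.glob("*.json")),
    }


def run_migration(plan: dict, dry_run: bool = True) -> dict:
    """Apply (or merely report) the plan, backing up any
    existing destination state before touching it."""
    report = {"plan": plan, "dry_run": dry_run, "applied": []}
    if dry_run:
        return report  # JSON-serializable report, no side effects
    dest = Path(plan["dest"])
    if dest.exists():  # back up existing state first
        shutil.copytree(dest, dest.with_suffix(".bak"), dirs_exist_ok=True)
    dest.mkdir(parents=True, exist_ok=True)
    for name in plan["files"]:
        shutil.copy2(Path(plan["source"]) / name, dest / name)
        report["applied"].append(name)
    return report
```

The useful property is that the dry run and the real run share one plan, so what the JSON report promises is exactly what apply will do.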
Matrix E2EE and runtime hardening
openclaw's Matrix post says end-to-end encryption setup is now a one-command flow that can enable encryption, bootstrap recovery, and print verification status. That is paired with a batch of lower-level reliability work spread across the rest of the release.
The notable pieces from the thread and docs are:
- Matrix E2EE setup moved into a single command flow, per openclaw's Matrix post and the Matrix channel docs
- Transcript compaction can trigger on active transcript byte size, then rotate to a smaller successor transcript after local compaction, per openclaw's compaction note and the compaction docs
- Plugin installation now handles profile-aware destinations, symlinks, runtime dependency staging, safer scans, and clearer repair paths, per openclaw's plugin note and the plugins docs
- Docker and update paths now include verified temp-prefix npm swaps, CA certs in slim images, host.docker.internal defaults, and first-run volume permissions, per openclaw's Docker note and the Docker install docs
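The compaction behavior in the list above, a byte-size trigger followed by rotation to a smaller successor transcript, can be sketched as follows. The threshold, naming scheme, and keep-last policy here are assumptions for illustration, not OpenClaw's documented values.

```python
from pathlib import Path

MAX_TRANSCRIPT_BYTES = 256 * 1024  # hypothetical trigger threshold


def maybe_compact(transcript: Path, keep_last: int = 50) -> Path:
    """If the active transcript exceeds the byte threshold,
    write a smaller successor holding only the most recent
    lines and return its path; otherwise return the original."""
    if transcript.stat().st_size <= MAX_TRANSCRIPT_BYTES:
        return transcript
    lines = transcript.read_text().splitlines(keepends=True)
    successor = transcript.with_name(
        transcript.stem + ".1" + transcript.suffix
    )
    successor.write_text("".join(lines[-keep_last:]))
    return successor
```

Triggering on bytes rather than message count is what keeps long-running agent sessions from outgrowing the context they are replayed into.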
Tencent channels
The last useful clarification in the release came from openclaw's follow-up thread, which separates two Tencent-facing channel concepts that are easy to blur together. QQBot is the Tencent QQ bot integration for QQ groups, while Yuanbao is Tencent's AI assistant bot platform for DMs and group chats, with separate references in the QQBot docs and Yuanbao docs.