OpenClaw
Stories, products, and related signals connected to this tag in Explore.
Stories
Builders shipped pi-treebase, a Miko voice mode for pi-listens, devrage support, and a Japanese OpenCode Go guide after the first Pi extension burst. The releases arrive as Pi’s provider abstraction gets stress-tested by OpenClaw-scale multi-provider use.
OpenClaw 2026.5.4 adds cleaner plugin installs, faster Gateway startup paths, sharper doctor repairs, and fixes for Windows and Discord. The release is aimed at reliability, so update if you rely on the Gateway or plugin tooling.
OpenClaw 2026.5.3 shipped paired-node file transfer, live /steer nudges, and /side side-questions, plus hardened plugin installs. The release changes how long-running agents are controlled and moves files with policy-gated 16 MB hops instead of stdout hacks.
ClawSweeper 0.2.0 turns OpenClaw repo maintenance into an issue-to-PR loop with build checks, repair passes, re-review, and conservative automerge. The release packages a Codex-driven maintenance bot that other repositories can fork instead of wiring their own triage stack.
OpenClaw 2026.5.2 shipped Grok 4.3 as the default xAI chat model and a broad plumbing pass for plugins, session paths, messaging bridges, and voice features. The release matters because it trims startup stalls and cleans up common integration edges in self-hosted agent setups.
OpenClaw now lets users sign in with a ChatGPT account and use an existing subscription inside the harness. That gives teams a sanctioned access path for third-party agent workflows just as Claude Code users report tighter billing and refusal heuristics elsewhere.
OpenClaw 2026.4.29 shipped a new group-chat flow, opt-in follow-up commitments, tighter exec controls, and first-class NVIDIA provider catalogs. The release matters because it pushes OpenClaw toward safer multi-user agent workflows instead of single-session chat hacks.
OpenClaw 2026.4.27 bundles DeepInfra support, better non-image attachments, explicit forward-proxy routing, and stricter model selection. The update broadens provider access while hardening operator-run deployments against routing and session failures.
Sigma added a private AI browser mode that runs OpenClaw with local models such as Gemma 4, Qwen, and Nemotron on-device. That matters because browser automation and page context can stay local instead of being routed through a hosted agent service.
OpenClaw 2026.4.26 shipped Google Live Talk, local-model fixes, openclaw migrate imports for Claude and Hermes, and one-command Matrix E2EE. It also hardens plugins, Docker, and transcript compaction for self-hosted agent runs.
Steipete’s maintainer bot ran 50 Codex agents in parallel and closed about 4,000 OpenClaw issues in a day. The cleanup pushed into rate limits, so use the README dashboard and Project Clowfish clustering to track large agent sweeps.
OpenClaw shipped a release that routes realtime voice queries to the full agent, defaults new users to V4 Flash, and adds coordinate clicks plus stale-lock recovery for browser automation. It also fixes Telegram, Slack, MCP session, and TTS issues, so update if those flows matter to your setup.
OpenClaw shipped a new release with a gateway-free local TUI, chat-time model registration, and xAI media tools. The update lowers setup friction and adds diagnostics plus trajectory bundles for debugging agent runs.
OpenClaw 2026.4.15 adds Anthropic Opus 4.7, bundled Gemini TTS, bounded memory reads, and transport self-heal fixes. The release targets context and reliability issues users had been reporting this week.
Hermes Agent shipped automatic OpenClaw migration, pastebin log sharing, and a reported 20% improvement in loading the right skill. Use the new import path and debug sharing to simplify setup across the official and community add-ons now covering support, web UI, workspace boards, and chat front ends.
Kilo Code’s ClawShop recap bundled a 30-minute KiloClaw setup workshop, SecretRef credential handling, searchable ClawBytes guides, and PinchBench for agentic performance. The event, OpenClaw 2026.4.10, and PetClaw together added new security, memory, budgeting, and desktop layers around the OpenClaw stack.
OpenClaw 2026.4.7 adds a headless inference hub, memory-wiki, session branch and restore, and webhook-driven TaskFlows. Composio also shipped a CLI for secure app authentication, so users can expand OpenClaw from a local coding harness into a broader agent runtime.
Builders shipped a direct Claude Code harness and a ClawHub marketplace skill for OpenClaw workflows. Use these routes to wire agent tooling into OpenClaw, but watch Claude API limits and token burn costs.
Anthropic’s Apr. 4 cutoff for using Claude subscriptions through OpenClaw-class harnesses went live. Users report API-billing fallbacks, ACP workarounds, and restored Claude Code quota, while edge cases around claude -p and Agent SDK use remain unsettled. The change pushes heavy agent loops toward metered access.
Anthropic said Claude subscriptions will stop covering third-party harnesses such as OpenClaw on Apr. 4, with discounted extra-usage bundles, refunds, and one-time plan credits. Heavy Claude-based agent workflows may need to move to API billing or extra-usage bundles because Anthropic cites subscription capacity constraints.
OpenClaw 2026.3.28 exposes messaging and event handling as nine MCP tools, adds Responses API support, and lets plugins request permission during browser use. Use it to separate transport from agent logic so Claude Code, Codex, Cursor, and local harnesses can share the same account with less glue.
Every launched Plus One, a hosted OpenClaw that lives in Slack. It comes preloaded with internal skills and works with a ChatGPT subscription or other API keys. It lowers the ops burden for deployed coworkers, so teams can test packaged agents before building their own stack.
OpenClaw 2026.3.24 adds native Microsoft Teams, OpenWebUI sub-agent access, Slack reply buttons, and a control surface for skills and tools. The release expands where the runtime can plug into enterprise workflows, while also increasing the surface area teams need to secure.
OpenClaw shipped version 2026.3.22 with ClawHub, OpenShell plus SSH sandboxes, side-question flows, and more search and model options, then followed with a 2026.3.23 patch. Teams get a broader plugin surface, but should patch quickly and review plugin trust boundaries as the ecosystem grows.
OpenClaw's maintainer asked users to switch to the dev channel and stress-test normal workflows before a large release that may break plugins. Watch harness speed, context plugins, and permission boundaries closely while the SDK refactor lands.
OpenClaw 3.13 now connects to a real Chrome 146 session over MCP so agents can drive your signed-in browser instead of a separate bot context. Update if captchas or auth state were blocking your web automation flows.
New third-party tests put MiniMax M2.7 at a 34% hallucination rate, roughly 65 tps, and 27.04% on Vibe Code Bench while users pushed it through physics-heavy web demos. It looks increasingly viable for agent workflows, but performance still swings by task and harness.
LlamaIndex open-sourced LiteParse, a model-free local parser for 50+ document types that preserves layout well enough for agent workflows. Use it as a fast first pass before expensive OCR or VLM parsing, especially when you need table structure and local execution.
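The "fast first pass" pattern described above can be sketched as a simple escalation chain. The parser functions and confidence field below are hypothetical stand-ins for illustration, not LiteParse's actual API:

```python
def cheap_local_parse(path):
    # Hypothetical stand-in for a model-free local layout parser.
    # Returns (extracted_text, layout_confidence in [0, 1]).
    return "| col a | col b |", 0.9

def expensive_ocr_parse(path):
    # Hypothetical stand-in for a slower OCR/VLM pipeline.
    return "| col a | col b |", 0.99

def parse_document(path, min_confidence=0.8):
    """Try the cheap local pass first; escalate to OCR only when
    layout confidence is too low to trust the table structure."""
    text, conf = cheap_local_parse(path)
    if conf >= min_confidence:
        return text, "local"
    return expensive_ocr_parse(path)[0], "ocr"

text, route = parse_document("report.pdf")
```

The design point is that the expensive path never runs when the cheap parser is confident, which is what makes the local pass worthwhile before OCR.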
Xiaomi launched MiMo-V2-Pro through its own API and confirmed Hunter Alpha was an early internal build. That makes the model easier to compare directly for long-context coding and tool-use workloads.
NVIDIA introduced NemoClaw, a reference stack that installs OpenShell and adds sandbox, privacy, and policy controls around OpenClaw. Use it if you want always-on agents on RTX PCs, DGX Spark, or cloud without building the security layer yourself.
OpenClaw added Ollama as an official provider through openclaw onboard --auth-choice ollama, alongside documented OpenAI-compatible self-hosted backends such as vLLM. Use it to run Claw workflows against local or custom models instead of a single hosted stack.
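The OpenAI-compatible route mentioned above is just the standard wire format that backends like vLLM expose at a local /v1 endpoint. As a minimal sketch (the base URL and model name here are illustrative assumptions, not OpenClaw defaults), building such a request looks like:

```python
import json

def build_chat_request(base_url, model, messages, api_key="EMPTY"):
    """Build an OpenAI-compatible /chat/completions request for any
    backend that speaks the OpenAI wire format, e.g. a local vLLM
    server (which serves http://localhost:8000/v1 by default)."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Content-Type": "application/json",
        # vLLM accepts any bearer token unless an api-key is configured
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

url, headers, body = build_chat_request(
    "http://localhost:8000/v1",
    "qwen2.5-7b-instruct",  # whatever model the local server loaded
    [{"role": "user", "content": "ping"}],
)
```

Any harness that lets you set a base URL and model name can target this endpoint, which is why the documented vLLM route works without a hosted provider.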
OpenClaw-RL released a fully asynchronous online training stack that turns live interaction feedback into ongoing agent updates with binary rewards and token-level OPD corrections. Use it as a starting point for online agent improvement only if you can score rollouts reliably and manage privacy risk.
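The exact objective behind the token-level OPD corrections isn't spelled out above. As a generic sketch in the same spirit, a binary-reward policy gradient with clipped per-token importance ratios (an off-policy correction) can be written as follows; this is an assumption-laden illustration, not OpenClaw-RL's code:

```python
import math

def token_level_loss(logp_new, logp_old, reward, baseline=0.5, clip=0.2):
    """Binary reward (0 or 1) centred by a baseline, with per-token
    importance ratios clipped PPO-style. Generic sketch only."""
    adv = reward - baseline                  # centred binary reward
    total = 0.0
    for ln, lo in zip(logp_new, logp_old):
        ratio = math.exp(ln - lo)            # token-level importance weight
        clipped = max(min(ratio, 1 + clip), 1 - clip)
        # pessimistic (min) objective, negated to form a loss
        total += -min(ratio * adv, clipped * adv)
    return total / len(logp_new)

loss = token_level_loss([-1.0, -0.5], [-1.1, -0.6], reward=1)
```

Rollouts scored 1 push the loss negative (reinforce those tokens); rollouts scored 0 push it positive, which is why reliable rollout scoring is the precondition the release notes flag.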
OpenClaw beta added live control of a real Chrome session through Chrome DevTools MCP; the project also added native SGLang provider support and parallel tool calling work. Try it if you need self-hosted agents to handle authenticated browser flows with local inference backends.