OpenClaw 2026.4.29 adds agent-native group chats, follow-up commitments, and NVIDIA model catalogs
OpenClaw 2026.4.29 shipped a new group-chat flow, opt-in follow-up commitments, tighter exec controls, and first-class NVIDIA provider catalogs. The release matters because it pushes OpenClaw toward safer multi-user agent workflows instead of single-session chat hacks.

TL;DR
- openclaw's release post framed 2026.4.29 around five changes: agent-native group chats, inferred follow-up commitments, tighter exec and owner controls, NVIDIA model catalogs, and startup plus channel reliability work.
- In groups, openclaw's group-chat note and the Groups docs show a new `message_tool` default, so the agent can think and use tools without auto-posting its final text back into the room.
- openclaw's commitments thread and the commitments docs describe an opt-in middle layer between memory and scheduling: short-lived follow-ups inferred from conversation context, capped at three per rolling day by default.
- openclaw's exec-controls note matches the exec approvals docs, where effective policy is the stricter merge of tool defaults and approval rules rather than a looser override.
- openclaw's NVIDIA post and the NVIDIA provider docs add catalog-backed model picks, including Nemotron, Kimi, MiniMax, and GLM, behind NVIDIA's OpenAI-compatible endpoint.
You can skim the release notes, check the new group chat visibility rules, read how inferred commitments are scoped and delivered, and see the exact NVIDIA catalog entries. steipete's reaction says the group-chat rewrite is the part worth retrying if you bounced off earlier builds.
Group chats
The big product change is that group rooms stop behaving like a normal assistant transcript. In the Groups docs, `messages.groupChat.visibleReplies` now defaults to `message_tool`, which means the agent can process the turn, update memory, and only speak visibly when it calls `message(action=send)`.
That fixes a specific awkwardness from earlier chat-loop designs: the final assistant text is no longer blindly dumped into the room. openclaw's release thread calls that shift "agent-native," and steipete's post is unusually blunt that the change is worth another look.
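As a config sketch, the new default would look something like this. The key path `messages.groupChat.visibleReplies` and the `message_tool` value come from the Groups docs; the surrounding file layout is an assumption for illustration:

```json
{
  "messages": {
    "groupChat": {
      "visibleReplies": "message_tool"
    }
  }
}
```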
Commitments
Follow-up commitments are a lightweight scheduler that OpenClaw infers from the conversation instead of waiting for an explicit reminder command. The commitments docs say they are off by default, scoped to the same agent and channel that created them, and delivered later through heartbeat rather than immediately.
The mechanics are tighter than the tweet summary suggests:
- hidden extraction runs after an agent reply
- delivery is delayed by at least one heartbeat interval
- `commitments.maxPerDay` defaults to 3 per agent session in a rolling day
- commitments can be listed and dismissed from the CLI
That puts them in a narrow lane between durable memory and cron-style reminders, exactly where a lot of chat agents currently get hacky.
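A config sketch of the defaults described above: `commitments.maxPerDay` and its default of 3 come from the docs, while the `enabled` key name is a guess standing in for however the opt-in is actually spelled (the docs only say commitments are off by default):

```json
{
  "commitments": {
    "enabled": true,
    "maxPerDay": 3
  }
}
```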
Exec approvals
The security work is less flashy but more important for anyone letting an agent touch a real host. The exec approvals docs define approvals as a second interlock on top of tool policy, and the effective result is the stricter merge of both layers.
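The "stricter merge" idea can be sketched in a few lines. The three policy levels and their ordering here are illustrative assumptions, not OpenClaw's real API; the only grounded claim is that the effective result is never looser than either layer:

```python
# Hypothetical sketch of a "stricter merge" of two policy layers.
# Policy names and ranking are invented for illustration.
RESTRICTIVENESS = {"allow": 0, "ask": 1, "deny": 2}

def effective_policy(tool_default: str, approval_rule: str) -> str:
    # The effective policy is whichever layer is more restrictive,
    # so a loose approval rule can never override a strict tool default.
    return max(tool_default, approval_rule, key=RESTRICTIVENESS.__getitem__)
```

Under this scheme, a session-level `allow` combined with a host-local `ask` still prompts, which matches the "restrictive profiles stay restrictive" framing.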
A few concrete pieces stand out:
- approval can depend on allowlists, user prompts, and host-local state in `~/.openclaw/exec-approvals.json`
- a host-local `ask: "always"` keeps prompting even if session defaults are looser
- if exact file binding for a script or interpreter run is not possible, OpenClaw refuses to mint an approval-backed run
openclaw's post summarizes this as restrictive profiles staying restrictive. That is a good description of the whole change set.
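A rough sketch of what the host-local file might contain: the path and the `ask: "always"` behavior come from the docs, while the `allowlist` field name and its entries are invented here for illustration:

```json
{
  "ask": "always",
  "allowlist": ["git status", "npm test"]
}
```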
NVIDIA catalogs
NVIDIA becomes a first-class provider instead of a bring-your-own-base-URL workaround. The NVIDIA docs put the endpoint at https://integrate.api.nvidia.com/v1, require an API key from build.nvidia.com, and expose catalog-backed model selection through the normal OpenClaw model commands.
The built-in catalog currently includes:
- `nvidia/nvidia/nemotron-3-super-120b-a12b`, 262,144 context, 8,192 max output
- `nvidia/moonshotai/kimi-k2.5`, 262,144 context, 8,192 max output
- `nvidia/minimaxai/minimax-m2.5`, 196,608 context, 8,192 max output
- `nvidia/z-ai/glm5`, 202,752 context, 8,192 max output
The docs also say those NVIDIA-hosted models are free to use for now, which is a pretty strong onboarding story for a provider added in what is otherwise a group-chat release.
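Since the endpoint is OpenAI-compatible, you can hit it directly without OpenClaw. This sketch builds a standard `/chat/completions` request against the documented base URL; it assumes a key from build.nvidia.com in `NVIDIA_API_KEY`, and it assumes the `nvidia/` prefix on the catalog IDs is OpenClaw's provider prefix rather than part of the upstream model name:

```python
import json
import os
import urllib.request

# Base URL from the NVIDIA provider docs; everything else is the
# standard OpenAI-compatible chat-completions payload shape.
BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('NVIDIA_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

# Catalog entry nvidia/moonshotai/kimi-k2.5, minus the assumed provider prefix.
req = build_request("moonshotai/kimi-k2.5", "Say hello.")
```

Sending the request with `urllib.request.urlopen(req)` would need a valid key; the sketch only shows the shape of the call.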
Queueing and memory
Two quieter changes round out the release. First, openclaw's queue note matches the command queue docs: inbound chat traffic now defaults to `steer`, so follow-up messages can be injected at the next model boundary instead of spawning a second run against the same session.
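If you want to pin that behavior explicitly, the setting would look something like this; only the `steer` value appears in the docs, and the `queue.mode` key path is a guess:

```json
{
  "queue": {
    "mode": "steer"
  }
}
```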
Second, openclaw's memory thread points to a memory stack that is getting more provenance-heavy. The memory-wiki docs describe deterministic wiki pages with claims, evidence, contradictions, and open questions, while the 2026.4.29 release also adds scoped recall, people metadata, aliases, relationship graphs, and partial recall results on timeout.
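As a rough mental model, a wiki page carrying the four kinds of provenance the docs name might serialize like this. Only the concepts of claims, evidence, contradictions, and open questions come from the memory-wiki docs; every field name and value below is invented for illustration:

```json
{
  "page": "deployment-pipeline",
  "claims": ["staging deploys run nightly"],
  "evidence": ["chat 2026-04-12: ops confirmed the nightly job"],
  "contradictions": [],
  "openQuestions": ["does the nightly job also cover the docs site?"]
}
```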
That combination, less duplicate work in active chats and more structured evidence in memory, is what makes the rest of the release feel operational instead of cosmetic.