OpenClaw added Ollama as an official provider through openclaw onboard --auth-choice ollama, alongside documented OpenAI-compatible self-hosted backends such as vLLM. Use it to run Claw workflows against local or custom models instead of a single hosted stack.

Ollama is now an official OpenClaw provider, which matters because model access moves into the same onboarding path as the rest of the product instead of requiring a custom bridge. In the launch post, Ollama says “all models from Ollama will work seamlessly with OpenClaw,” and the accompanying screenshot shows the exact entry point, openclaw onboard --auth-choice ollama, with provider selection inside OpenClaw’s existing gateway wizard.
The setup flow also exposes deployment assumptions that matter for engineers. The [img:6|Onboarding screenshot] shows OpenClaw’s gateway warning that the stack is “personal-by-default,” with shared or multi-user use requiring lock-down, and defaulting to a loopback bind, token auth, and Tailscale exposure off. It also shows an Ollama base URL on localhost and an Ollama mode selector set to “Cloud + Local,” which suggests the provider abstraction is meant to span both local weights and Ollama-hosted endpoints from the same chat surface.
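The personal-by-default posture in the screenshot corresponds to a loopback-only endpoint. As a minimal sketch of what that looks like on the wire (the helper function and model name here are illustrative, not OpenClaw or Ollama API; the port and /v1 path are Ollama's documented defaults):

```python
import json

# Ollama listens on localhost:11434 by default, and its OpenAI-compatible
# routes live under /v1 -- the shape a gateway like OpenClaw's can target.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Compose the URL and JSON body for an OpenAI-style chat completion.

    Illustrative helper only, not part of any OpenClaw or Ollama SDK.
    """
    url = f"{OLLAMA_BASE_URL}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

url, body = build_chat_request("llama3.2", "hello")
print(url)  # http://localhost:11434/v1/chat/completions
```

Because the bind is loopback-only, nothing off-machine can reach this endpoint unless the operator deliberately exposes it, which is the lock-down posture the wizard warns about.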
Ollama is not the only route, though. It is the new official provider, but OpenClaw is also being positioned around OpenAI-compatible backends more broadly. In the vLLM walkthrough, the vLLM team says running OpenClaw with your own model is “surprisingly easy and fast”: deploy the model with vLLM, expose an OpenAI-compatible API, and point OpenClaw at that endpoint.
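The swap the vLLM post describes is mostly a base-URL change, because both backends speak the same OpenAI-compatible protocol. A hedged sketch under that assumption (the ports are each tool's documented default; the lookup function is illustrative, not OpenClaw configuration):

```python
# Both servers expose the same OpenAI-compatible /v1 routes, so pointing
# a client at one or the other is a configuration change, not a code
# change. Ports are the tools' documented defaults.
BACKENDS = {
    "ollama": "http://localhost:11434/v1",  # Ollama default port
    "vllm": "http://localhost:8000/v1",     # vLLM default serve port
}

def endpoint_for(backend: str) -> str:
    # Resolve the chat-completions URL for a named backend.
    return f"{BACKENDS[backend]}/chat/completions"

for name in BACKENDS:
    print(name, endpoint_for(name))
```

This is the property that lets the rest of the stack stay untouched when the serving layer changes.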
That post adds the implementation detail engineers care about most: “tool calling works out of the box,” which means agent workflows do not need a provider-specific rewrite when the serving layer is swapped. The demo uses Kimi K2.5 as the example model and shows a vLLM server coming up before the OpenClaw UI successfully invokes tools. Together with the Ollama launch, that makes the new provider less a one-off integration and more part of a coherent story around local and self-hosted inference targets.
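“Tool calling works out of the box” is possible because the OpenAI-compatible protocol carries tool definitions inside the request itself, so any conforming backend can drive the same agent loop. A sketch of that wire shape (the weather tool is a made-up example, and the model id is a placeholder, not from either post):

```python
import json

# A tool definition in the OpenAI-compatible "tools" schema. Any backend
# that honors this shape (vLLM, Ollama) can serve the same agent workflow.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

request = {
    "model": "kimi-k2.5",  # placeholder id; the vLLM demo used Kimi K2.5
    "messages": [{"role": "user", "content": "Weather in Lisbon?"}],
    "tools": [weather_tool],
}
print(json.dumps(request, indent=2))
```

The model answers with a structured tool call rather than free text, and the client executes the tool and feeds the result back; since that contract lives in the protocol, swapping the server underneath leaves the loop intact.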
The operational use case is already visible in maintainer posts. In the cron-job example, Steinberger says an OpenClaw mention-blocker “runs every 5 min” and filters “spam/reply guy/promo stuff,” with the attached digest showing dozens of automated moderation decisions [img:2|Digest screenshot].
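“Runs every 5 min” is the classic */5 cron slot. A minimal sketch of how that schedule selects firing minutes (pure arithmetic, no OpenClaw internals assumed):

```python
from datetime import datetime

def matches_every_5_min(ts: datetime) -> bool:
    # "*/5 * * * *" fires whenever the minute is divisible by 5,
    # i.e. at :00, :05, :10, ... of every hour.
    return ts.minute % 5 == 0

print(matches_every_5_min(datetime(2025, 1, 1, 12, 10)))  # True
print(matches_every_5_min(datetime(2025, 1, 1, 12, 12)))  # False
```

At that cadence the job makes up to 288 moderation passes a day, which is consistent with the dozens of decisions visible in the digest.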
OpenClaw’s provider story is getting simpler faster than its plugin story. Steinberger said he wants plugins to become “more powerful” while making core “leaner,” and he specifically called out support for “claude code/codex plugin bundles” as work in progress on the plugin roadmap. A follow-up reply from him, “I’m about to land this!”, suggests at least some of that work is moving quickly.
But the current limits are explicit in the DenchClaw discussion. The linked GitHub issue says some pieces could become plugins — custom tools, prompt-build hooks, and model-routing hooks — while major parts cannot. The architectural blockers listed there include serving a full Next.js app, terminal emulation over WebSockets with node-pty, a sandboxed app runtime, and custom chat orchestration with its own agent pool and SSE transport. That distinction matters: OpenClaw is getting easier to point at local models, but turning large opinionated forks into drop-in extensions still requires new API surfaces rather than just more providers.
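For a sense of scale, the plugin-able pieces in that issue are hook-shaped, while the blockers are whole runtimes. A purely hypothetical sketch of what hook-style extension points could look like (none of these names are real OpenClaw APIs; this only illustrates why hooks are easier to extract than a Next.js app or a terminal emulator):

```python
from typing import Protocol

class ModelRouter(Protocol):
    # Hypothetical model-routing hook: choose a backend/model per task.
    def route(self, task: str) -> str: ...

class PromptHook(Protocol):
    # Hypothetical prompt-build hook: rewrite the prompt before dispatch.
    def build(self, prompt: str) -> str: ...

class LocalFirstRouter:
    """Toy router: keep short tasks on a local model, send long ones
    to a self-hosted vLLM endpoint. Model ids are placeholders."""

    def route(self, task: str) -> str:
        return "ollama/llama3.2" if len(task) < 200 else "vllm/kimi-k2.5"

router = LocalFirstRouter()
print(router.route("summarize this line"))  # ollama/llama3.2
```

A hook like this is a small, stateless interface the core can call; by contrast, the blockers in the issue (WebSocket terminal emulation, a sandboxed app runtime, a custom SSE agent pool) each own long-lived processes and transport, which is why they need new API surfaces rather than a plugin slot.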
My openclaw twitter mention block cron job is working unreasonably well. Turns out AI is really good at detecting spam/reply guy/promo stuff. Runs every 5 min and cleans up my mentions - I actually see useful replies now and Twitter got pleasant again!
Hey @steipete, as you said DenchClaw would make a great plugin, I answered elaborately here on why it wouldn’t be possible under the current structure:
Thinking how we can evolve openclaw plugins to be more powerful while also making core leaner. Also wanna add support for claude code/codex plugin bundles. Good stuff coming soon!