AI Primer
release

OpenClaw adds Ollama as an official provider via onboard --auth-choice ollama

OpenClaw added Ollama as an official provider through openclaw onboard --auth-choice ollama, alongside documented OpenAI-compatible self-hosted backends such as vLLM. Use it to run Claw workflows against local or custom models instead of a single hosted stack.


TL;DR

  • OpenClaw has added Ollama as an official provider, with onboarding exposed directly as openclaw onboard --auth-choice ollama; the setup flow shown in Ollama's launch post includes provider selection inside OpenClaw's existing gateway wizard.
  • The provider path is not limited to hosted APIs: the onboarding screenshot shows a “Cloud + Local” Ollama mode, while the vLLM demo also shows OpenClaw working against an OpenAI-compatible self-hosted endpoint with tool calling intact.
  • OpenClaw’s model layer is expanding faster than its extension layer: Steinberger’s roadmap note says plugin support is being reworked and may add Claude Code/Codex bundles, but the DenchClaw issue details why bigger app-like forks still do not fit the current plugin surface.
  • Early usage is already operational, not just demo-grade: one maintainer’s workflow describes an OpenClaw cron job that “runs every 5 min” and auto-blocks spammy X mentions, showing the stack being used for recurring agent tasks rather than one-off chats.

What shipped in the Ollama integration

Ollama is now an official OpenClaw provider, which matters because model access moves into the same onboarding path as the rest of the product instead of requiring a custom bridge. In the launch post, Ollama says “all models from Ollama will work seamlessly with OpenClaw,” and the accompanying screenshot shows the exact entry point: openclaw onboard --auth-choice ollama.

The setup flow also exposes deployment assumptions that matter for engineers. The onboarding screenshot shows OpenClaw's gateway warning that the stack is "personal-by-default" (shared or multi-user use requires lock-down) and defaulting to a loopback bind, token auth, and Tailscale exposure off. It also shows an Ollama base URL on localhost and an Ollama mode selector with "Cloud + Local," which suggests the provider abstraction is meant to span both local weights and Ollama-hosted endpoints from the same chat surface.
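A minimal sketch of what that "personal-by-default" posture amounts to. The key names below are illustrative, not OpenClaw's actual config schema; only the defaults themselves (loopback bind, token auth, Tailscale off, localhost Ollama URL) come from the screenshot.

```python
# Hypothetical sketch of the gateway defaults described in onboarding.
# Key names are illustrative, not OpenClaw's real config schema.
defaults = {
    "gateway_bind": "127.0.0.1",   # loopback only: not reachable from the LAN
    "auth": "token",               # requests must carry a token
    "tailscale_exposure": False,   # no tailnet exposure until explicitly enabled
    "ollama_base_url": "http://localhost:11434",  # Ollama's default local port
}

def is_personal_by_default(cfg: dict) -> bool:
    """True when the gateway is locked down to a single local user."""
    return (
        cfg["gateway_bind"] == "127.0.0.1"
        and cfg["auth"] == "token"
        and not cfg["tailscale_exposure"]
    )

print(is_personal_by_default(defaults))  # → True
```

The point of the check is that every relaxation (a non-loopback bind, auth off, tailnet exposure on) is an explicit opt-in rather than a default.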

Does this only work with Ollama?

No. Ollama is the new official provider, but OpenClaw is also being positioned around OpenAI-compatible backends. In the vLLM walkthrough, the vLLM team says running OpenClaw with your own model is “surprisingly easy and fast”: deploy the model with vLLM, expose an OpenAI-compatible API, and point OpenClaw at that endpoint.
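"Point OpenClaw at that endpoint" works because OpenAI compatibility is a wire format, not a vendor relationship. The sketch below builds the request shape a vLLM server accepts at /v1/chat/completions; the base URL reflects vLLM's default port, and the model name is illustrative (whatever `vllm serve` was given).

```python
import json

# What "OpenAI-compatible" means on the wire: any client that emits this
# request shape can target a vLLM server by swapping the base URL.
# The model name and URL below are illustrative.
base_url = "http://localhost:8000/v1"  # vLLM's default serving port
payload = {
    "model": "moonshotai/Kimi-K2.5",   # whatever model `vllm serve` loaded
    "messages": [
        {"role": "user", "content": "Summarize the open issues in this repo."}
    ],
}
request_body = json.dumps(payload)
print(f"POST {base_url}/chat/completions")
```

Swapping the serving layer from Ollama to vLLM changes `base_url` and nothing else in this shape, which is why the provider abstraction can stay thin.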

That post adds the implementation detail engineers care about most: "tool calling works out of the box," which means agent workflows do not need a provider-specific rewrite when the serving layer is swapped. The demo uses Kimi K2.5 as the example model and shows a vLLM server coming up before the OpenClaw UI successfully invokes tools. Together with the Ollama launch, that makes the new provider less of a one-off integration and more the start of a coherent story around local and self-hosted inference targets.
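Tool calling surviving the swap follows from the same wire-format argument: the `tools` array and `tool_calls` response are part of the OpenAI schema, so the agent layer parses the same structure regardless of backend. A sketch with an illustrative tool definition and a canned response in that schema:

```python
import json

# Illustrative tool definition in the OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file from the workspace.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

# Canned assistant response in the same schema (what a compatible
# backend returns when the model decides to call a tool).
response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "tool_calls": [{
                "id": "call_0",
                "type": "function",
                "function": {"name": "read_file",
                             "arguments": '{"path": "README.md"}'},
            }],
        },
    }],
}

call = response["choices"][0]["message"]["tool_calls"][0]["function"]
args = json.loads(call["arguments"])
print(call["name"], args["path"])  # → read_file README.md
```

If a backend emits this shape, the agent loop above it never knows which server produced it; that is the whole claim behind "out of the box."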

The operational use case is already visible in maintainer posts. In the cron-job example, Steinberger says an OpenClaw mention-blocker "runs every 5 min" and filters "spam/reply guy/promo stuff," with the attached digest screenshot showing dozens of automated moderation decisions.
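The "runs every 5 min" cadence maps onto a standard five-minute cron expression, whether scheduled by OpenClaw's own cron support or system cron. The post does not show the job definition, so the command below is hypothetical:

```
# Every 5 minutes: run a (hypothetical) OpenClaw task that reviews new
# X mentions and blocks spam. The CLI invocation is illustrative.
*/5 * * * * openclaw run mention-blocker
```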

Where the extension model still breaks down

OpenClaw’s provider story is getting simpler faster than its plugin story. Steinberger said he wants plugins to become “more powerful” while making core “leaner,” and in his plugin roadmap note he specifically called out support for “claude code/codex plugin bundles” as work in progress. A follow-up reply from him, “I’m about to land this!”, suggests at least some of that work is moving quickly.

But the current limits are explicit in the DenchClaw discussion. The linked GitHub issue says some pieces could become plugins — custom tools, prompt-build hooks, and model-routing hooks — while major parts cannot. The architectural blockers listed there include serving a full Next.js app, terminal emulation over WebSockets with node-pty, a sandboxed app runtime, and custom chat orchestration with its own agent pool and SSE transport. That distinction matters: OpenClaw is getting easier to point at local models, but turning large opinionated forks into drop-in extensions still requires new API surfaces rather than just more providers.
