Ollama supports Hermes Agent in v0.21 with `ollama launch hermes`
Ollama 0.21 added native Hermes Agent support through the `ollama launch hermes` command. That makes a self-improving local agent loop available without a hosted inference stack, with memory and skills running on top of Ollama's model serving.

TL;DR
- Ollama 0.21 added native Hermes Agent support, exposed through the new `ollama launch hermes` flow that Ollama's launch post introduced and that Ollama's Hermes docs document.
- The integration is more than a model preset. According to Ollama's thread, Hermes can create skills from experience, persist knowledge, search past conversations, and keep a cross-session user model.
- Ollama's Hermes docs say the launcher handles install, model selection, provider wiring, and optional messaging setup automatically, instead of making users hand-configure Hermes against Ollama's local OpenAI-compatible endpoint.
- The first terminal screenshot in Ollama's thread shows Hermes Agent v0.10.0 with browser, code execution, cronjob, delegation, file, Home Assistant, and image generation tools, plus skill packs for devops, data science, GitHub, email, and more.
You can read the integration guide, check the broader `ollama launch` CLI reference, and skim Hermes Agent v0.10.0's release notes to see why the timing lines up. Hermes shipped its Tool Gateway release on April 16, then Ollama added a one-command launcher a day later. The setup page also quietly shows that Ollama is treating Hermes as both a local agent shell and an optional messaging gateway for Telegram, Discord, Slack, WhatsApp, Signal, and email.
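Since the Hermes target only exists from Ollama 0.21 onward, a script that wraps the launcher might want to gate on the installed version first. A minimal sketch, not from the docs; `version_ge` is a hypothetical helper built on `sort -V`:

```shell
# Hypothetical helper: succeeds when $1 >= $2 as dotted version strings.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# In practice the value would come from parsing `ollama --version`;
# hard-coded here so the sketch is self-contained.
ver="0.21.0"
if version_ge "$ver" "0.21"; then
  echo "ok: $ver supports ollama launch hermes"
fi
```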
`ollama launch hermes`
Ollama framed the feature as a one-liner: `ollama launch hermes`. The linked integration page says that command can install Hermes if it is missing, let the user pick a local or cloud model, point Hermes at `http://127.0.0.1:11434/v1`, and then launch the chat.
That turns Hermes into another first-party launch target inside Ollama's integration layer. The broader CLI reference describes `ollama launch` as the command for configuring external applications against Ollama models, and the integrations index now lists Hermes Agent under assistants.
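Concretely, the provider wiring amounts to pointing Hermes at Ollama's local OpenAI-compatible base URL. The same endpoint answers any OpenAI-style client; a sketch of the request such a client would send (the model name is illustrative, not from the docs):

```shell
# The base URL the integration page says Hermes is pointed at.
BASE_URL="http://127.0.0.1:11434/v1"

# An OpenAI-style chat completion request body (model name illustrative).
PAYLOAD='{"model":"qwen3.5","messages":[{"role":"user","content":"hello"}]}'

echo "POST $BASE_URL/chat/completions"
echo "$PAYLOAD"

# With Ollama running, the real call would be:
#   curl -s "$BASE_URL/chat/completions" \
#     -H 'Content-Type: application/json' -d "$PAYLOAD"
```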
Tools and skills in the launch screen
The terminal capture attached to Ollama's thread is the most concrete look at what ships on day one. It shows Hermes Agent v0.10.0 running with these tool surfaces:
- browser
- clarify
- code_execution
- cronjob
- delegation
- file
- homeassistant
- image_gen
The same screen also lists bundled skill categories, including:
- apple
- autonomous-ai-agents
- creative
- data-science
- devops
- gaming
- general
- github
That matters mostly because it makes Ollama look less like a bare local inference server and more like a launcher for full agent runtimes sitting on top of it.
Memory and self-improvement are the real feature
Ollama's launch copy emphasized five Hermes behaviors, each unusually specific for a short product announcement. According to Ollama's thread, Hermes:
- creates skills from experience,
- improves those skills during use,
- nudges itself to persist knowledge,
- searches its own past conversations, and
- builds a deepening model of the user across sessions.
The Hermes integration docs compress that into three product terms: automatic skill creation, cross-session memory, and more than 70 built-in skills. Nous Research confirmed the pairing in NousResearch's partnership post, but the operational detail lives in Ollama's docs, not the tweet.
Gateway setup and model picks
The setup flow in Ollama's Hermes docs does one thing that is easy to miss in the launch tweet: messaging is built into onboarding. After model selection, the wizard can connect Hermes to Telegram, Discord, Slack, WhatsApp, Signal, or email, or skip that step and just run the local chat.
The same page also splits recommended models between cloud and local options. Ollama highlights kimi-k2.5:cloud, qwen3.5:cloud, glm-5.1:cloud, and minimax-m2.7:cloud on the hosted side, then gemma4 and qwen3.5 for local runs. Windows support is there too, but only through WSL2, which the docs call out explicitly.
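The cloud/local split above lends itself to a pre-launch pick. A sketch using the model names the docs list; `LOCAL_ONLY` is a hypothetical switch for the sketch, not a real Ollama or Hermes flag:

```shell
# Pick a local model for fully offline runs, otherwise a hosted option.
if [ "${LOCAL_ONLY:-1}" = "1" ]; then
  MODEL="qwen3.5"        # local pick from the docs (gemma4 also listed)
else
  MODEL="qwen3.5:cloud"  # hosted pick from the docs
fi

echo "selected model: $MODEL"
# With Ollama installed, the follow-up would be:
#   ollama pull "$MODEL" && ollama launch hermes
```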
That lines up with Hermes Agent v0.10.0's release notes, which describe April 16 as the Tool Gateway release. Paid Nous Portal subscribers got managed web search, image generation, text-to-speech, and browser automation without separate API keys, so Ollama's launcher arrived just as Hermes expanded from a local agent shell into a broader tool-and-messaging runtime.