Hermes Agent adds 7-agent SEO loops with self-writing skill files
Hermes Agent playbooks now show the agent writing its own skill file and running a 7-agent SEO loop claimed to get posts indexed in under 14 days. That makes Hermes look more like a reusable operating layer, with Claude Code as the execution handoff.

TL;DR
- Hermes Agent's official repo describes it as a self-improving agent with a built-in learning loop that creates skills from experience, improves them during use, and can run on a cheap VPS instead of a laptop-bound local session, according to the official GitHub repo and the self-improving skills claim.
- The most concrete workflow in the evidence is the Hermes-to-Claude Code diagram, which sketches a four-step handoff: prototype in Hermes or OpenClaw, run real work until the harness writes the skill, move the winner into Claude Code, then deploy it on a schedule.
- The SEO blueprint turns that pattern into a 7-agent content loop, with a human gate at the brief stage and a learning loop that feeds ranking results back into the system; the thread around it claims posts can get indexed and ranking in under 14 days, per the 14-day indexing claim.
- Hermes' docs back up the "operating layer" angle: SOUL.md, persistent memories, agent-created skills, and scheduled cron jobs all live inside ~/.hermes/, per the configuration docs, while the cron docs say jobs can attach multiple skills and run in fresh agent sessions.
- The more unusual subsystem is the Curator, which an official RFC says reviews agent-created skills in the background, marks stale ones, archives obsolete ones, and patches drift, matching the 7-day curator note in the workflow graphic.
You can browse the repo, inspect the Hermes home-directory layout, and read the scheduled tasks docs. The weirder bit is the Curator RFC, which treats skills like living assets that can go stale or get archived, while the SEO loop blueprint shows the same idea applied to ranking content instead of coding tasks.
Self-writing skill files
The recurring claim across the Hermes threads is simple: the prompt stays put, the procedure changes. One breakdown says Hermes pauses every 15 tool calls, reads what worked, and rewrites the skill file. A follow-up post attaches numbers to that loop, claiming a weekly competitive briefing dropped from 20 minutes to 12 minutes by week four, then 8 minutes by week six.
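That loop can be sketched in a few lines. Everything here is an assumption for illustration: the class name, the `record` hook, and the rewrite heuristic are invented, not Hermes internals; only the every-15-tool-calls cadence comes from the breakdown.

```python
REVIEW_INTERVAL = 15  # tool calls between self-reviews, per the breakdown


class SkillHarness:
    """Hypothetical harness: the prompt stays put, the skill file changes."""

    def __init__(self, skill_text: str):
        self.skill_text = skill_text
        self.calls = 0
        self.log: list[tuple[str, bool]] = []  # (tool name, succeeded?)

    def record(self, tool: str, ok: bool) -> None:
        """Log one tool call; every REVIEW_INTERVAL calls, rewrite the skill."""
        self.calls += 1
        self.log.append((tool, ok))
        if self.calls % REVIEW_INTERVAL == 0:
            self.rewrite()

    def rewrite(self) -> None:
        # Stand-in for the model pass that edits the skill file:
        # here it just appends a note about tools that failed in the window.
        window = self.log[-REVIEW_INTERVAL:]
        failures = sorted({tool for tool, ok in window if not ok})
        if failures:
            self.skill_text += f"\n# avoid: {', '.join(failures)}"
```

The point of the sketch is the shape, not the heuristic: the review trigger is mechanical, and the skill file is the only thing that mutates between runs.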
That framing matches the official product surface. The repo README says Hermes creates skills from experience, improves them during use, persists knowledge, and supports model switching without code changes. The configuration docs show why that matters: identity lives in SOUL.md, long-term context lives in memories/, and learned procedures live in skills/.
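As a sketch, that layout maps onto a small helper. The directory names follow the configuration docs; `SLOTS` and `describe_layout` are illustrative inventions, not a Hermes API.

```python
from pathlib import Path

# Slot names per the configuration docs; the roles are paraphrased.
SLOTS = {
    "SOUL.md": "identity",
    "memories/": "long-term context",
    "skills/": "learned procedures",
    "sessions/": "conversation history",
    "cron/": "scheduled jobs",
}


def describe_layout(home: Path) -> list[str]:
    """List each documented slot under the Hermes home directory."""
    return [f"{home / name.rstrip('/')}: {role}" for name, role in SLOTS.items()]
```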
For creative and marketing work, that turns a one-off prompt into a reusable harness. The original playbook thread pitches the role as an "AI marketing engineer," and the first playbook post says the workflow starts by running real work until the agent writes the skill itself.
7-agent SEO loop
The SEO system in the blueprint post is more structured than the usual "keyword, draft, publish" playbook. It breaks the loop into distinct agents and one feedback layer:
- Keyword agent
- SERP agent
- Source agent
- Brief agent, plus editor gate
- Draft agent
- Visual agent
- Doc agent
- Distribution agent
- SEO knowledge graph, the learning loop
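The loop above can be sketched as a gated pipeline. Stage names come from the blueprint's agent list; the function and its flag are assumptions for illustration.

```python
STAGES = ["keyword", "serp", "source", "brief",
          "draft", "visual", "doc", "distribution"]
HUMAN_GATE = "brief"  # step four: the editor answers before anything drafts


def run_pipeline(editor_approved: bool) -> list[str]:
    """Run stages in order, halting at the brief stage until a human signs off."""
    completed = []
    for stage in STAGES:
        if stage == HUMAN_GATE and not editor_approved:
            completed.append(f"{stage}: waiting on editor")
            break  # the anti-slop stop: no draft without human answers
        completed.append(f"{stage}: done")
    return completed
```

The design choice worth noting is that the gate is positional, not optional: every downstream agent is unreachable until the brief clears review.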
The diagram's sharpest constraint is the human stop at step four. The editor gate waits for human answers before the draft, which the graphic frames as the anti-slop control that preserves original angle, real examples, and nuance.
The thread text around the image claims this operating loop gets SEO articles indexed and ranking in under 14 days. The image says the team tracks rank movement, CTR, traffic, conversions, backlinks, time to index, and cluster authority, which gives the workflow a measurement layer most agent demos skip.
Claude Code handoff
The handoff is the most reusable part of the story. The diagram post lays it out as a four-stage sequence:
- Prototype the workflow in Hermes Agent or OpenClaw
- Run it two or three times on real work while the harness writes the skill
- Move the workflow into a dedicated Claude Code workspace
- Push it to a VPS once it survives a week without babysitting
That split lines up with Hermes' own infrastructure. The cron docs say jobs can be scheduled in natural language or cron syntax, can attach multiple skills, and can deliver results back to chat, files, or other targets. The repo README adds the cloud angle directly, positioning Hermes as something you can talk to over Telegram while it runs on a VM.
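A hypothetical job spec in that shape might look like the following. The field names are assumptions; only the capabilities (natural-language or cron-syntax schedules, multiple attached skills, delivery targets, fresh sessions) come from the docs.

```python
# Illustrative only: not a documented Hermes config format.
job = {
    "name": "weekly-competitive-briefing",
    "schedule": "every Monday at 9am",        # or cron syntax: "0 9 * * 1"
    "skills": ["serp-scan", "brief-writer"],  # jobs can attach multiple skills
    "deliver_to": "telegram",                 # chat, files, or other targets
    "fresh_session": True,                    # runs in a fresh agent session
}
```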
The result is less "one agent does everything" than "one harness graduates work." Hermes is the prototyping and memory layer; Claude Code is the detailed execution layer once the workflow earns promotion.
Company brain
The company-brain post matters because it explains what these workflows need to read. The thread splits organizational memory into three layers:
- Factual memory: docs, tickets, CRM entries, artifacts
- Interaction memory: debates, promises, escalations, what people meant
- Action memory: when to move, wait, ask, escalate, or stop
That is a cleaner explanation for why agent workflows so often flatten into search and summarization. Feed the system only artifacts and it retrieves facts. Feed it the interaction layer as well and it can recover why the fact changed.
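The three-layer split can be modeled as separate stores. The class and method below are illustrative, not part of any Hermes or company-brain API; the point is only that the interaction layer answers a different question than the factual one.

```python
from dataclasses import dataclass, field


@dataclass
class CompanyBrain:
    """Hypothetical three-layer memory, per the company-brain post."""
    factual: list[str] = field(default_factory=list)      # docs, tickets, CRM
    interaction: list[str] = field(default_factory=list)  # debates, promises
    action: list[str] = field(default_factory=list)       # when to move or stop

    def why_changed(self, fact: str) -> list[str]:
        # Only the interaction layer can say why a fact changed;
        # the factual layer alone just retrieves it.
        return [m for m in self.interaction if fact in m]
```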
Hermes' local structure maps onto that idea pretty neatly. The configuration docs expose separate slots for identity, memory, skills, sessions, and scheduled jobs, which is much closer to an operating system folder tree than a chat app sidebar.
Scheduling and curation
The workflow diagram in the marketing-engineer post mentions a background Curator on a seven-day cron that grades the skill library, consolidates skills, and prunes dead ones. That sounded like tweet-poetry until the docs and issues backed it up.
The Curator RFC says the background task reviews agent-created skills only, tracks usage, marks skills stale after 30 idle days, archives them after 90 idle days, and can spawn a forked auxiliary agent to consolidate overlaps and patch drift. A later feature request says Hermes already stores per-skill usage in SQLite and is adding CLI commands so users can inspect which skills are healthy, stale, or archive-worthy.
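The staleness thresholds translate directly into code. The 30- and 90-day numbers come from the RFC; the function itself is a sketch, not the Curator's implementation.

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=30)    # marked stale after 30 idle days, per the RFC
ARCHIVE_AFTER = timedelta(days=90)  # archived after 90 idle days


def skill_status(last_used: date, today: date) -> str:
    """Classify a skill by how long it has sat idle."""
    idle = today - last_used
    if idle >= ARCHIVE_AFTER:
        return "archived"
    if idle >= STALE_AFTER:
        return "stale"
    return "healthy"
```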
That makes the SEO blueprint's last box, the knowledge graph and learning loop, feel less like a one-off diagram and more like the product's default posture. Skills are supposed to accumulate, get reviewed, and survive long enough to become infrastructure.
OpenClaw comparison
The comparison that keeps surfacing in the evidence is Hermes versus OpenClaw. Peter Yang's post pulled hundreds of replies by asking for honest differences, and a later post said he was writing up hands-on experience with both.
The clearest outside answer came from PocketClaw's 2026 decision tree, which says Hermes is the easier starting point for new deployments, while OpenClaw 2026.4+ makes more sense if a team already has an install base or plugin familiarity. That lines up with Holmberg's workflow, which treats Hermes and OpenClaw as interchangeable prototyping layers at step one, then reserves Claude Code for the tighter execution pass later on.