AI Primer

Ollama adds scheduled /loop prompts to Claude Code workflows

Ollama added scheduled /loop prompts for Claude Code, enabling recurring research, reminders, bug triage, and PR checks. Use it to automate background routines in local or self-hosted agent setups without adding a separate scheduler first.


TL;DR

  • Ollama says Claude Code can now run /loop prompts on a schedule, turning recurring prompts into built-in automation for coding and research workflows, per the launch post.
  • The launch thread frames the first use cases as routine developer chores: you can have it "check in on your PRs," run scheduled research tasks, and handle bug triage and reporting.
  • A separate reminder example shows that general reminders are also a supported pattern, so the feature is not limited to repository-facing tasks.
  • The linked integration docs extend the announcement with setup details for Claude Code via Ollama's Anthropic-compatible API, plus model and context-window guidance.

What shipped

Ollama's announcement says Claude Code can now "run prompts on a schedule" with /loop, and the example command in the launch thread is a simple recurring prompt: "Give me the latest AI news every morning." That makes /loop look less like a one-off agent command and more like a lightweight scheduler embedded directly in the coding tool.
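For contrast, the workaround /loop replaces is external scheduling. A common pattern is a cron entry driving Claude Code's headless mode; the sketch below assumes the `claude -p` headless invocation and uses the prompt from Ollama's launch thread (schedule and log path are illustrative, not from the announcement):

```shell
# Pre-/loop workaround: cron runs Claude Code headless every morning at 8:00
# and appends the result to a log. /loop folds this scheduling into the tool.
0 8 * * * claude -p "Give me the latest AI news" >> ~/ai-news.log 2>&1
```

With /loop, the same recurrence lives inside the Claude Code session instead of in an external crontab.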

The thread keeps the initial scope concrete: the examples cover pull-request checks, recurring research, and bug reporting and triage. A separate post adds reminders, which suggests the feature can handle both repo automation and general recurring prompts.

How it plugs into Claude Code

Ollama's Claude Code docs position the integration around Claude Code running against open models through Ollama's Anthropic-compatible API. The documentation summary names models including qwen3.5, glm-5:cloud, and kimi-k2.5:cloud, and says Claude Code can be launched either with quick commands or manual environment-variable configuration, according to the docs post.
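The "manual environment-variable configuration" path can be sketched as pointing Claude Code's Anthropic client at a local Ollama server. The variable names below are the standard Claude Code overrides and the port is Ollama's default, but treat the exact values as assumptions and check Ollama's docs:

```shell
# Hedged sketch: route Claude Code through Ollama's Anthropic-compatible API.
export ANTHROPIC_BASE_URL="http://localhost:11434"   # local Ollama server
export ANTHROPIC_AUTH_TOKEN="ollama"                 # placeholder; a local server may accept any token
export ANTHROPIC_MODEL="qwen3.5"                     # one of the models named in the docs
claude                                               # launch Claude Code against the local model
```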

The same documentation gives one operational constraint that matters for engineers: it recommends "at least 64,000 tokens" of context for better results, as described in the docs summary. That means the scheduling feature is new, but the practical rollout depends on the model and context budget behind the Claude Code session, not just the /loop command itself.
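On the context side, the 64,000-token recommendation maps to Ollama's context-length controls. `OLLAMA_CONTEXT_LENGTH` is a standard Ollama server setting, but its interaction with the Anthropic-compatible endpoint is an assumption here, not something the docs summary confirms:

```shell
# Raise the server's default context window before starting Ollama so that
# Claude Code sessions get the recommended 64k-token budget.
OLLAMA_CONTEXT_LENGTH=64000 ollama serve
```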
