AI Primer

OpenAI Codex adds /goal persistent task mode in weekend builder demos

Weekend builder posts showed OpenAI Codex using /goal to keep working across turns, with Linux clients and ephemeral runner tools extending longer sessions. It matters for vibe-coders packaging Codex into unattended loops, but usage limits and community wrappers still vary by plan and platform.

5 min read

TL;DR

The 0.128.0 release notes quietly mention persisted /goal workflows, but the slash-command guide still skips /goal entirely. You can browse the config reference for the prompt-level overrides people surfaced over the weekend, and check out Crabbox for remote runner orchestration. OpenAI is also running a Codex for Open Source program that offers API credits and six months of ChatGPT Pro with Codex to eligible maintainers.

Goal loops

The official release note says 0.128.0 added app-server APIs, model tools, runtime continuation, and TUI controls for /goal create, pause, resume, and clear in one bundle. minchoi's post turned that into the simpler mental model people actually used all weekend: one objective, then a persistent plan-code-test-evaluate loop.

The command surface in minchoi's infographic breaks down into five controls:

  • /goal <objective> sets or replaces the active objective.
  • /goal (with no argument) shows the current goal and status.
  • /goal pause pauses the run.
  • /goal resume restarts a paused run.
  • /goal clear removes the goal.
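The mental model behind those controls, one objective driving a persistent plan-code-test-evaluate loop, can be sketched as a plain function. This is an illustrative sketch, not OpenAI's implementation; the plan, code, test, and evaluate callables are placeholders:

```python
# Illustrative sketch of the /goal mental model (not OpenAI's code):
# one objective drives a persistent plan-code-test-evaluate loop.
def goal_loop(objective, plan, code, test, evaluate, max_turns=1000):
    """Loop until evaluate() declares the objective met or turns run out."""
    patch = None
    for _turn in range(max_turns):
        steps = plan(objective, patch)    # re-plan against the current state
        patch = code(steps)               # apply edits
        results = test(patch)             # run the checks
        if evaluate(objective, results):  # objective satisfied? stop here
            return patch
    return patch  # out of turns; a real /goal run would keep persisting
```

Pause and resume then amount to suspending and re-entering this loop with the same objective, which is why a single /goal survives across turns.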

Hands-on posts pointed in the same direction. steipete's screenshot showed Codex still "Pursuing goal" after 11 hours, while danshipper's benchmark run used /goal for a senior-engineer-style rewrite bench before the session stopped after about 25 minutes.

Feature flags

The most useful caveat was that the feature existed before the docs and product surfaces fully caught up. The GitHub release announced /goal, but the official slash-command guide still lists commands like /model, /permissions, /agent, /plugins, /clear, and /compact, with no /goal entry.

That gap showed up in the field. danshipper's follow-up said /goal was not supported in the desktop app yet, and the open issue on GitHub said some CLI users needed codex features enable goals because OpenAI was still testing the feature internally and planned to mark it experimental in the next release.

Config knobs

Another weekend find was that persistent loops are only part of the story. LLMJunky's post pointed to two config keys that sit above repo-level AGENTS files: developer_instructions, which OpenAI's config reference describes as "Additional developer instructions injected into the session," and compact_prompt, which the same reference describes as an inline override for the history compaction prompt.

Those knobs matter because OpenAI's config basics page says the CLI and IDE extension share the same config layers, starting with ~/.codex/config.toml and optional trusted .codex/config.toml project overrides. That gives people one place to set high-level behavior for both the CLI loop and the app surfaces that use the same stack.
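Put together, that points at a single ~/.codex/config.toml carrying session-level behavior for every surface. A hedged sketch follows; the two key names come from OpenAI's config reference, but the values here are made up for illustration:

```toml
# ~/.codex/config.toml — shared by the CLI and the IDE extension.
# Key names per OpenAI's config reference; values are illustrative only.

# "Additional developer instructions injected into the session."
developer_instructions = "Prefer small, reviewable commits. Run the test suite before declaring a task done."

# Inline override for the history compaction prompt.
compact_prompt = "Summarize the session so far, preserving open tasks, failing tests, and file paths."
```

A trusted project-level .codex/config.toml can then override these per repo, sitting above any repo-level AGENTS files.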

Crabbox runners

The fastest follow-on work happened outside OpenAI's product surface. steipete's post packaged Codex-oriented tooling for maintainers drowning in issues and PRs, and the Crabbox launch post introduced ephemeral machines for agents on demand across AWS Spot, Hetzner, or Blacksmith.

Crabbox's docs describe the setup as a local editor and git loop paired with a leased remote box. The CLI syncs the dirty checkout, runs the job remotely, streams output back, and releases the machine, with warm reuse, spend caps, and brokered credentials handled by a Cloudflare Worker.
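That sync-run-release cycle is the generic ephemeral-runner pattern. A rough sketch of it, not Crabbox's actual CLI; the host, paths, and shutdown step are placeholder assumptions:

```python
import shlex

# Rough sketch of the ephemeral-runner cycle Crabbox's docs describe:
# sync the dirty checkout, run the job remotely, then release the box.
# NOT Crabbox's real CLI; host and paths are placeholders.
def runner_cycle(host: str, repo_dir: str, job: str) -> list[str]:
    """Return the three shell commands for one lease: sync, run, release."""
    sync = f"rsync -az --delete {shlex.quote(repo_dir)}/ {host}:job/"
    run = f"ssh {host} 'cd job && {job}'"  # stdout streams back live over ssh
    release = f"ssh {host} 'poweroff'"     # placeholder: box reclaimed after the job
    return [sync, run, release]

for cmd in runner_cycle("runner@spot-box", "./myrepo", "cargo test --all"):
    print(cmd)
```

Warm reuse, spend caps, and credential brokering sit outside this loop; in Crabbox's case the docs attribute those to a Cloudflare Worker.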

That is already specific enough to be useful. The macOS validation run showed Codex reproducing a launchd issue that was hard to trigger on a non-fresh install, then passing 46 unit tests and four real launchd integration tests through a Crabbox-hosted macOS SSH target.

Linux app and limits

A parallel thread of weekend hacking was about keeping Codex running in more places. LLMJunky's post pointed to a community Linux build of the Codex app, and the GitHub release page offered x64 AppImage, x64 and arm64 .deb, and x64 .rpm packages under the tag v26.429.30905-petstable.

The usage picture was also inconsistent enough to become its own mini-topic. om_patel5's screenshot showed a 5-hour limit at 100% left and a weekly limit at 96% left on the $20 plan, while LLMJunky's reset speculation suggested people expected a May 5 quota reset. In parallel, the Codex for Open Source page said eligible maintainers can apply for API credits, Codex Security, and six months of ChatGPT Pro with Codex, which helps explain why open source maintainers were some of the first people stress-testing unattended runs.
