Codex users report `/goal` sessions with 70-minute Stripe fixes and a 4,000-prompt cap
Users posted long-running Codex `/goal` sessions with auto-continuations, `pause`/`resume`, and file-backed goals. Watch the 4,000-prompt startup cap and early-stop drift if you plan to run longer agent loops.

TL;DR
- OpenAI shipped persisted `/goal` workflows in the Codex CLI 0.128.0 release notes, and early users like aibuilderclub_'s thread opener describe it as a long-running loop that keeps planning, coding, testing, and self-correcting until the task finishes or the budget runs out.
- The basic control surface is already visible in user reports: aibuilderclub_'s setup post shows feature-flag enablement, while aibuilderclub_'s command list and the release notes line up on create, pause, resume, and clear.
- Several hands-on reports describe multi-hour autonomous runs, including aibuilderclub_'s Stripe webhook test at 70 minutes with four auto-continuations, kevinkern's iOS app thread at roughly three hours to TestFlight, and steipete's xAI fix screenshot after an 11-hour session.
- The sharp edges are already clear: aibuilderclub_'s gotchas post says Plan mode and `/goal` do not work together, the related GitHub bug says continuation is silently suppressed in Plan mode, and the GitHub issue on `/goal` says very large goal prompts should be moved into files.
- One buried caveat is discoverability: the GitHub docs issue says `/goal` existed in 0.128.0 before it was documented in the slash-command docs, and the enablement issue says the feature was still under development and not generally ready for external use.
You can read the 0.128.0 release notes, skim the missing-docs issue, and even inspect the spec-packager skill Kevin Kern linked. The interesting bit is how quickly users converged on the same pattern: long goals, file-backed specs, side-branch chats, and lots of testing inside the loop.
Goal loops
OpenAI's official framing is short: 0.128.0 added persisted /goal workflows, app-server APIs, runtime continuation, and TUI controls for create, pause, resume, and clear in the release notes. User writeups fill in the behavior.
According to aibuilderclub_, the loop keeps going until the mission is complete or the budget runs out. mattlam_'s 0.128.0 summary adds one implementation detail that matters for operator expectations: after an agent turn finishes, Codex can inject a nudge toward the next concrete action if the user does not type anything.
That makes /goal less like a one-shot prompt and more like a stateful execution mode. jasonzhou1993's note explicitly calls it a "stateful Ralph-loop," which is about as concise a description as this feature is going to get.
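The "stateful loop" description can be sketched in pseudocode. This is an interpretation assembled from the user reports above, not OpenAI's implementation, and every name in it is invented for illustration:

```
# Pseudocode sketch of /goal as users describe it -- not the actual implementation
goal = load(objective_or_file)
while not goal.complete and budget.remaining > 0:
    turn = agent.step(plan, code, test)           # one agent turn
    if turn.finished and no_user_input():
        agent.inject(next_concrete_action_nudge)  # mattlam_'s auto-continuation detail
    if ctrl_c_pressed():
        goal.pause()                              # resumable later via /goal resume
```

The nudge line is the detail that separates this from a one-shot prompt: the loop keeps pushing itself toward the next concrete action until completion or budget exhaustion.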
Setup and command surface
The enablement story is messier than the release notes suggest. aibuilderclub_'s setup post shows two ways to turn it on:
- Add `goals = true` under a `[features]` section in `~/.codex/config.toml`
- Run `codex features enable goals`
Those instructions match the GitHub enablement issue, where users reported /goal missing in 0.128.0 until they enabled the feature flag. That thread also says the feature was still under development and would later move behind an /experimental flow.
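The hand-editing route from aibuilderclub_'s setup post amounts to appending a two-line TOML table. A minimal sketch, writing to a scratch file rather than the real `~/.codex/config.toml`:

```shell
# Enable the goals feature flag by hand, per aibuilderclub_'s setup post.
# The real file lives at ~/.codex/config.toml; this sketch uses a scratch copy.
config="$(mktemp -d)/config.toml"

cat >> "$config" <<'EOF'
[features]
goals = true
EOF

grep -q '^goals = true' "$config" && echo "goals flag set"
```

The `codex features enable goals` one-liner is the equivalent route listed in the same post and the enablement thread.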
The command set is already fairly clear across tweets and GitHub:
- `/goal <objective>` starts a goal, per shyamalanadkat's CLI example
- `/goal pause`, `/goal resume`, and `/goal clear` appear in aibuilderclub_'s command list
- `Ctrl+C` auto-pauses, according to the same command list
- `/goal follow task.md` keeps a large spec in a file, according to aibuilderclub_'s setup post
- `/side` opens a disposable branch conversation, according to aibuilderclub_'s `/side` post
The last point is notable because /side is showing up in user workflow reports before there is much canonical documentation around it.
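Put together, a session using these commands might look like this. The command names come from the reports above; the interleaved status lines are invented for illustration:

```
> /goal follow task.md     # start a goal backed by a markdown spec
  [goal active] planning -> editing -> running tests ...
^C                         # Ctrl+C auto-pauses the goal
> /goal resume             # pick the loop back up
> /side                    # disposable branch conversation for a quick question
> /goal clear              # drop the goal when done
```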
Long runs and what they actually did
The most useful evidence is not hype, it is task logs. aibuilderclub_'s Stripe webhook test says Codex wrote an API route, updated the frontend, connected Chrome MCP on its own, ran end-to-end tests against the live UI, fixed a hydration mismatch, and passed after 70 minutes and four auto-continuations.
Other examples push the same pattern into different environments:
- kevinkern used `/goal` plus Xcode, simulator, browser, and App Store skills to get an iOS app to TestFlight in about three hours
- steipete's screenshot shows a live-provider fix that ended with passing xAI API tests
- daniel_mac8's PRD example shows a documentation-heavy run marked complete in 226 seconds
- danshipper's benchmark attempt got through a structural rewrite plan, then danshipper's follow-up said the run stopped after 25 minutes and that `/goal` was not supported in the desktop app yet
The common theme is breadth. People are not using /goal for single diffs, they are using it for release-like work that crosses code, tests, browser automation, and external tools.
Constraints and drift
The first hard limit to surface is prompt size. mattlam_'s tips post says /goal fails to start above the 4,000-prompt cap, and the GitHub issue on /goal includes a maintainer reply saying the goal prompt cannot be that large and should be moved into a file instead.
The second limit is that vague goals drift. aibuilderclub_'s drift example contrasts "Make the API faster" with a bounded target that names the route, expected query count, test command, and summary output. vincent_koc's thread opener makes the same point more bluntly, calling /goal a constraint workflow rather than a "do my ticket" button.
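In that spirit, a bounded goal might look something like this. The route, query count, and test command are hypothetical; only the shape (named route, expected query count, test command, summary output) follows aibuilderclub_'s drift example:

```
# Drifts:
Make the API faster

# Bounded (hypothetical specifics):
Goal: eliminate the N+1 queries on GET /api/orders.
- Target: at most 3 SQL queries per request on that route
- Verify: `npm test -- orders.perf` passes
- Done: print a before/after query-count summary
```

The difference is that the second version gives the loop a stopping condition it can actually test against, instead of an open-ended adjective.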
Then there is the mode conflict. aibuilderclub_'s gotchas post says Plan mode and /goal do not work together, and the matching GitHub bug says the TUI can show an active goal while silently suppressing autonomous continuation. That is a nasty failure mode because the interface can look alive while the loop is effectively parked.
One small quota detail is easy to miss: aibuilderclub_ says that if a task is already running when you go over quota, Codex does not kill the run mid-flight; it deducts the overage from the weekly allowance instead.
File-backed goals and spec packs
One reason these runs hold together is that users are externalizing the spec instead of stuffing it into the goal line. aibuilderclub_'s task.md example says complex tasks work better when the goal points at a markdown file, and the maintainer reply in issue #20591 recommends the same workaround for oversized prompts.
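A minimal file-backed goal might pair a one-line `/goal follow task.md` with a spec file along these lines. This is a hypothetical sketch; the section names are illustrative, not taken from aibuilderclub_'s example:

```
# task.md -- spec file for `/goal follow task.md`

## Objective
Add webhook signature verification to the billing service.

## Acceptance
- All existing tests pass
- New tests cover invalid-signature and replay cases

## Constraints
- Do not change the public API surface
- Run the full test suite before declaring done
```

Keeping the spec in a file sidesteps the prompt-size cap and gives the loop a stable artifact to re-read after each auto-continuation.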
Kevin Kern's thread turns that pattern into a stack. He says he first generated a full spec package, then handed it to Codex with a goal to read docs/*.md, implement everything production-ready, and stop only after landing in TestFlight (kevinkern's iOS app thread). The linked app-spec-packager skill outputs a fairly serious bundle, including product spec, UX flows, design system, architecture, ADRs, API and data-model docs, QA acceptance tests, release readiness, and task checklists.
That helps explain why a few users are describing /goal as a harness feature as much as a model feature. The good runs are carrying around explicit artifacts, acceptance tests, and tool hooks, not just a long natural-language prompt.
Resume lists and missing docs
The final rough edge is that /goal was public enough to spread on X before it was public enough to be easy to find. The missing-docs issue says 0.128.0 had goal-related help strings locally, but the official slash-command docs still omitted /goal.
A second bug report, issue #20792, says sessions that start with /goal can be resumed by ID but do not appear in the normal codex resume lists. That is a very specific paper cut, but it hits the exact workflow this feature is supposed to enable: long-lived runs you come back to later.
So the early picture is slightly odd in a very GitHub-native way. The release notes announced persisted goals, users immediately pushed them into multi-hour runs, and the surrounding ergonomics, docs, and resume surfaces were still catching up.