Codex adds `/goal`, role-based workflows, and 20% faster browser use
OpenAI expanded Codex with role-based workflows, app connections, in-app previews, and the `/goal` command, while also improving browser use by about 20%. The update lets Codex keep working across docs, slides, spreadsheets, and web actions instead of staying in a single coding thread.

TL;DR
- OpenAI is pushing Codex past coding into a role-based work app: OpenAI's setup demo shows role selection and app connections, while the official announcement says Codex can now operate your computer, work across everyday apps, remember preferences, and take on repeatable work.
- The biggest new agent mechanic is `/goal`: mattlam_'s 0.128.0 summary described a persistent objective loop that keeps nudging the model toward the next concrete action, and the 0.128.0 release notes confirm pause, resume, and clear controls for persisted goal workflows.
- Speed is a real part of this update, not just launch copy: testingcatalog's app inventory said computer and browser use are 20% faster, while sama, amplifying AriX's benchmark clip, claimed one computer-use flow ran 42% faster.
- Codex now treats artifacts as first-class outputs for non-coding work: OpenAI's file editing demo shows in-thread revision of generated files, and OpenAI's broader workflow demo pitches decks, spreadsheets, summaries, and research tasks in the same interface.
- The CLI and app are converging: embirico's note on `/goal` said the command shipped to the CLI first and is headed to the app for everyone, while embirico's product strategy note said features like subagents and `/goal` are meant to "snowball into a single implementation."
You can read OpenAI's full launch post, skim the CLI 0.128.0 release notes, and check OpenAI's separate WebSockets post for the latency work behind faster agent loops. Simon Willison's quick write-up is the cleanest external explanation of `/goal`, and the main Hacker News thread is where engineers immediately started arguing about guardrails, privacy, and whether agent output still needs constant code review.
Personalized onboarding
OpenAI's most important product change is not a new model. It is the decision to ask what kind of work you do, then shape Codex around that answer.
According to OpenAI's setup demo, onboarding now starts with role selection, app connections, and suggested prompts. embirico called that customization "the most significant improvement we shipped today," and thsottiaux framed the same release as Codex becoming available for non-coders.
The workflow OpenAI is showing is broader than "AI that writes code":
- connect Slack, Google Workspace, Microsoft 365, and other apps, per OpenAI's setup demo
- summarize data across documents and apps, per OpenAI's thread context
- generate and revise slides, docs, and spreadsheets in the same thread, per OpenAI's file editing demo and OpenAI's broader workflow demo
- surface task progress, tools used, and next steps during execution, per OpenAI's thread context
That lines up with the earlier official announcement, which said Codex would extend across computer use, plugins, memory, ongoing work, PR review, SSH devboxes, and an in-app browser.
`/goal`
The new `/goal` command is OpenAI's take on a long-running agent loop.
mattlam_'s 0.128.0 summary said `/goal <objective>` sets a persistent objective, injects a follow-up nudge after each turn if the user stays idle, and maps goal requirements to evidence such as files, tests, or PRs. LLMpsycho added that these workflows now persist across pauses, and dkundel's enable command showed the experimental feature can be turned on with `codex features enable goals`.
The official CLI release notes add the rest of the mechanics:
- persisted `/goal` workflows with create, pause, resume, and clear controls, per the 0.128.0 release notes
- a new `codex update` command, also called out by mattlam_'s 0.128.0 summary
- expanded permission profiles and client metadata, per the 0.128.0 release notes
- plugin marketplace installation and external-agent config import, per the 0.128.0 release notes
- resume and interrupt fixes, which matter because a persistent goal loop is useless if resume is flaky, per the 0.128.0 release notes
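Neither the release notes nor the demos publish the internal data model, but create/pause/resume/clear controls over a persisted objective imply a small state machine. A minimal sketch, with all names and fields assumed rather than taken from Codex's actual internals:

```python
from dataclasses import dataclass, field
from enum import Enum

class GoalState(Enum):
    ACTIVE = "active"
    PAUSED = "paused"
    CLEARED = "cleared"

@dataclass
class GoalWorkflow:
    """Hypothetical persisted /goal workflow; illustrative only."""
    objective: str
    state: GoalState = GoalState.ACTIVE
    # Evidence mapped to the goal, e.g. files, tests, or PRs (per mattlam_).
    evidence: list = field(default_factory=list)

    def pause(self) -> None:
        if self.state is GoalState.ACTIVE:
            self.state = GoalState.PAUSED

    def resume(self) -> None:
        # Persistence across pauses is the property LLMpsycho highlighted.
        if self.state is GoalState.PAUSED:
            self.state = GoalState.ACTIVE

    def clear(self) -> None:
        self.state = GoalState.CLEARED
```

The point of the sketch is that pause/resume only makes sense if the objective and its evidence survive between sessions, which is why the resume fixes in the same release matter.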
Simon Willison spotted the same structure in the prompts behind the feature: Codex keeps looping until it decides the goal is complete or the token budget runs out.
Faster computer use
OpenAI is marketing this update as friendlier. Under the hood, it is also chasing loop latency hard.
testingcatalog's app inventory reported 20% faster computer and browser use in the latest app build, plus annotation for browser, artifacts, and code. sama amplified AriX's more specific benchmark clip, which claimed one computer-use scenario now runs 42% faster.
The engineering reason sits outside the app marketing. In OpenAI's WebSockets post, the company says agent loops had become bottlenecked by API overhead rather than inference, and that keeping response state warm over persistent WebSocket connections made end-to-end workflows up to 40% faster. OpenAIDevs' WebSockets summary compresses that into the key idea: keep state warm, reuse context, avoid extra network hops.
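The claimed speedup is mostly arithmetic: in a multi-turn agent loop, per-turn API overhead is paid on every step, so removing it compounds. A toy latency model makes the shape of the claim concrete; the numbers below are illustrative assumptions, not OpenAI's measurements:

```python
def workflow_time(turns, inference_s, per_turn_overhead_s, connect_once_s=0.0):
    """Toy model: total wall-clock seconds for an agent loop."""
    return connect_once_s + turns * (inference_s + per_turn_overhead_s)

# Per-request HTTP: handshake/serialization overhead paid on every turn.
http = workflow_time(turns=20, inference_s=1.0, per_turn_overhead_s=0.7)

# Persistent WebSocket: one connection cost, state kept warm,
# near-zero per-turn overhead.
ws = workflow_time(turns=20, inference_s=1.0, per_turn_overhead_s=0.05,
                   connect_once_s=0.3)

speedup = 1 - ws / http  # fraction of end-to-end time saved
```

With these assumed numbers the saving lands around 37%, in the ballpark of the "up to 40% faster" figure; the model also shows why the gain shrinks when inference, not API overhead, dominates each turn.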
Practitioner reactions matched the launch numbers. mattlam_'s hands-on note said the app's computer use is "incredible" and fast enough to offload tedious side tasks without breaking flow, while dkundel summarized the day-one feel more simply: faster VM spin-up.
Artifacts and app workflows
The visible interface change is that Codex now spends much more time producing inspectable files instead of just chat text.
According to OpenAI's file editing demo, you can open a generated file, request edits, and keep revising inside the same thread. OpenAI's earlier workflow demos went even wider:
- catch up on Slack, Gmail, and Google Calendar updates, per OpenAI's cross-app catch-up demo
- analyze a data export and draft the readout, per OpenAI's data export demo
- compare options against explicit criteria and track tradeoffs, per OpenAI's thread context
Outside the official demos, Dan Shipper's email automation example showed Codex working with Cora Inbox to triage and draft email replies, and Dan Shipper's Proof demo showed a writing workflow where the in-app browser stays open beside the document while Codex loops on the left. The release is starting to look like a general harness for artifact production, not a coding sidecar.
PR review and long-running loops
The developer-facing surface is still getting sharper even as OpenAI chases broader knowledge work.
The official announcement explicitly called out PR review, multi-file and multi-terminal views, SSH access to remote devboxes, and browser-based frontend iteration. steipete's commit-review setup showed a concrete version of that pattern: a Codex instance on every commit landing on main, looking for regressions and security issues, and steipete's multi-loop automation extended it into an automated fix-review-fix chain with up to five loops.
A smaller but telling UI detail surfaced after the launch. mattlam_'s PR review UI note found that asking Codex to review a PR can produce a custom findings UI, including a highlighted P1 issue, instead of dumping everything back into plain chat.
Community reaction was enthusiastic but not naive. In the main Hacker News thread, commenters immediately focused on privacy, code understanding, and automation guardrails, and mattlam_'s reminder made the same point from another angle: agent responses still need human review in production settings.
Hidden settings and rollout caveats
The last interesting part of this story is the stuff OpenAI has not fully put on stage yet.
testingcatalog's app inventory found unannounced app changes including a new Remote Control feature, a Connections section in settings, shortcut management, and an onboarding widget for email, calendar, and file integrations. The same thread also said browser use and computer use appeared disabled in the EU, which matches the earlier official announcement saying personalization and computer use were coming to Enterprise, Edu, the EU, and the UK later rather than shipping there immediately.
A few more loose ends are worth noting:
- BEBischof's file-browser complaint said the app file browser was not showing PNGs, which is a small but very practical reminder that the artifact workflow is still maturing.
- mattlam_'s remote-control speculation connected the hidden Remote Control surface to a possible mobile path, but that remains speculative.
- sama's leaked onboarding joke screen and BorisMPower's screenshot of alternate modes suggest OpenAI has also been playing with more opinionated mode presets than the role-based onboarding that actually shipped.