OpenAI's coding agent for software engineering tasks such as generating code, fixing bugs, answering codebase questions, and reviewing changes.

Recent stories
Creators shared a Codex and GPT Image 2 workflow that outputs static HTML landing pages whose scenes shift with the season and local time. When agents generate multi-scene pages, static HTML gives humans a cleaner format than Markdown to review, tweak, and navigate.
Creators used GPT Image 2 for storyboard sheets, brand books, posters, and campaign visuals across Firefly, Paper, Codex, and Leonardo. The shift turns the model into a preproduction tool, though tests still report inconsistent guideline adherence unless extra context is supplied.
OpenAI Codex CLI v0.129.0 adds Vim mode, redesigned resume flows, stronger plugin management, and hook controls, while GOALS also reached the Linux app. The update makes long-running refactors and persistent task loops more structured across both the CLI and the app.
Users report that OpenAI increased Codex limits roughly 10x at the May 5 reset, with much longer /goal sessions and more computer-use demos appearing. That should extend unattended runs for app migrations and visual prototyping.
Weekend builder posts showed OpenAI Codex using /goal to keep working across turns, with Linux clients and ephemeral runner tools extending longer sessions. This matters for vibe-coders packaging Codex into unattended loops, though usage limits and community wrappers still vary by plan and platform.
Multiple practitioners showed Codex reviewing every main-branch commit, spawning fix loops, and opening browser sessions when APIs or web apps blocked the normal path. The workflow matters because Codex is being used as a browser-native coworker for coding, writing, analytics, and media plugins, but the pattern is emerging from user experiments rather than a formal OpenAI release.
Codex App Server added a Fedora RPM package for Linux installs as users pushed Codex into browser control, 3D-print setup, and rapid game prototypes. Watch for more repeatable desktop workflows as Codex moves beyond chat-only experiments.
OpenAI launched GPT-5.5 in ChatGPT and Codex for coding, computer use, docs, sheets, and longer tool-driven tasks. Early tests showed stronger game and frontend builds, while pricing jumped again and Opus 4.7 comparisons started immediately.
Creators used GPT Image 2 to turn single photos into brand books, generate 360 panoramas, lay out recipe pages and shortcut charts, and produce scannable QR codes or plate-solvable star fields. That matters because the model is now being used for structured design work, not just single hero images.
OpenAI updated Codex with Mac app control, background computer use, image tools, ongoing tasks, and 90+ plugins, while Remotion added a one-click skill. Agents can now work inside desktop creative apps and stacks without blocking the visible cursor.
Posts from PM candidates say Google is using a live build exercise in Cursor instead of a standalone technical screen. Figma, Codex, and Claude Code users are likewise shipping prototypes and PRs from inside their coding tools.