AI Coding Partner from OpenAI
OpenAI's AI coding partner and cloud-based software engineering agent for end-to-end coding tasks, multi-agent workflows, code review, and background automation.
Hankweave added short aliases that route the same prompt and code job into Anthropic's Agents SDK, Codex, or Gemini-style harnesses with unified logs and control. The release treats harness choice as a first-class variable instead of forcing teams to rebuild orchestration for each model stack.
OpenAI published a Codex use-case gallery with one-click workflows, and shadcn/ui and Box shipped official plugins. Teams can now install reusable app and web workflows directly instead of wiring each integration by hand.
OpenAI rolled out Codex plugins across the app, CLI, and IDE extensions, with app auth, reusable skills, and optional MCP servers. Teams should test plugin-backed workflows and permission models before broad rollout.
Expect wraps browser QA for Claude Code, Codex, or Cursor into a CLI that records bug videos and feeds failures back into a fix loop. It gives coding agents a tighter UI validation cycle without requiring a custom browser harness.
Agent Computer launched cloud desktops that boot in under half a second and expose persistent disks, shared credentials, SSH access, and ACP control for agents. It gives coding agents a faster place to run tools and reuse auth, but teams still need to design safe session and credential boundaries.
OpenAI says Responses API requests can reuse warm containers for skills, shell, and code interpreter, cutting startup times by about 10x. Faster execution matters more now that Codex is spreading to free users, students, and subagent-heavy workflows.
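The economics of warm reuse are simple to sketch. The snippet below is a toy container pool, not OpenAI's implementation; the boot-cost numbers are placeholders chosen only to mirror the roughly 10x claim.

```python
class ContainerPool:
    """Toy warm-container pool: reusing a finished container skips cold boot."""

    COLD_START_S = 2.0   # assumed cold-boot cost (illustrative)
    WARM_START_S = 0.2   # assumed warm-reuse cost, ~10x faster per the claim

    def __init__(self):
        self._warm = []  # containers handed back by completed requests

    def acquire(self):
        """Return (container, startup_cost); prefer a warm one if available."""
        if self._warm:
            return self._warm.pop(), self.WARM_START_S
        return object(), self.COLD_START_S

    def release(self, container):
        """Keep the container warm for the next request instead of discarding it."""
        self._warm.append(container)


pool = ContainerPool()
c1, cost1 = pool.acquire()   # first request: cold start
pool.release(c1)
c2, cost2 = pool.acquire()   # second request: reuses c1, warm start
```

The design point is that the pool amortizes boot cost across requests, which matters most for short, high-frequency calls like skills and shell commands.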
Conductor now bundles plan mode, fast mode, skills, repo quick start, and an experimental merge-conflict UI around Codex sessions. Try it if you want a higher-level harness for long-running code agents, but watch the foreground chat UX on larger tasks.
Reuters says OpenAI plans to nearly double staff to 8,000 by end-2026 and expand technical ambassador roles around ChatGPT and Codex. Watch the enterprise rollout and free-tier monetization, because packaging and onboarding are shifting.
WSJ reported that OpenAI is consolidating ChatGPT, Codex, and its browser into a single desktop app to simplify heavy-use workflows. If it ships, developers would get one workspace for chat, coding, and browsing instead of today's fragmented clients.
OpenAI told MIT Technology Review it wants an autonomous research intern by September and a multi-agent research lab by 2028, with Codex described as an early step. Treat it as a roadmap for longer-horizon agents, not a shipped capability.
Keycard released an execution-time identity layer for coding agents, issuing short-lived credentials tied to user, agent, runtime, and task. It targets the gap between noisy permission prompts and unsafe skip-permissions workflows.
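A minimal sketch of the short-lived, task-scoped credential idea, using Python's standard `hmac` library. All names here are illustrative assumptions, not Keycard's API; a real issuer would use managed signing keys and a proper token format.

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # hypothetical; real systems use managed keys

def mint_credential(user, agent, runtime, task, ttl_s=300):
    """Issue a credential bound to user, agent, runtime, and task, expiring in ttl_s."""
    claims = {"user": user, "agent": agent, "runtime": runtime,
              "task": task, "exp": time.time() + ttl_s}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify(cred):
    """Accept only untampered credentials that have not expired."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["sig"], expected) and \
        cred["claims"]["exp"] > time.time()


cred = mint_credential("alice", "codex", "ci-runner-7", "refactor-auth")
assert verify(cred)
```

Binding the task into the signed claims is what separates this from a blanket skip-permissions grant: a credential minted for one task cannot be replayed for another without failing verification.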
OpenAI agreed to buy Astral and bring the uv, Ruff, and ty team into Codex while pledging continued support for Astral’s open-source tools. It deepens Codex’s Python workflow integration as OpenAI says Codex has passed 2 million weekly active users.
OpenAI shipped GPT-5.4 mini to ChatGPT, Codex, and the API, and GPT-5.4 nano to the API, with 400K context, lower prices, and stronger coding and computer-use scores. Route subagents and high-volume tasks to the smaller tiers to cut spend without giving up much capability.
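The routing advice above can be sketched as a simple cost-aware dispatcher. The task categories and relative-cost figures are assumptions for illustration, not published pricing.

```python
# Hypothetical tiering: flagship for hard planning, smaller tiers for volume work.
TIERS = {
    "gpt-5.4":      {"rel_cost": 10.0},  # illustrative relative cost, not real pricing
    "gpt-5.4-mini": {"rel_cost": 2.0},
    "gpt-5.4-nano": {"rel_cost": 0.5},
}

def pick_model(task_kind: str) -> str:
    """Route a task to the cheapest tier that plausibly handles it."""
    if task_kind in {"plan", "architecture-review"}:
        return "gpt-5.4"
    if task_kind in {"subagent", "code-search", "summarize"}:
        return "gpt-5.4-mini"
    return "gpt-5.4-nano"  # bulk classification, log triage, and similar volume work

def run_cost(task_kinds) -> float:
    """Relative spend for a batch of tasks under this routing policy."""
    return sum(TIERS[pick_model(k)]["rel_cost"] for k in task_kinds)
```

Under these assumed numbers, a batch of one plan plus four subagent reviews costs 18 relative units instead of 50 if everything went to the flagship tier, which is the spend argument the item is making.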
OpenAI rolled out native subagents in Codex so a main agent can spawn specialized parallel threads and return results to one session. Try it for larger code reviews and feature builds where you want to split work without polluting the main context.
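The fan-out/fan-in shape that subagents enable looks roughly like the asyncio sketch below. The `subagent` coroutine is a stand-in for a real model call, and the names are illustrative, not Codex's API.

```python
import asyncio

async def subagent(name: str, chunk: str) -> str:
    """Stand-in for a specialized subagent reviewing one slice of a change."""
    await asyncio.sleep(0)  # placeholder for an actual model round-trip
    return f"{name}: reviewed {chunk}"

async def main_agent(chunks):
    """Fan out one subagent per slice, then gather results back into one session."""
    tasks = [subagent(f"reviewer-{i}", c) for i, c in enumerate(chunks)]
    return await asyncio.gather(*tasks)


results = asyncio.run(main_agent(["auth.py", "db.py", "api.py"]))
```

The payoff is the one named in the item: each subagent works in its own context on one slice, so the main session collects conclusions without absorbing every file's detail.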
OpenAI made Automations generally available in the Codex app with per-run model selection, reasoning controls, worktree or branch targeting, reusable templates, themes, and terminal visibility. Use it for unattended repo maintenance instead of limiting Codex to one-off interactive tasks.
OpenAI made Codex Automations generally available and also added theme import and sharing in the desktop app. Use scheduled runs and isolated worktrees to move Codex from interactive coding into background workflow execution.
OpenAI says Codex capacity is lagging a demand spike, leaving some sessions choppy while the team adds more compute. If you depend on Codex in production workflows, plan for transient instability and keep fallback review or execution paths ready.
OpenAI acknowledged a Codex session hang that left some requests unresponsive, later reported that service had been stable for several hours, and promised a rate-limit reset. Teams relying on Codex should re-check long runs and confirm quota restoration after the incident.
OpenAI detailed how repo-local skills, AGENTS.md, and GitHub Actions now drive repeatable verification, release, and pull request workflows across its Agents SDK repositories. Maintainers can copy the pattern to reduce prompt sprawl and keep agent behavior closer to the codebase.