Codex app-server supports 32-64 parallel jobs and burns limits 3-5x faster
OpenAI docs say Codex image generation counts against general usage and burns included limits 3-5x faster, while Hangsiin's thread shows app-server runs fanning out across 32 or 64 parallel workers. The workflow turns bulk image or research jobs into quota-backed batches, so teams should watch usage spikes closely.

TL;DR
- OpenAI's Codex CLI docs, as captured in Hangsiin's documentation screenshot, say built-in image generation uses gpt-image-2, counts against general Codex usage limits, and burns included limits about 3 to 5 times faster than similar non-image turns.
- According to Hangsiin's app-server thread, the built-in image tool does not parallelize inside a single Codex agent, but Codex app-server runs can fan out work across 32 or 64 parallel processes.
- In Hangsiin's 500-paper example, that parallel pattern is pitched for bulk jobs like turning hundreds of papers into infographics, including runs that attach reference images.
- Hangsiin's follow-up says fixed-prompt app-server jobs can swap to lighter settings like gpt-5.4-mini low, which frames the app-server more like a configurable batch harness than a one-off chat session.
- Hangsiin's final note extends the same pattern beyond images, claiming parallel app-server runs also work for structured web research and other quota-backed batch tasks.
You can jump straight to the Codex image generation docs, where OpenAI spells out the gpt-image-2 backend and the 3 to 5x faster limit burn. Hangsiin's main thread adds the more interesting workflow detail, namely that bulk generation seems to come from parallel app-server processes rather than from the built-in tool itself. The same thread chain then sketches a batch recipe for 500-paper infographic runs and a cheaper low-effort model setting in Hangsiin's config tip.
Image generation limits
The clearest hard fact in the thread is not the parallelism claim; it is the billing rule. OpenAI's docs say Codex image generation uses gpt-image-2, draws from the same general Codex allowance, and consumes included limits much faster than ordinary turns.
The same screenshot points to a split path for bigger runs: keep using the built-in tool inside Codex, or set OPENAI_API_KEY and route large image batches through the API so API pricing applies instead, as noted in the official docs.
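For the API route, here is a minimal sketch of what that split could look like, assuming the standard OpenAI Python SDK. The model name is taken from the docs screenshot, and the prompt and file handling are purely illustrative:

```python
# Minimal sketch: route a large image batch through the API instead of
# Codex's built-in tool, so API pricing applies rather than Codex limits.
# The model name comes from the docs screenshot cited above.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_image(prompt: str, out_path: str) -> None:
    """Generate one image via the API and write it to disk."""
    result = client.images.generate(
        model="gpt-image-2",  # backend named in the Codex docs screenshot
        prompt=prompt,
    )
    image_b64 = result.data[0].b64_json
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(image_b64))

if __name__ == "__main__":
    generate_image("An infographic summarizing a research paper", "out.png")
```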
Codex app-server parallelism
According to Hangsiin's thread, one Codex agent can only generate one image at a time with the built-in tool. The workaround is to have Codex orchestrate multiple app-server instances in parallel, with 32 or 64 processes as the stated target and 16 or 32 as fallbacks when 64 concurrent requests prove unstable.
The thread makes that concrete with a batch instruction: Hangsiin's example asks Codex to process 500 papers in one folder and compile an infographic for each paper through 64 app-server instances. A follow-up in Hangsiin's prompt note says the prompt for each worker should be tested on a few samples before the full run.
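As a rough illustration of that recipe, here is a sketch of the orchestration loop. The `codex exec` command shape is an assumption (the thread does not show the exact invocation), as are the folder layout, the prompt template, and the helper names:

```python
# Sketch of the fan-out recipe from the thread: test the worker prompt on a
# few samples first, then spread the full folder across parallel workers,
# stepping concurrency down (64 -> 32 -> 16) when runs prove unstable.
# The `codex exec <prompt>` command shape is an assumption, not from the thread.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

PROMPT_TEMPLATE = (
    "Read the paper at {path} and compile a one-page infographic "
    "summarizing its key findings."
)

def run_worker(paper: Path) -> bool:
    """Run one hypothetical non-interactive Codex worker for one paper."""
    result = subprocess.run(
        ["codex", "exec", PROMPT_TEMPLATE.format(path=paper)],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

def run_batch(papers: list[Path], workers: int) -> list[Path]:
    """Fan papers out across `workers` threads; return the papers that failed."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        outcomes = list(pool.map(run_worker, papers))
    return [paper for paper, ok in zip(papers, outcomes) if not ok]

if __name__ == "__main__":
    papers = sorted(Path("papers").glob("*.pdf"))  # e.g. a 500-paper folder

    # Per the thread's advice: validate the prompt on a few samples first.
    if run_batch(papers[:3], workers=3):
        raise SystemExit("Sample run failed; fix the worker prompt first.")

    # Full run at the stated target, retrying failures at lower concurrency.
    remaining = papers[3:]
    for workers in (64, 32, 16):
        remaining = run_batch(remaining, workers)
        if not remaining:
            break
```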
Model and reasoning knobs
Once the prompt is mostly fixed, Hangsiin's configuration tip says the worker settings can be dialed down to something like gpt-5.4-mini low. That suggests the expensive part of these jobs is not always image generation alone; it is also the reasoning budget attached to each worker process.
In this framing, app-server looks less like a hidden power-user toggle and more like a batch execution layer. Codex handles prompt construction, then cheaper low-effort workers execute the repeated job.
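In practice, dialing a worker down could amount to a couple of per-invocation overrides. A sketch, reusing the `codex exec` shape assumed in the earlier example together with Codex CLI's `-m` model flag and `-c` config overrides; the model name comes from Hangsiin's tip, and the exact override syntax is an assumption:

```python
# Sketch: once the prompt is fixed, each worker can run with a lighter model
# and a low reasoning budget. The -m flag and -c config overrides follow
# Codex CLI's documented options; the model name comes from Hangsiin's tip,
# and the exact override key/value syntax is an assumption.
import subprocess

def run_cheap_worker(prompt: str) -> int:
    cmd = [
        "codex", "exec",
        "-m", "gpt-5.4-mini",                  # lighter model from the tip
        "-c", 'model_reasoning_effort="low"',  # trim the per-worker reasoning budget
        prompt,
    ]
    return subprocess.run(cmd).returncode
```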
Beyond images
The last twist in the thread is that Hangsiin says the same app-server pattern works for non-image jobs too. That post specifically calls out parallel web research across many items, with structured output for each one, all still billed against Codex usage rather than a separate API account.
That turns the story from an image-generation footnote into a broader quota-backed batching trick. The thread even describes it as a way to burn down leftover Codex allowance in a final sprint, which is a much more specific usage pattern than anything in the public image-generation docs.
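For that research variant, the change from the image recipe is mostly the per-worker prompt; the orchestration loop can stay the same. A sketch with a purely illustrative output schema:

```python
# Sketch: the non-image variant of the same fan-out pattern. Each worker
# researches one item and writes structured output; the JSON schema here is
# illustrative, not from the thread, and billing still hits Codex usage.
RESEARCH_PROMPT_TEMPLATE = (
    "Research {topic} on the web and write a JSON object to {out_path} with "
    "keys: summary (string), key_sources (list of URLs), and "
    "open_questions (list of strings)."
)

# Reuse the run_batch helper from the earlier sketch with this template
# swapped into the worker command, e.g. run_batch(topics, workers=32).
```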