AI Primer

OpenAI's image-generation model release for creating and editing images.

Pricing

Official site · May 7, 2026, 6:47 AM
Input / 1M
$5.00
Output / 1M
$40.00

OpenAI documents image-generation pricing in token terms. The official pricing page also lists image-input tokens separately at $10 / 1M tokens; the normalized fields here capture the primary prompt/input token rate and the output token rate.

No first-party page is explicitly labeled "GPT Image 2"; the closest public OpenAI pricing exposure for the image product is the image-generation rate card on the API pricing page.
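The token rates above can be turned into a per-request cost estimate. The sketch below is a minimal illustration using the three rates quoted on this page ($5.00 / 1M input, $10.00 / 1M image-input, $40.00 / 1M output); the token counts in the example are hypothetical, not actual GPT Image 2 accounting.

```python
# Illustrative cost math for the token-based rates above.
# Rates are USD per 1M tokens, as quoted on this page.
RATES_PER_MILLION = {
    "input": 5.00,         # prompt/input tokens
    "image_input": 10.00,  # image-input tokens
    "output": 40.00,       # output tokens
}

def estimate_cost(tokens: dict) -> float:
    """Sum cost across token types, pricing each per million tokens."""
    return sum(tokens.get(kind, 0) / 1_000_000 * rate
               for kind, rate in RATES_PER_MILLION.items())

# Hypothetical request: 500 prompt tokens, 85 image-input tokens,
# and a 4,000-token generated image.
cost = estimate_cost({"input": 500, "image_input": 85, "output": 4000})
print(f"${cost:.4f}")
```

Output tokens dominate at these rates, so estimates are mostly driven by generated-image size rather than prompt length.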


Model Intelligence

Benchmarkable
Yes
Model level
release

Recent stories

24 linked stories
workflow · SECONDARY · 2026-05-08
Seedance 2.0 adds ComfyUI video extension for broadcast-shot workflows

Creators shared repeatable Seedance 2.0 workflows for ComfyUI clip extension, GPT Image 2 shot planning, and fake-broadcast or iPhone footage. The examples push Seedance beyond isolated shorts into longer, more controllable production pipelines.

release · SECONDARY · 2026-05-07
OpenArt adds Smart Shot for GPT Image 2 shot plans before Seedance 2.0 renders

OpenArt added Smart Shot, which uses GPT Image 2 to draft a shot plan before Seedance 2.0 renders the final clip. Creators can review character refs, floor plans, camera, and lighting choices before spending render time.

workflow · PRIMARY · 2026-05-05
GPT Image 2 supports character reference sheets and 2x2 brand slides

Creators are using GPT Image 2 for multi-angle character sheets, 2x2 brand moodboards, editorial collages, and App Store assets. The model is being pushed beyond single hero images into reusable design systems with notes, text blocks, and consistent characters.

news · SECONDARY · 2026-05-04
Users report OpenAI Codex raises limits 10x on May 5 reset

Users report OpenAI increased Codex limits about 10x on the May 5 reset, with much longer /goal sessions and more computer-use demos. That should extend unattended runs for app migrations and visual prototyping.

workflow · SECONDARY · 2026-05-03
Seedance 2.0 supports 3-prompt motion-sheet videos in creator walkthroughs

Creators documented repeatable Seedance 2.0 pipelines that turn motion sheets and multi-image references from Magnific, Midjourney, and GPT Image 2 into short films and 2.5D turns. It matters because Seedance is becoming the animation step in larger workflows, but most evidence still comes from creator-run demos and affiliate showcases.

workflow · SECONDARY · 2026-05-02
Seedance 2.0 adds 2.5D turnarounds and merged-image short films in creator tests

Creators posted new Seedance 2.0 workflows for 2.5D turnarounds, merged-image short films, FPV shots, medical UI explainers, and video-to-video stylization. The examples show Seedance being used as the motion layer inside Midjourney, GPT Image 2, Dreamina, Higgsfield, and PixPretty pipelines.

release · SECONDARY · 2026-05-02
GlobalGPT launches GPT Image 2 in its workspace with image and native-audio video tools

GlobalGPT said GPT Image 2 is live in its workspace for posters, comics, cinematic shots, and AI videos, and Hailuo later added GPT Image 2 alongside Seedance 2.0. The rollouts broaden access to the image model outside ChatGPT and bundle it directly with creator video tools.

workflow · SECONDARY · 2026-04-30
Seedance 2.0 supports burst-frame and choreography-sheet reference workflows

Creators documented Seedance 2.0 workflows that use burst frames, character sheets, choreography grids, and storyboards to build multi-shot videos. The reference-heavy setups improve shot-to-shot continuity; watch for audio references, which still do not fully lock to source.

workflow · SECONDARY · 2026-04-29
Pika Agents supports GPT Images 2.0 and Seedance 2.0 ad workflows

Creator tests showed Pika Agents using GPT Images 2.0 for storyboards, extending two 15-second Seedance 2.0 clips into one ad, and running from Telegram on mobile. The workflows matter because Pika is being used as an orchestration layer for multi-model ad production, not just one-shot video output.

workflow · SECONDARY · 2026-04-29
Adobe Firefly integrates GPT Image 2 brand boards into Kling 3.0 spots

A documented Firefly workflow starts with a GPT Image 2 visual identity board, reuses it as reference material for branded scenes, then stitches Kling 3.0 clips and audio inside Firefly. It matters because brand system creation, asset generation, and video assembly stay inside one Adobe stack.

workflow · PRIMARY · 2026-04-29
GPT Image 2 supports 9-panel storyboards in Seedance 2.0 creator tests

Creators showed GPT Image 2 feeding Seedance 2.0 with perfume storyboard grids, UGC selfie references, poster-to-video setups, and time-freeze scenes. The workflow matters because it makes multi-shot ads and short videos more repeatable than one-off keyframe prompting.

workflow · SECONDARY · 2026-04-27
Seedance 2.0 supports storyboard-frame and motion-sheet video workflows

Creators posted Seedance 2.0 pipelines that turn storyboard frames, motion sheets, and landing pages into finished clips. Use it as a final renderer for ads, demos, and cinematic scenes, not just one-off image-to-video tests.

workflow · SECONDARY · 2026-04-26
Freepik adds image-to-3D scene navigation from GPT Image 2 frames

A short workflow paired GPT Image 2 art with Freepik 3D Scenes to turn flat frames into explorable environments and adjustable camera angles. The result looks useful for previs and shot framing, but the demo stays at prototype-level geometry.

workflow · PRIMARY · 2026-04-26
GPT Image 2 ships presentation-ready campaign decks from 1 reference image

Creators used GPT Image 2 to turn single references and photos into campaign decks, palm-reading guides, workspace audits, and shopping-ready lighting plans. The model holds layout, labels, and multi-section document structure across long outputs, but some examples still invent details or need cleanup.

workflow · SECONDARY · 2026-04-26
Glif adds single-agent storyboard-to-Seedance animation from chat prompts

Glif users showed a chat agent generating GPT Image 2 storyboards and passing them straight into Seedance 2 for anime shorts. The flow collapses storyboard prep and animation into one conversation, but still leans on seeded references and prompt setup.

news · PRIMARY · 2026-04-25
Leonardo compares GPT Image 2 and Nano Banana 2 across 7 creative briefs

Creator tests in Leonardo, plus side-by-sides on PixPretty and Freepik, put GPT Image 2 against Nano Banana 2 on storyboards, brand kits, infographics, and ad layouts. The comparison matters because prompt following, text handling, and structured commercial outputs are becoming the deciding factors in image-model choice.

workflow · SECONDARY · 2026-04-25
Seedance 2.0 adds camera-map, memory-reel and omni-reference workflows

Creators used Seedance 2.0 to turn camera-path sketches, 2x2 photo grids, and multi-screen reference boards into game scenes, faux memory reels, and short films. The new controls matter for motion paths, character continuity, and multi-clip sequencing across different inputs.

release · PRIMARY · 2026-04-24
GPT Image 2 adds Runway and Meshy integrations

GPT Image 2 went live in Runway and Meshy, and users also reported PixPretty support plus new 4K size and quality controls in third-party interfaces. The rollout extends text rendering and layout control into app identity, brand boards, and infographic work.

workflow · PRIMARY · 2026-04-24
GPT Image 2 and Seedance 2.0 ship storyboard-to-4K workflows

Creators published a repeatable GPT Image 2 and Seedance 2.0 pipeline that turns scene sheets into 3x3 storyboard grids, 4K references, and three 15-second clips. Use it to tighten shot planning for game mockups, anime shorts, and cinematic concept videos.

workflow · SECONDARY · 2026-04-23
ChatGPT Images 2.0 supports real QR codes and analysis boards

Creator tests showed ChatGPT Images 2.0 making scannable QR codes, color-analysis layouts, study sheets, brand kits, and one-image campaign boards. That pushes the model further into structured graphic work, though typography and brand-rule precision still vary by run.

workflow · PRIMARY · 2026-04-23
GPT Image 2 supports Seedance 2.0 image-to-video workflows across Freepik and Higgsfield

Creators documented GPT Image 2 plus Seedance 2.0 workflows across Freepik, Higgsfield, and Mitte for ads, animation tests, and uncanny short clips. The pairing turns better still generation into repeatable motion pipelines, though queues and setup still slow execution.

release · SECONDARY · 2026-04-23
Glif launches V2 and raises $17.5M seed

Glif launched V2 as a chat-based creative agent that chains image, video, voice, and music models, and announced a $17.5 million seed led by a16z and USV. Early demos show multi-model ads and short films being produced inside single conversations instead of manual tool hopping.

workflow · PRIMARY · 2026-04-21
ChatGPT Images 2.0 supports 10x10 grids with readable labels

Creator tests pushed ChatGPT Images 2.0 into readable infographics, dense search-and-find scenes, fake UIs, code windows, and brand kits. The results matter because layouts and text held up in formats older image models usually break on, though some structured prompts still fail.

release · PRIMARY · 2026-04-21
OpenAI releases ChatGPT Images 2.0 with stronger text rendering

OpenAI released ChatGPT Images 2.0, and Firefly Boards, Figma, Freepik, fal and Lovart added access within hours. The rollout matters because text-heavy image generation is now moving into the design tools creators already use.

AI Primer

Your daily guide to AI tools, workflows, and creative inspiration.

© 2026 AI Primer. All rights reserved.