AI Primer

Developers report QR codes, floor plans, and poster workflows one day after GPT Image 2 launch

A day after GPT Image 2 launched, developers and tool vendors posted reproducible workflows for floor plans, QR codes, conference posters, typography, and Figma-style asset generation. The follow-up matters because it shows where text-heavy visual generation is already usable, but also that quality depends heavily on model choice, image size, and surrounding tool scaffolding.


TL;DR

  • Day-one user reports pushed GPT Image 2 past generic image generation into structured outputs: Goodside's QR demo showed scannable QR codes on a die, while Deedy's thread claimed it could turn a house photo into an "entire floor plan."
  • Tool builders immediately wrapped the model into higher-level workflows: OpenAI Devs highlighted OpenArt's Smart Shot for turning a story idea into characters, worlds, shots, and camera moves, and Figma Weave published a reusable timeline workflow for iterative edits.
  • Developers also started chaining GPT Image 2 with coding agents. In Ror_Fly's post, GPT Image 2 handled the concept art and Claude Code generated working OTF font files that were usable in Figma in "less than 30 mins."
  • Early quality reports say the model's output is sensitive to surrounding controls, not just the prompt: Artificial Analysis ranked GPT Image 2 high at text-to-image but only roughly in line with GPT Image 1.5 for editing, while Peter Gostev's test and Mollick's note pointed to image size and selected ChatGPT model as major quality levers.

Day-one workflows show where text-heavy image generation is usable

The strongest day-one signal was not aesthetics. It was that users quickly found repeatable pipelines for text-heavy and layout-heavy work that usually breaks image models.

  • In Ror_Fly's post, the workflow was minimal: "GPT Img 2 for concept" and "Claude Code to build font files." The result was "working OTF files" plugged into Figma in under 30 minutes.
  • Goodside's QR demo pushed text rendering further. The model generated a die whose faces contained QR codes that reportedly resolved to the matching Wikipedia pages.
  • In Deedy's thread, examples ranged from generating an "entire floor plan" from a house photo to diagram explainers, slide decks, menu visualizations, and beautified UI screenshots built from Claude output.

Vendors also shipped scaffolding around the model instead of treating it as a single prompt box.

  • OpenAI Devs said OpenArt built Smart Shot on GPT Image 2 to turn a short story idea into characters, worlds, shot plans, and camera movement.
  • Figma Weave framed its workflow around keeping "just the good stuff" with a timeline-based editor and shared a duplicable example via the workflow link.
  • Across these posts, the repeatable pattern was: generate a structured visual draft, then use a wrapper tool or coding agent to turn it into an editable asset.

Quality depends on the wrapper model, size, and edit path

Benchmarks and practitioner tests both suggest GPT Image 2 is strongest on prompt-heavy text-to-image tasks, with more mixed results once the job becomes image editing.

Artificial Analysis ranked GPT Image 2 high on its text-to-image leaderboard, above Nano Banana 2, FLUX.2 [max], and Seedream 4.0. The same post said editing was "much less of a leap forward," landing about in line with GPT Image 1.5, and priced the high setting at $211 per 1k images.

User reports added two practical knobs that were not obvious from launch messaging.

  • Peter Gostev's test compared high 4k against low 1k in the API playground and wrote that high quality had "substantially more detail and no errors," while low showed "noticeable issues with text rendering."
  • He also argued in the same test that 4k is not just upscaling: "more pixels = more tokens = better output" was his working heuristic.
  • Mollick's note said the selected LLM in ChatGPT changes image quality too, with GPT-5.4 Thinking and GPT-5.4 Pro producing "much better images," especially for complex requests.
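To make the size and quality knobs concrete, here is a minimal sketch of how the reported low-1k versus high-4k comparison might be set up as two request payloads. The model id "gpt-image-2" and the exact parameter names (`quality`, `size`) are assumptions modeled on the shape of existing image-generation APIs; none of the posts above confirm the API surface, so treat this as illustration rather than working integration code.

```python
def image_request(prompt: str, quality: str, size: str) -> dict:
    """Build one image-generation payload for a given quality/size setting.

    The "gpt-image-2" model id and parameter names are assumptions,
    not confirmed API details.
    """
    return {
        "model": "gpt-image-2",  # assumed model id
        "prompt": prompt,
        "quality": quality,      # e.g. "low" vs "high"
        "size": size,            # e.g. "1024x1024" vs "4096x4096"
    }

# The reported A/B: same prompt, low 1k against high 4k.
prompt = "Conference poster with a dense agenda table and small-print footnotes"
low = image_request(prompt, quality="low", size="1024x1024")
high = image_request(prompt, quality="high", size="4096x4096")

# Gostev's heuristic ("more pixels = more tokens = better output")
# predicts the high/4k payload should render the small text cleanly,
# while the low/1k one shows the text-rendering issues he reported.
```

Holding the prompt constant and varying only `quality` and `size` is what makes the comparison attributable to those knobs rather than to prompt wording.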

Those reports fit the split in the Artificial Analysis results: the model is already landing convincing outputs for posters, diagrams, QR codes, fonts, and planning artifacts, but results depend heavily on which stack layer is doing the work.
