AI Primer

ChatGPT Images 2.0 supports real QR codes and analysis boards

Creator tests showed ChatGPT Images 2.0 making scannable QR codes, color-analysis layouts, study sheets, brand kits, and one-image campaign boards. That pushes the model further into structured graphic work, though typography and brand-rule precision still vary by run.


TL;DR

You can read OpenAI's launch post, skim the main HN thread, and then drop straight into creator experiments: real QR codes, one-photo brand guides, editable campaign canvases in Lovart, and Firefly Boards access on day one. The weird part is how often the useful examples are not photorealism flexes at all, but charts, grids, mockups, posters, and annotated boards.

Scannable QR codes

The cleanest "wait, that actually works" demo was QR encoding. omooretweets reported that a QR-code prompt produced a real, scannable website QR code, and goodside's numbered-die test showed separate QR codes embedded on the faces of a cube, each resolving to the matching Wikipedia page.

That matters because QR generation is less about style than instruction-following. The model has to keep module geometry consistent enough to survive scanning after all the normal image-model noise. The same family of code-aware outputs shows up in goodside's SVG cake, where frosting text renders as actual SVG that reproduces another cake when transcribed, and in the FizzBuzz soup, where the alphabet noodles spell valid Python, even if the joke is that the solution is terrible.

Analysis boards

ChatGPT Images 2.0 looks especially comfortable when the target format is "explain this visually." LinusEkenstam's portrait workflow uses a single uploaded face to make personal color-analysis and hairstyle-comparison boards, with the prompt explicitly asking for side-by-side comparisons, short labels, and no paragraphs.

The same format shows up in study and reference material. Artedeingenio's Citizen Kane board turns a film into a one-page visual summary with premise, characters, themes, and legacy, while egeberkina's recipe infographic breaks a pasta dish into ingredients, icons, and step-by-step method blocks.

What these examples share:

  • A fixed canvas with sections
  • Minimal copy, usually labels not paragraphs
  • Comparison or sequence as the organizing logic
  • Prompts that ask for a board, sheet, infographic, or diagram, not just an image
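The bullet points above can be read as a template. A minimal sketch of assembling such a board prompt programmatically, where the `render_board_prompt` helper and all field names are invented for illustration and do not come from any creator's actual prompt:

```python
# Minimal sketch: assemble a board-style image prompt from structured parts.
# The helper and field names here are hypothetical, not from any real prompt.

def render_board_prompt(title, sections, comparison_axis):
    """Build a prompt asking for a fixed canvas with labeled sections."""
    lines = [
        f"Create a single-page analysis board titled '{title}'.",
        "Use a fixed canvas divided into clearly separated sections.",
        f"Organize the sections as a {comparison_axis} comparison.",
        "Use short labels only; no paragraphs of body text.",
    ]
    for name, note in sections:
        lines.append(f"Section '{name}': {note}")
    return "\n".join(lines)

prompt = render_board_prompt(
    title="Personal Color Analysis",
    sections=[
        ("Warm palette", "swatches with short labels"),
        ("Cool palette", "swatches with short labels"),
        ("Verdict", "one-line recommendation"),
    ],
    comparison_axis="side-by-side",
)
print(prompt)
```

The point of structuring it this way is that every bullet from the list above becomes an explicit, repeatable constraint rather than a vibe.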

That is a different use case from image generation as illustration. It is closer to auto-layout.

One photo to brand kit

Brand kits are the breakout use case in this evidence set. LinusEkenstam's brunch-photo test claims one restaurant snapshot was enough to spin up a full guideline board, and venturetwins' A16Z INFRA kit pushes that further with palette, typography, identity language, and branded swag on a single sheet.

Other creators kept finding the same lane from different angles.

The interesting bit is not just that these outputs look polished. It is that the better prompts treat branding as a rules system. They specify allowable objects, grid zones, message slots, and what the model must not invent.
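That "branding as a rules system" framing can itself be sketched as data. The schema below is purely illustrative; every key name is invented for this example, and the renderer simply turns the rules into explicit prompt constraints:

```python
# Sketch: a brand prompt expressed as a rules system rather than a style wish.
# All key names in this schema are invented for illustration.

brand_rules = {
    "allowed_objects": ["logo", "product shot", "palette swatches", "type specimen"],
    "grid_zones": {
        "header": "logo and wordmark",
        "body": "palette and typography",
        "footer": "usage examples",
    },
    "message_slots": {"tagline": "max 6 words", "descriptor": "max 12 words"},
    "must_not_invent": ["new logo variants", "extra taglines", "unlisted colors"],
}

def rules_to_prompt(rules):
    """Render the rules dict into explicit prompt constraints."""
    parts = ["Generate a one-page brand guideline board."]
    parts.append("Only include: " + ", ".join(rules["allowed_objects"]) + ".")
    for zone, content in rules["grid_zones"].items():
        parts.append(f"Grid zone '{zone}' contains: {content}.")
    for slot, limit in rules["message_slots"].items():
        parts.append(f"Text slot '{slot}' ({limit}).")
    parts.append("Do not invent: " + ", ".join(rules["must_not_invent"]) + ".")
    return " ".join(parts)

print(rules_to_prompt(brand_rules))
```

The negative constraints matter as much as the positive ones: telling the model what it must not invent is exactly the part the better prompts in this evidence set get right.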

Structured campaign systems

Once people stopped asking for single images, the prompts turned into mini creative briefs. minchoi's FJÄLL prompt is the clearest example: it defines a source-of-truth product image, a locked visual system, then five ordered phases, from logo through e-commerce panels to vertical social posts.

A lot of the campaign-style prompts converge on the same mechanics:

  1. Declare a source image or product spec as the reference truth.
  2. Lock lighting, materials, tone, and typography rules.
  3. Split the output into phases or surfaces.
  4. Specify aspect ratios for each asset.
  5. Tell the model not to reset style between phases.
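The five mechanics above amount to a brief with a schema. A hedged sketch of that schema, assuming a hypothetical `CampaignBrief` structure (the class, fields, and sample values are invented here, not taken from minchoi's actual prompt):

```python
# Sketch of the five campaign-prompt mechanics as a structured brief.
# The class, field names, and sample values are hypothetical.

from dataclasses import dataclass

@dataclass
class CampaignBrief:
    source_image: str                      # 1. reference truth
    locked_style: dict                     # 2. lighting/materials/tone rules
    phases: list                           # 3. ordered output phases
    aspect_ratios: dict                    # 4. ratio per asset type
    keep_style_across_phases: bool = True  # 5. no style reset between phases

    def to_prompt(self):
        lines = [f"Source of truth: {self.source_image}. Match it exactly."]
        lines += [f"Lock {k}: {v}." for k, v in self.locked_style.items()]
        lines += [f"Phase {i}: {p} ({self.aspect_ratios.get(p, '1:1')})."
                  for i, p in enumerate(self.phases, 1)]
        if self.keep_style_across_phases:
            lines.append("Do not reset the visual system between phases.")
        return "\n".join(lines)

brief = CampaignBrief(
    source_image="studio photo of the product on a neutral background",
    locked_style={"lighting": "soft daylight", "tone": "calm, premium"},
    phases=["logo", "packaging", "e-commerce panel", "vertical social post"],
    aspect_ratios={"e-commerce panel": "4:5", "vertical social post": "9:16"},
)
print(brief.to_prompt())
```

Writing the brief as data makes the fifth mechanic, style persistence across phases, a default rather than something the prompter has to remember each time.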

That structure shows up again in AllaAisling's luxury watch prompt and in the Echo Skin Neural Visor campaign, both of which ask for a full visual system instead of one hero render. hasantoxr's Lovart thread adds the workflow angle, claiming one prompt became roughly 30 campaign assets inside a canvas with editable text layers.

Precision still varies by run

The caveat is not subtle. AmirMushich's Warner Music Group review praised the narrative, color system, and some typography choices, but also called out typos, distorted type, weak clearspace logic, and application mockups that fell apart under scrutiny.

That matches the broader pattern across the evidence.

So the current picture is strong structure, inconsistent exactness. The model can often place the right kind of thing in the right kind of box. It still misses on copy, rules, and small brand details often enough that creators are posting prompts almost as much as outputs.

Where GPT Image 2 already shows up

The rollout is already wider than ChatGPT itself. Adobe Firefly Boards got day-one access according to icreatelife's Firefly Boards post, Freepik's Pikaso generator is pitching prompt-only editorial layouts via Freepik's launch thread, and Lovart is framing GPT Image 2 as the image engine inside a broader campaign canvas in AIwithSynthia's Lovart post and AllaAisling's edit-after-generation demo.

The surface area in the tweet pool already extends well past the chat window. The story is not just that the model can make structured graphics. It is that the fastest adopters are already wrapping it inside tools built for campaigns, boards, and production surfaces, not just chat windows.
