ChatGPT Images 2.0 supports real QR codes and analysis boards
Creator tests showed ChatGPT Images 2.0 making scannable QR codes, color-analysis layouts, study sheets, brand kits, and one-image campaign boards. That pushes the model further into structured graphic work, though typography and brand-rule precision still vary by run.

TL;DR
- omooretweets' QR-code test demonstrated ChatGPT Images 2.0 generating a scannable QR code for a real website, and goodside's QR-code die showed the same trick holding up across multiple faces of a physical object.
- LinusEkenstam's color-analysis prompt and Artedeingenio's Citizen Kane sheet point to the same shift: the model is useful for diagram-first boards, study sheets, and comparison layouts, not just single hero images.
- Brand and campaign work is where the model looks most commercially dangerous. LinusEkenstam's brunch-photo brand guide, venturetwins' brand kit, and minchoi's phased campaign prompt all turned a single source image or brief into multi-part design systems.
- The strongest runs lean hard on structure. AmirMushich's bento-grid prompt, egeberkina's product-sheet prompt, and ProperPrompter's 100-item pixel grid all specify grids, sections, ratios, and content rules instead of asking for a vague "nice design."
- Precision is still uneven. AmirMushich's Warner Music Group critique called one rebrand kit a useful art-direction starting point but not client-ready, while the main HN thread captured the same split between impressed testers and people hitting failures on prompt adherence.
You can read OpenAI's launch post, skim the main HN thread, and then drop straight into creator experiments: real QR codes, one-photo brand guides, editable campaign canvases in Lovart, and Firefly Boards access on day one. The weird part is how often the useful examples are not photorealism flexes at all, but charts, grids, mockups, posters, and annotated boards.
Scannable QR codes
The cleanest "wait, that actually works" demo was QR encoding. omooretweets reported that the model generated a working QR code for a real website, and goodside's numbered die showed separate QR codes embedded on a cube, each resolving to the matching Wikipedia page.
That matters because QR generation is less about style than instruction-following. The model has to keep module geometry consistent enough to survive scanning after all the normal image-model noise. The same family of code-aware outputs shows up in goodside's SVG cake, where frosting text renders as actual SVG that reproduces another cake when transcribed, and in the FizzBuzz soup, where the alphabet noodles spell valid Python, even if the joke is that the solution is terrible.
Analysis boards
ChatGPT Images 2.0 looks especially comfortable when the target format is "explain this visually." LinusEkenstam's portrait workflow uses a single uploaded face to make personal color-analysis and hairstyle-comparison boards, with the prompt explicitly asking for side-by-side comparisons, short labels, and no paragraphs.
The same format shows up in study and reference material. Artedeingenio's Citizen Kane board turns a film into a one-page visual summary with premise, characters, themes, and legacy, while egeberkina's recipe infographic breaks a pasta dish into ingredients, icons, and step-by-step method blocks.
What these examples share:
- A fixed canvas with sections
- Minimal copy, usually labels not paragraphs
- Comparison or sequence as the organizing logic
- Prompts that ask for a board, sheet, infographic, or diagram, not just an image
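The shared structure above can be sketched as a small prompt builder. This is an illustrative sketch, not any creator's actual prompt: the class name, section names, and phrasing are all assumptions about how you might encode the "fixed canvas, labeled sections, no paragraphs" pattern programmatically.

```python
from dataclasses import dataclass, field

@dataclass
class BoardPrompt:
    """Builds a board-style image prompt: fixed canvas, labeled sections, no paragraphs."""
    subject: str
    sections: list[str] = field(default_factory=list)
    layout: str = "a single board with clearly divided sections"

    def render(self) -> str:
        parts = [
            f"Create a one-page visual analysis board about {self.subject}.",
            f"Layout: {self.layout}.",
            "Sections: " + "; ".join(self.sections) + ".",
            "Use short labels only, no paragraphs.",
            "Organize sections as side-by-side comparisons where possible.",
        ]
        return " ".join(parts)

# Example in the spirit of Artedeingenio's Citizen Kane sheet
prompt = BoardPrompt(
    subject="the film Citizen Kane",
    sections=["premise", "main characters", "themes", "legacy"],
).render()
print(prompt)
```

The point of templating it is that the comparison-or-sequence logic and the "labels, not paragraphs" rule stay constant while only the subject and sections change per run.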
That is a different use case from image generation as illustration. It is closer to auto-layout.
One photo to brand kit
Brand kits are the breakout use case in this evidence set. LinusEkenstam's brunch-photo test claims one restaurant snapshot was enough to spin up a full guideline board, and venturetwins' A16Z INFRA kit pushes that further with palette, typography, identity language, and branded swag on a single sheet.
Other creators kept finding the same lane from different angles:
- MayorKingAI's Leonardo prompt frames the output as a professional brand-studio slide with logos, palette, typography, packaging, and social mockups.
- AmirMushich's bento-grid system fixes the layout, then tells the model to infer the right objects for the brand's actual business model.
- AmirMushich's Warner Music Group experiment asks for a multi-page rebrand kit that scales from boardroom decks to streaming thumbnails.
The interesting bit is not just that these outputs look polished. It is that the better prompts treat branding as a rules system. They specify allowable objects, grid zones, message slots, and what the model must not invent.
Structured campaign systems
Once people stopped asking for single images, the prompts turned into mini creative briefs. minchoi's FJÄLL prompt is the clearest example: it defines a source-of-truth product image, a locked visual system, then five ordered phases, from logo through e-commerce panels to vertical social posts.
A lot of the campaign-style prompts converge on the same mechanics:
- Declare a source image or product spec as the reference truth.
- Lock lighting, materials, tone, and typography rules.
- Split the output into phases or surfaces.
- Specify aspect ratios for each asset.
- Tell the model not to reset style between phases.
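Those mechanics can be expressed as a minimal brief template. Again, this is a sketch under assumptions: the class, field names, and example values (including the FJÄLL-style product reference) are hypothetical, not minchoi's actual prompt text.

```python
from dataclasses import dataclass

@dataclass
class CampaignBrief:
    """Encodes the mechanics above: reference truth, locked style, ordered phased outputs."""
    reference: str                 # source-of-truth image or product spec
    style_rules: list[str]         # locked lighting, material, tone, and typography rules
    phases: list[tuple[str, str]]  # ordered (asset description, aspect ratio) pairs

    def render(self) -> str:
        lines = [
            f"Source of truth: {self.reference}. Do not alter the product.",
            "Locked style rules: " + "; ".join(self.style_rules) + ".",
        ]
        for i, (asset, ratio) in enumerate(self.phases, start=1):
            lines.append(f"Phase {i}: produce {asset} at aspect ratio {ratio}.")
        lines.append("Keep the same visual system across all phases; do not reset style between outputs.")
        return "\n".join(lines)

brief = CampaignBrief(
    reference="the uploaded product photo",
    style_rules=["soft daylight", "matte materials", "one sans-serif typeface"],
    phases=[
        ("a logo lockup", "1:1"),
        ("an e-commerce hero panel", "16:9"),
        ("a vertical social post", "9:16"),
    ],
)
print(brief.render())
```

The design choice mirrors what the better prompts do by hand: the reference image and style rules are declared once, and every phase inherits them, so the model is told explicitly what it must not reinvent.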
That structure shows up again in AllaAisling's luxury watch prompt and in the Echo Skin Neural Visor campaign, both of which ask for a full visual system instead of one hero render. hasantoxr's Lovart thread adds the workflow angle, claiming one prompt became roughly 30 campaign assets inside a canvas with editable text layers.
Precision still varies by run
The caveat is not subtle. AmirMushich's Warner Music Group review praised the narrative, color system, and some typography choices, but also called out typos, distorted type, weak clearspace logic, and application mockups that fell apart under scrutiny.
That matches the broader pattern in the evidence:
- AmirMushich's bento-grid thread says bento layouts are handled well, but typography and brand precision are still random.
- petergyang's request for tips says matching brand style and infographic quality are still hit or miss.
- the HN discussion roundup notes both fresh benchmark experiments and at least one failed QR-code attempt from commenters.
So the current picture is strong structure, inconsistent exactness. The model can often place the right kind of thing in the right kind of box. It still misses on copy, rules, and small brand details often enough that creators are posting prompts almost as much as outputs.
Where GPT Image 2 already shows up
The rollout is already wider than ChatGPT itself. Adobe Firefly Boards got day-one access according to icreatelife's Firefly Boards post, Freepik's Pikaso generator is pitching prompt-only editorial layouts via Freepik's launch thread, and Lovart is framing GPT Image 2 as the image engine inside a broader campaign canvas in AIwithSynthia's Lovart post and AllaAisling's edit-after-generation demo.
The surface area in the tweet pool already covers:
- Firefly Boards for text-heavy comps, CVs, comics, and mockups, per Firefly Boards and Kris Kashtanova CV
- Lovart for multi-asset campaigns and post-generation text edits, per hasantoxr's Lovart walkthrough and editable text layers demo
- Leonardo for brand-kit boards, per Leonardo launch thread
- Freepik for editorial layouts and prompt galleries, per Freepik
- PhotoAI for generated people photography, per levelsio's PhotoAI post
That last point goes beyond the QR-code and board demos. The story is not just that the model can make structured graphics. It is that the fastest adopters are already wrapping it inside tools built for campaigns, boards, and production surfaces, not just chat windows.