GPT Image 2 supports 360 panoramas and technical infographics in creator tests
Creators used GPT Image 2 to turn single photos into brand books, generate 360 panoramas, lay out recipe pages and shortcut charts, and produce scannable QR codes or plate-solvable star fields. That matters because the model is now being used for structured design work, not just single hero images.

TL;DR
- In early creator tests, LinusEkenstam's brunch-brand-book post and shannholmberg's brand-book example showed GPT Image 2 being used for full brand guidelines, not just one-off hero shots.
- Structured layouts are the big unlock: egeberkina's recipe infographic, CharaspowerAI's Photoshop shortcuts chart, and goodside's maze worksheet all pushed the model into dense, readable information design.
- Spatial outputs got weird fast, with icreatelife's panorama tutorial, venturetwins' Akihabara demo, and ProperPrompter's Hogwarts panorama turning prompts into 360-degree scenes that could be viewed interactively.
- Several tests looked more like reasoning benchmarks than art prompts, including goodside's working QR-code die, goodside's Chess960 board, and LLMJunky's VS Code screenshot thread.
- The rollout was already spreading beyond ChatGPT, with icreatelife's Adobe Firefly Boards post, AIwithSynthia's Lovart thread, and Freepik's rollout post placing GPT Image 2 inside design tools on day one.
You can see creators turning one cafe photo into a brand guideline system, generating a 32 by 48 maze worksheet, and pushing a single prompt all the way to a browser-viewable 360 panorama. There is also a nice split between structured commercial work, like A16Z Infra swag sheets, and stranger verification tests, like working QR codes on a die and a plate-solvable Orion Nebula image.
Brand books
The most shareable demos were brand systems. LinusEkenstam's post claimed one brunch photo was enough to produce a full brand guideline, while venturetwins' A16Z Infra mockup showed the same pattern on a cleaner artifact: palette, typography, identity copy, and swag rendered in one sheet.
The recurring structure across these posts was not just "make me a logo." It was:
- identity statement
- palette and typography
- layout system
- merch or campaign extensions
- mock applications across formats
That same jump shows up in AmirMushich's Warner Music Group rebrand kit, which packed logo rules, color system, typography, social templates, tour posters, internal decks, and label architecture into a multi-page concept. Mushich's own notes in the same post were useful because they cut against the hype: solid art direction, weak copy, typo glitches, and several unusable system elements all in the same output.
Infographics
The other strong pattern was dense information graphics with readable labels. egeberkina's pasta infographic laid out ingredients, step icons, and method blocks in a form that actually scans, while CharaspowerAI's Photoshop shortcuts chart and CharaspowerAI's follow-up reply pushed that into practical reference material.
A few of the better examples were basically printable assets:
- icreatelife's cross-stitch pattern with grid size, legend, and preview
- goodside's maze worksheet with numbered coordinates and a blue-pen solution path
- DavidmComfort's sortase-family chart with sections, diagrams, and takeaways
- egeberkina's World Cup bracket with a full knockout tree, although its match logic still slipped in places
- thekitze's IKEA-manual remake turning assembly steps into a cleaner visual sequence
The floor is still uneven. petergyang's question about infographics and brand style pointed to the other side of the rollout: even with the new model, repeatable brand matching was not automatic.
Panoramas
Panoramas were the most unexpected creator workflow because they immediately jumped into toolchains. icreatelife's tutorial broke the process into two steps: generate an equirectangular image in GPT Image 2, then hand it to Codex for a mouse-controlled 3D viewer.
That pattern reappeared across multiple posts:
- Prompt for a "360 equirectangular image" of a place.
- Feed the result into a simple viewer or agent-built web app.
- Pan around the image like a lightweight virtual environment.
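The "equirectangular" keyword in those prompts maps to a simple projection: longitude spans the full image width, latitude the full height. A minimal sketch of the per-pixel lookup a viewer performs when panning (the function name is mine, assuming a standard 2:1 panorama with yaw increasing left to right):

```python
import math

def equirect_pixel(yaw, pitch, width, height):
    """Map a view direction to a pixel in an equirectangular panorama.

    yaw is in [-pi, pi), pitch in [-pi/2, pi/2]; (0, 0) looks at the
    image center. Returns fractional (u, v) pixel coordinates, which a
    real viewer would bilinearly interpolate.
    """
    u = (yaw / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - pitch / math.pi) * (height - 1)
    return u, v
```

A browser viewer like the Codex-built one typically does the inverse of this for every screen pixel, which is why a single flat 2:1 image is enough to fake a full interactive environment.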
icreatelife's smooth Mars pan extended the same idea into automated camera motion, and ProperPrompter's Hogwarts-in-Minecraft panorama suggested the prompt format was portable across styles, not just photoreal city scenes. For AI creatives, that is a small but real workflow shift: image generation is getting used as pre-production space design.
Logic-heavy renders
A bunch of tests only make sense if the model can hold onto world structure. goodside's QR-code die rendered scannable codes for Wikipedia number pages, while goodside's Chess960 board respected the shuffled back rank and resulting piece positions after a move sequence.
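The Chess960 test hinges on the variant's placement rules: bishops on opposite-colored squares and the king somewhere between the rooks. A minimal sketch of a generator that enforces those constraints (the function and structure are my own illustration, not from goodside's post):

```python
import random

def chess960_back_rank(seed=None):
    """Generate a legal Chess960 back rank as a list of 8 piece letters.

    Constraints: bishops on opposite-colored squares, king between the
    two rooks. This construction yields all 960 legal arrangements.
    """
    rng = random.Random(seed)
    rank = [None] * 8
    # One bishop on a light square, one on a dark square
    rank[rng.choice(range(0, 8, 2))] = "B"
    rank[rng.choice(range(1, 8, 2))] = "B"
    # Queen and both knights on any remaining squares
    free = [i for i in range(8) if rank[i] is None]
    for piece in ("Q", "N", "N"):
        i = rng.choice(free)
        rank[i] = piece
        free.remove(i)
    # The last three squares get R, K, R left to right,
    # which guarantees the king sits between the rooks
    for i, piece in zip(sorted(free), ("R", "K", "R")):
        rank[i] = piece
    return rank
```

The point of the demo is that a rendered board is only correct if every one of these constraints survives into the image, plus whatever piece movements the prompt layered on top.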
The same pattern showed up in code-adjacent prompts. LLMJunky's VS Code thread asked for complete HTML files inside fake editor screenshots, then kept escalating through Game of Life, boids, Matrix rain, a wireframe cube, and a Lorenz attractor, as seen in the boids example and the Lorenz example. goodside's maze worksheet explicitly noted the fusion of code and image generation, which is probably the cleanest description of what these examples are testing.
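The Lorenz attractor is a good example of why these prompts double as reasoning tests: the rendered curve only looks right if it tracks three coupled differential equations. A quick sketch of the dynamics any generated HTML would have to implement, using simple Euler integration and the textbook parameter values (sigma = 10, rho = 28, beta = 8/3):

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one Euler step.

    dx/dt = sigma * (y - x)
    dy/dt = x * (rho - z) - y
    dz/dt = x * y - beta * z
    """
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def lorenz_trajectory(steps=5000, state=(1.0, 1.0, 1.0)):
    """Integrate forward and collect the classic butterfly-shaped orbit."""
    points = [state]
    for _ in range(steps):
        state = lorenz_step(state)
        points.append(state)
    return points
```

A correct render of this trajectory stays bounded but never settles, which is hard to fake from surface statistics alone, and that is exactly what makes it a useful probe for image models asked to draw "a screenshot of working code."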
There were also two scientific oddities. LLMJunky's Orion Nebula test reported one plate-solvable deep-space image, with the important caveat that other objects failed, and goodside's recursive HomeGoods photo showed the model can keep a self-referential visual joke coherent for multiple levels.
Campaign canvases
The strongest workflow claim in the evidence pool came from tools wrapping GPT Image 2 inside a bigger canvas. hasantoxr's Lovart thread argued the useful jump was from one image to a whole campaign, then hasantoxr's text-layer demo showed live editing on generated text after the fact.
According to hasantoxr's follow-up, one brief could expand into:
- main key visual
- social grid
- email banner
- 10-second motion ad
- testimonial-style creative
That lines up with AIwithSynthia's Lovart examples, which broke prompts into UI mockups, e-commerce product shots, and marketing-campaign layouts. The interesting part was not just readable typography. It was post-generation editability layered on top of readable typography.
Where it shows up
The rollout was already fragmented across creative surfaces. icreatelife's Adobe Firefly Boards post said GPT Image 2 had immediate access in Firefly Boards, Freepik's rollout thread said it was live in Freepik's Pikaso generator, and AIwithSynthia's Lovart post pitched seven days of fast generation inside Lovart for Pro users.
The evidence also hints at three slightly different product framings:
- Adobe Firefly Boards, via the Firefly Boards post, leaned on typography, UI mocks, and detail-heavy layouts.
- Freepik, via its keynote mockup example and its magazine spread example, leaned on editorial scenes with small readable print.
- Lovart, via AIwithSynthia and hasantoxr, leaned on campaign assembly, editable layers, and multi-asset output.
That last split is new information in the rollout itself. GPT Image 2 was not only being judged as a model. It was already being sorted into different creative products: mood boards and UI mocks in Adobe, editorial prompt demos in Freepik, and campaign pipelines in Lovart.