AI Primer
ChatGPT Images 2.0 supports 10x10 grids with readable labels

Creator tests pushed ChatGPT Images 2.0 into readable infographics, dense search-and-find scenes, fake UIs, code windows, and brand kits. The results matter because layout and text held up in formats that older image models usually break, though some structured prompts still fail.


TL;DR

Freepik already exposed a prompt-first GPT Image 2 surface, PromptsRef published a public generator page, and icreatelife's Firefly post said Adobe Firefly Boards had day-zero access. egeberkina's Photoshop for ChatGPT walkthrough also pointed to direct in-chat editing, while underwoodxie96's annotated dashboard example showed the model drawing instructions onto a screenshot instead of only explaining them in text.

10x10 grids

The cleanest party trick in this launch window was dense grid layout. ProperPrompter's first test asked for 100 fantasy RPG items in a 10x10 inventory sheet, and the follow-up image in ProperPrompter's full grid post shows distinct sprites and readable labels across all ten themed rows.

That pattern held outside game icons. hckmstrrahul's tech map used the same 10x10 structure for an "AI Models and Agents" board with category headers, numbered entries, and tiny illustrations that still scan as a real reference sheet.

The prompt structure in icreatelife's search-and-find formula hints at why this works well: the successful examples over-specify rows, categories, and local constraints instead of asking for one vague "infographic."
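That over-specification can be sketched as a small prompt builder. The function, row themes, and wording below are illustrative assumptions, not the creators' actual prompts:

```python
def grid_prompt(title: str, row_themes: list[str], cols: int = 10) -> str:
    """Build an over-specified grid prompt: one explicit constraint per row."""
    lines = [
        f"A {len(row_themes)}x{cols} inventory grid titled '{title}'.",
        "Every cell contains one distinct item with a short readable label beneath it.",
    ]
    for i, theme in enumerate(row_themes, start=1):
        first = (i - 1) * cols + 1
        lines.append(
            f"Row {i} ({theme}): {cols} distinct {theme}, numbered {first}-{first + cols - 1}."
        )
    return "\n".join(lines)

# Hypothetical themes standing in for the ten themed rows in the RPG test.
prompt = grid_prompt("Fantasy RPG Items", ["swords", "potions", "shields"])
```

Numbering each row's cells explicitly gives the model a local constraint per region instead of one global instruction, which matches how the successful grids were prompted.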

Infographics and multilingual text

The text jump is the story creators noticed first. Artedeingenio's Spanish-language infographic packed a timeline, map, org chart, and section headers into one board, while egeberkina's recipe layout kept ingredient labels, step names, and short instructions readable enough to function like a real cooking card.

Across the evidence pool, the strongest formats shared a few traits: explicit row, zone, or section structure; short, readable labels rather than paragraphs of text; and tightly specified local constraints.

Not every structured graphic landed cleanly. egeberkina's tournament bracket looked sharp at a glance, but its OCR shows duplicated paths and contradictory match progressions.

Search-and-find scenes and poster-grade composition

Search-and-find scenes are a good stress test because they need crowd density, consistent style, and a target hidden at the right difficulty. icreatelife's New York scene managed all three, then the posted prompt formula broke the task into micro-scenes, environmental objects, title treatment, character description, camera angle, and lighting.
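Broken into parts, that formula amounts to filling named slots and joining them into one prompt. The section names follow the posted formula; the slot contents below are invented placeholders:

```python
from collections import OrderedDict

# Slot names mirror the posted formula; values are hypothetical examples.
slots = OrderedDict([
    ("Title treatment", "bold hand-lettered title on a worn banner"),
    ("Camera angle", "high isometric view of a whole city street"),
    ("Lighting", "late-afternoon sun with long soft shadows"),
    ("Character description", "one small red-capped figure, the hidden target"),
    ("Micro-scenes", "a hot-dog cart queue; a dropped bouquet; a tangled dog walker"),
    ("Environmental objects", "fire hydrants, scaffolding, newspaper stands"),
])

prompt = "\n".join(f"{name}: {value}" for name, value in slots.items())
```

Keeping each micro-scene and object list in its own slot is what sets the hiding difficulty: density comes from many small, independent constraints rather than one long description.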

Poster work looked similarly robust when the prompt specified typography, print texture, and layout zones. underwoodxie96's sci-fi poster kept a festival-poster structure intact, and AllaAisling's Firefly Boards examples used alt-text prompts with explicit type placement, paper wear, and color palette.

The same composition skill also powered novelty formats. GlennHasABeard's hidden-object board turned a cluttered tabletop into a playable puzzle, and venturetwins' manga page showed the model keeping an eight-panel story readable across multiple speech and caption blocks.

Fake UIs and code windows

A lot of the most shareable tests were fake software. omooretweets' X screenshot recreated feed chrome, handles, and trending modules closely enough to feel like a product screenshot, while chrisfirst's Instagram profiles and venturetwins' Reddit homepage extended the same trick to social profiles and front pages.

The more interesting leap was code-shaped UI. LLMJunky's thread opener framed the model as rendering a VS Code window, but the follow-ups asked for complete HTML programs inside the editor.

That overlaps with goodside's Chess960 example, where he said the model handles longer PGN because it creates helper images via code. The common thread is not just prettier text, it is structured rendering that appears to benefit from intermediate symbolic scaffolding.
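goodside's point concerns the model's internal behavior, but the general pattern, turning a symbolic game state into a drawable layout with code before rendering it, can be illustrated with a minimal FEN-to-text sketch (the function name is ours):

```python
def fen_to_grid(fen: str) -> str:
    """Expand the board field of a FEN string into an 8x8 text diagram.

    In FEN, digits encode runs of empty squares and letters are pieces,
    so each rank expands to exactly eight squares.
    """
    board_field = fen.split()[0]
    rows = []
    for rank in board_field.split("/"):
        squares = ""
        for ch in rank:
            squares += "." * int(ch) if ch.isdigit() else ch
        rows.append(" ".join(squares))
    return "\n".join(rows)

# Standard chess starting position.
start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
grid = fen_to_grid(start)
```

The symbolic intermediate does the hard bookkeeping (piece positions, empty runs), leaving the renderer a much simpler layout task, which is plausibly why longer PGN sequences survive.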

Brand kits, CVs, and day-zero surfaces

Brand-pack outputs turned up fast. venturetwins' A16Z INFRA sheet bundled palette values, typography, slogan blocks, and swag mockups into one board, and icreatelife's CV example pushed the same layout logic into résumé format.

The day-one distribution was unusually broad in the evidence set: Freepik's prompt-first surface, PromptsRef's public generator page, Adobe Firefly Boards, and Photoshop inside ChatGPT all exposed the model within the launch window.

That rollout pattern helps explain why the evidence pool looks less like one launch thread and more like a creative stress test lab.

Annotated screenshots and iterative edits

One distinct workflow showed up late in the evidence window: using Images 2.0 to modify an existing screenshot instead of generating a fresh scene. egeberkina's Photoshop for ChatGPT post described connecting Photoshop inside ChatGPT, uploading an image, and editing it directly in chat.

The most concrete example came from underwoodxie96's Google Ads post, where the model took a dashboard screenshot and overlaid arrows, highlighted controls, and numbered instructions in Chinese. That is a different use case from fake UIs. It is closer to interactive documentation, where the image becomes the answer.

That same workflow also seems to prefer stepwise edits over one monster prompt. egeberkina's iteration note said long prompts trigger full regeneration, while smaller preserve-and-change edits keep the image stable.
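Under that constraint, a batch of changes is better applied as a loop of small preserve-and-change turns. `apply_edit` below is a hypothetical stand-in for one in-chat edit turn, not a real API:

```python
from typing import Callable

def stepwise_edit(image, steps: list[str], apply_edit: Callable):
    """Apply edits one turn at a time, prefixing each with a preservation clause."""
    for step in steps:
        image = apply_edit(image, f"Keep everything else unchanged. {step}")
    return image

# Demo with a stub that records the turns it received.
log = []
stub = lambda img, prompt: log.append(prompt) or img
result = stepwise_edit(
    "dashboard.png",
    ["Circle the budget field in red.", "Add a numbered arrow to the Save button."],
    stub,
)
```

The preservation clause in each turn is the point: it asks for an edit rather than a regeneration, which is what keeps the underlying screenshot stable.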
