AI Primer

ChatGPT Pro users report GPT-5.4 Pro with faster SVG and UI generation

Multiple Pro users said GPT-5.4 Pro started producing richer front-end and SVG outputs with much faster runtimes, despite no formal OpenAI announcement. The reports matter because they affect whether long visual and code-generation tasks are practical inside ChatGPT.

5 min read

TL;DR

You can watch petergostev's Golden Gate generation rotate into a coherent 3D scene, open the SVG codepen from chetaslua's one-shot SVG post, and compare that with chetaslua's UI cloning example, which argues the model improves sharply when fed a reference image. The weirdest pattern is how many of the strongest demos are not chatty benchmark posts at all, but straight code artifacts, including a voxel pelican scene, a Pokemon game, and a long-context review screenshot.

Runtime

The runtime shift is the most concrete change in the evidence.

In the first post, petergostev said Pro generations now land in about 20 minutes instead of the 60 to 80 minute waits he saw with Pro Extended. A later side-by-side note from petergostev tightens that to a 3 to 4x speedup, while still describing the outputs as richer and more coherent.

Other users reported shorter absolute runtimes on different tasks. kevinkern's screenshot shows a 65,000 token review completing in 5 minutes 28 seconds, and chetaslua's voxel-art post says a detailed HTML scene arrived in under 11 minutes.
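Those reported figures hang together on a quick sanity check. A minimal sketch, treating every number as community-reported rather than an official OpenAI throughput claim:

```python
# Back-of-envelope arithmetic from the user-reported runtimes.
# All inputs are community-reported numbers, not OpenAI figures.

old_low_min, old_high_min = 60, 80   # Pro Extended waits petergostev described
new_min = 20                         # reported GPT-5.4 Pro runtime

speedup_low = old_low_min / new_min    # 3.0x
speedup_high = old_high_min / new_min  # 4.0x

review_tokens = 65_000               # kevinkern's long-context review
review_seconds = 5 * 60 + 28         # 5 min 28 s
tokens_per_second = review_tokens / review_seconds  # ~198 tokens/s

print(f"speedup: {speedup_low:.0f}-{speedup_high:.0f}x, "
      f"review throughput: {tokens_per_second:.0f} tok/s")
```

The 20-minute figure against the 60 to 80 minute baseline is exactly the 3 to 4x range petergostev cites, which suggests the two reports describe the same underlying change.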

SVG

The SVG demos are where users started acting like something under the hood had changed.

The pattern across the posts is simple: a one-shot prompt produces a finished SVG or a coherent 3D scene. The caveat is that the demos, while strong, are not all the same task. A clean one-shot SVG is a narrower win than a live 3D app.

Mockups

The front-end posts point to a narrower but repeatable workflow: give the model an image first.

Across three posts, chetaslua describes almost the same recipe:

  1. Start with a screenshot, image, or Figma-like mockup, per the first workflow post.
  2. Ask for a single HTML block, as in the UI cloning example.
  3. Let the model infer styling from the image, which the follow-up demo frames as the main reason the output looks better than text-only prompting.
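
The three-step recipe above can be sketched as an API request. This is a hypothetical illustration: the model name, and whether the workflow maps onto the public API at all, are assumptions rather than anything the posts confirm. The message shape follows the common pattern of pairing an image part with a text instruction.

```python
import base64

def build_ui_clone_request(image_path: str, model: str = "gpt-5.4-pro") -> dict:
    """Sketch of the image-first workflow chetaslua describes:
    attach a reference screenshot, then ask for one self-contained
    HTML block. Model name and message shape are assumptions, not
    confirmed OpenAI parameters."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    # Step 1: the reference image comes first.
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                    # Steps 2 and 3: one HTML block, styling inferred
                    # from the image rather than spelled out in text.
                    {"type": "text",
                     "text": "Recreate this UI as a single self-contained "
                             "HTML block. Infer colors, spacing, and fonts "
                             "from the image."},
                ],
            }
        ],
    }
```

The point of the sketch is the ordering the posts emphasize: the reference image leads, and the text asks for exactly one self-contained HTML block rather than a multi-file project.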

That is a narrower claim than "frontend solved." It is closer to image-conditioned UI recreation getting much better inside ChatGPT Pro.

HTML toys

The model also looks better at turning a single prompt into self-contained browser projects.

The examples in the evidence include a voxel pelican scene, a Pokemon game, and other single-prompt HTML demos.

These are not benchmark numbers; they are artifacts. The common thread is that users keep emphasizing one HTML block, playable output, and less cleanup after generation.

No announcement

Nobody in the evidence points to an OpenAI launch post, changelog entry, or product note.

The language around the change stays speculative. AILeaksAndNews calls it an unannounced upgrade and asks whether it is a stealth Spud release, while petergostev's reply about testing says he does not know for sure and that it simply feels different and better. koltregaskes's joke reply treats "Spud" more like community shorthand than confirmed branding.

That leaves the story in an awkward but familiar state: a cluster of users observed the same direction of change, but none of them cited a canonical OpenAI source.

Codex

The last interesting thread is where these speedups might surface next.

In petergostev's Codex question, he explicitly asks whether a 20 minute Pro workflow now becomes practical inside Codex. That fits with two adjacent signals in the evidence pool: thsottiaux's speedup post says the team has line of sight to at least an order of magnitude more speed this year, and getsome_air's GPT-5.4 Fast mode note says Codex Fast mode on GPT-5.4 had already shown up in Air for ChatGPT subscribers.

Those posts do not confirm that the quieter Pro improvements are the same thing as Codex speed work. They do show that GPT-5.4 speed work is already spilling across multiple OpenAI coding surfaces, and so is the question of where it becomes usable.

Further reading

Discussion across the web

Where this story is being discussed, in original context.

On X · 7 threads
TL;DR: 1 post
Runtime: 1 post
SVG: 2 posts
Mockups: 1 post
HTML toys: 3 posts
No announcement: 1 post
Codex: 1 post